[Openmcl-devel] More Intel: Rosetta & Universal Binaries
David Steuber
david at david-steuber.com
Fri Jun 10 06:26:31 PDT 2005
Hi Gary & Co,
Thank you for your quick and thoughtful response. I have a few other comments/questions. I have less information than you, so please pardon any stupid ones.
On Jun 10, 2005, at 7:57 AM, Gary Byers wrote:
>> The second question is how complicated will Universal Binaries be?
>> The C portion of OpenMCL has never given me any compile troubles. So
>> it seems fair to speculate that with Xcode 2.1 or later, the kernel
>> portion may build with the flick of a switch, so to speak, so that
>> part is a Universal Binary. As for the image file, could there
>> perhaps be one image each for PPC and Intel? Or would it be better if
>> they were Universal as well? And what about the .dfsl files
>> produced?
>
> OpenMCL's kernel (the C part of it) mostly sits there and handles
> exceptions
> that occur during the execution of compiled lisp code (including the
> "exception" of not being able to allocate memory until the GC's had a
> chance to run.) It knows a lot about the state (register contents) of
> running threads, and that code's currently very PPC-specific.
This sounds like you need two entirely separate code bases. Perhaps there can be a PPC-specific section of code and data and an Intel-specific section of code and data. If there is any intersection, that's less work to do. If there is no intersection, then so be it. Just have two programs in one binary. The PPC section could just be modeled after the existing code, possibly with some updates.
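On the Lisp side, I imagine any shared ("intersection") code could live in one source file, with the architecture-specific bits selected by ordinary reader conditionals. Here is a minimal sketch, assuming the host Lisp pushes a feature like :ppc-target or :x86-target onto *features* (those feature names are my assumption, not something I have checked):

;; Minimal sketch: standard #+/#- reader conditionals pick the
;; architecture-specific form at read time.  :ppc-target and
;; :x86-target are assumed feature names, not verified OpenMCL ones.

(defun host-architecture ()
  #+ppc-target :ppc
  #+x86-target :x86
  #-(or ppc-target x86-target) :unknown)

;; Shared ("intersection") code sits outside the conditionals:
(defun say-hello ()
  (format t "~&Running on ~A~%" (host-architecture)))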
Hopefully Rosetta will run OpenMCL just fine. That would allow more
time to deal with this nasty transition.
> The idea behind Universal Binaries seems to be that a single executable
> file contains X86 and PPC code and shared data; that's complicated by
> the fact that the X86 is little-endian and the PPC is natively
> big-endian;
> the saving grace is that (as a broad generalization) most C programs
> contain relatively little static, initialized data, and it's practical
> (if a bit Byzantine) to define "byte-swapping callbacks" as the UB
> Porting Guide describes. A typical lisp image contains about as
> much data as it does code, and I have some difficulty convincing myself
> that it'd be a good idea to page in a few megs of data and byte-swap it;
> I get a bad headache when I start thinking about incrementally
> compiling
> code in a Universal Binary version of a lisp development environment
> (do
> we compile FACTORIAL for both targets ?), ad infinitum.
If the Universal Binaries provide the appropriate functionality to query the architecture, then perhaps the Lisp portions can just be kept separate, as separate image files. Ultimately, the binaries are a deployment solution, so it really only matters that you can produce image sets for each architecture via save-application and compile-file. Perhaps cl:compile-file can just default to building fasls for the host architecture and defer to a ccl:compile-file-ppc or ccl:compile-file-x86 call depending on the host (or a cross-compile flag); cl:load would then just defer to ccl:load-ppc or ccl:load-x86. Incremental compilation shouldn't need to be Universal at all. That's just for development, right?
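Just to make that concrete, here's a rough sketch of the kind of dispatch I mean. Everything in it is hypothetical: compile-file-for-target, load-for-host, the "pfsl"/"xfsl" fasl types, and the :ppc-target feature test are my own inventions, not existing OpenMCL interfaces.

;;; Hypothetical sketch only; none of these names exist in OpenMCL.

(defvar *target-architecture*
  ;; Assume the host Lisp advertises its architecture on *FEATURES*.
  (if (member :ppc-target *features*) :ppc :x86)
  "Defaults to the host architecture; rebind it to cross-compile.")

(defun fasl-type (target)
  ;; One fasl file type per architecture, so both can sit side by side.
  (ecase target
    (:ppc "pfsl")
    (:x86 "xfsl")))

(defun compile-file-for-target (file &key (target *target-architecture*))
  ;; This only gives the output a target-specific file type; a real
  ;; implementation would also pick the matching compiler backend.
  (compile-file file
                :output-file (make-pathname :type (fasl-type target)
                                            :defaults file)))

(defun load-for-host (file)
  ;; Load the fasl whose type matches the host architecture.
  (load (make-pathname :type (fasl-type *target-architecture*)
                       :defaults file)))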
> I wind up thinking that there are lots of other straightforward ways
> to deliver bundled applications that'll run on both platforms, and
> find myself strongly tempted (at this point) to dismiss Universal
> Binary technology as being an irrelevant distraction. (Unfortunately,
> it's one of the few things for which any concrete information is
> currently available.)
Universal Binaries do seem like a kludge. From the keynote, I gathered that Xcode 2.1 hides that horrible mess behind those two little checkboxes. To elaborate a bit more on what I said above, it
seems to me that only the kernel needs to have the kludge. I don't
know how long PPC support will be needed, but I'm guessing the time
frame is in years. Perhaps enough years to justify going with the
Universal Binaries approach just for the kernel. The kernel will know
which image files it needs to load based on the host and can then just
run native. Sure, that means more image files to produce for an app bundle, but having two in Contents/MacOS is probably already one more than is typical. Multiple architecture support only matters for
distribution of apps and other binaries. Development doesn't need that
capability. At least not app development. Compiler development is
another story. I have no ideas there at the moment.
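To make the "more image files per bundle" point concrete, I picture the delivery step as just running save-application once on each kind of host and tagging each image with its architecture; the fat kernel then picks the matching one at startup. The naming convention below is my own invention, and I'm again assuming a feature like :ppc-target distinguishes the hosts:

;; Hypothetical delivery step, run once on a PPC host and once on an
;; x86 host; the Universal kernel would then choose the image whose
;; name matches the architecture it finds itself running on.

(defun image-name-for-host (app-name)
  ;; e.g. "MyApp.ppc.image" or "MyApp.x86.image" -- invented convention.
  (format nil "~A.~A.image"
          app-name
          #+ppc-target "ppc"
          #-ppc-target "x86"))

(defun deliver-image (app-name bundle-macos-dir)
  ;; CCL:SAVE-APPLICATION writes a heap image for the running (host)
  ;; architecture; doing this on both hosts yields the two images that
  ;; would sit side by side in Contents/MacOS.
  (ccl:save-application
   (merge-pathnames (image-name-for-host app-name) bundle-macos-dir)))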
All that of course is an end user perspective.
While watching the keynote, I couldn't help noticing the obscene bias
for Xcode. Telling Metrowerks users they have to switch to Xcode!
Really! Jobs clearly isn't even considering vendors of other language
implementations.
I hope I'm not coming off as someone who is all too willing to assign
homework. The above are just my ideas with whatever implicit questions
you infer.
Thanks. Do at least enjoy your vacation and try not to think about
this mess :-)