[Openmcl-devel] llvm backend

Gary Byers gb at clozure.com
Mon Mar 28 16:25:24 PDT 2011


I don't want to spend all day discussing this, but LLVM's notion of
"precise GC" is a bit different from CCL's.  (Well, maybe the
implementations differ more significantly than the general notions do.)

LLVM-generated code can perform a GC at "safe points"; at a safe
point, the GC can scan the stack (and possibly the machine registers)
and reliably know which things are pointers to GCable things and which
things aren't.  This is certainly a good thing, especially when
compared to a situation where all roots are ambiguous and GC must be
conservative (must retain anything that might be a pointer, never move
anything unless you're somehow sure that all references to it are in
fact pointers.)
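
To make that concrete, here's a rough sketch of the kind of hook LLVM
provides, as I understand it; the names below (@lisp_cons, @make_pair,
the use of the "shadow-stack" strategy) are made up for illustration and
don't come from CCL or any real frontend:

    declare void @llvm.gcroot(i8** %slot, i8* %metadata)
    declare i8* @lisp_cons(i8* %car, i8* %cdr)      ; hypothetical runtime allocator

    define i8* @make_pair(i8* %a, i8* %b) gc "shadow-stack" {
    entry:
      %root = alloca i8*                            ; stack slot the GC will scan
      call void @llvm.gcroot(i8** %root, i8* null)  ; register it as a root
      store i8* %a, i8** %root                      ; keep %a reachable across the call
      %pair = call i8* @lisp_cons(i8* %a, i8* %b)   ; a safe point; a GC could run here
      ret i8* %pair
    }

The collector learns about %root, but only at points like the call to
@lisp_cons; between safe points, whatever is sitting in registers is
effectively invisible or ambiguous, which is why this scheme doesn't help
when another thread wants to GC at an arbitrary instruction.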

In CCL, there's (nearly) no such thing as a "safe point" (no point in 
the execution of lisp code is more or less safe than any other point.)
Because activity in another thread may trigger a GC, compiled lisp
code has to be GC-safe (all roots - registers and stack locations - have
to be unambiguous at essentially every instruction boundary.)

I confess that I haven't looked into it in a long time (probably over
a year), but I haven't heard that the situation with LLVM has changed,
and the LLVM scheme, as I understand it and have tried to present it above,
wasn't intended to provide precise GC for multithreaded programs.  If the
situation has changed or if I'm portraying it incorrectly, please let me know.

If I'm correct about that, I personally think that precise GC in the
presence of native threads is too important to give up for me to seriously
consider the idea of using an LLVM-based backend (and I have other concerns
about that as well.)  Someone else might evaluate those tradeoffs differently,
but if you did something this pervasive, you'd likely find that a lot of
other things would need to change as well.  (You'd be replacing a precise GC
with a conservative one and changing a lot of the runtime, or replacing
native threads with cooperative ones; whatever you changed, the result would
be something that I think would be visibly different from what CCL is and
has been.)

Whether those differences are good or bad or both could be debated, but what
you'd wind up with would probably be more different from CCL than you might
expect.

On Mon, 28 Mar 2011, Jason E. Aten wrote:

> Thanks guys.  Those were some very interesting discussions to read through.
> I'm convinced that C is not such a great target, and I think the Haskell
> implementers concluded the same (producing C-- along the way).
> 
> But I'm not clear on how the discussion of C being a bad target for lisp is
> applicable to LLVM.  LLVM is meant to support dynamic languages; their intro
> Kaleidoscope tutorial shows how to construct and compile a lisp-like one.
> 
> Granted, LLVM does have a C and C++ compiler (and these are the most
> production quality), but it also has (in various states of maturity) Ruby,
> Python, C#, Pure (term-rewriting language), Haskell, Java, D, and Lua
> implementations, most of which try to take advantage of the lovely
> just-in-time compilation features (for x86/64 and PPC/64).  There are hooks
> for precise GC, and the biggest win of all is that there are so many
> backends (although the C backend for LLVM is reportedly decaying for lack
> of use).  If you don't need JIT, then you get x86, x86_64, PPC, PPC64, ARM,
> Thumb, Sparc, Alpha, CellSPU, MIPS, MSP430, SystemZ, and XCore, all for
> free if you target LLVM.
> 
> This is fairly academic, but I would note that GPGPUs (general-purpose
> GPUs), such as those the latest nVIDIA cards sport, have come a long way;
> each of the 384 cores in my $250 PCIe graphics card is a fully capable
> processor; the only real limitation is memory bandwidth for getting data
> onto the card.  But again, there's no need to think about this directly,
> because several projects are working on producing LLVM targets (OpenCL,
> PTX), which means that if one can produce LLVM IR, then you get GPGPU
> support for free.
> 
> Massively parallel development isn't easy, and I suspect that a Lisp
> REPL-based development environment (there is a PyCUDA now) could be a big
> win.  But then again, since I'm looking at doing this work myself, I was
> just trying to get a sense of how abstracted the IR -> backend stage is to
> begin with.  If there's a clean separation, maybe it's straightforward?  I
> know, probably wishful thinking ;-)  But I'd love to be wrong.
> 
> Thanks guys.
> 
> Jason
> 
> On Mon, Mar 28, 2011 at 2:56 PM, Ron Garret <ron at flownet.com> wrote:
>
>       On Mar 28, 2011, at 12:45 PM, Gary Byers wrote:
>
>       > Porting a lisp to a GPU makes only slightly more sense to me than
>       > porting a lisp to an FPU.  (Yes there are differences and a GPU may
>       > be a lot closer to it, but neither is exactly a general-purpose
>       > computer.  You can download GPU code into the unit and execute it -
>       > so the FPU analogy breaks down - but you'd probably find that that
>       > code can do some things incredibly quickly and that other things
>       > are incredibly awkward.)  You might be able to do something that
>       > allows you to express certain kinds of algorithms in Lisp, compile
>       > that code into machine code for a GPU, download that code, execute
>       > it, and find that the GPU was fast enough to make that all
>       > worthwhile; that's probably easier than it would be to figure out
>       > how to implement CONS reasonably or how to implement OPEN at all.
> 
> This is not quite as outlandish as Gary is making it sound.  See:
> 
> http://vorlon.case.edu/~beer/Software/FPC-PPC/FPC-PPC-DOC-0.21.txt
> 
> for inspiration.
> 
> rg
>
> --
> Jason E. Aten, Ph.D.
>


