[Openmcl-devel] llvm backend

Jason E. Aten j.e.aten at gmail.com
Mon Mar 28 15:01:57 PDT 2011


Thanks, guys.  Those were some very interesting discussions to read through.
I'm convinced that C is not such a great target, and I think the Haskell
implementers concluded the same (producing C-- along the way).

But I'm not clear on how the discussion of C being a bad target for Lisp
applies to LLVM.  LLVM is meant to support dynamic languages; its intro
Kaleidoscope tutorial shows how to construct and compile a lisp-like one.
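
To make that concrete, here is a minimal sketch (my own, not taken from the
tutorial, and written against a recent LLVM C++ API; header locations
differed in the 2.x era) of emitting IR for something like
(defun add2 (n) (+ n 2)):

    // Build IR equivalent to: i64 add2(i64 n) { return n + 2; }
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/Verifier.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    int main() {
      LLVMContext ctx;
      Module mod("lisp-sketch", ctx);
      IRBuilder<> b(ctx);

      FunctionType *fty =
          FunctionType::get(b.getInt64Ty(), {b.getInt64Ty()}, false);
      Function *f =
          Function::Create(fty, Function::ExternalLinkage, "add2", mod);
      b.SetInsertPoint(BasicBlock::Create(ctx, "entry", f));
      Value *n = &*f->arg_begin();               // the argument %n
      b.CreateRet(b.CreateAdd(n, b.getInt64(2), "sum"));

      verifyFunction(*f);                        // sanity-check the IR
      mod.print(outs(), nullptr);                // dump the textual IR
      return 0;
    }

Everything below that IR - optimization, JIT, native code generation -
comes from LLVM itself.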

Granted, LLVM does have a C and a C++ compiler (and these are the most
production-quality), but it also has (in various states of maturity) Ruby,
Python, C#, Pure (a term-rewriting language), Haskell, Java, D, and Lua
implementations, most of which try to take advantage of the lovely
just-in-time compilation features (for x86/64 and PPC/64).  There are hooks
for precise GC, and the biggest win of all is that there are so many
backends (although the C backend for LLVM is reportedly decaying for lack
of use).  If you don't need JIT, then you get x86, x86_64, PPC, PPC64, ARM,
Thumb, Sparc, Alpha, CellSPU, MIPS, MSP430, SystemZ, and XCore, all for
free if you target LLVM.
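
For instance (hypothetical file names; llvm-as and llc are the stock LLVM
tools), retargeting the same bitcode is just a matter of a flag:

    llvm-as add2.ll -o add2.bc          # textual IR -> bitcode
    llc -march=x86-64 add2.bc -o add2-x86_64.s
    llc -march=arm    add2.bc -o add2-arm.s
    llc -march=ppc64  add2.bc -o add2-ppc64.s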

This is fairly academic, but I would note that GPGPUs (general-purpose
GPUs), such as those the latest nVIDIA cards sport, have come a long way;
each of the 384 cores in my $250 PCIe graphics card is a fully capable
processor, and the main limitation is really memory bandwidth for getting
data onto the card.  But again, there's no need to think about this
directly, because several projects are working on LLVM backends for GPUs
(OpenCL, PTX), which means that if you can produce LLVM IR, you get GPGPU
support for free.
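
As a sketch (file name hypothetical; the NVIDIA backend was the
experimental "ptx" target around this time, and is called "nvptx" in
current LLVM releases), that path would look like:

    llc -march=nvptx64 kernel.bc -o kernel.ptx   # LLVM IR -> PTX assembly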

Massively parallel development isn't easy, and I suspect that a Lisp
REPL-based development environment (there is a PyCUDA now) could be a big
win.  But then again, since I'm looking at doing this work myself, I was
just trying to get a sense of how abstracted the IR -> backend stage is to
begin with.  If there's a clean separation, maybe it's straightforward?  I
know, probably wishful thinking ;-)  But I'd love to be wrong.

Thanks again, guys.

Jason

On Mon, Mar 28, 2011 at 2:56 PM, Ron Garret <ron at flownet.com> wrote:

>
> On Mar 28, 2011, at 12:45 PM, Gary Byers wrote:
>
> > Porting a lisp to a GPU makes only slightly more sense to me than
> > porting a lisp to an FPU.  (Yes, there are differences and a GPU may
> > be a lot closer to it, but neither is exactly a general-purpose
> > computer.  You can download GPU code into the unit and execute it -
> > so the FPU analogy breaks down - but you'd probably find that that
> > code can do some things incredibly quickly and that other things are
> > incredibly awkward.)  You might be able to do something that allows
> > you to express certain kinds of algorithms in Lisp, compile that code
> > into machine code for a GPU, download that code, execute it, and
> > find that the GPU was fast enough to make that all worthwhile; that's
> > probably easier than it would be to figure out how to implement CONS
> > reasonably or how to implement OPEN at all.
>
> This is not quite as outlandish as Gary is making it sound.  See:
>
> http://vorlon.case.edu/~beer/Software/FPC-PPC/FPC-PPC-DOC-0.21.txt
>
> for inspiration.
>
> rg
>
>


-- 
Jason E. Aten, Ph.D.