[Openmcl-devel] Lisp Comparison
craig.t.lanning at gmail.com
Wed Apr 19 10:00:07 PDT 2017
On Tue, 2017-04-18 at 23:22 -0700, David McClain wrote:
> Holy Cow! I took a look at your code repository… Genera !? How old is
> this thing?
I learned Lisp on a Symbolics in 1985.
I've been working in STEP and EXPRESS since the '90s. I took over
development of this program in the 2000s.
I can run the code in Genera. I've used it to track down problems that
caused other Lisps to crash out to the OS. With a CPU running at 30 or
40 MHz it is extremely slow, so I don't use it unless I must.
> The repository is quite a bit more labyrinthine than I’m accustomed
> to, but then again, I’m not a typical production programmer. I did
> finally track down some actual compilable code, not mere build
> scripts and system definition files. And it appears that this engine
> is a gigantic (recursive descent?) parser from what appears to be XML
> to some internal AST representation, from which you can translate to
> whatever form you actually want.
> The number of parsing terms is huge. I haven’t found your parser
> description files yet - the equiv of YACC and LEX, or BNF or
> something files. Hopefully you didn’t actually have to craft all
> these parser terms by hand.
There are BNF files in the "docs" directory.
iso-10303-11--2004 is the BNF for the EXPRESS parser in plugins/p11/
iso-10303-21--2002 is the BNF for the Data File parser in plugins/p21/
Yes, the parser was crafted by hand. I may eventually build some code
to automate creating the method stubs.
> But I am beginning to see the possibility that there may be a ton of
> backtracking that could be happening on the way to final parser
> productions. I don’t know your SETP schema language. Is it a
> reasonable beast? or is it really crufty the way most web stuff is?
> So this thing may also be churning out garbage memory in prodigious
No, the parser is recursive descent without backtracking. Following
the BNF exactly would require backtracking, which would make the
parser much more complicated.
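With one token of lookahead, each production can be selected directly
instead of trying alternatives and backing out. A minimal sketch of the
idea; the names (`peek-token`, `token-kind`, `parse-entity-decl`, and
so on) are hypothetical, not taken from the EXP Engine source:

```lisp
;; Hypothetical sketch: pick the right EXPRESS production from a
;; single token of lookahead, rather than backtracking over
;; alternatives as a literal reading of the BNF would require.
(defun parse-declaration (stream)
  (let ((tok (peek-token stream)))        ; inspect without consuming
    (case (token-kind tok)
      (:entity   (parse-entity-decl stream))
      (:type     (parse-type-decl stream))
      (:function (parse-function-decl stream))
      (t (error "Unexpected token ~S" tok)))))
```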
> I also notice a bunch of stream read-tokens and such, buried in
> macrolets inside of each of those zillion parser terms. So stream
> handling could be killing you too. I don’t know if you have a front-
> end for buffering up the physical reads from external media or
> networks. But if backtracking in parsing is happening, then too there
> will likely be put-back of tokens onto the input streams, however you
> manage that.
Any "unread" tokens are stored in the stream object in an ordered list.
READ is only called if no tokens are available in the stream object.
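The pushback scheme described above might look roughly like this; the
class and function names are illustrative only, and `read-token-from`
stands in for whatever lexer call actually produces the next token:

```lisp
;; Hypothetical sketch of the pushback scheme: unread tokens live in
;; a list inside the stream object, and the lexer is consulted only
;; when that list is empty.
(defclass token-stream ()
  ((source  :initarg :source :reader ts-source)
   (pending :initform '()    :accessor ts-pending)))

(defun next-token (ts)
  (if (ts-pending ts)
      (pop (ts-pending ts))               ; reuse an unread token first
      (read-token-from (ts-source ts))))  ; otherwise hit the lexer

(defun unread-token (ts tok)
  (push tok (ts-pending ts)))             ; last unread, first reread
```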
> The style of parsing you choose will have a big impact on performance
> here. Shift / Reduce or recursive descent, what kind of grammar you
> are operating in (e.g. LALR, LR1, LL, xxxxxx) And then the decision
> to run deterministically or nondeterministically. Yikes!
> So do you enjoy this kind of programming? I would certainly hope so.
In college, I took courses in finite automata and compiler writing as
part of my Computer Science degree. At that point I vowed that I would
never build a parser/compiler that way.
My parser makes heavy use of CL's multiple dispatch in generic
functions, and it uses EQL specializers, too.
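For readers unfamiliar with the technique, dispatching on both a parser
object and a literal token keyword via an EQL specializer looks roughly
like this (a sketch with invented names, not code from the actual
parser):

```lisp
;; Hypothetical sketch: multiple dispatch on the parser class plus an
;; EQL specializer on the token kind, so each token gets its own method
;; instead of a hand-written case analysis.
(defclass express-parser () ())

(defgeneric parse-token (parser kind)
  (:documentation "Dispatch on parser class and literal token kind."))

(defmethod parse-token ((p express-parser) (kind (eql :schema)))
  (format t "start of a SCHEMA declaration~%"))

(defmethod parse-token ((p express-parser) (kind (eql :end-schema)))
  (format t "end of a SCHEMA declaration~%"))
```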
It has actually been a lot of fun developing this application.
In the future, I intend to put a CLIM GUI on it, hence my interest in
getting Franz's CLIM code ported to Clozure CL. From my brief look
at CCL's foreign function interface, it looked like it might be the
best candidate for building CLIM backends.
> - DM
> > On Apr 18, 2017, at 14:44, Craig Lanning <craig.t.lanning at gmail.com> wrote:
> > I have been working on a rather large application that runs from
> > the command line. It reads schema files that are part of the ISO
> > 10303 STEP family of product data standards. It can also read the
> > corresponding Product Data Population files.
> > Recently someone gave me a script that runs a schema comparison
> > (using my application) across several schemata. (In this case, the
> > script processed 17 schema pairs.) The act of processing one pair
> > of schema files will mean that additional schema files are pulled
> > from a special repository. The schema files being processed
> > "include" other schema files, which may in turn "include" even more
> > schema files.
> > I can build my application using LispWorks CL 6.1.1, Clozure CL
> > v1.11, and SBCL 1.3.16.
> > I can build with LispWorks CL 6.1.1 in 32-bit only.
> > I can build with Clozure CL 1.11 in both 32-bit and 64-bit.
> > I can build with SBCL 1.3.16 in 64-bit only. (There is no easy way
> > to have both the 32-bit and 64-bit versions at the same time.)
> > The source code for my application is stored on SourceForge
> > (http://exp-engine.sourceforge.net/), as the original development
> > was intended to be an open source project.
> >                 LWL 6.1.1(32)  SBCL 1.3.16(64)  CCL 1.11(32)  CCL 1.11(64)
> > App Compile        10.323 sec       18.002 sec    10.242 sec    10.587 sec
> > App Deliver         4.306 sec        6.379 sec     1.418 sec     1.875 sec
> > App Filesize       37,429,248       57,409,584    24,719,376    33,460,464
> > 17 schemata         8.320 sec        7.506 sec     23:49.054     23:44.190
> > The machine used was
> > Dell Inspiron 3558 Laptop
> > Intel Core i3 2.1GHz CPU
> > 4GB Memory
> > As you can see in the chart above, CCL 1.11 took over 23 minutes to
> > process the 17 schema pairs. Not a good showing.
> > This application does not allocate and deallocate large amounts of
> > memory so I have no information about which Lisp handles memory the
> > best. None of the Lisps tested ran out of memory.
> > LispWorks and Clozure CL both start with a small amount of memory
> > and grow and shrink the dynamic space as needed, so I suspect that
> > they handle memory the best.
> > SBCL needs to be told what its maximum dynamic space size is; it
> > then allocates all of that memory.
> > My purpose in posting this message was to give a reference for how
> > different Lisps support real applications.
> > I was curious about whether CCL's time has improved in successive
> > releases, so I downloaded CCL 1.10 and 1.9. I was unable to run
> > 1.9, but was able to run 1.10. 1.11 produced a slower executable
> > than 1.10.
> > Craig Lanning
> > _______________________________________________
> > Openmcl-devel mailing list
> > Openmcl-devel at clozure.com
> > https://lists.clozure.com/mailman/listinfo/openmcl-devel