[Openmcl-devel] Lisp Comparison

Andrew Shalit alms at clozure.com
Wed Apr 19 08:10:48 PDT 2017


I remembered more details of the previous case.

The user code added a dispatching macro character for every character.  In most cases, the dispatching function just performed the default behavior, but in one or two cases it did something else.  This worked fine in a world of 8-bit ASCII but had magnificent performance problems in a world of Unicode.
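
To make the failure mode concrete, here is a minimal sketch of the pattern (not the user's actual code), using the standard readtable API:

  (defvar *rt* (copy-readtable nil))

  ;; Make every character a dispatching macro character.  In an
  ;; 8-bit-ASCII world this installs only 256 entries:
  (loop for code from 0 below 256
        for ch = (code-char code)
        when ch
          do (make-dispatch-macro-character ch t *rt*))

In a Unicode Lisp the analogous loop runs up to CHAR-CODE-LIMIT (1114112), and if the readtable keeps its dispatching macro characters in a list, as described below, every character read turns into a linear scan over roughly a million entries.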

The user decided to change their code (which had probably been written by an undergraduate 40 years ago) rather than have us make changes to the CCL reader.

I don’t know whether Craig’s performance problem is caused by the same issue in the reader, but it seems like a good candidate to look at.

> On Apr 19, 2017, at 7:24 AM, Andrew Shalit <alms at clozure.com> wrote:
> 
> Does the code add hundreds or thousands of dispatching macro characters? That won’t work very well. I don’t have details at the tip of my keyboard, but I believe they’re just stored in a list.  We ran into this with a client a few years ago, and my memory is that he opted to change his implementation (which was trivial in his case) rather than having us change how the reader stores the set of dispatching macro characters associated with a read table. Changing the way CCL stored the dispatching macro characters also was pretty trivial, but involved some tradeoffs, as I recall.
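> 
> For illustration only (this is not CCL's reader source), the tradeoff is roughly the one between these two representations:
> 
>   ;; Small common case: #\# is usually the only dispatching macro
>   ;; character, so scanning a list/alist is effectively free.
>   (defvar *dispatch-alist* '())            ; ((char . subtable) ...)
> 
>   (defun find-dispatch-entry (char)
>     (cdr (assoc char *dispatch-alist*)))   ; O(n) scan per lookup
> 
>   ;; With thousands of entries, a hash table keyed by character keeps
>   ;; lookups constant-time, at the cost of extra space and per-lookup
>   ;; overhead in the common one-entry case.
>   (defvar *dispatch-table* (make-hash-table :test #'eql))
> 
>   (defun find-dispatch-entry/ht (char)
>     (gethash char *dispatch-table*))       ; O(1) expected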
> 
> 
> 
> 
>> On Apr 19, 2017, at 2:22 AM, David McClain <dbm at refined-audiometrics.com> wrote:
>> 
>> Holy Cow! I took a look at your code repository… Genera!? How old is this thing?
>> 
>> The repository is quite a bit more labyrinthine than I’m accustomed to, but then again, I’m not a typical production programmer. I did finally track down some actual compilable code, not mere build scripts and system definition files. And it appears that this engine is a gigantic (recursive descent?) parser from what appears to be XML to some internal AST representation, from which you can translate to whatever form you actually want.
>> 
>> The number of parsing terms is huge. I haven’t found your parser description files yet - the equivalent of YACC/LEX grammars, BNF, or something similar. Hopefully you didn’t actually have to craft all these parser terms by hand.
>> 
>> But I am beginning to see the possibility that there may be a ton of backtracking happening on the way to final parser productions. I don’t know your STEP schema language. Is it a reasonable beast? Or is it really crufty the way most web stuff is? So this thing may also be churning out garbage memory in prodigious amounts.
>> 
>> I also notice a bunch of stream read-tokens and such, buried in macrolets inside each of those zillion parser terms. So stream handling could be killing you too. I don’t know if you have a front end for buffering up the physical reads from external media or networks. But if backtracking is happening in the parse, then there will also likely be put-back of tokens onto the input streams, however you manage that.
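>> 
>> A token put-back layer usually has roughly this shape (purely a sketch with illustrative names; READ-RAW-TOKEN stands in for whatever the real lexer is):
>> 
>>   (defstruct tok-stream
>>     source                      ; underlying character stream
>>     (pushed '()))               ; stack of put-back tokens
>> 
>>   (defun next-token (ts)
>>     (or (pop (tok-stream-pushed ts))
>>         (read-raw-token (tok-stream-source ts))))
>> 
>>   (defun put-back-token (ts token)
>>     (push token (tok-stream-pushed ts)))
>> 
>> If the parser backtracks heavily, that put-back stack gets churned constantly, and every abandoned alternative can leave partially built results behind for the garbage collector.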
>> 
>> The style of parsing you choose will have a big impact on performance here: shift/reduce or recursive descent, what kind of grammar you are operating in (e.g. LALR, LR(1), LL, etc.), and then the decision to run deterministically or nondeterministically. Yikes!
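>> 
>> To see why the nondeterministic case stings, here is a generic backtracking sketch built on the NEXT-TOKEN / PUT-BACK-TOKEN layer above (again illustrative, not the engine's macrolet-based terms):
>> 
>>   ;; Try each production in turn; on failure, rewind by returning every
>>   ;; consumed token to the stream, so a deeply nested failed alternative
>>   ;; re-traverses everything it touched.
>>   (defun try-alternatives (ts &rest productions)
>>     (dolist (parse productions nil)
>>       (let ((consumed '()))
>>         (flet ((consume ()
>>                  (let ((tok (next-token ts)))
>>                    (push tok consumed)
>>                    tok)))
>>           (let ((result (funcall parse #'consume)))
>>             (when result (return result))
>>             ;; Backtrack: pushing in reverse consumption order restores
>>             ;; the original token order.
>>             (dolist (tok consumed)
>>               (put-back-token ts tok)))))))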
>> 
>> So do you enjoy this kind of programming? I would certainly hope so.
>> 
>> - DM
>> 
>> 
>>> On Apr 18, 2017, at 14:44, Craig Lanning <craig.t.lanning at gmail.com> wrote:
>>> 
>>> I have been working on a rather large application that runs from the
>>> command line.  It reads schema files that are part of the ISO 10303
>>> STEP family of product data standards.  It can also read the
>>> corresponding Product Data Population files.
>>> 
>>> Recently someone gave me a script that runs a schema comparison (using
>>> my application) across several schemata.  (In this case, the script
>>> processed 17 schema pairs.)  Processing one pair of schema files also
>>> causes additional schema files to be pulled from a special repository:
>>> the schema files being processed "include" other schema files, which
>>> may in turn "include" even more schema files.
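>>> 
>>> In sketch form (illustrative names, not the application's actual
>>> code), the include chasing is just transitive loading with a visited
>>> set:
>>> 
>>>   ;; PARSE-SCHEMA-FILE, FIND-IN-REPOSITORY, and SCHEMA-INCLUDES are
>>>   ;; hypothetical helpers.  The VISITED table keeps a schema that is
>>>   ;; included many times from being parsed more than once.
>>>   (defun load-schema-closure (name repo &optional
>>>                               (visited (make-hash-table :test #'equal)))
>>>     (unless (gethash name visited)
>>>       (setf (gethash name visited) t)
>>>       (let ((schema (parse-schema-file (find-in-repository name repo))))
>>>         (dolist (inc (schema-includes schema))
>>>           (load-schema-closure inc repo visited))
>>>         schema)))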
>>> 
>>> I can build my application using LispWorks CL 6.1.1, Clozure CL v1.11,
>>> and SBCL 1.3.16.
>>> 
>>> I can build with LispWorks CL 6.1.1 in 32-bit only.
>>> 
>>> I can build with Clozure CL 1.11 in both 32-bit and 64-bit.
>>> 
>>> I can build with SBCL 1.3.16 in 64-bit only.  (No easy way to get both
>>> the 32-bit and 64-bit versions at the same time.)
>>> 
>>> The source code for my application is stored on SourceForge
>>> (http://exp-engine.sourceforge.net/) as the original development was
>>> intended to be an open source project.
>>> 
>>>               LWL 6.1.1 (32)   SBCL 1.3.16 (64)   CCL 1.11 (32)   CCL 1.11 (64)
>>> App Compile      10.323 sec       18.002 sec        10.242 sec      10.587 sec
>>> App Deliver       4.306 sec        6.379 sec         1.418 sec       1.875 sec
>>> App Filesize     37,429,248       57,409,584        24,719,376      33,460,464
>>> 17 schemata       8.320 sec        7.506 sec         23:49.054       23:44.190
>>>
>>> (App Filesize is in bytes; the two CCL "17 schemata" entries are in
>>> min:sec.msec.)
>>> 
>>> The machine used was
>>>       Dell Inspiron 3558 Laptop
>>>       Intel Core i3 2.1GHz CPU
>>>       4GB Memory
>>> 
>>> As you can see in the table above, CCL 1.11 took over 23 minutes to
>>> process the 17 schema pairs.  Not a good showing.
>>> 
>>> This application does not allocate and deallocate large amounts of
>>> memory, so I have no information about which Lisp handles memory the
>>> best.  None of the Lisps tested ran out of memory.
>>> 
>>> LispWorks and Clozure CL both start with a small amount of memory and
>>> grow and shrink the dynamic space as needed, so I suspect that they
>>> handle memory the best.
>>> 
>>> SBCL needs to be told what its maximum dynamic space size is (via the
>>> --dynamic-space-size command-line option).  It then reserves all of
>>> that space up front.
>>> 
>>> My purpose in posting this message was to give a reference for how
>>> different Lisps support real applications.
>>> 
>>> I was curious about whether CCL's time has improved in successive
>>> releases, so I downloaded CCL 1.10 and 1.9.  I was unable to run 1.9,
>>> but was able to run 1.10.  CCL 1.11 produced a slower executable than
>>> 1.10.
>>> 
>>> Craig Lanning