[Openmcl-devel] Lisp Comparison

Ron Garret ron at flownet.com
Wed Apr 19 07:39:00 PDT 2017


This is why everyone should use abstract associative maps unless there’s a compelling reason not to :-)

Another CCL efficiency quirk I just remembered: with-output-to-string is veeery sloooow:

Welcome to Clozure Common Lisp Version 1.12-dev-r16804M-trunk  (DarwinX8664)!
...
? (require :ergolib)
...
:ERGOLIB
("ERGOBASE" "GLOBALS" "MAPPERS" "ERGOCLOS" "BINDING-BLOCK" "ITERATORS" "ERGODICT" "ERGOUTILS" "ERGOLIB")

? (time (length (with-output-to-string (s) (dotimes (i 1000000) (princ #\x s)))))
(LENGTH (WITH-OUTPUT-TO-STRING (S) (DOTIMES (I 1000000) (PRINC #\x S))))
took 381,459 microseconds (0.381459 seconds) to run.
       9,019 microseconds (0.009019 seconds, 2.36%) of which was spent in GC.
During that period, and with 4 available CPU cores,
     376,102 microseconds (0.376102 seconds) were spent in user mode
      10,864 microseconds (0.010864 seconds) were spent in system mode
 34,155,760 bytes of memory allocated.
 2,422 minor page faults, 0 major page faults, 0 swaps.
1000000

? (time (length (with-char-collector collect (dotimes (i 1000000) (collect #\x)))))
(LENGTH (WITH-CHAR-COLLECTOR COLLECT (DOTIMES (I 1000000) (COLLECT #\x))))
took 24,073 microseconds (0.024073 seconds) to run.
      1,028 microseconds (0.001028 seconds, 4.27%) of which was spent in GC.
During that period, and with 4 available CPU cores,
     29,126 microseconds (0.029126 seconds) were spent in user mode
        890 microseconds (0.000890 seconds) were spent in system mode
 8,388,960 bytes of memory allocated.
 113 minor page faults, 0 major page faults, 0 swaps.
1000000
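For comparison, the usual fast path here is the adjustable-string trick: push characters onto a fill-pointered string instead of going through the stream machinery. The following is only a sketch of that idea (the macro name is invented, and this is not necessarily how ergolib's with-char-collector is actually implemented):

```lisp
;; A sketch of the adjustable-string approach -- not necessarily how
;; ergolib's WITH-CHAR-COLLECTOR is implemented.
(defmacro with-collected-chars ((collect) &body body)
  "Bind COLLECT to a local function that pushes a character onto an
adjustable string; return the string after BODY runs."
  (let ((buf (gensym "BUF")))
    `(let ((,buf (make-array 0 :element-type 'character
                               :adjustable t :fill-pointer 0)))
       (flet ((,collect (char) (vector-push-extend char ,buf)))
         ,@body
         ,buf))))

;; (length (with-collected-chars (collect)
;;           (dotimes (i 1000000) (collect #\x))))
;; => 1000000
```

The win comes from vector-push-extend amortizing its reallocations, versus a stream write per character.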


On Apr 19, 2017, at 4:24 AM, Andrew Shalit <alms at clozure.com> wrote:

> Does the code add hundreds or thousands of dispatching macro characters? That won’t work very well. I don’t have details at the tip of my keyboard, but I believe they’re just stored in a list.  We ran into this with a client a few years ago, and my memory is that he opted to change his implementation (which was trivial in his case) rather than having us change how the reader stores the set of dispatching macro characters associated with a read table. Changing the way CCL stored the dispatching macro characters also was pretty trivial, but involved some tradeoffs, as I recall.
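For context, dispatching sub-characters are installed with set-dispatch-macro-character; if the readtable keeps those entries in a list, every read of a dispatch form costs a scan proportional to the number of entries. A minimal illustration (the #! reader and its behavior here are invented for the example):

```lisp
;; Install an invented #! dispatch reader on a private copy of the
;; standard readtable.  If the implementation stores sub-character
;; entries in a list, READ must scan that list on every dispatch, so
;; thousands of registrations slow the reader down.
(defun read-hash-bang (stream subchar arg)
  (declare (ignore subchar arg))
  (list :bang (read stream t nil t)))

(defvar *my-readtable* (copy-readtable nil))
(set-dispatch-macro-character #\# #\! #'read-hash-bang *my-readtable*)

;; (let ((*readtable* *my-readtable*))
;;   (read-from-string "#!foo"))
;; => (:BANG FOO)
```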
> 
> 
> 
> 
>> On Apr 19, 2017, at 2:22 AM, David McClain <dbm at refined-audiometrics.com> wrote:
>> 
>> Holy Cow! I took a look at your code repository… Genera!? How old is this thing?
>> 
>> The repository is quite a bit more labyrinthine than I’m accustomed to, but then again, I’m not a typical production programmer. I did finally track down some actual compilable code, not mere build scripts and system definition files. And it appears that this engine is a gigantic (recursive descent?) parser from what looks like XML to some internal AST representation, from which you can translate to whatever form you actually want.
>> 
>> The number of parsing terms is huge. I haven’t found your parser description files yet - the equivalent of YACC and LEX grammars, or BNF files, or something similar. Hopefully you didn’t actually have to craft all these parser terms by hand.
>> 
>> But I am beginning to see the possibility that there may be a ton of backtracking happening on the way to final parser productions. I don’t know your STEP schema language. Is it a reasonable beast, or is it really crufty the way most web stuff is? So this thing may also be churning out garbage memory in prodigious amounts.
>> 
>> I also notice a bunch of stream read-tokens and such, buried in macrolets inside each of those zillion parser terms. So stream handling could be killing you too. I don’t know if you have a front end for buffering up the physical reads from external media or networks. But if backtracking happens during parsing, there will likely also be put-back of tokens onto the input streams, however you manage that.
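Token put-back of the kind described is usually just a small pushdown layer in front of the lexer. A hedged sketch, with all names invented:

```lisp
;; Sketch of a token source with put-back.  NEXT-TOKEN pops any
;; pushed-back token before asking the underlying reader for a fresh one.
(defstruct token-source
  (read-fn (lambda () nil) :type function) ; thunk producing the next raw token
  (pushback '()))                          ; stack of put-back tokens

(defun next-token (ts)
  (if (token-source-pushback ts)
      (pop (token-source-pushback ts))
      (funcall (token-source-read-fn ts))))

(defun unread-token (ts token)
  (push token (token-source-pushback ts)))
```

A backtracking parser unreads the tokens it consumed on a failed production; since each put-back conses a fresh list cell, heavy backtracking through a layer like this generates garbage in proportion to the tokens retried.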
>> 
>> The style of parsing you choose will have a big impact on performance here: shift/reduce or recursive descent, what kind of grammar you are operating in (e.g. LALR, LR(1), LL, etc.), and then the decision to run deterministically or nondeterministically. Yikes!
>> 
>> So do you enjoy this kind of programming? I would certainly hope so.
>> 
>> - DM
>> 
>> 
>>> On Apr 18, 2017, at 14:44, Craig Lanning <craig.t.lanning at gmail.com> wrote:
>>> 
>>> I have been working on a rather large application that runs from the
>>> command line.  It reads schema files that are part of the ISO 10303
>>> STEP family of product data standards.  It can also read the
>>> corresponding Product Data Population files.
>>> 
>>> Recently someone gave me a script that runs a schema comparison (using
>>> my application) across several schemata.  (In this case, the script
>>> processed 17 schema pairs.) The act of processing one pair of schema
>>> files will mean that additional schema files are pulled from a special
>>> repository.  The schema files being processed "include" other schema
>>> files which also may "include" even more schema files.
>>> 
>>> I can build my application using LispWorks CL 6.1.1, Clozure CL v1.11,
>>> and SBCL 1.3.16.
>>> 
>>> I can build with LispWorks CL 6.1.1 in 32-bit only.
>>> 
>>> I can build with Clozure CL 1.11 in both 32-bit and 64-bit.
>>> 
>>> I can build with SBCL 1.3.16 in 64-bit only.  (No easy way to get both
>>> the 32-bit and 64-bit versions at the same time.)
>>> 
>>> The source code for my application is stored on SourceForge
>>> (http://exp-engine.sourceforge.net/) as the original development was
>>> intended to be an open source project.
>>> 
>>>              LWL 6.1.1(32)   SBCL 1.3.16(64)   CCL 1.11(32)   CCL 1.11(64)
>>> App Compile    10.323 sec      18.002 sec        10.242 sec     10.587 sec
>>> App Deliver     4.306 sec       6.379 sec         1.418 sec      1.875 sec
>>> App Filesize   37,429,248      57,409,584        24,719,376     33,460,464
>>> 17 schemata     8.320 sec       7.506 sec         23:49.054 m:s  23:44.190 m:s
>>> 
>>> The machine used was
>>>       Dell Inspiron 3558 Laptop
>>>       Intel Core i3 2.1GHz CPU
>>>       4GB Memory
>>> 
>>> As you can see in the chart above, CCL 1.11 took over 23 minutes to
>>> process the 17 schema pairs.  Not a good showing.
>>> 
>>> This application does not allocate and deallocate large amounts of
>>> memory, so I have no information about which Lisp handles memory the
>>> best.  None of the Lisps tested ran out of memory.
>>> 
>>> LispWorks and Clozure CL both start with a small amount of memory and
>>> grow and shrink the dynamic space as needed, so I suspect that they
>>> handle memory the best.
>>> 
>>> SBCL needs to be told what its maximum dynamic space size is.  It then
>>> allocates all of that memory.
>>> 
>>> My purpose in posting this message was to give a reference for how
>>> different Lisps support real applications.
>>> 
>>> I was curious about whether CCL's time has improved in successive
>>> releases so I downloaded CCL 1.10 and 1.9.  I was unable to run 1.9,
>>> but was able to run 1.10.  1.11 produced a slower executable than 1.10.
>>> 
>>> Craig Lanning
>>> _______________________________________________
>>> Openmcl-devel mailing list
>>> Openmcl-devel at clozure.com
>>> https://lists.clozure.com/mailman/listinfo/openmcl-devel
>> 
> 



