[Openmcl-devel] Lisp versus JavaScript (was: list - vector)
Joakim Sandgren
joakim at joakimsandgren.com
Tue May 19 01:25:03 PDT 2009
I just wanted to say that I have made a new mailbox in Mail to keep
this kind of answer from Gary. It is like having a Jedi master
answer your own little stupid question, found empirically in the
trial-and-error world, with a full lecture...
I will do some more tries and will come back on this to learn more...
Very Sincerely
Joakim
On 19 May 09, at 04:08, Gary Byers wrote:
>
>
> On Sun, 17 May 2009, Alexander Repenning wrote:
>
>> Computational complexity. Random array access is done in constant
>> time, whereas random list access is done in linear time. If you need
>> random access, then lists are generally a bad idea. Of course, that
>> constant time can vary a lot: SVREF can be pretty fast, but AREF is a
>> notoriously slow access function for multidimensional arrays compared
>> to, say, C.
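>>
>> (A rough way to see that constant-vs-linear difference at the
>> listener; the sizes and iteration counts here are arbitrary:)
>>
>> (let ((l (make-list 1000 :initial-element 1))
>>       (v (make-array 1000 :initial-element 1))
>>       (sum 0))
>>   (time (dotimes (i 100000) (incf sum (nth 999 l))))   ; walks ~1000 conses per access
>>   (time (dotimes (i 100000) (incf sum (svref v 999)))) ; one indexed load per access
>>   sum)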
>
> Among the reasons that SVREF tends to be faster than AREF are:
>
> - it asserts that the array in question is SIMPLE (not displaced,
>   not adjustable, etc.)
>
> - it asserts that the array in question has an element type of T
>   (not BIT, not CHARACTER, not some subtype of FLOAT or INTEGER ...)
>
> A fully safe SVREF can basically validate these assertions (which in
> this case amount to checking that its first argument is a
> SIMPLE-VECTOR) and validate the implicit assertion that its second
> argument is a valid vector index for the first arg, then execute a
> simple LOAD instruction. In most CL implementations,
>
> (defun foo (a i)
>   (svref a i))
>
> (defun bar (a i)
>   (declare (type simple-vector a))
>   (aref a i))
>
> will compile to exactly the same code (and that code will test the
> implicit and explicit assertions about A and I and then do a simple
> LOAD instruction.)
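>
> (One way to check this in a particular implementation is to compare
> the disassembly of the two definitions; DISASSEMBLE is standard CL,
> though its output is implementation-specific:)
>
> (disassemble (compile nil '(lambda (a i) (svref a i))))
> (disassemble (compile nil '(lambda (a i)
>                              (declare (type simple-vector a))
>                              (aref a i))))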
>
> A third case:
>
> (defun baz (a i)
>   (aref a i))
>
> will have to discover (at runtime) whether or not A is some sort of
> vector (since that's all that's being asserted), whether or not it's
> "simple" (and what this means is somewhat implementation-dependent),
> what its element-type is, and what sort of LOAD instruction ("load
> word", "load byte", "load bit", "load signed byte", "load
> single-float", ...) needs to be executed. If BAZ's A argument is in
> fact a SIMPLE-VECTOR, we'll wind up executing the same simple LOAD
> instruction as in the other cases (where that's asserted), but it'll
> take us a while to realize that. ("a while" is probably "a few times
> the cost of the LOAD" in this case.)
>
> If CL had only a single array type (say, something like a
> SIMPLE-VECTOR, but possibly with a single level of displacement to
> support "growing", and with vectors of vectors used to simulate
> multidimensional arrays), then AREF would always be a simple,
> easily-inlined operation whose naive use would offer C-like
> performance. (Unfortunately, we'd all be stuck with nothing
> but Javascript-like arrays in that case.)
>
> The traditional way of trying to optimize things like AREF (when
> nothing much is known about the array) is to cache information
> about the array (outside of a loop, for instance) and transform
> the AREF into something simpler that uses that cached information.
> (I believe that the Lisp Machines did this and called the cache
> an "array register"; Fortran implementations (Fortran arrays tend
> to be much higher-level things than C arrays) often call the cache
> a "dope vector.") Hiding some of the details, the idea would be to
> transform something like:
>
> (let* ((sum 0))
>   (dotimes (i (length a)) (incf sum (aref a i))))
>
> into something like
>
> (let* ((sum 0)
>        (f (%get-1d-aref-function-for a)))
>   (dotimes (i (length a)) (incf sum (funcall f a i))))
>
> If A turns out at runtime to be a SIMPLE-VECTOR, then
> %GET-1D-AREF-FUNCTION-FOR might return something like SVREF; if it's a
> SIMPLE-STRING, then a simple version of SCHAR may be returned (and the
> "function" might be something that's not quite a full-blown lisp
> function, called by something not quite as general as FUNCALL.)
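>
> (A minimal, portable sketch of what such a lookup might do; the name
> %GET-1D-AREF-FUNCTION-FOR above is hypothetical, and a real
> implementation would return something cheaper than these full-blown
> accessor functions:)
>
> (defun get-1d-aref-function-for (a)
>   (typecase a
>     (simple-vector     #'svref)  ; element type T, not displaced/adjustable
>     (simple-string     #'schar)  ; simple character vector
>     (simple-bit-vector #'sbit)   ; simple bit vector
>     (t                 #'aref))) ; fall back to the fully general accessor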
>
> Whether and how much this wins depends on things like what type of
> array A actually turns out to be, how many times the loop actually
> executes, and other things. If it does win (compared to just a naive
> AREF), it's still not likely to produce code that's nearly as good as
> the code you get when the compiler's involved a bit more (e.g., where
> there are declarations about the array's simplicity and element-type.)
>
> This (the cache/dope-vector/array-register idea) is certainly worth
> exploring (as are similar things which remove redundant operations
> from loops.) There are probably limits to how much better you can
> make things when the compiler's not aware of the array's element-type
> (at least.)
>
> The general way in which one makes calls to AREF faster in CL is to
> let the compiler transform those calls into something simpler (SVREF
> or SCHAR or SBIT or %SIMPLE-UNSIGNED-BYTE-16-REF or
> %SIMPLE-DOUBLE-FLOAT-REF or ...), and doing that transformation at
> compile-time (on the basis of declarations and/or inferred type
> information) is preferable to doing it at runtime (and obviously
> better than doing it repeatedly at runtime, in the middle of a loop.)
> CCL does a fairly good job of getting to and sometimes beyond that
> starting point if AREF's first argument is declared to be a
> SIMPLE-ARRAY of specified element-type (and I'd like to be able to do
> some things involving arrays not explicitly declared to be SIMPLE
> arrays in the next version.)
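>
> (For instance, with the array declared to be a SIMPLE-ARRAY of a
> specific element-type, a loop like the following sketch gives the
> compiler enough to pick the specialized access at compile time; the
> function itself is just an illustration:)
>
> (defun sum-doubles (a)
>   (declare (type (simple-array double-float (*)) a)
>            (optimize (speed 3) (safety 1)))
>   (let ((sum 0.0d0))
>     (declare (type double-float sum))
>     (dotimes (i (length a) sum)
>       (incf sum (aref a i)))))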
>
> If you're starting from the assumption that #'CL:AREF (without any
> compile-time assertions) is somehow analogous (performance-wise or
> otherwise) to an array-reference operator in a language with one array
> type, you probably won't get very far very fast; that's simply not
> a reasonable assumption. In my mind, this is the sort of thing that
> every serious CL programmer should know (though I admit to not being
> entirely clear on how this knowledge is to be acquired; I imagine that
> most of what's written about it tends to be excruciatingly
> implementation-dependent.) I don't think that anyone who's programmed
> in CL thinks that #'CL:+ is directly analogous to C's addition
> operator (on explicitly typed operands), and CL programmers somehow
> learn that declaring the types of CL:+'s operands (and in some cases
> the type of the result) can greatly improve performance (especially
> when the type in question is FIXNUM.)
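>
> (A typical example of that pattern; the function name is arbitrary:)
>
> (defun add-counts (x y)
>   (declare (fixnum x y)
>            (optimize (speed 3) (safety 0)))
>   ;; THE asserts that the sum is also a fixnum, so the compiler can
>   ;; skip the generic-arithmetic/overflow-to-bignum path.
>   (the fixnum (+ x y)))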
>
> If people honestly don't realize that there are similar issues with
> AREF, that's certainly surprising.
>
>
>>
>> So far, that did not bug me too much, but recently, thanks to some
>> Apple involvement, even the "slow" scripting languages such as
>> JavaScript have been getting amazingly fast. For instance, I was
>> really quite shocked to see the performance of JavaScript in the game
>> of life here:
>>
>> http://o3d-life.googlecode.com/svn/trunk/life.html
>>
>> You need one of the newest JavaScript engines, e.g. the one in
>> Safari 4, to get the full speed.
>>
>> I wondered if Lisp (CCL) could still match this. I intentionally did
>> not use any declarations or optimizations, to roughly approximate the
>> JavaScript. You can find the result as the "game of life.lisp" XMLisp
>> example here: http://code.google.com/p/xmlisp/
>>
>> At any rate, Lisp had a very hard time matching the JavaScript
>> performance. The COUNT-LIFE function, with its many AREF array
>> accesses, is the main bottleneck. If you replace the call to
>> (count-life Self i j) with (random 9), the frame rate shoots up 4x.
>> That is not surprising, but the fact that JavaScript can roughly
>> match the Lisp performance is.
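>>
>> (A generic sketch of the kind of inner loop involved; the actual
>> COUNT-LIFE in the XMLisp example differs, but the point is the
>> up-to-eight undeclared AREF calls per cell:)
>>
>> (defun count-neighbors (grid i j)
>>   (let ((rows (array-dimension grid 0))
>>         (cols (array-dimension grid 1))
>>         (count 0))
>>     (dotimes (di 3)
>>       (dotimes (dj 3)
>>         (let ((r (+ i di -1))
>>               (c (+ j dj -1)))
>>           (unless (and (= di 1) (= dj 1))        ; skip the cell itself
>>             (when (and (< -1 r rows) (< -1 c cols))
>>               (incf count (aref grid r c)))))))  ; each AREF fully generic here
>>     count))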
>>
>> Alex
>>
>>
>>
>> On May 17, 2009, at 3:42 PM, Joakim Sandgren wrote:
>>
>>> Hi,
>>> It seems to me that the speed relation between lists and
>>> simple-vectors has changed in CCL in comparison with MCL.
>>>
>>> In MCL, the list was always faster until you wanted an element quite
>>> deep into a long list.
>>>
>>> Now, in CCL I did a dotimes test where I took the 100th element of a
>>> list using nth
>>> and the 100th element of a simple-vector using svref.
>>> The vector was much faster.
>>> This was the normal relation in MCL too.
>>>
>>> Then I took the 4th element in the same list (still with nth)
>>> and the 4th element of the same vector with svref
>>> and the vector is still clearly faster!!
>>> This is new to me.
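>>>
>>> (Roughly the kind of test I mean; the sizes and counts here are
>>> just placeholders:)
>>>
>>> (let ((l (make-list 100 :initial-element 1))
>>>       (v (make-array 100 :initial-element 1))
>>>       (sum 0))
>>>   (time (dotimes (i 1000000) (incf sum (nth 3 l))))   ; 4th element via nth
>>>   (time (dotimes (i 1000000) (incf sum (svref v 3)))) ; 4th element via svref
>>>   sum)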
>>>
>>> Should I abandon lists? And start to do everything in vectors?
>>> Sometimes in my project code I do some "heavy" searching, fetching
>>> elements by search criteria in many quite long lists.
>>> And this does take some time sometimes.
>>> Again, should I start to use vectors, in CCL?...
>>>
>>> Sincerely
>>> Joakim
>>>
>>>
>>>
>>>
>>> Joakim Sandgren
>>> joakim sandgren musik
>>> 42, rue de Maubeuge
>>> 75009 Paris
>>> France
>>> +33 (0)1 45 26 43 90
>>> info at joakimsandgren.com
>>> http://www.joakimsandgren.com
>>>
>>
>> Prof. Alexander Repenning
>>
>> University of Colorado
>> Computer Science Department
>> Boulder, CO 80309-430
>>
>> vCard: http://www.cs.colorado.edu/~ralex/AlexanderRepenning.vcf
>>
>>
>
Joakim Sandgren
joakim sandgren musik
42, rue de Maubeuge
75009 Paris
France
+33 (0)1 45 26 43 90
info at joakimsandgren.com
http://www.joakimsandgren.com