[Openmcl-devel] just when you thought it was safe to go back in the CCL trunk

Gary Byers gb at clozure.com
Mon Sep 14 18:30:59 PDT 2015


The idiom may be something like

(defun foo (a i j k)
  (declare (type (simple-array (unsigned-byte 64) (*)) a))
  (setf (aref a k) (the (unsigned-byte 64) (+ (aref a i) (aref a j)))))

It involves lying a bit, since the sum of two (UNSIGNED-BYTE 64)s is an
(UNSIGNED-BYTE 65) in general. I think it would be a relatively simple and
localized change to believe the lie in this case, but AFAIK that doesn't
happen yet, and it may need to happen here.
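
Spelled out so that the truncation back to 64 bits is explicit, the idiom
might look something like the sketch below. FOO-WRAP is just a made-up name,
and whether the compiler currently turns the LDB form into a single unboxed
add is a separate question; treat this as a sketch of the idiom, not a
promise about what gets generated.

(defun foo-wrap (a i j k)
  (declare (type (simple-array (unsigned-byte 64) (*)) a)
           (type fixnum i j k)
           (optimize (speed 3) (safety 0)))
  ;; LDB of (BYTE 64 0) keeps only the low 64 bits of the possibly
  ;; 65-bit sum, i.e. addition mod 2^64.
  (setf (aref a k)
        (ldb (byte 64 0) (+ (aref a i) (aref a j)))))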

There may be other issues involved here.




On 09/14/2015 06:16 PM, Ron Garret wrote:
> On Sep 14, 2015, at 2:09 PM, Gary Byers <gb at clozure.com> wrote:
>
>> Here is the threatened documentation.
> Thanks Gary, that’s actually very interesting reading!  (At least it was to me, but I guess that makes me a pretty hard-core geek.)
>
> As long as the compiler is on the table, I have a question that I’ve been meaning to ask for a long time but never quite found the right moment.  I’m not sure this is the right moment, but here goes anyway…
>
> I’m working on cryptography nowadays, and that involves doing a lot of operations on quantities that are constrained to be 32 or 64 bits wide, and it is important that those operations happen in constant time in order not to leak secrets through timing and other side-channel attacks.
>
> Straightforward Lisp code is not suited for this because some N-bit values on an N-bit machine are fixnums and others are bignums.  But there is a way to express the desired semantics cleanly, and that is to declare things as signed-byte and unsigned-byte values.  In particular, vectors whose element types are (un/signed-byte 32) or (un/signed-byte 64) can hold 32-bit and 64-bit values cleanly with no boxing.  In principle, this should allow Lisp code operating on such values and putting the results back in such places to be as fast as C code.
>
> But there is a fly in the ointment: Common Lisp does not include modulo 2^N arithmetic as primitives.  You can only do mod 2^N math as an idiom, i.e. (setf (some-place-that-has-been-declared-to-be-of-type (un/signed-byte 32)) (mod (op v1 v2) #.(ash 1 32))).  It is asking an awful lot of the compiler to recognize such idioms as optimizable into one or two machine instructions.  And yet, it would be so tremendously useful, not just for crypto, but for image processing as well.
>
> How hard would it be to tweak the compiler so that it could make this optimization?  Specifically, how feasible would it be for the compiler to recognize the above as a special case and optimize it to (more or less) what a C compiler would produce?  AFAICT it shouldn’t be too difficult, and if it were done, I think it would let CCL kick some serious benchmarking tushie.
>
> Is such a thing feasible?
>
> rg
>



