[Openmcl-devel] binding with threads

Gary Byers gb at clozure.com
Sun Mar 24 01:13:52 PDT 2013


There's no significant semantic difference between the version of your
code that does a few SETFs of free variables and the version that does
a few lexical bindings.  Because of the way that EVAL works in CCL,
there's at least one practical difference, and that may be significant.
The first version will be processed by a simple interpreter, and the
second will be processed by doing something like:

(funcall (compile nil `(lambda () (let ((e (futures:make-pool-executor 2))) ...))))

The effect of this is that the time interval between the calls to your
functions is at least a little different (it's going to be shorter in
the case where things are compiled).  I have no idea what the rest of
your code is doing, but if it's making assumptions about the order in
which things happen, small changes can invalidate those assumptions.

(As a crude example, consider:

(defparameter *a* 0)
(defparameter *b* 0)

(progn
   (process-run-function "a" (lambda () (incf *a*)))
   (process-run-function "b" (lambda () (incf *b*)))
   (cons *a* *b*))

Whether this returns (0 . 0), (0 . 1), (1 . 0), or (1 . 1) isn't
generally possible to predict; it's not hard to mistakenly think that
it is.)
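
If you actually need a predictable answer, the simplest thing is to
wait for both threads to finish before looking at *a* and *b*.  A
minimal sketch of that, using CCL's semaphore primitives
(MAKE-SEMAPHORE, SIGNAL-SEMAPHORE, WAIT-ON-SEMAPHORE):

(defparameter *a* 0)
(defparameter *b* 0)

(let ((sem-a (make-semaphore))
      (sem-b (make-semaphore)))
  ;; each thread signals its semaphore when its increment is done
  (process-run-function "a" (lambda () (incf *a*) (signal-semaphore sem-a)))
  (process-run-function "b" (lambda () (incf *b*) (signal-semaphore sem-b)))
  ;; block until both threads have signaled
  (wait-on-semaphore sem-a)
  (wait-on-semaphore sem-b)
  (cons *a* *b*))    ; now reliably (1 . 1)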

Threads running on different processors/cores on x86 machines don't
always have an identical view of memory, but the latencies involved
are very small (cycles/nanoseconds) and the x86 isn't as aggressive
about this sort of thing as other architectures are.  If you're depending
on shared memory to synchronize operations between threads (e.g., if you're
implementing your own locks or semaphores), you have to be aware of this; if
your needs are less fine-grained than that, then you can assume that two
threads couldn't have a different view of memory if they wanted to.  (How
would or could they?)
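
In practice, "less fine-grained" usually means leaning on the
primitives CCL already provides rather than trying to synchronize
through raw shared variables.  A minimal sketch using MAKE-LOCK and
WITH-LOCK-GRABBED (both standard CCL primitives; the counter itself is
just an illustration):

(defparameter *counter* 0)
(defparameter *counter-lock* (make-lock "counter-lock"))

(defun bump-counter ()
  ;; WITH-LOCK-GRABBED serializes the increments: no two threads run
  ;; the body at the same time, and whichever thread grabs the lock
  ;; next sees the update made while it was held.
  (with-lock-grabbed (*counter-lock*)
    (incf *counter*)))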



On Thu, 21 Mar 2013, Vijay Mathew wrote:

> Consider this code snippet which demonstrates a "futures" package for
> executing computations asynchronously:
> 
> (setf e (futures:make-pool-executor 2))
> (setf f1 (futures:executor-submit e #'(lambda (x) (* x 2)) 100))
> (setf f2 (futures:executor-submit e #'(lambda (x y) (+ x y)) 200 300))
> (format t "~A~%" (futures:future-result f1))
> (format t "~A~%" (futures:future-result f2))
> (futures:executor-shutdown e nil)
> 
> `pool-executor' internally uses a pool of CCL threads for executing the jobs
> submitted to it. Now, if I rewrite the same sample using `let' bindings, the
> call to `future-result' just hangs. The threads actually run, but it seems
> `future-result' somehow does not see the updated value of the internal
> `result' slot.
> 
> If I load this script, it just hangs the CCL REPL:
> 
> (let ((e (futures:make-pool-executor 2)))
>   (let ((f1 (futures:executor-submit e #'(lambda (x) (* x 2)) 100))
>         (f2 (futures:executor-submit e #'(lambda (x y) (+ x y)) 200 300)))
>     (format t "~A~%" (futures:future-result f1))
>     (format t "~A~%" (futures:future-result f2))
>     (futures:executor-shutdown e nil)))
> 
> Also, I would like to learn about the best way to debug programs that use CCL
> threads.
> 
> I am new to Common Lisp and the kind of support I am getting from this
> mailing list is very encouraging!
> 
> Thank you,
> 
> --Vijay


