[Openmcl-devel] Speed, compilers and multi-core processors

Dan Corkill corkill at cs.umass.edu
Thu May 21 11:49:03 PDT 2009


Ranier wrote:

> Also think about blackboards that have some dimensions as matrices
> mapped to the GPU (example of a blackboard system in Lisp:
> http://gbbopen.org/).
>
> As an example see this GBBopen function: map-instances-on-space-instances
>
>   http://gbbopen.org/hyperdoc/ref-map-instances-on-space-instances.html
>
>   'Apply a function once to each unit instance on space instances,
> optionally selected by a retrieval pattern.'
>
> Then see  FIND-INSTANCES:  http://gbbopen.org/hyperdoc/ref-find-instances.html
>
> I could imagine that SOME uses of these functions could be sped up
> a lot by running in parallel on a GPU with, say, 256 compute
> elements.

Indeed.  Beyond nearly uncoupled large-grained computations (such as
DanW has been describing) and obviously vectorizable computations,
providing application-specific higher-order operators that are
implemented to take advantage of these computing elements will become
the norm.  (And CL is so very good at providing application-specific
language constructs!)
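
To make that concrete, here is a minimal sketch (not GBBopen or
OpenMCL code) of one such application-specific higher-order operator:
a parallel map over a vector that partitions the work across a few
worker threads.  It assumes the portable BORDEAUX-THREADS library is
loaded; PMAP-VECTOR and *WORKER-COUNT* are invented names used only
for illustration.

   ;; Sketch of an application-specific higher-order operator that
   ;; splits a vector into chunks and applies FN to each element in a
   ;; separate worker thread (assumes BORDEAUX-THREADS, nickname BT).
   (defparameter *worker-count* 4)

   (defun pmap-vector (fn vector)
     "Apply FN to each element of VECTOR in parallel, for side effects."
     (let* ((len     (length vector))
            (chunk   (max 1 (ceiling len *worker-count*)))
            (threads (loop for start from 0 below len by chunk
                           collect
                           (let ((lo start)
                                 (hi (min len (+ start chunk))))
                             (bt:make-thread
                              (lambda ()
                                (loop for i from lo below hi
                                      do (funcall fn (aref vector i)))))))))
       ;; Block until every worker thread has finished.
       (mapc #'bt:join-thread threads)
       vector))

A toy call such as (pmap-vector #'print (vector 1 2 3 4 5)) exercises
it; the point is that a GBBopen-style operator like
map-instances-on-space-instances could, in principle, hide exactly
this kind of partitioning behind the functional interface callers
already use.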

As for parallel/distributed AI blackboard systems, I'll shamelessly  
plug my old book chapter as a good read:

Design Alternatives for Parallel and Distributed Blackboard Systems,  
Daniel D. Corkill. In V. Jagannathan, Rajendra Dodhiawala, and  
Lawrence S. Baum, editors, Blackboard Architectures and Applications,  
pages 99-136. Academic Press, 1989.

which is on-line at http://dancorkill.home.comcast.net/~dancorkill/pubs/parallel-distributed-chapter.pdf

An important takeaway from this AI work in the late 80s is that using
parallelism to perform concurrent, speculative search was less
advantageous than using finer-grained parallelism to make the
individual steps faster.  This surprised some, but in hindsight it
seems obvious that obtaining information to refine or invalidate
search threads sooner (by having faster operators) would win over a
less-informed shotgun search.  It was fairly easy to write
large-grained, loosely coupled parallel programs, and it was also
fairly easy to write more fine-grained parallel application-specific
operators.  Writing general parallel programs that worked somewhere
between these two extremes was where things became much harder.  With
parallel blackboard systems, using parallelism to make individual
knowledge-source (KS) executions faster was always preferable to
running multiple KS executions concurrently.
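
If it helps to see that contrast as code, here is a toy sketch (not
code from the chapter or from any real blackboard system), reusing
the hypothetical PMAP-VECTOR above.  RUN-KS and MATCH-AND-UPDATE are
invented placeholders for real knowledge-source code.

   ;; Invented stand-ins for real KS code.
   (defun run-ks (ks)
     (format t "~&running KS ~a~%" ks))

   (defun match-and-update (ks instance)
     (declare (ignore ks instance)))  ; placeholder per-instance work

   ;; Coarse grain: launch several KS activations concurrently and
   ;; wait for all of them to finish.
   (defun run-ks-activations-concurrently (activations)
     (mapc #'bt:join-thread
           (mapcar (lambda (ks)
                     (bt:make-thread (lambda () (run-ks ks))))
                   activations)))

   ;; Fine grain: run one KS activation at a time, but parallelize the
   ;; per-instance work inside that single activation with PMAP-VECTOR.
   (defun run-ks-activation-with-parallel-work (ks instances)
     (pmap-vector (lambda (instance) (match-and-update ks instance))
                  instances))

In the terms above, the second shape (faster individual KS
executions) was the consistent winner over the first.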
