[Openmcl-devel] thread pool question

Erik Pearson erik at adaptations.com
Fri Jul 14 04:04:07 PDT 2006


The new approach is working fine. It will take some more tinkering
to determine the optimal number of worker threads for the
application.

I can report, though, that with regular threads I can get about 240
threads per second created and run to completion (each running just a
trivial function), but with the job queue / worker thread design I'm
getting 2500-3000 jobs per second (with 1-10 worker threads).
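
For reference, the comparison was roughly along these lines (just a
sketch, not the actual test harness; the names are illustrative):

;; One fresh thread per job, run to completion:
(defun time-fresh-threads (n)
  (let ((done (ccl:make-semaphore)))
    (dotimes (i n)
      (ccl:process-run-function "job"
                                (lambda ()
                                  ;; the trivial job body goes here
                                  (ccl:signal-semaphore done))))
    ;; wait for all n jobs to signal completion
    (dotimes (i n)
      (ccl:wait-on-semaphore done))))

;; compare, e.g., (time (time-fresh-threads 1000)) against timing
;; 1000 jobs pushed through the worker pool instead.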

Thanks!

Erik.

On Jul 13, 2006, at 1:51 PM, David L. Rager wrote:

> Hi Erik,
>
> Sorry if I confused you, but I'm not sure where the idea for part A
> came from.
>
> Your part B approach sounds right.  As you suggest, I wouldn't make a
> dedicated master thread.
>
> David
>
>> -----Original Message-----
>> From: Erik Pearson [mailto:erik at adaptations.com]
>> Sent: Thursday, July 13, 2006 3:46 PM
>> To: David L. Rager
>> Cc: 'Gary Byers'; openmcl-devel at clozure.com
>> Subject: Re: [Openmcl-devel] thread pool question
>>
>> So you guys are suggesting a system of a controlling thread and
>> multiple worker threads, each with their own "job" queue? If so, what
>> do you see as the advantage of this over a single queue?
>>
>> a. job queue per worker thread
>>
>> A new job (i.e. a function to be run) is handed, via a server thread
>> or at least some sort of server API, to the worker thread that has
>> the fewest jobs queued, or to the first thread with 0 jobs. This
>> would entail polling each worker thread's queue (or at least a size
>> variable), and a lock and a semaphore per worker thread.
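>>
>> Sketched very roughly (the struct and function names here are
>> invented, just to make the bookkeeping concrete):
>>
>> ;; one queue, lock, and semaphore per worker thread
>> (defstruct worker queue lock semaphore)
>>
>> (defun dispatch-job (workers job)
>>   ;; poll each worker's queue length and pick the least loaded
>>   (let ((w (first workers)))
>>     (dolist (other (rest workers))
>>       (when (< (length (worker-queue other))
>>                (length (worker-queue w)))
>>         (setf w other)))
>>     (ccl:with-lock-grabbed ((worker-lock w))
>>       (setf (worker-queue w) (nconc (worker-queue w) (list job))))
>>     (ccl:signal-semaphore (worker-semaphore w))))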
>>
>> b. single shared job queue
>>
>> The job is pushed onto a single queue (while holding a lock), and a
>> single semaphore is incremented. Each worker waits on this
>> semaphore; when it wakes up it tries to grab the next job from the
>> queue, acquiring a lock created in the server thread. There doesn't
>> seem to be any need for a master thread, other than the main thread
>> in which the application is running, since the work of adding a new
>> job to the queue is trivial (er, unless there ends up being a long
>> line of workers waiting on the job-queue lock...)
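>>
>> Concretely, I'm picturing something like this (again just a sketch,
>> with invented names):
>>
>> (defvar *job-queue* '())
>> (defvar *queue-lock* (ccl:make-lock "job-queue"))
>> (defvar *job-semaphore* (ccl:make-semaphore))
>>
>> (defun add-job (fn)
>>   ;; push the job (a function) onto the shared queue while locking
>>   (ccl:with-lock-grabbed (*queue-lock*)
>>     (setf *job-queue* (nconc *job-queue* (list fn))))
>>   (ccl:signal-semaphore *job-semaphore*))
>>
>> (defun worker-loop ()
>>   ;; each worker blocks on the shared semaphore, then grabs a job
>>   (loop
>>     (ccl:wait-on-semaphore *job-semaphore*)
>>     (let ((job (ccl:with-lock-grabbed (*queue-lock*)
>>                  (pop *job-queue*))))
>>       (when job (funcall job)))))
>>
>> ;; start n workers with something like:
>> ;; (dotimes (i n) (ccl:process-run-function "worker" #'worker-loop))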
>>
>> It seems like the queue-per-worker design is a bit more
>> resource-intensive (more locks and more semaphores) and involves
>> more work when a job is added (polling to find a queue to add the
>> job to). The single queue seems simpler in that regard, but it
>> introduces contention for the single job queue, and a bit of
>> randomness about which threads will get access to the queue.
>>
>> Erik.
>>
>>
>> On Jul 13, 2006, at 12:19 PM, David L. Rager wrote:
>>
>>> Hello Erik,
>>>
>>> We have an application that does exactly this: recycles threads.
>>> Well, not formally "recycle", but we prevent the threads from
>>> dying, and then they grab more work when it's available.  If you
>>> want threads to expire after they've waited 60 seconds for work,
>>> you should consider timed-wait-on-semaphore instead of
>>> wait-on-semaphore.
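>>>
>>> For example (a sketch, assuming shared *job-queue*, *queue-lock*,
>>> and *job-semaphore* variables along the lines you describe; the
>>> names are invented):
>>>
>>> (defun expiring-worker-loop ()
>>>   (loop
>>>     ;; wait up to 60 seconds for work to arrive
>>>     (if (ccl:timed-wait-on-semaphore *job-semaphore* 60)
>>>         (let ((job (ccl:with-lock-grabbed (*queue-lock*)
>>>                      (pop *job-queue*))))
>>>           (when job (funcall job)))
>>>         ;; timed out with no work, so let the thread exit
>>>         (return))))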
>>>
>>>>> My first attempt at getting this to work relied on preventing
>>>>> the threads from going dead (exhausted) by resetting them just
>>>>> before the thread's function exits (basically the thread runs an
>>>>> anonymous function whose job is to wrap the target function so
>>>>> that it doesn't exit with errors). I think I found that exhausted
>>>>> threads could sometimes be reused and sometimes not, but the docs
>>>>> say not to do this.
>>>
>>> Sounds a bit complicated.
>>>
>>>>> Or perhaps I'm going about this wrong -- the thread could run a
>>>>> function whose job is just to wait (e.g. on a semaphore) for a
>>>>> function to be provided to it (e.g. by setting some variables or
>>>>> slots), then run that function, and when it is done re-enter the
>>>>> wait loop.
>>>
>>>>
>>>> I think that's what I'd recommend; PRESET/RESET involve some
>>>> handshaking and synchronization.
>>>
>>> I agree with Gary.  This is what we do.  I think you get the
>>> benefit of knowing that each piece of work is large enough to
>>> warrant spawning off.  Therefore, you can have a pretty
>>> straightforward implementation.
>>>
>>> David
>>>
>>>
>
> _______________________________________________
> Openmcl-devel mailing list
> Openmcl-devel at clozure.com
> http://clozure.com/mailman/listinfo/openmcl-devel



