[Openmcl-devel] Fine-grained locking

Gary Byers gb at clozure.com
Wed Mar 18 21:50:39 PDT 2009



On Tue, 17 Mar 2009, Ian Eslick wrote:

> I'm considering re-implementing aspects of the Elephant object
> persistence solution to move much of its functionality natively into
> lisp.  The current persistence method leverages the Berkeley DB
> locking library to control concurrent access to basic blocks in a
> binary heap.
>
> Clozure is my performance target and I would like some input on the
> relative costs of two approaches to concurrency: fine-grained locking
> and copy-on-write.
>
> The relative cost of these two approaches is highly dependent on
> access patterns and conflicts, of course, but I am interested in the
> developer's opinions on the relative low-level costs of grabbing and
> releasing locks on a per-object basis vs. copying objects and in
> particular how to perform highly efficient locking patterns.  Also,
> has anyone on the team looked closely at an implementation of a
> persistence solution on ClozureCL?
>
> Thank you,
> Ian
>

I actually think that locking overhead in CCL is fairly low, but as
you say, everything's relative.

Gail implemented the lock-free hash tables that're in 1.3 (RC1 ...)
because she noted that locking and unlocking a hash table around a
call to GETHASH (on an EQ hash table that was sparsely populated)
accounted for a very high percentage of the total execution time.
(The locking/unlocking overhead was 3-4X more costly than the fairly
simple work that GETHASH did in that case: some simple arithmetic, a
couple of SVREF-like probes, a comparison.)  If we guess and say that
this
simple collision-free, contention-free path through GETHASH involves
executing 25 instructions (probably more, but let's pretend), then the
locking/unlocking might be on the order of 100.  For a fairly typical
usage scenario - where the hash table's read frequently by many
threads and only occasionally written to - that locking overhead was a
high price to pay to ensure consistency.  I think that her changes
were (a) possibly a little paranoia in GETHASH to ensure that whatever
values it returned were sane, though there may not have been much of
that and (b) some additional complexity in the case of things like
(SETF GETHASH) and REMHASH, where some of that complexity involved
using atomic memory primitives (STORE-CONDITIONAL and that sort of
thing.)  Those atomic primitives are themselves much more expensive
than a simple SETF would be, but if writes to the hash table are
relatively rare, getting rid of the locking overhead (and paying
occasionally higher cost to write to the hash table) can be a big win.

The locks that had this overhead were what CCL calls READ-WRITE-LOCKs
(multiple-reader, single writer.)  If obtaining/releasing one of
them in the non-contended case has a cost of 100 (relative to a
simple GETHASH's cost of 25), then the cost of obtaining/releasing
a simple exclusive lock (a RECURSIVE-LOCK in CCL) is probably
around 40 or so.  If there's actual contention from multiple
reader threads, then avoiding the cost of blocking by doing the
extra bookkeeping wins.  If there's no contention, the cost
of doing that bookkeeping is obviously greater than the cost
of not doing it.

All that said, I think that CCL's locks are actually not -that-
expensive to use; they only involve the OS if there's some kind
of contention going on.  (On some PPC32 versions of OSX, the
"pthread_self()" function - which returns an identifier for
the current thread and is sort of the first step taken by
any C library locking primitive - used to involve a system
call.)  Whether their overhead is significant or not depends
on the context; in the context of a simple hash lookup, it was
very significant (3-4X); in the context of a database access,
it may be much less significant.

Anyway ... I think it's fair to say that locking/unlocking a
READ-WRITE-LOCK is a few times more expensive than the simple case of
GETHASH, and that locking/unlocking a RECURSIVE-LOCK is probably a
little less than 2X the cost of that simple GETHASH call.
