[Openmcl-devel] Undocumented heap size limit?
Bill St. Clair
billstclair at rayservers.net
Mon Jul 26 13:21:02 PDT 2010
We've got a live server that spends its time pulling a bunch of XML
feeds and parsing them into an in-memory "database". It also uses
Weblocks to serve a web site built from the feed data. It runs nicely
for periods of at least a few days, but it still has a few memory
leaks that require it to be restarted occasionally.
We have it set up with the default GC thresholds. It's 64-bit, so it
reserves 512 gigs of virtual memory from the OS. It fairly quickly fills
up to about 160 megs of lisp heap, at which point it does a full GC
about every 40 seconds, taking 1/3 to 2/3 of a second for that GC.
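For reference, this is roughly how we'd inspect and raise the full-GC threshold in CCL if tuning it turns out to be the answer; the function names are from the CCL manual, but the 256 MB figure is only an illustrative value, not something we've tried:

```lisp
;; Query the current full-GC threshold: the number of bytes
;; that can be allocated after a full GC before the next
;; full GC is triggered.
(ccl:lisp-heap-gc-threshold)

;; Raise the threshold so full GCs happen less often, at the
;; cost of a larger heap between collections.  256 MB here is
;; just a hypothetical example.
(ccl:set-lisp-heap-gc-threshold (* 256 1024 1024))

;; Grow the heap to honor the new threshold immediately,
;; rather than waiting for the next full GC.
(ccl:use-lisp-heap-gc-threshold)
```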
This morning we discovered it with a heap size of about 1.6 gigs. It was
spending most of its time in the GC, taking 6 to 12 seconds in each full
GC to recover about 40 megs.
Are we running up against some undocumented size limit, or is it more
likely that a full GC of a heap that big simply takes that long? If
it's the latter, then to avoid spiraling into that black hole we'll
need to increase our ephemeral generation sizes (we're currently using
the defaults), and/or eliminate our memory leaks and make the code
cons less.
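In case it helps anyone suggest concrete numbers, these are the EGC knobs I understand us to have available, per the CCL manual; the sizes shown are hypothetical, not our settings:

```lisp
;; Report the current sizes (in kilobytes) of the three
;; ephemeral generations; returns three values.
(ccl:egc-configuration)

;; Enlarge the ephemeral generations so more short-lived
;; garbage dies young, instead of being promoted and only
;; reclaimed by an expensive full GC.  Sizes are in KB;
;; these particular numbers are just an example.
(ccl:configure-egc 8192 16384 32768)

;; Have the GC log each collection, so we can see how often
;; full GCs run and how much memory each one recovers.
(ccl:gc-verbose t t)
```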
-Bill