[Openmcl-devel] process-run-function and mach ports usage
gb at clozure.com
Thu Feb 24 17:11:10 PST 2011
Just to clarify: most OSes distinguish between the ideas of "reserving address
space" and "committing resources - like physical memory - to a set of pages."
On Windows, they're separate operations; on Unix-like systems, the mmap system
call (with different sets of options) can reserve, commit, or do both at the
same time.
Actually making those resources (physical pages) available usually happens
lazily: when a committed page is first touched (sometimes, that means
"when it's first written to", other times is means "read from or written
to"), a physical page is allocated.
A system that doesn't overcommit will make sure that N physical pages are
or can be made available when an application asks for N logical pages to
be committed; if it can't guarantee this, the commit operation will fail.
A system that uses an overcommit strategy will allow some "commit N
pages" operations to succeed even if N physical pages aren't
available; it's gambling that enough physical pages will be available
by the time the application starts touching the logical pages. (This
strategy should be familiar to anyone who's written a rent check the
day before payday: if you get to the bank before the landlord does,
great, and if not you were probably going to have to sleep in your car
anyway.)
The analogy (to check fraud) isn't perfect, unfortunately. If you say
? (make-array 1-gazillion)
and you get an error that says "either get serious or get 1 gazillion
bytes more physical memory", that's likely to be more tractable and easier
to recover from than being told that the large array could be created (and
then getting a mysterious memory fault when setting or accessing the
array's contents.)
Confession/disclaimer: until about a year ago, there was a bug in CCL
that caused it to treat some (maybe all) memory commit failures as successes,
and this caused the same sort of symptoms (a fault trying to access memory
that'd supposedly been committed) that an overcommit failure could. I stared
at the buggy code dozens of times without seeing that it was simply and
totally wrong; I finally caught it when the same problems happened on
Solaris (which doesn't overcommit) as well as on systems that do.
On Thu, 24 Feb 2011, Shannon Spires wrote:
> NT did it. I don't know if it was first, but you did say "popularized."
> And at least they provided two different allocation functions; VirtualAlloc() which overcommitted and malloc() which didn't, so you knew what you were getting.
> Jeez. I'm defending NT. Must be losing it.
> On Feb 24, 2011, at 2:23 PM, Tim Bradshaw wrote:
>> On 24 Feb 2011, at 19:55, Gary Byers wrote:
>>> "What OS introduced/popularized the concept of memory overcommit ?"
>> I think it was AIX, but it might have been some earlier IBM OS (mainframe OS).