[Openmcl-devel] Fun With Data: Historical Benchmarks

Gary Byers gb at clozure.com
Mon Oct 16 17:42:38 PDT 2006


Sorry for not responding to your original message.

One of the things that would make stuff like this (tracking changes to
compliance, tracking performance between OpenMCL releases) more meaningful
and useful is if those releases were more frequent; it's been over a year
since 1.0, 1.1 is nearing feature-completeness (some Unicode support!) but
will need some testing, and that's pretty bad.

On Mon, 16 Oct 2006, Brent Fulgham wrote:

> It turns out this is quite easy to do.  I've run the benchmark compared to SBCL on my G5 iMac (OpenMCL 1.0), and will post the results on the Wiki.
>
> I'll then try it with OpenMCL 1.1 and add the column.  Perhaps over time this will show useful information about system performance and comparisons with other packages.

I'm sure that this'd be useful (thanks), but it sort of goes without
saying that benchmarking's a black art; I'd tend to trust results
that're consistently bad (like I/O in 1.0) more than those that appear
good.  I was just timing something a few minutes ago, and tended to
get better results when I introduced code that should have slowed
things down slightly.

It's tempting to say that when results are bad, you're measuring
compiler stupidity or runtime stupidity or poor choice of
algorithm/data structures and/or other Usual Suspects; when things are
good, you're often measuring cache behavior or OS scheduler latency or
sunspot activity or something else that you may not have much control
over.  Sometimes, increasing the number of test iterations reduces
those effects; other times, it may exaggerate them (e.g., your benchmark
may wind up running entirely out of the cache, and it may be very difficult to
get real-world code to do so.  What does the benchmark tell you in that case?)
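
As an illustration of the iteration-count point (this sketch is mine, not
something from the benchmark suite, and the function name is made up): one
common dodge is to run the form several times and report the minimum as well
as the mean, since the minimum tends to be the least contaminated by
scheduler and cache noise.

(defun time-thunk (thunk &key (iterations 10))
  "Call THUNK ITERATIONS times; return the minimum and mean wall-clock
time in seconds.  The minimum is usually the least noisy number."
  (let ((samples '()))
    (dotimes (i iterations)
      (let ((start (get-internal-real-time)))
        (funcall thunk)
        (push (/ (- (get-internal-real-time) start)
                 internal-time-units-per-second)
              samples)))
    (values (apply #'min samples)
            (/ (reduce #'+ samples) (length samples)))))

;; e.g. (time-thunk (lambda ()
;;                    (let ((v (make-array 100000)))
;;                      (dotimes (i (length v)) (setf (aref v i) (random 1000)))
;;                      (sort v #'<))))

Whether min or mean is the more honest number depends on what you're trying
to learn; the gap between them is itself a hint about how noisy the run was.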

Apple's CHUD metering tools don't yet have support for measuring the
effects of sunspot activity, but they can help to identify cache and
pipeline issues that the implementation may be able to exercise
control over.

(Just the same, things that we know about - like I/O performance - may have gotten
gradually worse over time.  If that's true, timing results over time would have
shown that trend and might have caught the problem earlier, and having that
sort of thing set up may keep other things from drifting into the realm of the
very bad.)
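
Purely as a hypothetical sketch of what "having that sort of thing set up"
might look like (the function name, log format, and file name are all
invented here): append each result, tagged with the implementation version
and a timestamp, to a plain-text log so a gradual regression shows up as a
trend rather than a surprise.

(defun log-benchmark-result (name seconds &key (path "benchmark-history.log"))
  "Append one result line - timestamp, lisp version, benchmark name,
seconds - to PATH so that trends across releases are easy to spot."
  (with-open-file (out path :direction :output
                            :if-exists :append
                            :if-does-not-exist :create)
    (format out "~D ~A ~A ~,6F~%"
            (get-universal-time)
            (lisp-implementation-version)
            name
            seconds)))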

>
> The only other Lisp I have is SBCL.  SBCL (current version) is slightly slower in a few cases, but is generally pretty close.  I did take the step of turning off the Crash Reporter dialog that displays when SBCL runs, though I am not sure if this completely removes any performance penalty caused by this bug in the Crash Reporter.
>
> -Brent


