[Openmcl-devel] Fun With Data: Historical Benchmarks

Brent Fulgham bfulg at pacbell.net
Mon Oct 16 22:56:00 PDT 2006

On Oct 16, 2006, at 5:42 PM, Gary Byers wrote:

> Sorry for not responding to your original message.
> One of the things that'd make stuff like this (tracking changes to
> compliance, tracking performance between OpenMCL releases) more
> meaningful and useful is if those releases were more frequent; it's
> been over a year since 1.0, 1.1 is nearing feature-completeness
> (some Unicode support!) but will need some testing, and that's
> pretty bad.

Well, I guess you just need some help!

> I'm sure that this'd be useful (thanks), but it sort of goes without
> saying that benchmarking's a black art; I'd tend to trust results
> that're consistently bad (like I/O in 1.0) more than those that appear
> good.  I was just timing something a few minutes ago, and tended to
> get better results when I introduced code that should have slowed
> things down slightly.

I posted the results of my quick run of the benchmark
(http://openmcl.org/openmcl-wiki/HowFastAreWe#preview).  I'll run it
again with OpenMCL 1.1 soon to see where things stand...
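(One way to cut down the run-to-run noise Gary describes is to time a
form several times and keep the fastest run, which is usually closest
to the true cost.  A minimal sketch in Common Lisp; the function name
and trial count here are my own, not anything from the benchmark
suite:)

```lisp
;; Sketch: call THUNK several times and return the shortest
;; wall-clock time in seconds.  TRIALS and MIN-RUN-TIME are
;; illustrative names, not part of any existing benchmark code.
(defun min-run-time (thunk &optional (trials 5))
  "Return the minimum wall-clock time, in seconds, over TRIALS calls."
  (loop repeat trials
        minimize (let ((start (get-internal-real-time)))
                   (funcall thunk)
                   (/ (- (get-internal-real-time) start)
                      internal-time-units-per-second))))

;; Example use:
;;   (min-run-time (lambda () (expt 2 100000)))
```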

> (Just the same, things that we know about - like I/O performance -  
> may have gotten
> gradually worse over time.  If that's true, timing results over  
> time would have
> shown that trend and might have caught the problem earlier, and  
> having that
> sort of thing set up may keep other things from drifting into the  
> realm of the
> very bad.)

Sure.  We all know [Twain/Disraeli/?]'s famous quote.  But it's
mostly sudden changes that are interesting.  Even the funny tests on
http://shootout.alioth.debian.org/ create their own drama, etc.  But
it's useful to see trends, and sometimes they highlight real problems.

At any rate, it gives me an excuse to play with the compiler and  
hopefully it provides some utility (even if only entertainment).  :-)
