<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
</head>
<body text="#000000" bgcolor="#ffffff">
<br>
<br>
R. Matthew Emerson wrote:
<blockquote type="cite" cite="mid:C03A16E8-BDB3-461B-A81E-07BE51BCB5B1@clozure.com">
<pre wrap="">On Nov 16, 2010, at 10:16 AM, Jon Anthony wrote:
</pre>
<blockquote type="cite">
<pre wrap="">This is some good information. Thanks for the pointers. But it also
highlights an issue I've thought about from time to time: with modern
processor architectures (especially pipelines, caches, and now cores)
how does one _not_ write naive code for these things? Sure, 90+% of the
worry on this goes to the compiler writers, but it can be easy to
accidentally write something that defeats their efforts.
</pre>
</blockquote>
<pre wrap=""><!---->
On modern x86, I've all but given up. I just write
naive and straightforward code, and assume (or hope) that
the hardware guys have optimized for that. In my experience,
measurements typically show that the difference in execution
time between "clever" and naive code is negligible.
</pre>
</blockquote>
Indeed, it is extremely difficult, if not impossible,<br>
to anticipate the speed of code running on<br>
a modern processor. In the Good Old Days,<br>
we would just count the number of instructions.<br>
These days, what's going on down in that processor<br>
is ridiculously complicated, even before you think<br>
about cache hits and misses. Don't even think<br>
about trying to predict it. Just measure it, and<br>
measure the alternatives.<br>
<blockquote type="cite" cite="mid:C03A16E8-BDB3-461B-A81E-07BE51BCB5B1@clozure.com">
<pre wrap="">
Intel has an optimization guide (you should be able to
find it at <a href="http://www.intel.com/products/processor/manuals/" class="moz-txt-link-freetext">http://www.intel.com/products/processor/manuals/</a>).
</pre>
</blockquote>
Yes, Intel tries to help people, and what they<br>
say is good to know. Intel has an extremely<br>
clear, and actually simple, multiprocessor<br>
memory model (which AMD follows as well).<br>
We had a lecture about it here at ITA from<br>
an Intel architecture guy.<br>
<blockquote type="cite" cite="mid:C03A16E8-BDB3-461B-A81E-07BE51BCB5B1@clozure.com">
<pre wrap="">
Clearly you can win big by writing cache-aware (or at least
virtual memory-aware) code; I remember a fairly recent article in
ACM Queue about this.
<a href="http://queue.acm.org/detail.cfm?id=1814327" class="moz-txt-link-freetext">http://queue.acm.org/detail.cfm?id=1814327</a>
One interesting quotation:
The speed disparity between primary and secondary storage on the Atlas Computer was on the order of 1:1,000. The Atlas drum took 2 milliseconds to deliver a sector; instructions took approximately 2 microseconds to execute. You lost around 1,000 instructions for each VM page fault.
On a modern multi-issue CPU, running at some gigahertz clock frequency, the worst-case loss is almost 10 million instructions per VM page fault. If you are running with a rotating disk, the number is more like 100 million instructions.
</pre>
</blockquote>
Yeah, for high performance systems, dealing with rotating<br>
disks is SO twentieth-century!<br>
<br>
By the way, we use Oracle RAC. :( :( :(<br>
<br>
-- Dan<br>
<br>
<blockquote type="cite" cite="mid:C03A16E8-BDB3-461B-A81E-07BE51BCB5B1@clozure.com">
<pre wrap="">
_______________________________________________
Openmcl-devel mailing list
<a href="mailto:Openmcl-devel@clozure.com" class="moz-txt-link-abbreviated">Openmcl-devel@clozure.com</a>
<a href="http://clozure.com/mailman/listinfo/openmcl-devel" class="moz-txt-link-freetext">http://clozure.com/mailman/listinfo/openmcl-devel</a>
</pre>
</blockquote>
</body>
</html>