[Openmcl-devel] Octal-core Mac Pro

Phil pbpublist at comcast.net
Mon Sep 18 04:26:16 PDT 2006


On Sep 18, 2006, at 4:45 AM, Lawrence E. Bakst wrote:

> If you had a choice of 2 x 2 GHz cores or 4 x 1 GHz cores on a
> MacBook Pro, which would you order, assuming the same price?  I
> suspect most folks would order the machine with the faster cores.

I'd go for more cores, but I suspect you are right: currently, most
people would probably go for the faster cores, as that is what
marketing has taught them is important.  But that will be a short-term
effect, as I suspect a growing number of users will appreciate what
multi-core gets them (without changes in their apps... excluding
things like games and media-authoring apps, which need to change
sooner rather than later) until they get to 4 or even 8 cores on a
mainstream system.  Then, yes, some apps (or at the very least, the OS
and the services it provides) will need to change significantly to
realize further benefits.

> A fundamental change in the nature of programming is occurring; the
> programmer now has easy access to concurrently executing threads.
> What will we do with them, and what help will we get from our
> favorite programming languages?

The fundamental rethinking of programming being predicted (and I've
seen this from several sources) likely overstates the situation a bit
for the general case, as I've lived through the never-ending
address-the-scarce-resource battle for longer than I care to recall.
While there are several significant app categories that need to be
reworked around the reality of multi-processing for the masses, much
of the code written (still serial in nature... for many types of
computation this is unavoidable) will continue along blissfully
ignorant of what is going on.  As an example, 10-15 years ago
multi-tasking went mainstream on personal computers (funnily enough,
at the same time the server room was going SMP), with apps doing
nothing more than dropping the assumption that they owned the system
and playing nice with the OS... little to no code was written to take
advantage of the fact that processing was now a logical abstraction.
This has allowed the OS to add value by handling tasks like process
scheduling across multiple CPUs, without the apps being aware of what
is going on or that multiple processors might even exist.
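
To put that in OpenMCL terms, here's a minimal sketch of that logical
abstraction: the app just spawns threads and leaves placement to the
runtime and the OS.  (CCL:PROCESS-RUN-FUNCTION is the real OpenMCL
thread-creation call; SPAWN-WORKERS is just my own illustrative
wrapper.)

;; The app treats processing as a logical abstraction; the Lisp
;; runtime and the OS decide where each thread actually runs.
(defun spawn-workers (n work-fn)
  "Start N threads, each running WORK-FN; return the processes."
  (loop for i from 0 below n
        collect (ccl:process-run-function
                 (format nil "worker-~d" i)
                 work-fn)))

;; The same call works unchanged on one core or eight:
;; (spawn-workers 4 (lambda () (loop repeat 1000000 sum 1)))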

So now we actually have physical cores to host said processes, which
means less gear-grinding by the OS and improved responsiveness for the
user.  Not terribly different from the increases in physical memory
that have made VM paging more the exception than the rule today: a
previously scarce resource becomes a commodity, and many apps benefit
by doing nothing in the short term and by no longer treating it as
scarce (i.e. no longer worrying about spawning more threads/processes)
in the long term.  There is a point of diminishing returns at which
multiple cores alone will not provide additional benefit, even for the
largest, most scalable, complex conventional apps currently conceived,
and, as has been seen over and over... something else will become the
new bottleneck.
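
That point of diminishing returns is just Amdahl's law: if a fraction
s of the work is inherently serial, n cores can never buy you more
than a 1/s speedup.  A quick back-of-the-envelope (my own toy
function, nothing OpenMCL-specific):

;; Amdahl's law: best-case speedup on N cores when a fraction S
;; of the work is serial.
(defun amdahl-speedup (s n)
  (/ 1 (+ s (/ (- 1 s) n))))

;; Even at only 10% serial work, the gains flatten fast:
;; (amdahl-speedup 0.1 2)   => ~1.82
;; (amdahl-speedup 0.1 8)   => ~4.71
;; (amdahl-speedup 0.1 128) => ~9.34   (the hard ceiling is 10x)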

> Everyone should start thinking about what it means to write an
> application that can make use of, say, 128-256 simultaneously
> executing threads.  If you were writing an editor like emacs today,
> it isn't enough to decide the best way to decompose the problem
> into objects and methods, but rather how to control (or maybe
> trigger is a better word) the execution of as many useful
> simultaneous methods as possible while synchronizing and
> coordinating everything.  That's hard to do.  The decomposition into
> classes and methods is mostly static, but threads can be dynamic as
> well.  Also to be considered is how much thread creation and
> destruction happens.  Thread creation could be all static or almost
> all dynamic.  The paradigm bar must be raised or programmers are
> going to drown.

Why do you think this is necessary?  Using your emacs example
(setting aside the fact that it's largely single-threaded, which is a
problem), what does making emacs use 100+ cores get you that a
handful of processes wouldn't?  Don't you just create the inverse of
the context-switching problem uni-processors had (i.e. now you're
creating overhead to ensure you're using every available processor,
for an unclear benefit)?  At least, that's what I'm inferring from
the massive paradigm shift you seem to be pointing at.

I'm particularly interested given my recent questions re: SIMD
support.  I'd much rather tell a system service that I need to
perform an operation across multiple vectors and let it determine
whether it would best be run on a single processor, using SIMD on a
single processor, across multiple processors, etc., in the same way I
can (mostly) let Lisp handle memory allocation and release.  I think
multi-core is a great thing and long overdue for the mainstream.
However, the concept has been around for quite a long time on the
server side (including the scale you are talking about, à la Thinking
Machines 20 years ago) and is just another resource to be allocated
once your key apps/services (i.e. DB, web server, whatever) support
it appropriately.  No radical paradigm shift for most
applications/developers, though; just another tool in the box.
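
To make the vector-service idea concrete, here's a toy sketch of the
kind of interface I mean.  Everything below is hypothetical --
MAP-VECTORS, MAP-CHUNK, and *PARALLEL-THRESHOLD* are invented names,
not an existing OpenMCL API; only the CCL thread and semaphore calls
are real:

;; The caller states the operation; the service picks the strategy
;; (serial here for small inputs, threaded for large ones; a real
;; service might also choose SIMD).
(defparameter *parallel-threshold* 100000
  "Below this many elements, don't bother spawning threads.")

(defun map-chunk (fn result a b start end)
  "Serially apply FN over indices START..END-1."
  (loop for i from start below end
        do (setf (aref result i)
                 (funcall fn (aref a i) (aref b i)))))

(defun map-vectors (fn result a b &key (workers 4))
  "Apply FN elementwise over vectors A and B into RESULT."
  (let ((n (length a)))
    (if (< n *parallel-threshold*)
        (map-chunk fn result a b 0 n)       ; small job: stay serial
        (let ((done (ccl:make-semaphore))   ; large job: fan out
              (chunk (ceiling n workers)))
          (dotimes (w workers)
            (let ((start (min (* w chunk) n)))
              (ccl:process-run-function
               (format nil "vec-worker-~d" w)
               (lambda ()
                 (map-chunk fn result a b start
                            (min (+ start chunk) n))
                 (ccl:signal-semaphore done)))))
          ;; Wait for every worker to check in.
          (dotimes (w workers)
            (ccl:wait-on-semaphore done)))))
  result)

;; Usage: (map-vectors #'+ out v1 v2) -- the caller never says how.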

Phil
