Skybuck Flying
As the number of cores goes up, the watt requirements go up too?
Will we need a zillion watts of power soon?
Bye,
Skybuck.
Skybuck said:As the number of cores goes up, the watt requirements go up too?
Will we need a zillion watts of power soon?
Bye,
Skybuck.
John Larkin said:Not necessarily, if the technology progresses and the clock rates are
kept reasonable. And one can always throttle down the CPUs that aren't
busy.
I saw suggestions of something like 60 cores, 240 threads in the
reasonable future.
This has got to affect OS design.
Chris said:I can see it now... A mega-core GPU chip that can dedicate 1 core
per-pixel.
Since the ATI Radeon™ HD 4800 series has 800 cores, you work it out.
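A rough back-of-the-envelope for the core-per-pixel idea (purely illustrative numbers, assuming a 1920 x 1200 display): 1920 * 1200 = 2,304,000 pixels, so a literal core-per-pixel chip needs about 2.3 million cores; with the 800 stream processors in the HD 4800, each "core" ends up covering 2,304,000 / 800 = 2,880 pixels.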
John Larkin said:Why? All it has to do is grant run permissions and look at the big
picture. It certainly wouldn't do I/O or networking or file
management. If memory allocation becomes a burden, it can set up four
(or fourteen) memory-allocation cores and let them do the crunching.
Using multicore properly will require undoing about 60 years of
thinking, 60 years of believing that CPUs are expensive.
Why multi-thread *anything* when hundreds or thousands of CPUs are
available?
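A minimal sketch of the dedicated memory-allocation-core idea, assuming plain POSIX threads and a hypothetical single-slot mailbox (names like remote_malloc are made up for illustration): one thread owns the heap and services allocation requests, so the other cores never touch the allocator's internals themselves.

/* Sketch: one thread acts as a dedicated "memory-allocation core".
   Callers post a request and wait; the allocator thread does the malloc.
   Hypothetical single-slot mailbox, just to show the division of labor. */
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>

static pthread_mutex_t mbox_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  mbox_cv   = PTHREAD_COND_INITIALIZER;
static size_t req_size;            /* 0 means "no pending request" */
static void  *reply;
static int    done;

static void *alloc_core(void *arg)   /* runs forever on its own core */
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&mbox_lock);
        while (req_size == 0)
            pthread_cond_wait(&mbox_cv, &mbox_lock);
        reply = malloc(req_size);    /* the only thread touching the heap */
        req_size = 0;
        done = 1;
        pthread_cond_broadcast(&mbox_cv);
        pthread_mutex_unlock(&mbox_lock);
    }
    return NULL;
}

static void *remote_malloc(size_t n) /* called from any other core */
{
    void *p;
    pthread_mutex_lock(&mbox_lock);
    req_size = n;
    done = 0;
    pthread_cond_broadcast(&mbox_cv);
    while (!done)
        pthread_cond_wait(&mbox_cv, &mbox_lock);
    p = reply;
    pthread_mutex_unlock(&mbox_lock);
    return p;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, alloc_core, NULL);
    char *buf = remote_malloc(64);
    snprintf(buf, 64, "allocated by the allocation core");
    puts(buf);
    return 0;
}

A real design would pin alloc_core to its own CPU and feed it through a lock-free request queue; the sketch only shows the shape of the hand-off.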
Chris M. Thomasson said:[...] Using multicore properly will require undoing about 60 years of
thinking, 60 years of believing that CPUs are expensive.
The bottleneck is the cache-coherency system.
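One concrete way that bottleneck shows up is cache-line ping-pong between cores. A small sketch, assuming POSIX threads and a 64-byte line size: two threads increment completely independent counters, but because the counters share a cache line, every write forces the coherency protocol to shuttle that line between cores.

/* Sketch of cache-line ping-pong ("false sharing"): each thread bumps its
   own counter, but with both counters in one cache line the coherency
   traffic dominates.  The 64-byte line size is an assumption. */
#include <pthread.h>
#include <stdio.h>

#define ITERS 100000000UL

struct counters {
    volatile unsigned long a;
    /* char pad[64]; */   /* uncomment to push b onto its own cache line */
    volatile unsigned long b;
};

static struct counters c;

static void *bump_a(void *arg) { (void)arg; for (unsigned long i = 0; i < ITERS; i++) c.a++; return NULL; }
static void *bump_b(void *arg) { (void)arg; for (unsigned long i = 0; i < ITERS; i++) c.b++; return NULL; }

int main(void)
{
    pthread_t ta, tb;
    pthread_create(&ta, NULL, bump_a, NULL);
    pthread_create(&tb, NULL, bump_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    printf("a=%lu b=%lu\n", c.a, c.b);
    return 0;
}

Timing the run with and without the padding typically shows a several-fold difference on a multi-core x86, even though the two threads never touch each other's data.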
Then stop thinking about optimum speed. Start thinking about a
computer system that doesn't crash, can't get viruses or trojans, is
easy to understand and use, that not even a rogue device driver can
bring down.
Think about how to manage a chip with 1024 CPUs. Hurry, because it
will be reality soon. We have two choices: make existing OS's
unspeakably more tangled, or start over and do something simple.
Speed will be a side effect, almost by accident.
NV55 said:Each of the 800 "cores", which are simple stream processors, in
ATI RV770 (Radeon 4800 series) is not comparable to the 16, 24, 32 or 48
cores that will be in Larrabee. Just like they're not comparable to
the 240 "cores" in Nvidia GeForce GTX 280. Though I'm not saying
you didn't realize that, just for those that might not have.
"General purpose" GPU's are not really general purpose, but theyTrue, but they seem to be positioning Larrabee in the same tech segment
as video cards. Which makes sense since a SIMD system is the easiest to
program. If they want N general purpose cores doing general purpose
computing the whole thing will bog down somewhere between 16 and 32. A
lot of the R&D theory was done 30+ years ago.
Maybe they will try something radical, like an ancient data flow
architecture, but I doubt it.
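The bog-down between 16 and 32 general-purpose cores is roughly what Amdahl's law predicts once any serial fraction remains. As a purely illustrative number, assume 5% of the work is serial: speedup(N) = 1 / (s + (1-s)/N), so speedup(16) = 1 / (0.05 + 0.95/16) ~= 9.1, speedup(32) = 1 / (0.05 + 0.95/32) ~= 12.5, and no number of cores ever gets past 1 / 0.05 = 20.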
Nick said:In theory, the kernel doesn't have to do I/O or networking, but
have you ever used a system where they were outside it? I have.
Actually, doing I/O or networking in a "main" CPU is a waste of resources. Any
sane architecture (CDC 6600, mainframes) has a bunch of multi-threaded IO
processors, which you program so that the main CPU has little effort to
deal with IO.
This works well even when you do virtualization. The main CPU sends a
pointer to an IO processor program ("high-level abstraction", not the
device driver details) to the IO processor, which in turn runs the device
driver to get the data in or out. In a VM, the VM monitor has to
sanity-check the command and maybe rewrite it ("don't write to track 3 of
disk 5, write it to the 16 sectors starting at sector 8819834 in disk 1,
which is where the virtual volume of this VM sits").
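A sketch of what that hand-off might look like, using the numbers from the example above (the structure and function names are hypothetical): the main CPU builds a high-level command block, the VM monitor rewrites the virtual disk address, and only the I/O processor ever touches the device.

/* Sketch of a high-level I/O command block handed from the main CPU to an
   I/O processor, with the VM monitor remapping a virtual-volume write to
   its place on a physical disk.  All structures and numbers are
   illustrative, not a real interface. */
#include <stdint.h>
#include <stdio.h>

struct io_cmd {
    uint32_t disk;        /* which disk/volume */
    uint64_t sector;      /* starting sector */
    uint32_t nsectors;    /* transfer length */
    int      write;       /* 1 = write, 0 = read */
    void    *buffer;      /* guest data buffer */
};

/* VM monitor: this VM's virtual volume lives at a fixed offset on disk 1. */
static void vmm_translate(struct io_cmd *cmd)
{
    const uint64_t vm_base_sector = 8819834;  /* where the virtual volume sits */
    cmd->sector += vm_base_sector;
    cmd->disk = 1;
}

/* Stand-in for ringing the I/O processor's doorbell with a command pointer. */
static void iop_submit(const struct io_cmd *cmd)
{
    printf("IOP: %s %u sectors at %llu on disk %u\n",
           cmd->write ? "write" : "read", (unsigned)cmd->nsectors,
           (unsigned long long)cmd->sector, (unsigned)cmd->disk);
}

int main(void)
{
    char data[512 * 16];
    struct io_cmd cmd = { .disk = 5, .sector = 0, .nsectors = 16,
                          .write = 1, .buffer = data };
    vmm_translate(&cmd);   /* "don't write to disk 5, write to disk 1 at 8819834" */
    iop_submit(&cmd);
    return 0;
}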
The fact that in PCs the main CPU is doing IO (even down to the level of
writing to individual IO ports) is a consequence of saving CPUs - no money
for an IO processor, the 8088 can do that itself just fine. Why we'll soon
have 32 x86 cores, but still no IO processor is beyond what I can
understand.
Basically all IO in a modern PC is sending fixed- or variable-sized packets
over some sort of network - via SATA/SCSI, via USB, Firewire, or Ethernet,
etc.
That's the IBM "channel controller" concept: add complex, specialized
DMA-based I/O controllers to take the load off the CPU. But if you
have hundreds of CPUs, the strategy changes.
John
I think the trend is to have the cores surround a common shared cache;
a little local memory (and cache, if the local memory is slower for
some reason) per CPU wouldn't hurt.
Cache coherency is simple if you don't insist on flat-out maximum
performance. What we should insist on is flat-out unbreakable systems,
and buy better silicon to get the performance back if we need it.
I'm reading Showstopper!, the story of the development of NT. It's a
great example of why we need a different way of thinking about OS's.
Silicon is going to make that happen, finally free us of the tyranny
of CPU-as-precious-resource. A lot of programmers aren't going to like
this.
John
John Lennon:
'You know I am a dreamer'
....
' And I hope you join us someday'
(well what I remember of it).
You should REALLY try to program a Cell processor some day.
Dunno what you have against programmers; there are programmers who
are amazingly clever with hardware resources.
I dunno about NT and MS, but IIRC MS plucked programmers from
unis, and sort of brainwashed them... the result we all know.
John said:For small N this can be made to work very nicely.
John said:On Thu, 7 Aug 2008 07:44:19 -0700, "Chris M. Thomasson" wrote in
message [...]
Using multicore properly will require undoing about 60 years of
thinking, 60 years of believing that CPUs are expensive.
The bottleneck is the cache-coherency system.
I meant to say:
/One/ bottleneck is the cache-coherency system.
John Larkin said:I think the trend is to have the cores surround a common shared cache;
a little local memory (and cache, if the local memory is slower for
some reason) per CPU wouldn't hurt.
Cache coherency is simple if you don't insist on flat-out maximum
performance. What we should insist on is flat-out unbreakable systems,
and buy better silicon to get the performance back if we need it.
Existing cache hardware on Pentiums still isn't quite good enough. Try
probing its memory with large power-of-two strides and you fall over a
performance limitation caused by the cheap and cheerful way it uses
lower address bits for cache associativity. See Steven Johnson's post in
the FFT Timing thread.
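A minimal sketch of that kind of probe, with buffer size, strides and pass count chosen only for illustration: a large power-of-two stride keeps landing in the same few cache sets (the set index comes from the low address bits), while a nearby odd stride spreads across sets, so timing the two walks exposes the associativity limit.

/* Sketch: time strided walks over a buffer.  A large power-of-two stride
   (4096 here) keeps hitting the same cache sets, because the low address
   bits select the set; a nearby odd stride (4097) spreads across sets. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (64u * 1024u * 1024u)

static double walk(volatile char *buf, size_t stride, size_t passes)
{
    volatile char sink = 0;
    clock_t t0 = clock();
    for (size_t p = 0; p < passes; p++)
        for (size_t i = 0; i < BUF_BYTES; i += stride)
            sink += buf[i];
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
    volatile char *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    memset((void *)buf, 1, BUF_BYTES);
    printf("stride 4096: %.3f s\n", walk(buf, 4096, 256));  /* power of two */
    printf("stride 4097: %.3f s\n", walk(buf, 4097, 256));  /* odd stride   */
    free((void *)buf);
    return 0;
}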
John Larkin said:I'm reading Showstopper!, the story of the development of NT. It's a
great example of why we need a different way of thinking about OS's.
If it is anything like the development of OS/2 you get to see very
bright guys reinvent things from scratch that were already known in the
mini and mainframe world (sometimes with the same bugs and quirks as the
first iteration of big iron code suffered from).
Yes. Everybody thought they could write from scratch a better
(whatever) than the other groups had already developed, and in a few
weeks yet. There were "two inch pipes full of piss flowing in both
directions" between graphics groups.
Code reuse is not popular among people who live to write code.
NT 3.51 was a particularly good vintage. After that bloatware set in.
CPU cycles are cheap and getting cheaper and human cycles are expensive
and getting more expensive. But that also says we should be
using better tools and languages to manage the hardware.
Unfortunately time to market advantage tends to produce less than robust
applications with pretty interfaces and fragile internals. You can after
all send out code patches over the Internet all too easily ;-)
NT followed the classic methodology: code fast, build the OS,
test/test/test looking for bugs. I think there were 2000 known bugs in
the first developer's release. There must have been ballpark 100K bugs
created and fixed during development.
Since people buy the stuff (I would not wish Vista on my worst enemy by
the way) even with all its faults the market rules, and market forces
are never wrong...
Most of what you are claiming as advantages of separate CPUs can be
achieved just as easily with hardware support for protected user memory
and security privilege rings. It is more likely that virtualisation of
single, dual or quad cores will become common in domestic PCs.
Intel was criminally negligent in not providing better hardware
protections, and Microsoft a co-criminal in not using what little was
available. Microsoft has never seen data that it didn't want to
execute. I ran PDP-11 timeshare systems that couldn't be crashed by
hostile users, and ran for months between power failures.
There was a Pentium exploit documented against some brands of Unix, e.g.:
http://www.ssi.gouv.fr/fr/sciences/fichiers/lti/cansecwest2006-duflot.pdf
Loads of physical CPUs just create a different set of complexity
problems. And they are a pig to program efficiently.
So program them inefficiently. Stop thinking about CPU cycles as
precious resources, and start thinking that users matter more. I have
personally spent far more time recovering from Windows crashes and
stupidities than I've spent waiting for compute-bound stuff to run.
If the OS runs alone on one CPU, totally hardware protected from all
other processes, totally in control, that's not complex.
As transistors get smaller and cheaper, and cores multiply into the
hundreds, the limiting resource will become power dissipation. So if
every process gets its own CPU, and idle CPUs power down, and there's
no context switching overhead, the multi-CPU system is net better off.
What else are we gonna do with 1024 cores? We'll probably see it on
Linux first.
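On the Linux point, a small sketch of what already exists: the CPU-hotplug interface under /sys/devices/system/cpu lets user space take an idle core out of service (core 3 below is an arbitrary choice; whether this actually saves power depends on the platform's idle-state support).

/* Sketch: take CPU 3 offline via the Linux CPU-hotplug sysfs interface.
   Needs root; cpu0 usually cannot be offlined.  Write "1" to bring the
   core back. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu3/online", "w");
    if (!f) { perror("open"); return 1; }
    fputs("0", f);
    fclose(f);
    return 0;
}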