Maker Pro

Never reinstall XP again

Archimedes' Lever

I haven't used "Scat Man", but it certainly fits as perfectly as the
others. I was the one who said you didn't have a degree, then when
you said that you were degreed, asked where from.


No, you didn't pull the baby bullshit name crap, but you did refer to
his claim. Recently too.

And no, it was NOT you that asked, it was John, and you followed,
dipshit. Sorry that you are so stupid that you cannot follow the group
well enough to know that.
 
Archimedes' Lever

Not only are you a Dim Bulb, DimBulb, but you're an impatient
ignoramus, as well.

**** off, KeithTard. **** off and die.
 
krw

No, you didn't pull the baby bullshit name crap, but you did refer to
his claim. Recently too.

And no, it was NOT you that asked, it was John, and you followed,
dipshit. Sorry that you are so stupid that you cannot follow the group
well enough to know that.

As I said, Archimedes' Liar, you're an ignoramus, too.
 
Corbomite Carrie

Poor bastardo! Flail away. Typical response of a soon-to-die
BASTARDO ;-)

...Jim Thompson


Is that a threat, fuckhead? You know, like the HOLLOW threat you made
about involving your retarded Arizona lawyers?

Is what you are doing called pussyfooting? Naaahhh... you don't even
qualify for that, ya dopey little nerd.
 
Archimedes' Lever

As I said, Archimedes' Liar, you're an ignoramus, too.


You're the goddamned liar, fucktard. Is your name John?



QUOTE of post by JOHN LARKIN:

John, You are an idiot.

A MACHINED ROUND cross section is reduced when a flat is made on (cut
out of) it. A PRESSED ROUND cross section gets a flat top WITHOUT
reducing cross section.

If you cannot do that simple mechanical math, you have no business
jacking off at the mouth about what you THINK others may or may not know
about ANY other math, you pathetic asshole.

So either **** off and die or **** off and die.

Either way, mother fucker, you have no business in the argument any
longer. Now, THOSE ARE WORDS, you stupid twit.

Words you cannot refute.

Where did you get that engineering degree?

John
 
krw

You're the goddamned liar, fucktard.

Nope, that's your domain, Archimedes' Liar. He likely read my post
from the previous day. You can't read, so...
Is your name John?

Nope. That would be my brother. Two siblings with the same name
would be confusing. ...though you're quite familiar with "confused".


<snipped irrelevant quote from JL>
 
Herbert John "Jackie" Gleason

My apologies, BASTARDO. I'm changing the designation to "TURD", a
designation which seems far more appropriate.

As for "hollow", think "hollow point" ;-)

...Jim Thompson

Think about getting back in your face what you are too stupid to aim
properly. Mine will hit its mark. I guarantee. I can shoot from the hip
more accurately than you can attempting to aim.

So, PUSSY... bring your retarded ass over the border and bring it on,
boy! I would love to put you in the dirt early for breaching my
threshold. You sumbitch!
 
Archimedes' Lever

Nope, that's your domain, Archimedes' Liar. He likely read my post
from the previous day. You can't read, so...


Nope. That would be my brother. Two siblings with the same name
would be confusing. ...though you're quite familiar with "confused".


<snipped irrelevant quote from JL>


It was totally relevant because it proves you wrong. It was posted
BEFORE your retarded post, and therefore it is YOU that is the fucking
liar, boy. What you really need is a fast moving hunk of lead inserted
into your head.
 
MooseFET

Please explain. The ALU is *part* (actually, many parts) of the
CPU. How are you going to move it "elsewhere", and where is
"elsewhere"?

Consider incrementing a word in memory. If the memory in question can
be told to increment, the value doesn't have to travel, only the
command does. This is a simple case, but is common enough that the
saving on bus trips could be real.

Graphics cards now contain a lot of the stuff that used to be done by
CPUs. This could be extended. Painting a transparent image over an
existing picture requires some multiplies and adds. The graphics card
could handle this in real time.
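The increment-in-memory idea above can be sketched by counting bus trips. This is a toy model, not anything from the thread: `PlainMemory` and `SmartMemory` are hypothetical names, and the "INC command" interface is an assumption about how such a smart memory module might look.

```python
# Toy model: bus trips for read-modify-write increment vs. a hypothetical
# memory module that accepts an increment command directly.

class PlainMemory:
    def __init__(self):
        self.cells = {}
        self.bus_trips = 0

    def read(self, addr):
        self.bus_trips += 1          # the value travels CPU-ward
        return self.cells.get(addr, 0)

    def write(self, addr, value):
        self.bus_trips += 1          # the value travels memory-ward
        self.cells[addr] = value

class SmartMemory(PlainMemory):
    def increment(self, addr):
        self.bus_trips += 1          # only the command travels
        self.cells[addr] = self.cells.get(addr, 0) + 1

plain = PlainMemory()
plain.write(0x10, 41)                       # set up: 1 trip
plain.write(0x10, plain.read(0x10) + 1)     # increment: read + write = 2 trips

smart = SmartMemory()
smart.write(0x10, 41)                       # set up: 1 trip
smart.increment(0x10)                       # increment: 1 trip

print(plain.bus_trips, smart.bus_trips)     # 3 2
```

The saving per increment is one trip out of two, which is the "one vs. two" count that comes up later in the thread.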
 
krw

Consider incrementing a word in memory. If the memory in question can
be told to increment, the value doesn't have to travel, only the
command does. This is a simple case, but is common enough that the
saving on bus trips could be real.

The difference in size between the "command" and the data is? Someone
has to be the master.
Graphics cards now contain a lot of the stuff that used to be done by
CPUs. This could be extended. Painting a transparent image over an
existing picture requires some multiplies and adds. The graphics card
could handle this in real time.

Only because CPUs were really *bad* at it. GPUs still need a little
bit of memory, and bandwidth to same.
 
Sylvia Else

krw said:
The difference in size between the "command" and the data is? Someone
has to be the master.


Only because CPUs were really *bad* at it. GPUs still need a little
bit of memory, and bandwidth to same.

Still, other than on-MB GPUs, they use their own dedicated memory, thus
avoiding loading the CPU-memory bus. They do therefore illustrate to
some extent the memory bandwidth benefits that accrue from
decentralising computation.

Sylvia.
 
krw

Still, other than on-MB GPUs, they use their own dedicated memory, thus
avoiding loading the CPU-memory bus. They do therefore illustrate to
some extent the memory bandwidth benefits that accrue from
decentralising computation.

True, though this is also done with SMP (ex. Opteron). This isn't
what MF is suggesting, though. Well, since he's in hiding...
 
Sylvia Else

krw said:
True, though this is also done with SMP (ex. Opteron). This isn't
what MF is suggesting, though. Well, since he's in hiding...

It's not. I'd say that what he was thinking of was something closer to
some kinds of array processor, where the distinction between memory and
processing becomes somewhat blurred.

Which is not to say I expect any movement in that direction. Such
techniques are useful for special problems, but most of the work done by
a typical PC (graphics apart) is not of that type.

I suspect that any benefit that could be achieved by decentralised
processing for a typical PC workload could be achieved at a lower cost
by increasing the size of the processor cache.

Sylvia.
 
Herbert John "Jackie" Gleason

Oooooooh! I can drop your ass at 20 paces so fast you won't be able
to blink, baby-ass BASTARDO!

You can't fathom how much experience I have at pistol shooting.

Bye! Bye! BASTARDO!

...Jim Thompson


20 paces? Bwuahahahahahahahah!

I can nail your pussy ass on the run at 30 yards, asshole!

Especially since "you-on-the-run" isn't much faster than granny getting
on the bus with her walker. Bwuahahahaha!

Fathom? Bwuahahahahahah! You a dumb sumbitch too!
 
Sylvia Else

krw said:
Think about how memory is split across modules. Then think about
all the interactions you've just added. ...to *slow* memory.

There are undoubtedly implementation issues. Whether the approach would
speed processing up overall depends on the nature of the work to be
done, and the technologies used to implement it. As I've indicated in my
response elsewhere, I doubt that it would be useful in practice.

My purpose was merely to support MooseFET's position on a theoretical
basis. His proposal is not technically absurd, even though it does not
accord with the way current systems work, and likely has limited
application.

Sylvia.
 
Sylvia Else

Bart! said:
Bwuahahahahaha! "Painting" ALSO would require INSTRUCTIONS.

A graphics card unloading what USED to be CPU functions is not the same
in any way. It ALSO requires accelerated "drivers", which are
essentially an OS stimulated (read external) instruction set extension,
as it were. It is called directx or direct draw, or one of various other
methods to handle the data that GOES THROUGH RAM at some point before it
gets passed to the GPU for FURTHER processing and RENDERING functions.

Not the same as what you suggest at all.

Typically the data for the image is not passed across the main memory
bus each time a new frame needs to be rendered. So the ability of the
GPU to do calculations that the CPU would otherwise have to do does
significantly reduce the main memory bandwidth requirements.

Sylvia.
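The "multiplies and adds" needed to paint a transparent image, mentioned earlier in the thread, are ordinary source-over alpha blending. A minimal sketch of the per-channel arithmetic (illustrative code, not from any post):

```python
# Per-channel "source over" alpha blending: the multiply-and-add work a
# GPU performs per pixel when compositing a transparent image.

def blend_channel(src, dst, alpha):
    """Blend one 8-bit channel; alpha runs 0.0 (transparent) to 1.0 (opaque)."""
    return round(alpha * src + (1.0 - alpha) * dst)

def blend_pixel(src_rgb, dst_rgb, alpha):
    return tuple(blend_channel(s, d, alpha) for s, d in zip(src_rgb, dst_rgb))

# 50% white over black gives mid grey.
print(blend_pixel((255, 255, 255), (0, 0, 0), 0.5))  # (128, 128, 128)
```

Doing this on the GPU against its own framebuffer is exactly what keeps the per-frame pixel traffic off the main memory bus.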
 
MooseFET

The difference in size between the "command" and the data is?
Count the number of trips over the bus. It's one vs. two.

 Someone
has to be the master.


Only because CPUs were really *bad* at it.  GPUs still need a little
bit of memory, and bandwidth to same.

It takes load off the CPU and reduces the number of trips in and out
of the CPU.
 
MooseFET

Sylvia Else said:
It's not. I'd say that what he was thinking of was something closer to
some kinds of array processor, where the distinction between memory and
processing becomes somewhat blurred.

Consider the transputer as an example of one type of array processor.
Which is not to say I expect any movement in that direction. Such
techniques are useful for special problems, but most of the work done by
a typical PC (graphics apart) is not of that type.

Most PCs spend most of their time playing Freecell. The games are very
graphics intensive. Moving processing power into the memory that
holds the image makes sense for those.
I suspect that any benefit that could be achieved by decentralised
processing for a typical PC workload could be achieved at a lower cost
by increasing the size of the processor cache.

You may be right in all but the graphics or sound areas. When actual
outputs must be made, the data from the cache must go to the device.
 
MooseFET

Modern SDRAM memories have a sort of array-oriented internal
architecture, with relatively little logic "near" each individual
memory cell.  The process of accessing the memory contents is actually
rather sequential.  In order to gain access to a specific word, it's
necessary to perform a bunch of setup (e.g. activate a whole "column"
of the memory) and then shift the contents of that column out of the
array.  The same is true when writing data back into the memory - a
multi-word chunk is shifted back in and rewritten at once.  This
column-selection and access process takes time - in a fair number of
modern systems, the memory access itself is slower than the
CPU-to-memory bus.

If you want to have arithmetic (e.g.) done within the memory chip
itself, you'll have to make a hard decision:

-  If the memory chip has one arithmetic unit (or a small number)
   you'll still have to select and access the memory much as is done
   today, in order to present it to the chip's ALU, and then put it
   and its peers back into the memory array.  This is not likely to be
   terribly faster than what happens today.

This could be faster if the number of ALUs that are kept busy is
high. It would mean that there would have to be many ALUs and many
chunks of memory associated with them.
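The "many ALUs kept busy" condition can be made concrete with a rough cycle-count model. The numbers below are purely illustrative (not from any datasheet), and the one-ALU-per-bank design is an assumption; the point is only that the activate/precharge overhead amortizes when each bank works on its own row buffer in parallel.

```python
# Rough cycle model (illustrative constants): incrementing N words via the
# CPU (every word crosses the bus twice) vs. one ALU per bank operating
# on data already in its row buffer, with the banks working in parallel.

ACTIVATE = 15      # open a row into the row buffer
TRANSFER = 4       # move one word across the CPU-memory bus
PRECHARGE = 15     # write the row back / close it
ALU_OP = 1         # in-bank increment of a word already in the row buffer

def cpu_increment(n_words):
    # Activate once, ship each word out and back, write the row back.
    return ACTIVATE + n_words * (2 * TRANSFER) + PRECHARGE

def in_bank_increment(n_words, n_banks):
    # One command broadcast; each bank's ALU walks its share of the words.
    per_bank = -(-n_words // n_banks)          # ceiling division
    return ACTIVATE + per_bank * ALU_OP + PRECHARGE

for n in (1, 64, 4096):
    print(n, cpu_increment(n), in_bank_increment(n, n_banks=8))
```

For a single word the in-bank path barely wins, but as the word count grows the bus-transfer term dominates the CPU path while the in-bank cost grows only by one cycle per word per bank, which is the regime where many busy ALUs pay off.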
 