
Kids can do math better than x86 CPUs.

Ken Smith

software is doing and you'd be right, but it would still be an
implementation in hardware.

Intel are bad teachers.

The CDP1802 did its math 1 bit at a time. The Z80 did internal stuff in
groups of 4 bits. The 8080 did some things in 8 or 16 bits. The PDP-8
did 12 bits. The 68000 did 16 bits. Everyone does a limited number of
bits at a time.

[....]
We programmers got screwed big time..

We writing *fixed-limited-based-shit* :) ;)

What you mean "we".


If a processor did everything required in one instruction, there would be
no need for programmers to write multiple instruction programs.
 
krw

It's not that stupid:

Really? One times one is three? One times zero is one? I'd say
that's rather stupid.
When multiplying 10 * 10 = 100 the 1 seems to move.

Where is that in your table above?
Why wouldn't that happen at binary level ?!?

Just because 2+2=4 and 2*2=4 doesn't mean the '+' and '*' operators
are the same.
In fact it probably happens as well ;) when the numbers are finally added
together.

Ten plus ten is twenty?
 
me

They do it better than Intel's algorithm which is implemented in their
hardware <- fixed precision yak !

Kids <- arbitrary precision ! ah nice ! =D

Bye,
Skybuck.

so run your programs on kids and quit bitchin'...
 
AZ Nomad

Try telling them it's the amount of money they can win ;) or candy lol.
I will *bet* you they will do it =D

How much money are you willing to supply? I can guarantee that anything less
than six or seven hundred dollars will get you told to **** yourself,
especially when a minimum wage job will make more money and won't be nearly
as tedious.
 
Terje Mathisen

Nicholas said:
We could just kill file him as I've been tempted to do many times.

The _only_ problem with this approach is that these days Google makes
posts last forever, which means that X years from now some unfortunate
soul might hit upon a Sky post in response to a search for real
information. With zero gainsayers said soul might even believe it. :-(

Terje
 
JimKeo

Terje,

Your x86 BCD reference jogs my memory back to the '80s on BYTE
Magazine's BIX (Byte Information eXchange) teleconferencing system
(CoSy).

Some of us processor bigots, in the CPU or CPUs conference, had an
informal my-cpu-is-better-than-yours speed competition involving
multiplication of 72-bit unsigned binary integers. The 9 bytes were
picked, in theory, so as not to give arbitrary advantage to CPUs of
particular word sizes.

My entry for the Apple /// (8-bit 6502B CPU) just used the
lowest-level bit-test/add/shift/carry instructions. The code
first determined the seven 10-byte results from one of the multipliers
(the one on the left {smile}) when multiplied by 2, 4, 8, 16, 32, 64
and 128. This meant eight results were effectively precalculated, in
that multiplying by 0 was obvious and multiplying by 1 was already
given. The seven values were done with simple bit shifts/carry/add.
Once that was done, the other multiplier was processed bit by bit,
right to left, and the answer adjusted/accumulated accordingly, just
like a school child would do in decimal on paper. Choosing which
multiplier (OK, multiplier, multiplicand) to process based on the lower
number of 1 bits was sometimes useful.

This simple-minded low-level approach actually outperformed other
CPUs, including the 8086, 8088, Z80, 8080, etc., lovingly handcrafted in
assembler making use of, ahem, higher-level instructions.

The code used maybe a half dozen rather simple MACROs. Of
course, the fact that there was ample room in the 6502B's "zero page"
of 256 bytes to do all the work was a big boost. In effect, the zero
page's faster access meant you could treat it as 256 registers.

Cheers, - Jim

Jim Keohane, Multi-Platforms, Inc.
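
For readers who haven't seen the technique Jim describes, a rough C
rendering looks like the sketch below. It is illustrative only, not Jim's
6502 code, and the names are invented for the example; it walks the
multiplier bit by bit, right to left, adding an ever-shifting copy of the
multiplicand into the accumulator. Precomputing the shifted copies up
front, as Jim's entry did, simply trades the inner shift loop for lookups.

#include <stdint.h>
#include <string.h>

#define NBYTES 9   /* 72-bit operands, as in the BIX contest */

/* out = a * b; all values are little-endian byte arrays, out is 18 bytes. */
void mul72(const uint8_t a[NBYTES], const uint8_t b[NBYTES],
           uint8_t out[2 * NBYTES])
{
    uint8_t shifted[2 * NBYTES];            /* running copy of a << bit */
    memset(out, 0, 2 * NBYTES);
    memset(shifted, 0, 2 * NBYTES);
    memcpy(shifted, a, NBYTES);

    for (int bit = 0; bit < NBYTES * 8; bit++) {
        /* If this multiplier bit is set, add the shifted multiplicand
           into the accumulator, rippling the carry byte by byte. */
        if (b[bit / 8] & (1u << (bit % 8))) {
            unsigned carry = 0;
            for (int i = 0; i < 2 * NBYTES; i++) {
                unsigned v = out[i] + shifted[i] + carry;
                out[i] = (uint8_t)v;
                carry = v >> 8;
            }
        }
        /* shifted <<= 1, ready for the next multiplier bit. */
        unsigned carry = 0;
        for (int i = 0; i < 2 * NBYTES; i++) {
            unsigned v = ((unsigned)shifted[i] << 1) | carry;
            shifted[i] = (uint8_t)v;
            carry = v >> 8;
        }
    }
}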
 
Skybuck Flying

AZ Nomad said:
Good question. It looks to me like Skybuck hasn't been limiting his
Usenet sessions to times when he isn't stinking drunk.

Multiplying numbers in any base isn't a big deal to anybody older than
about seven, or for any CPU made in the last fifty years. Take a
first-semester CS class and it'll be obvious how easy the algorithms are.
Pick any base, I don't care. However, if you're getting out the special
characters then you've obviously gone past base 36 (0-9, 'A'..'Z'). What
have we? Base 192? Gonna get into the ASCII control codes too for base
256? Who the **** cares? It's the same algorithm and trivially easy to
accomplish.

Your CS class must have been cheating by using multipliers.

Try a real class, a Skybuck class!

No multipliers available:

Everything must be done with lookup tables and bytes, no words allowed.

I am sure you will find it more than challenging.

P.S.: your CS class can go into the waste basket.

Bye,
Skybuck ;)
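
Since the challenge above is actually well defined, here is one way it
could look, sketched in C rather than asm: schoolbook multiplication over
base-256 digits (one byte each, little-endian), with every digit-by-digit
product coming from a pair of precomputed lookup tables instead of a
multiply instruction. The tables are filled once by repeated addition.
This is purely illustrative; the names are made up, and C still promotes
bytes to int internally even though nothing wider than a byte is stored.

#include <stdint.h>
#include <string.h>

static uint8_t mul_lo[256][256];   /* low  byte of x*y */
static uint8_t mul_hi[256][256];   /* high byte of x*y */

static void init_tables(void)     /* call once before the first multiply */
{
    for (int x = 0; x < 256; x++) {
        unsigned p = 0;                    /* running x*y, built by adding x */
        for (int y = 0; y < 256; y++) {
            mul_lo[x][y] = (uint8_t)(p & 0xFF);
            mul_hi[x][y] = (uint8_t)(p >> 8);
            p += (unsigned)x;
        }
    }
}

/* out (na+nb bytes) = a (na bytes) * b (nb bytes), least significant first */
static void longhand_mul(const uint8_t *a, size_t na,
                         const uint8_t *b, size_t nb, uint8_t *out)
{
    memset(out, 0, na + nb);
    for (size_t i = 0; i < na; i++) {
        for (size_t j = 0; j < nb; j++) {
            uint8_t lo = mul_lo[a[i]][b[j]];   /* partial product, by lookup */
            uint8_t hi = mul_hi[a[i]][b[j]];
            size_t k = i + j;                  /* add it in, rippling carries */
            unsigned carry = lo;
            do {
                unsigned s = out[k] + carry;
                out[k++] = (uint8_t)s;
                carry = s >> 8;
            } while (carry);
            carry = hi;
            k = i + j + 1;
            while (carry) {
                unsigned s = out[k] + carry;
                out[k++] = (uint8_t)s;
                carry = s >> 8;
            }
        }
    }
}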
 
Skybuck Flying

I want to learn what's called "long hand multiplication".

The method taught to kids in high school.

Which uses look-up tables.

It's highly unlikely such a software library already exists because it would
be quite slow.

Bye,
Skybuck.
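
A minimal version of exactly that is easy to sketch in C: decimal digits
stored one per byte, least significant first, and a 10x10 times table as
the only "multiplier", just as the school method memorises it. It is
indeed slow, which is why real bignum libraries use much larger digits;
the names below are invented for the example.

#include <stdio.h>
#include <string.h>

static const unsigned char times_table[10][10] = {
    {0,0,0,0,0,0,0,0,0,0},
    {0,1,2,3,4,5,6,7,8,9},
    {0,2,4,6,8,10,12,14,16,18},
    {0,3,6,9,12,15,18,21,24,27},
    {0,4,8,12,16,20,24,28,32,36},
    {0,5,10,15,20,25,30,35,40,45},
    {0,6,12,18,24,30,36,42,48,54},
    {0,7,14,21,28,35,42,49,56,63},
    {0,8,16,24,32,40,48,56,64,72},
    {0,9,18,27,36,45,54,63,72,81},
};

/* a (na digits) times b (nb digits); out must hold na+nb digits */
static void school_mul(const unsigned char *a, int na,
                       const unsigned char *b, int nb, unsigned char *out)
{
    memset(out, 0, na + nb);
    for (int i = 0; i < na; i++) {
        int carry = 0;
        for (int j = 0; j < nb; j++) {
            /* one cell of the times table, plus whatever is already in
               this column, plus the carry from the previous column */
            int cell = times_table[a[i]][b[j]] + out[i + j] + carry;
            out[i + j] = (unsigned char)(cell % 10);
            carry = cell / 10;
        }
        out[i + nb] = (unsigned char)carry;   /* top digit of this row */
    }
}

int main(void)
{
    /* 123 * 45 = 5535; digits are stored least significant first */
    unsigned char a[3] = {3, 2, 1}, b[2] = {5, 4}, out[5];
    school_mul(a, 3, b, 2, out);
    for (int k = 4; k >= 0; k--)
        printf("%d", out[k]);
    printf("\n");                              /* prints 05535 */
    return 0;
}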
 
Skybuck Flying

I want to learn and see an implementation of what's called "long hand
multiplication".

The method taught to kids in high school.

Which uses look-up tables.

It's highly unlikely such a software library already exists because it would
be quite slow.

Bye,
Skybuck.
 
Skybuck Flying

Does it use the kiddy method? (long hand multiplication as taught in
school)

Bye,
Skybuck.
 
John L

I want to learn what's called "long hand multiplication".

OK.
The method taught to kids in high school.

Please contact your local high school.

R's,
John
 
joseph2k

Nick said:
|> |>
|> > It should be noted, all modern CPU I know of have these primitives,
|> > even the lowly PIC. So bignum math is supported by hardware
|> > *already*. The problem is higher level languages like C doesn't
|> > support these low level hardware features. Like, how would you detect
|> > the carry bit in C?

Not that I can see. How many have a 64x64->128 multiply and a
128/64->64,64 quotient, remainder?

Let's see: the MIPS 4500, 5000, and 10000 series, some later Transputers,
the Sun UltraSPARC (64-bit), DEC Alpha, HP PA-RISC 8900, some late-model
VAXes, the PowerPC G5, and Opterons all have this capability in hardware.
Some of this is 20 years old; where have you been?
|> Using add with carry is easy in C, various compilers can detect it:

Some compilers may be able to, IF you use a code form that they can
recognise; but, if they don't document that and exactly what code
forms are acceptable, you are just hoping.

And, in any case, that is not "in C" - it is "in Intel C" or "in gcc"
etc. There is no way of doing it in standard or portable C, let alone
standard AND portable C.


Regards,
Nick Maclaren.

BTW, implementing wide math is not all that difficult. Usually cost or IP
issues prevent libraries from being used. I have done the basics (+, -, /,
and x) for the 6502 and 8051; stretching them from 8 to 32 bits was
straightforward but a bit laborious to test. All the necessary assembler
(hardware) instructions are present in x86, 68K, PPC, Alpha, and many other
architectures (even including the IBM 360/370/390).
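
To make the point of contention concrete, the sort of code being argued
about looks roughly like this (a sketch, not anything posted upthread).
The first routine is the usual "portable C" way of detecting a carry, by
checking whether the unsigned sum wrapped; whether a given compiler maps
it onto an add-with-carry instruction is exactly the open question. The
second builds the 64x64->128 multiply asked about above from four
32x32->64 products, for targets or language standards without a 128-bit
type.

#include <stdint.h>

static uint64_t add64_with_carry(uint64_t a, uint64_t b,
                                 unsigned carry_in, unsigned *carry_out)
{
    uint64_t sum = a + b + carry_in;
    /* The sum wrapped iff it ended up below an addend; the second test
       covers the corner case b == UINT64_MAX with carry_in == 1. */
    *carry_out = (sum < a) || (carry_in && sum == a);
    return sum;
}

static void mul64_wide(uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    uint64_t a0 = (uint32_t)a, a1 = a >> 32;
    uint64_t b0 = (uint32_t)b, b1 = b >> 32;

    uint64_t p00 = a0 * b0;                 /* four partial products */
    uint64_t p01 = a0 * b1;
    uint64_t p10 = a1 * b0;
    uint64_t p11 = a1 * b1;

    uint64_t mid  = p01 + (p00 >> 32);      /* cannot overflow 64 bits */
    uint64_t mid2 = p10 + (uint32_t)mid;    /* may carry into the high half */

    *lo = (mid2 << 32) | (uint32_t)p00;
    *hi = p11 + (mid >> 32) + (mid2 >> 32);
}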
 
joseph2k

Wilco said:
Widening multiplies are quite common on CPUs; division is far less
common (i.e. on RISC), especially the narrowing form. Luckily you don't
need a division primitive for multi-precision arithmetic.


The idioms for rotate, widening multiplies, carry, multiply+accumulate,
division+modulo and so on are pretty standard. It is trivial to recognise


Actually these idioms are both standard AND portable (the code I showed
works on all compilers irrespectively of integer size or whether the
target even has a carry flag!). On the other hand, inline assembler is
non-standard and not portable between different compilers for the same
architecture.

Wilco

You use yours and I'll use properly parameterized libraries for each target
and call them from "C" or whatever HLL, and we can test to see which
performs better.
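
For reference, the idioms being referred to look roughly like the
following. This is a sketch of the general pattern, not the code from the
original post; the cast-before-multiply form is what compilers typically
pattern-match into a single widening multiply, and the two-shift rotate
is likewise recognised on targets that have a rotate instruction.

#include <stdint.h>

/* 32x32 -> 64 widening multiply: cast one operand up before multiplying */
static uint64_t widening_mul32(uint32_t a, uint32_t b)
{
    return (uint64_t)a * b;
}

/* Rotate left, written so it is well defined even for n == 0 */
static uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
}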
 
krw

I want to learn and see an implementation of what's called "long hand
multiplication".

The method taught to kids in high school.

That's the way I'd do a multiple-precision multiply. I'm sure there
is another way, but I don't know it.
Which uses look-up tables.

Ever hear of an FPGA? ;-)

As I said before, the IBM 1620 used lookup tables for its arithmetic
(add included). It had no native arithmetic operators at all.
It's highly unlikely such a software library already exists because it would
be quite slow.

They do exist, for a number of reasons.
 
krw

Intel are bad teachers.

The CDP1802 did its math 1 bit at a time. The Z80 did internal stuff in
groups of 4 bits. The 8080 did some things in 8 or 16 bits. The PDP-8
did 12 bits. The 68000 did 16 bits. Everyone does a limited number of
bits at a time.

There was at least one PDP-8 model (8S?) that did bit serial
arithmetic too.
[....]
We programmers got screwed big time..

We writing *fixed-limited-based-shit* :) ;)

What you mean "we".


If a processor did everything required in one instruction, there would be
no need for programmers to write multiple instruction programs.
 
krw

Your CS class must have been cheating by using multipliers.

What's the difference between a multiplier and a lookup table? If
you poke one can you tell the difference?
Try a real class, a Skybuck class!

No thanks. I couldn't stop laughing.
No multipliers available:

Are there multipliers that work?
Everything must be done with lookup tables and bytes, no words allowed.

Fine, that's easy.
I am sure you will find it more than challenging.

Nope. It's trivial.
P.S.: your CS class can go into the waste basket.

Don't need a CS class. I learned how to do the above in fifth grade
(in any base that floats your boat).
 
Ken Smith

There was at least one PDP-8 model (8S?) that did bit serial
arithmetic too.

Yes, it was the "S". Also the 8051 was serial internally.

There was also the MC14500.

If you had enough 2900s, you could do any number of bits.
 
krw

[....]
software is doing and you'd be right, but it would still be an
implementation in hardware.

Intel are bad teachers.

The CDP1802 did its math 1 bit at a time. The Z80 did internal stuff in
groups of 4 bits. The 8080 did some things in 8 or 16 bits. The PDP-8
did 12 bits. The 68000 did 16 bits. Everyone does a limited number of
bits at a time.

There was at least one PDP-8 model (8S?) that did bit serial
arithmetic too.

Yes, it was the "S". Also the 8051 was serial internally.

Never knew the 8051 was serial, though it makes sense when I think
about it (12/24 clocks per cycle).
There was also the MC14500.
If you had enough 2900s, you could do any number of bits.

There's the answer to Skynut's multiple precision problem!
 
John Larkin

[....]
software is doing and you'd be right, but it would still be an
implementation in hardware.

Intel are bad teachers.

The CDP1802 did its math 1 bit at a time. The Z80 did internal stuff in
groups of 4 bits. The 8080 did some things in 8 or 16 bits. The PDP-8
did 12 bits. The 68000 did 16 bits. Everyone does a limited number of
bits at a time.

There was at least one PDP-8 model (8S?) that did bit serial
arithmetic too.

With discrete transistors!
Yes. it was the "S". Also the 8051 was serial internally.

There was also the MC14500.

If you had enough 2900s, you could do any number of bits.

The Data General NOVA was a 16-bit machine that used nibble math, with
a single 4-bit ALU, as I recall.

John
 