Maker Pro

RC Transmission Lines (Wafer-Scale)

Guy Macon

I recently got into a conversation in comp.arch concerning
how fast signals propagate and how far they can travel in
microprocessor wiring.

Some of the posters seem to think that wafer-scale traces/wires
are a lot slower than PWB-scale and system-scale traces/wires
because they are RC transmission lines, not LC.

I did a few crude simulations and it seems to me that the
RC slows down the risetime on single edges and cuts the
amplitude way down on high frequency clock signals, but
I can't see any reason to think that the propagation would
be a lot slower than the usual 60%-80% of C rule of thumb.

I am familiar with normal board and system level transmission
lines such as ECL, stripline, coax, etc., but have never done
any work with chip-scale electronics. Does anyone here know
how fast and how far one can move a signal across a die? Thanks!
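A minimal sketch of the comparison in question, with assumed round-number
values of r and c for an upper-level on-chip wire (not extracted from any
real process): the distributed-RC (Elmore) delay of an unrepeated wire
grows as length squared, while time-of-flight grows only linearly, which
is why long RC wires look "slow" even though the wavefront itself is not.

# Distributed-RC (Elmore) delay vs. time-of-flight for an on-chip wire.
# r, c, and v below are assumptions chosen for illustration only.
r = 2.0e5                   # series resistance, ohm/m (~0.2 ohm/um)
c = 2.0e-10                 # capacitance, F/m (~2 pF/cm)
v = 1.5e8                   # LC wave velocity, m/s (~half of c, in oxide)

for length in (1e-3, 5e-3, 1e-2):            # 1 mm, 5 mm, 1 cm
    t_rc = 0.38 * r * c * length**2          # Elmore delay, distributed RC
    t_tof = length / v                       # lossless time-of-flight
    print(f"{length*1e3:4.0f} mm: RC {t_rc*1e12:7.1f} ps, "
          f"flight {t_tof*1e12:5.1f} ps")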
 
BobW

Guy Macon said:
[...] Does anyone here know how fast and how far one can move a
signal across a die? Thanks!

I can't answer your question directly, but I recently asked a very
knowledgeable Xilinx employee (on comp.arch.fpga) whether their latest FPGAs
used termination techniques for any of their internal signals, and his
response was "yes". So, if this is true (I have no reason to doubt this
particular individual) then this means that their internal signal edge rates
are fast enough and their path lengths are long enough to warrant the cost
and complexity of on-chip termination (and I'm not talking about their I/O
termination features).

Bob
 
Guy Macon

I have been doing web searches on this, and found these:

_The Future of Wires_
http://www.cs.utah.edu/classes/cs6943/papers/horowitz99future.pdf

_The Wire_
http://www.vlsi.uwaterloo.ca/~manis/ece730/lecture2.pdf

Parasitic Extraction and Performance Estimation from Physical Structure
http://lsiwww.epfl.ch/LSI2001/teaching/webcourse/ch04/ch04.html#4.2
http://lsiwww.epfl.ch/LSI2001/teaching/webcourse/toc.html

I also found this in an abstract (the actual paper isn't online):

A 3Gb/s/wire Global On-Chip Bus with Near Velocity-of-Light Latency

"We successfully show the practical feasibility of a purely
electrical global on-chip communication link with near velocity-
of-light delay. The implemented high-speed link comprises a 5mm
long, fully shielded, repeaterless, on-chip global bus reaching
3Gb/s/wire in a standard 0.18 µm CMOS process. Transmission-line-
style interconnects are achieved by routing signal wires in the
thicker top metal M6 layer and utilizing a metal M4 ground return
plane to realize near velocity-of-light data transmission. The
nominal wire delay is measured to 52.8ps corresponding to 32%
of the velocity of light in vacuum."

http://www.google.com/search?hl=en&...Chip+Bus+with+Near+Velocity+of+Light+Latency
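The quoted numbers are easy to sanity-check: 5 mm in 52.8 ps works out
to about 9.5e7 m/s, i.e. the 32% of c the abstract claims.

# Checking the abstract's figures: 5 mm of wire, 52.8 ps nominal delay.
c0 = 299_792_458                 # speed of light in vacuum, m/s
v = 5e-3 / 52.8e-12              # propagation velocity, m/s
print(f"v = {v:.3g} m/s = {v / c0:.1%} of c")   # ~9.47e7 m/s, 31.6%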
 
John Larkin

Guy Macon said:
[...] Does anyone here know how fast and how far one can move a
signal across a die? Thanks!

Prop delay versus distance is a serious issue with layout on a Xilinx
FPGA. If the chip is 1 cm square, and you can easily get a ns of
routing delay, that's c/30 right there.
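That arithmetic is easy to verify: 1 cm per nanosecond is 1e7 m/s.

# 1 cm of routing in ~1 ns, compared against the speed of light.
c0 = 3e8                     # speed of light, m/s (rounded)
v = 1e-2 / 1e-9              # 1 cm per ns = 1e7 m/s
print(f"c/{c0 / v:.0f}")     # -> c/30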

John
 
LVMarc

Guy said:
[...] Does anyone here know how fast and how far one can move a
signal across a die? Thanks!
RLC transmission line behavior occurs when the series loss component
is too high to "ignore"; the loss also adds delay to the propagation
time. The classic wave equation and its simplifications all use the
propagation constant; with resistive loss it is a complex number,
hence larger in magnitude for the same L and C, and hence slower...
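Made quantitative: the propagation constant of a lossy line is
gamma = sqrt((R + jwL)(G + jwC)), and the phase velocity is w/Im(gamma).
With R > 0 (and G neglected) the imaginary part exceeds w*sqrt(LC), so
propagation really is slower, dramatically so at frequencies where the
line is RC-dominated. A sketch with assumed per-metre values:

# Phase velocity of a lossy (RLC) line vs. the lossless 1/sqrt(LC).
# R, L, C are assumed on-chip-ish values, for illustration only.
import numpy as np

R = 2.0e5                        # ohm/m (assumed)
L = 2.5e-7                       # H/m (assumed)
C = 2.0e-10                      # F/m (assumed)
v0 = 1 / np.sqrt(L * C)          # lossless velocity, ~1.4e8 m/s

for f in (1e8, 1e9, 1e10):
    w = 2 * np.pi * f
    gamma = np.sqrt((R + 1j * w * L) * (1j * w * C))
    v = w / gamma.imag           # phase velocity at this frequency
    print(f"{f:.0e} Hz: v = {v:.2e} m/s ({v / v0:.0%} of lossless)")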

Marc
 
Robert Baer

Guy said:
[...] Does anyone here know how fast and how far one can move a
signal across a die? Thanks!
Better look at it as an RLC transmission line or as a lossy LC line.
See if you can then calculate the speed, in the same way one
calculates for a coax and then a PCB trace with ground plane.
Then do some measurements to see how the theory matches practice.
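The lossless baseline for that calculation is v = 1/sqrt(LC), which for
a uniform dielectric reduces to c/sqrt(er); that is where the familiar
60%-80%-of-c figures come from. The dielectric constants below are
typical assumed values:

# Lossless propagation velocity from the dielectric constant alone.
import math

c0 = 2.998e8                             # speed of light, m/s
for name, er in (("PE coax", 2.25),
                 ("FR-4 stripline", 4.4),
                 ("on-chip SiO2", 3.9)):
    v = c0 / math.sqrt(er)
    print(f"{name:15s}: v = {v:.2e} m/s ({v / c0:.0%} of c)")

Adding the series R on top of this gives the lossy-line calculation
sketched after Marc's post above.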
 
Robert Baer

Guy said:
[...] "The nominal wire delay is measured to 52.8ps corresponding to
32% of the velocity of light in vacuum."
I sure as all heck do not call 32% of C(vac) as being anywhere "near
velocity-of-light delay".
 
Phil Hobbs

Robert said:
I sure as all heck do not call 32% of C(vac) as being anywhere "near
velocity-of-light delay".

Lines with repeaters run at about c/10.
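A crude model reproduces that figure: splitting a wire into repeated
segments of length l turns the quadratic Elmore delay into
N*(t_rep + 0.38*r*c*l^2), minimized at l = sqrt(t_rep/(0.38*r*c)). The
repeater delay and the r and c values are assumptions, not measured:

# Effective velocity of an optimally repeated RC wire.
import math

r, c = 2.0e5, 2.0e-10        # ohm/m, F/m (assumed, as in earlier sketches)
t_rep = 20e-12               # delay of one repeater, s (assumed)

l_opt = math.sqrt(t_rep / (0.38 * r * c))       # optimal segment length
delay_per_m = (t_rep + 0.38 * r * c * l_opt**2) / l_opt
v = 1 / delay_per_m
print(f"segment {l_opt*1e3:.1f} mm, v = {v:.2e} m/s = c/{3e8 / v:.0f}")

With these numbers the optimum comes out near 1 mm per segment and about
c/10, in line with Phil's figure.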

You can get more bandwidth on the fatwire levels (high in the stack),
basically because the capacitance per unit length is the same, and the
resistance goes down. Of course, the number of wires available goes
down too, and you have to get through all the lower wiring levels to get
up to the fatwires. This leads to wireability problems.

In something like a highly multicore processor, you need lots and lots
of fast wires. There's a current DARPA program called TELL (for terabit
electrical links at low power, or something like that). TELL is all
about finding out how far electrical links can be pushed (and therefore
at what point you have to go to optical links or accept reduced
performance). Since wiring capacitance tends to be independent of size
scaling, it's really hard to get below 2 pF/cm, so to save power, links
need to use very low voltage, at which point you get into nasty problems
with noise, drift, offset voltages, and crosstalk.
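The power argument is just C*V^2 arithmetic. Using the 2 pF/cm from the
paragraph above, with an assumed bit rate and activity factor:

# Dynamic power of one 1 cm link wire, P ~ (1/2)*alpha*C*V^2*f.
C_wire = 2e-12 * 1.0         # 2 pF/cm (from the post) x 1 cm of wire
f = 3e9                      # bit rate, Hz (assumed)
alpha = 0.5                  # activity factor (assumed)

for V in (1.0, 0.2):
    P = 0.5 * alpha * C_wire * V**2 * f
    print(f"V = {V:.1f} V: {P * 1e3:.2f} mW per wire per cm")

Dropping the swing from 1 V to 0.2 V cuts the per-wire power 25x, which
is exactly why the low-voltage noise, offset, and crosstalk problems
have to be faced.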

This gets especially difficult starting with the 32 nm node (iirc
again), because the threshold voltages of the FETs are hard to
control...it turns out that you have to worry about statistical
variations in the number of dopant atoms in the FET channel. A 30-nm
cube of silicon, doped to 10**20 per cc, contains 2700 dopant atoms. If
a chip has a billion transistors, you'll have lots of 6-sigma outliers,
which will be off by ±300 atoms, or 11% of nominal, which causes a
nasty threshold voltage shift. Smaller devices get worse fast.
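Those numbers check out if the dopant count is treated as
Poisson-distributed (sigma = sqrt(N)):

# Dopant-count statistics for a 30 nm cube doped to 1e20 atoms/cc.
import math

n = 1e20                     # dopant density, atoms per cm^3
side = 30e-7                 # 30 nm expressed in cm
N = n * side**3              # expected atoms in the channel volume
sigma = math.sqrt(N)         # Poisson standard deviation
print(f"N = {N:.0f}, sigma = {sigma:.0f}, "
      f"6 sigma = {6 * sigma:.0f} atoms = {6 * sigma / N:.1%} of nominal")

Prints N = 2700 and a 6-sigma spread of about 312 atoms, roughly the
11% Phil quotes.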

The wire guys think they can overcome these problems, and maybe they
can...but my money says that we'll see on-chip optical signalling in the
next 10 years. (I also have to try keeping my management convinced.) ;)

Cheers,

Phil Hobbs
 
Robert Baer

Phil said:
[...] The wire guys think they can overcome these problems, and
maybe they can...but my money says that we'll see on-chip optical
signalling in the next 10 years.
Forgive me for a rather nasty question.
What is this phoney push for more junk on a piece of silicon to
support what used to be relatively simple applications?
What was wrong with KISS?
In a personal computer, one does not need 2^10 core CPUs, or even
dual core; any CPU speed over 1 GHz is wasted, and for 99+% of uses Win98Se
is more than good enough.
Now, if one gets into graphics (read: games, design PCBs or other
complex artwork), then more speed becomes useful and Win2K becomes a
better choice.
Oh, you say, we "need" dual (or quad) core for graphics.
What the hell is that large graphics chip on the fancy video card
for? Boat anchor?
In fact, what good was the MMX instruction set, since the sound
card already supported those functions?
On a cell phone guess what - its purpose is to send and receive
calls, period.
Want to do something else like portable music - players have been
around for over 10 years that do that; they just get smaller and store more.
Etc etc and etc (courtesy of Yul Brynner in The King and I).
 
Guy Macon

Robert said:
[...] In a personal computer, one does not need 2^10 core CPUs, or
even dual core; any CPU speed over 1 GHz is wasted, and for 99+% of
uses Win98Se is more than good enough.

You are assuming that every computer is a PC. Some computers
are not PCs. Some computers are routers with 48 separate gigabit
ethernet ports and a requirement to inspect every packet coming
in from every port. Some are part of large web server farms that
handle all of the searches on Google or all of the bids on eBay.
doing simulations of complex physical systems. Some are
doing the rendering for the next Jurassic Park movie. And some
are at Supernews, handling your posts and the posts of millions
of others.
 
Joel Koltner

Robert Baer said:
Forgive me for a rather nasty question.
What is this phoney push for more junk on a piece of silicon to support
what used to be relatively simple applications?

It's not a phoney push; there are plenty of applications where there will
simply never be enough CPU horsepower available to solve them... at least
not using technology that resembles anything like what we have today.

Even for "simple" problems such as PCs running word processors, web
browsers, etc., CPU horsepower helps a great deal: the average person
today can walk up to pretty much any modern PC and get it to do "useful"
things, whereas 20 years ago, even if you were a skilled, e.g., IBM PC
user, you couldn't just walk up to an Apple II or Commodore 64 and do
"useful" things without a fair amount of instruction.
In a personal computer, one does not need 2^10 core CPUs, or even dual
core; any CPU speed over 1 GHz is wasted, and for 99+% of uses Win98Se
is more than good enough.

That may be true for you, but not everyone feels the same. 20 years ago
I'm sure I could have found someone saying that any CPU speed over
10 MHz was wasted, no one needed more than 640 KB (hmm... there's an
infamous quote!), and for 99+% of uses DOS worked fine.
On a cell phone guess what - its purpose is to send and receive calls,
period.

The signal processing in a modern cell phone is quite impressive -- hundreds
of MIPS go into it.

---Joel
 
Phil Hobbs

Robert said:
Forgive me for a rather nasty question.
What is this phoney push for more junk on a piece of silicon to
support what used to be relatively simple applications?

Now, Robert, be nice. ;)

For one thing, IBM (where I work) doesn't even make PCs anymore.
What was wrong with KISS?
In a personal computer, one does not need 2^10 core CPUs, or even dual
core; any CPU speed over 1 GHz is wasted, and for 99+% of uses Win98Se is
more than good enough.

Depends on your problem set. I have a 14-processor Opteron cluster in
my lab, used only by me, for electromagnetic simulation and device
design. I put it together for a song (about $12 altogether), but some
of the problems I use it for can't be solved on a machine with less than
30 GB of RAM. Google probably isn't going to try to run on a single
uniprocessor box of any speed whatsoever.

[...] Etc etc and etc (courtesy of Yul Brynner in The King and I).

I'm glad you're happy with what you have. I use computers of varying
ages too...my office machines are 4 and 10 years old respectively, and
they're both dual-processor SMPs, because I push them pretty hard
sometimes. I've been writing multithreaded code since OS/2 2.0 came out
in 1992. I also like using old apps--for instance, Wordperfect 5.1+ for
DOS *flies* on a modern machine.

On the other hand, there are enough customers for the fastest machines
(who know very well what they need) to keep me in beer and skittles,
anyway. I like doing things that are useful and fun. Why do you do
what you do?

Cheers,

Phil Hobbs
 
Hal Murray

Guy Macon said:
[...] And some are at Supernews, handling your posts and the posts
of millions of others.

And some (many?) run bloatware which eats up CPU cycles
and memory and disk and ....
 
John Larkin

Phil Hobbs said:
[...] On the other hand, there are enough customers for the fastest
machines (who know very well what they need) to keep me in beer and
skittles, anyway.

It's ironic that most of the compute power in the world goes to
gaming. The most compute-intensive thing we do, in fact the only
compute-intensive thing we do, is FPGA place-and-route. Design-rule
checking the most complex PC board we make takes about 5 seconds on a
standard-performance PC. The rest of what we do is dominated by our
DSL rate.

Even Spice usually runs fast. I guess EM simulation could be slow, but
we rarely do that, thank goodness.

Intel must be running scared; some day PCs will be good enough and
become as exciting as toasters, and $5 Taiwanese CPUs will be
powerful enough.

John
 
Joel Koltner

John Larkin said:
Intel must be running scared; some day PCs will be good enough and
become as exciting as toasters, and $5 Taiwanese CPUs will be
powerful enough.

You can bet that they do nothing but encourage Microsoft to build OSes that
consume vast quantities of CPU power performing, e.g., animation,
transparency, 3D effects, etc. The IT "industry" also seems to encourage
greatly increasing PC resource usage -- it's very common today that computers
in larger companies run on-access virus scanners and create backups of every
single file as soon as it's re-saved.

Intel does have a pretty smart pricing strategy -- the prices of their
new CPUs drop almost like clockwork, to keep them competitive with
offerings from, e.g., AMD, VIA, etc. AMD was doing well for a while
with their Athlon CPUs, but Intel came back with cheap dual-core units
and now AMD is back to being not much better than an "also-ran."

There is plenty of emphasis these days on "performance per MIP," which
is a good thing for just about everyone -- large data server farms care
about power consumption just as much as battery-powered laptop users do.
 
JosephKK

Joel Koltner [email protected] posted to
sci.electronics.design:
It's not a phoney push; there are plenty of applications where there
will simply never be enough CPU horsepower available to solve them...
at least not using technology that resembles anything like what we
have today.

Actually the question is more an ongoing issue of what is currently
affordable. A typical new desktop has more (and faster) memory,
disk, and compute power than a 1960's (or even a 1970's)
supercomputer.
Ever for "simple" problems such as PCs running word processors, web
browsers, etc., CPU horsepower helps a great deal: The average
person today can walk up to pretty much any modern PC and get it to
do "useful" things, whereas 20 years ago even if you were a skilled,
e.g., IBM PC user you couldn't just walk to an Apple II or Commodore
64 and do "useful" things without a fair amount of instruction.

Let's see, 1987: I could make all three of them (as well as some
flavors of Unix workstation) stand up, beg, roll over, and do most
anything I wanted of them.

Maybe, if you like being infected with various forms of malware.
That may be true for you, but not everyone feels the same. 20 years
ago I'm sure I could have found someone saying that any CPU speed
over 10 MHz was wasted, no one needed more than 640 KB (hmm...
there's an infamous quote!), and for 99+% of uses DOS worked fine.

What part of that do you not understand is ridiculous hyperbole?
 
JosephKK

Joel Koltner [email protected] posted to
sci.electronics.design:
[...] Intel does have a pretty smart pricing strategy -- the prices
of their new CPUs drop almost like clockwork, to keep them
competitive with offerings from, e.g., AMD, VIA, etc. [...]

Just remember, monopolies respect nothing but their own power.
Do you really want Intel to become a monopoly? What about
megasloppysoft?
 
Robert Baer

Guy said:
[...] Some computers are routers with 48 separate gigabit ethernet
ports... some are part of large web server farms... some are doing
simulations of complex physical systems... some are doing the
rendering for the next Jurassic Park movie.
Specialized applications need specialized hardware & firmware to
handle their particular needs.
Routers and webservers are very good examples of tossing software at
a problem just because a "fancy PC" has quad core, 1+ GHz FSB, etc.
They really should have specialized hardware & firmware; the more in
hardware, the more robust (read: harder for hackers to crack and/or
overwhelm).
Complex simulations point to vector processing, and the "latest and
greatest" PC ain't nowhere close to that.
Real fancy graphics for a movie points to one or more high-end
(video) graphics cards, perhaps controlled by a PC-like
multiprocessor.
Etc.
 
Robert Baer

Joel said:
[...] The average person today can walk up to pretty much any modern
PC and get it to do "useful" things, whereas 20 years ago, even if
you were a skilled, e.g., IBM PC user, you couldn't just walk up to
an Apple II or Commodore 64 and do "useful" things without a fair
amount of instruction. [...]
A long time ago, the PC/XT came out, and not too long afterwards
there were database programs, spreadsheet programs, and word
processor programs.
I know that at least one of those spreadsheet programs was used by an
accountant to set up and run a company with four or five separate
divisions.
He started knowing nothing about computers and, with no separate
help, in two months had a complete system, with reports, running
flawlessly.
Word processing?
One WP simulated the Wang that was an "industry standard", so a user
needed no re-training moving between the two systems.
Others were also built on previous WP practices.
So, in many cases, a "fair amount of instruction" was not needed.
 
Robert Baer

Phil said:
[...] I have a 14-processor Opteron cluster in my lab, used only by
me, for electromagnetic simulation and device design. I put it
together for a song (about $12 altogether), but some of the problems
I use it for can't be solved on a machine with less than 30 GB of
RAM. Google probably isn't going to try to run on a single
uniprocessor box of any speed whatsoever.
** Most definitely not home number crunching.
[...] I like doing things that are useful and fun. Why do you do
what you do?
** I run three OSes: DOS/Win3.11 for files that were generated back
in the DOS daze, as well as for projects that almost nothing else
will do; Win98Se for 99+% of my offline and online work (totally
immune to some of the more current hacks, and its user base is too
small to be a target); and Win2K for the "fancy" stuff like
CorelDraw, Spice, PCB work and, on occasion, multi-million-digit
software work.
My other computer is an older P2-266 running DOS almost 100% of the
time, but it also supports the other two OSes. It is used mainly to
run an old A/D board for datalogging, via custom programs written in
BASIC and compiled for use.
 