
How to create an equation to solve for resistance in this circuit

ArgonautQuest
I am very much a beginner in the world of electronics and I just bought an Arduino board. I am trying to start simple so I can understand things. Right now, I'm trying to figure out how to measure the resistance of a resistor by following the steps found on this web page. On this page, you can see a diagram showing how the circuit is set up:
[Circuit diagram from the linked page: Vin applied across R1 and R2 in series, with Vout taken across R2.]

I set my circuit up just like this but I don't quite understand how they came up with this equation:
[Image of the equation: Vout = Vin x R2 / (R1 + R2)]

To solve for R1 (the unknown resistor).

Can someone help me understand how you come up with the equation Vout = Vin * R2/(R1+R2)? There is a description on this page, but after reading it I'm still a little confused. I know it has something to do with Ohm's law, in which V = IR, but how does the "I" term play a role here?

I'm not expecting a full blown explanation but if someone can just give me a hint on how to get from Ohm's law to that equation, I would very much appreciate it!

Thanks.
 

ArgonautQuest
ETA - sorry I forgot to finish writing the title but don't know how to change it!
 

ArgonautQuest
Haha thank you. I was going to say, "How to create an equation to solve for resistance in this circuit." Or something like that. :)
 

hevans1944
...
I set my circuit up just like this but I don't quite understand how they came up with this equation:
[Image of the equation: Vout = Vin x R2 / (R1 + R2)]

To solve for R1 (the unknown resistor).

Can someone help me understand how you come up with the equation Vout = Vin * R2/(R1+R2). ...
The current through R1 and R2 (in series) from the voltage source Vin is i = Vin / (R1 + R2) (1)
The voltage Vout = i x R2 (2)
Both of the above are algebraic statements of Ohm's Law, which can be written E = I x R, or I = E / R, or R = E / I.
Substituting (1) for i in (2) yields: Vout = Vin x R2 / (R1 + R2) (3)

What you really want to know is the resistance of R1 expressed in terms of Vout, Vin, and R2. It is assumed you know the values of Vin and R2, and the Arduino will measure Vout. Rearranging equation (3) yields:
R1 = R2 x (Vin / Vout - 1) (4)

Note that the accuracy of measuring resistance with this method depends on the accuracy of the voltage source Vin, the tolerance of resistor R2, and the accuracy of the analog-to-digital conversion performed by the Arduino. It's also not a linear function of resistance versus measured voltage, but that is a separate issue.
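To make that concrete, here is a minimal sketch of what the Arduino side might look like, assuming Vin = 5.0 V, a known R2 of 10 kΩ between Vout and ground, and Vout wired to analog pin A0 (all values you would adjust to match your own circuit):

```cpp
// Minimal voltage-divider ohmmeter sketch (illustrative only).
// Assumed wiring: Vin = 5.0 V feeds R1 (unknown), R1 connects to R2 (known,
// 10 kOhm here), R2 goes to ground, and the R1/R2 junction (Vout) goes to A0.

const float VIN     = 5.0;      // divider supply voltage, volts
const float R2_OHMS = 10000.0;  // known reference resistor, ohms (adjust to yours)
const int   ADC_PIN = A0;       // analog input wired to Vout

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(ADC_PIN);                // 0..1023 on the 10-bit ADC
  float vout = raw * (VIN / 1023.0);            // convert counts to volts

  if (vout > 0.01) {                            // avoid dividing by (nearly) zero
    float r1 = R2_OHMS * (VIN / vout - 1.0);    // equation (4): R1 = R2 * (Vin/Vout - 1)
    Serial.print("R1 = ");
    Serial.print(r1);
    Serial.println(" ohms");
  } else {
    Serial.println("Vout too low to measure (open circuit?)");
  }
  delay(1000);
}
```

Roughly speaking, the computed R1 resolves best when it is in the same ballpark as R2, which is why the choice of reference resistor matters.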
 

WHONOES
If you just want to measure a resistor then buy a DMM. But then that will teach you nothing about the Arduino or the equation you seek.
 

ArgonautQuest
The current through R1 and R2 (in series) from the voltage source...

Thanks very much for this detailed reply. It makes sense but also brings up a few more questions, which I think I can find the answer to with some additional searches. Thanks a lot.

I'm heading over to... http://www.basicsofelectricalengineering.com/ because it seems like a reasonable starting point, based on the URL. ;)

-Jason
 

Cannonball
Lift one end of the resistor and measure it with an ohmmeter.
 

hevans1944
I'm trying to figure out how to measure the resistance of a resistor by following the steps found on this web page.
You have stumbled onto just about the worst web site, Instructables, for any beginner to learn electronics. While technically correct in approach (measure the voltage across the unknown resistor and divide by the measured current through it), the author describes the required process poorly, and only a vague description is provided of the hazards of choosing (or guessing) a "reference resistor" with too high or too low a resistance. This web page offers mainly a "monkey see, monkey do" approach to "learning" electronics. It is typical of Instructables pages, which IMHO are not very good for beginners and often contain unedited errors.

Its a "good thing" that you subsequently landed here at ElectronicsPoint, because we have members (some quite knowledgeable in the field of electronics and microprocessors) who can lend a helping hand to beginners. So, welcome to Electronics Point! And please do investigate the links that moderator @bertus provided. It is important that you establish a back-and-forth dialog with responders to your questions, as well as looking elsewhere for further clarification if necessary.

I'm heading over to... http://www.basicsofelectricalengineering.com/ because it seems like a reasonable starting point, based on the URL.
This is a link to an uncurated blog that appears to be a collection of articles attached to a "search engine" that always returns a page from the blog. Nothing wrong with that, but the pages seem to be full of examples with little or no theory. Unless you learn the theory, the why of how things electrical and electronic work, examples are of little value. And please don't fall for "click-bait" URL titles. Anybody can purchase a domain name with almost any "title" they wish (I think obscenities are forbidden), as long as it isn't in use by someone else.

You may not be aware that ohmmeters, which are an integral part of multimeters, have changed a lot since the introduction of digital display technology controlled by microprocessor chips. The latter are usually permanently embedded in the circuit board of the meter, connected to the board's conductive traces by tiny wires ultrasonically welded to pads on the bare chip die, with the other end of each wire soldered to copper pads on the circuit board. This whole mess (presumably after testing for functionality, but maybe later...) is then covered with a blob of black epoxy, rendering the microprocessor inaccessible for troubleshooting or repair. Fortunately, the microprocessor seldom fails.

The microprocessor brings several advantages to the multimeter's resistance-measuring function. Perhaps the most important is that it provides a constant current output to the two probes. Secondly, it reads the voltage drop between the two probes and displays this directly as a resistance value. Remembering that R = E / I, if I is a constant current, say 1 mA, then E measured in millivolts will represent resistance R measured in ohms. No math necessary: the "milli" in millivolts cancels with the "milli" in mA, leaving just volts divided by amperes, which is ohms.

It would be a big improvement in your Arduino-based measurement if you were to discard the "reference resistor" and replace it with a constant current source. Since the full-scale input of the Arduino is 5 V, corresponding to a digital output value of 1023, a 1 mA constant current source would produce 5.000 V across a 5 kΩ resistor. Your software "sketch" will have to multiply the digital output values by 5000/1023 = 4.888 to obtain numeric readings directly representing ohms.

You can also play around with other values of constant current, use different conversion factors than 4.888, and possibly (with higher current sources) attenuate the voltage drop across the unknown resistor to stay within the zero to five volt input limit of the Arduino. Some versions of Arduino have a 12-bit instead of a 10-bit analog-to-digital converter, allowing four times better resolution by extending the full-scale counts from 1023 (10-bit A/D) to 4095 (12-bit A/D). And you can add external A/D converters, some with as many as 24 bits of resolution and programmable gain. Lots of options for you once you get the "knack" of working with Arduinos. You may even want to explore the use of other microprocessors, such as the PIC series from Microchip or the TMS320 series from Texas Instruments.
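As an illustration of the arithmetic only (the 1 mA constant-current source is extra hardware you would have to add, and the pin and supply values here are just placeholders), the sketch then reduces to a single scale factor:

```cpp
// Illustrative only: assumes a hypothetical external 1 mA constant-current
// source drives the unknown resistor, whose voltage drop is wired to A0 on
// a 5 V, 10-bit Arduino. Full scale (1023 counts) then corresponds to 5 kOhm.

const float COUNTS_TO_OHMS = 5000.0 / 1023.0;   // ~4.888 ohms per ADC count at 1 mA

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(A0);              // 0..1023
  float ohms = raw * COUNTS_TO_OHMS;     // with I = 1 mA, millivolts read directly as ohms
  Serial.print("R = ");
  Serial.print(ohms);
  Serial.println(" ohms");
  delay(1000);
}
```

Swap in 4095 for 1023 on a 12-bit board, or recompute the factor for whatever current your source actually supplies.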

And if you want to measure resistances larger than 5kΩ... this will lead you into the rabbit hole of analog electronics. A very nice adventure, Alice, and perhaps the beginning of a lifelong career, or a new hobby! Have fun! Watch out for that wascully white wabbit though, and his disappearing Cheshire cat.
 

ArgonautQuest
...You can also play around with other values...

And if you want to measure resistances larger than 5kΩ... this will lead you into the rabbit hole of analog electronics. A very nice adventure, Alice, and perhaps the beginning of a lifelong career, or a new hobby! Have fun! Watch out for that wascully white wabbit though, and his disappearing Cheshire cat.

Thanks for all of the help! Okay so now I think I have a better handle on where to go for basic instruction and I'm working through it. I look forward to playing around a little more with the Arduino, especially given how I'm stuck at home for who knows how much longer. ;)

I also bought a Raspberry Pi to go with an RC car kit. Oh, and with the Arduino I got some extra parts like digital readouts, switches, LEDs, resistors, etc. I'm a civil engineer, and I'm finding that this stuff is really quite fun!
 

hevans1944
I also bought a Raspberry Pi to go with an RC car kit.
My oldest son had an RPi on his desk, along with a nifty plastic box to enclose it. When I expressed an interest in it, he picked it up and gifted it to me. I had just started playing around with Arduino Unos and the assorted shields available for same at Radio Shack, so the RPi was relegated to my box of other spiffy toys... something to play with later. I did download a version of Linux and installed same on an SD memory card, just to see if the RPi worked. Whew! Way too much processor for what I needed at the time! But it has internet capability and interfaces to standard monitors and keyboards and stuff, so I may find time to seriously play with it later this year.

In the meantime, I discovered Microchip PIC microprocessors and played with a couple of those for almost a year, mainly learning how to take advantage of their ultra-low-power "sleep" mode to conserve battery life. Putting a PIC to sleep is a piece of cake; getting it to wake up and do something useful before "sleeping" again is more difficult. It seems that PICs (Peripheral Interface Controllers) have been around for a long time, embedded in peripherals for mainframe computers: line printers, 9-track reel-to-reel tape drives, Hollerith card readers and card punches, and disk storage devices. I think what has made them attractive to the hobbyist now (besides their dirt-cheap prices) is the availability of on-board, non-volatile, programmable EEPROM (Electrically Erasable Programmable Read-Only Memory) for program storage. No expensive development system hardware necessary!

Apparently, I was one of the last to know about PICs, because my previous embedded computer projects either all used Intel 8-bit 8085 microprocessors with external, ultra-violet erasable EPROMs, or embedded "IBM compatible" personal computers.

As a civil engineer, you have probably noticed the revolution that electronics has made in your areas of expertise. GPS disciplined earth-moving machines as well as GPS-based surveying immediately come to mind. And of course no one drafts anything by hand anymore. Collaboration across global distances is commonplace. This is truly the Golden Age of Engineering. Even the sky is no limit anymore. Just ask Elon Musk.
 

ArgonautQuest
This is the first I've heard about PICs. I'll look it up.

I didn't even know what a microcontroller was until recently. Yesterday, I got the Arduino to display a number on a 4-digit LED display, which was surprisingly satisfying. It's one thing to write a program that prints output to a computer terminal, but this is different.

As a civil engineer, I feel my education didn't focus enough on the fundamentals of programming. One course in electrical engineering might have helped a lot, because I'm finding that the way my program interacts with the electrical components is helping me visualize what's going on at a lower level.
 

hevans1944
As a civil engineer, I feel my education didn't focus enough on the fundamentals of programming.
As an electrical engineer, I KNOW my education didn't focus enough on the fundamentals of programming. Apparently that kind of instruction was reserved for Computer Science majors. The only required course for me was one semester of FORTRAN. BASIC was available as an interpreted time-shared program on university terminals, but you were on your own as far as learning how to program with it. The computer sci weenies pooh-poohed BASIC as an "unstructured, toy" language. That didn't stop a whole army of hackers from writing games for it, or techs and engineers using it to solve real math problems. Even Microsoft gave its blessing by releasing Visual BASIC 6.0 for Windows. But it seemed to me that most of my FORTRAN course was devoted to learning how to properly format a line of code so the FORTRAN compiler would execute it, instead of barfing it back and refusing to continue. This was especially troublesome with input and output statements. Even if your program compiled correctly and executed, there was always the possibility that there was a logical or mathematical error embedded in the code that would cause the operating system to terminate execution. Stupid stuff like infinite loops and dividing by zero were always caught and quickly dispatched to a dump file, but you didn't find out about mistakes until late the next day or sometimes the next week.

Clearly this was no way to run a railroad or a human-friendly computer science department!

Neither compiled FORTRAN code, nor interpreted BASIC code, would accept any input or produce any output except on computer terminals, typically ASR33 teletype machines. You could theoretically punch BASIC paper tape programs on the teletype machines for later on-line execution, but I discovered that it was easier to use an on-line text editor and save the text file for later execution.

Then I found out that the FORTRAN compiler, which normally ran as a batch job from a punched card deck, could be invoked (late at night) from my ASR33 computer terminal, with program input taken from a text file saved on hard disk (precious storage in the 1960s!). This discovery eliminated many trips to a common room, across campus from where I worked, where a half-dozen or so Hollerith card keypunch machines were located for student use. Because FORTRAN was a required course for an engineering degree, there was usually a long line of students waiting to use the few available machines. However, no more submitting Hollerith decks for me. I programmed from the ASR33 in my office, invoked the FORTRAN compiler, and directed output back to my ASR33. Wash, rinse, repeat until everything was debugged and copacetic. I then directed the program output to the line printer located near the mainframe for compatibility with my classmates' submissions. I also punched the (now debugged) source card deck there too. The only problem with that was the card punch machine did not print anything across the top of the cards! My instructor wouldn't accept that. He claimed not to be able to read EBCDIC punched code and wanted to see the translation printed neatly across the top of each card in my program deck.

So I made friends with one of the night-shift computer "priests" (a grad student in actuality) to arrange to have my deck of cards sent through a card reader attached to a small IBM 360 mainframe generally reserved for computer science student use. This card reader had a ribbon and the capability to read the EBCDIC (Extended Binary-Coded-Decimal Interchange Code) keypunch code and translate that code into human-readable dot-matrix printer characters that it printed along the top of the card "on the fly" while the card was being read. So in the end, although I was able to program and debug from my office ASR33 terminal, I still had to turn in a Hollerith card deck with my program, as well as a printed copy of the run results.

In the Research Institute, where I worked as an electronics technician, we had two keypunch machines that the administrative staff used for some arcane purpose. Since they all went home promptly at 4:00 or 5:00 PM, I was always able to access one machine to punch my FORTRAN deck before I learned about the on-line work-around. These keypunch machines also had a typewriter ribbon that printed along the top edge of the card what the EBCDIC punched code underneath represented. This allowed a dropped box of cards that became inadvertently shuffled to be fairly quickly put back into the correct order. It also allowed fairly easy swapping out of cards that were incorrectly coded. So, the drill was this: get a class assignment to program something; write out the program on paper; take the paper across campus along with a box of blank Hollerith cards and punch the program onto the cards; submit the card deck for execution as a part of a batch FORTRAN job that would run later that evening when the time sharing network wasn't very busy. Come back the next afternoon or the next week to see the results of your effort. Blech!

About the time that I was learning how much I hated FORTRAN, Intel was developing microprocessors for Japanese pocket calculators, beginning with the 4004 and much later the 4040. I got into the game late with a project to embed a microprocessor in an x-ray-excited, photon-absorbing, electron-emitting spectroradiometer, also known as Electron Spectroscopy for Chemical Analysis (ESCA). This machine basically illuminated the surface of a specimen in vacuum with monochromatic x-rays and measured the energy of the photo-electrons that were emitted, said energy being characteristic of the chemical signature of whatever absorbed the x-ray photon.

This ESCA was an all-analog machine that the Surface Analysis Laboratory, which was right across the hall from the Electronics Laboratory where I worked, wanted to bring into the digital world for data acquisition and analysis by computer instead of lengthy time consuming hand interpretation of strip-chart ink recordings. The Intel 8080 was so new then that the vendor who sold us the system had the CPU on its own dedicated card, as it did with the static random access memory cards (1k bytes each!), and the peripheral controller ICs, which were NOT microprocessors like the PICs that I didn't even know existed. The entire "embedded" microprocessor was contained on a nineteen-inch wide by six-inch high rack full of cards, along with separate power supplies. A far cry from the way it is done today, but pretty much "state-of-the-art" for our little electronics laboratory.

Anyhoo, there was very little RAM (Random Access Memory) available for program storage and execution back in those days, although by the mid-1970s dynamic RAM (DRAM) was making major inroads and eventually replaced static RAM for everything but the highest-speed applications. One result of this shortage of memory was that I had to learn to program in 8080 assembly language to use what little RAM I had available as efficiently as possible. And I had no idea what that was (assembly language) or how to use it (to write programs). So I signed up for a computer science course titled "IBM 360 Programming in Assembly Language". I really needed to learn how to get close to the hardware, "banging on the bits" as it later came to be known. Turn things on and off at the bit level, like motors and lights and "stuff like that". But the course was aimed at very senior computer science majors who wanted to learn how to program mainframes using macro assembly language constructs, and mostly canned macros at that: little program snippets written by systems analysts for systems programmers.

Not exactly what I was looking for, but Intel provided plenty of reading material and I finally got the hang of it and even learned what a macro was and when to use one or a few of them. I even learned how to read machine executable code, which came in handy when IBM released the assembly source code for their BIOS (Basic Input Output System) used to "boot up" their personal computers. That decision to "open source" the BIOS was truly a godsend for the hacker international community. So-called "IBM compatible clones" sprung up like dandelions on a spring lawn in the suburbs within a year of the release date of the first genuine IBM Personal Computer! My very first home computer was one of these clones, and all the many that followed used the same building paradigm. It was many years later that I built my last computer, based on an Intel core i7 CPU. It was of course immediately obsolete the day I finished it and turned it on, but it works "gud enuf" for what this engineer needs to use it.

So now I've pretty much regressed back to the early days when TTL (Transistor-Transistor Logic) was King and you hooked up a few dozen TTL ICs to do some logical thing based on a few inputs and probably even fewer outputs. The way we would do it today is much better, for two reasons. First: the old-school method requires that your design be 100% correct, or else some usually expensive hardware changes become necessary. Second: if the customer decides to add or subtract "features" from a completed design... some hardware changes will be necessary. And those are guaranteed to be expensive.

With one or a few PICs in the mix, you get to keep most of the hardware and make almost any required changes in software. And all without the need for gigabytes of RAM, terabytes of rotating disk storage, or even an Internet connection (unless you really want and need one). Of course PICs keep getting bigger and more powerful each passing year, the older versions being replaced by newer and better versions, so you still have to swim like hell if you want to keep up. Or sit back and let the next generation of engineers and technicians take over the major swimming while you just dip your toe into the water once in a while just to see how it feels...

The mainframe computer I first learned to "program" on was an RCA Spectra 70 with a time-sharing operating system, which was intended by RCA to fill a perceived niche in the IBM line of mainframe computers available at the time. It was built and sold to another (unidentified) school that decided not to accept it for whatever reasons. RCA was therefore stuck with big iron that they couldn't move until the University of Dayton school of computer science expressed an interest. RCA practically fell all over themselves in selling the Spectra 70 to UD (at a loss, I heard) along with a coterie of engineers and techs to keep it running. By the time I graduated UD in 1978, the writing was on the wall for anyone who cared to read it. Mainframes were obsolete and expensive dinosaurs. Personal computers overtook and exceeded all except Cray supercomputers in performance, and beat virtually everything in terms of price (bang for the buck). I have heard that cooperative networks of personal computers can even out-perform a Cray.

So what's the "next big thing" you (didn't) ask? Implantable tissue-compatible bionics... the conjoining of living cells with nano-machines designed to augment or replace human organs. Everyone speculates that might be nanobots, little cell-sized robots programmed to perform a specific task, like removing the plaque from your clogged arteries without damaging the underlying tissue. That may happen eventually, but something simpler and more mundane is more likely. An artificial kidney or pancreas perhaps, or maybe a replacement for a damaged or detached retina. Someone who visits here at Electronics Point is working on an artificial muscle that will interface to the nervous system. Such a thing has all sorts of applications in prosthetic appliances, but also even better applications for robots with the flexibility and dexterity of human beings. Send a few hundred of those robots ahead to build habitats for humanity on planets the human race hasn't had time to terraform yet. Now that would be a civil engineering project for an adventurous sort to lead! Can't just let that many robots run around unsupervised, ya know? They might get ideas and want a planet of their own... for their own kind.

So, have some fun while practicing engineering. Cross over a little bit into electronics. Never stop learning. Seek new adventures beyond the next visible horizon. Live long and prosper! Buy and use a PIC for FUN!
 

ArgonautQuest
Interesting to hear how engineering school was for you. I'm from a little after that time, but we still got entire lectures walking through pages of FORTRAN code, which appeared to be more about formatting inputs and outputs than about the actual problem at hand. Needless to say, without a proper introduction to programming, we were all lost. Sadly, we all just kept using spreadsheets to do our work and re-inventing the same exact functions over and over again. Looking back, I realize that a lot of my work in college could have been enhanced by writing reusable scripts. I remember the chemical engineers were using MathCad, so I tried that for a while and realized it was a little better than Excel, but not much. It was the electrical engineering students that I noticed over there on the Unix stations in our computer lab. They all seemed like they knew what they were doing with their tackle boxes and pages of code. At our school, it was the chemical engineering students who seemed to have the best social life among us, so nobody paid much attention to the EE students.

After I get my 4-digit display working, I'm going to move to some basic sensors. Maybe PIC will be on my horizon soon. ;)
 

hevans1944
After I get my 4-digit display working, I'm going to move to some basic sensors.
Let us know if you need any help expanding from a single-digit display to four digits.

Counting the (optional) decimal point, each 7-segment display requires eight bits, so adding three more digits increases the number of bits to thirty-two. This is about where you need to start thinking about other approaches, lest you run out of output bits on the Arduino. Multiplexing the digits is the usual solution, but there are several implementation methods. One approach is to use four bits, set and cleared in sequence, to address and enable each of the four digits in rapid succession. That allows you to drive four digits with just twelve bits.

You could get by with just two bits for selecting one of the four digits, but a little external logic is required to "decode" those two bits and create four separate selection bits. Hardly worth the effort for just four digits, but maybe worth it for a dozen or so digits... like maybe for a beer pong scoreboard or a digital totalizer app or a frequency meter. If refreshed fast enough, flicker will not be a serious problem with just four digits.

Along the same lines, you can insert 8-bit latches between each numeric digit and eight Arduino output bits and load the latches sequentially. That eliminates any possible flicker problem caused by human persistence of vision and still only requires twelve output bits from the Arduino. You could also use seven-segment decoders between 4-bit latches and the 7-segment displays, further reducing the number of bits necessary to display four digits. More wiring though.
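If it helps, here is a rough sketch of the straight 12-pin multiplexing approach described above. The pin assignments, common-cathode wiring, and segment bit order are all assumptions on my part; a display module with its own driver chip would need different code entirely:

```cpp
// Rough sketch of 4-digit multiplexing using 12 Arduino pins (illustrative).
// Assumed hardware: a common-cathode display with 8 segment lines (a..g plus
// decimal point) and 4 digit-enable lines, each digit enabled by pulling its
// line LOW (typically through a driver transistor). Pin numbers are placeholders.

const byte SEG_PINS[8]   = {2, 3, 4, 5, 6, 7, 8, 9};   // a, b, c, d, e, f, g, dp
const byte DIGIT_PINS[4] = {10, 11, 12, 13};           // enables for digits 0..3

// Segment patterns for 0-9: bit 0 = segment a ... bit 6 = segment g
const byte PATTERNS[10] = {
  0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

int displayValue = 1234;   // the number to show

void setup() {
  for (byte i = 0; i < 8; i++) pinMode(SEG_PINS[i], OUTPUT);
  for (byte i = 0; i < 4; i++) {
    pinMode(DIGIT_PINS[i], OUTPUT);
    digitalWrite(DIGIT_PINS[i], HIGH);   // all digits off to start
  }
}

void showDigit(byte position, byte value) {
  byte pattern = PATTERNS[value];
  for (byte i = 0; i < 8; i++) {
    digitalWrite(SEG_PINS[i], (pattern >> i) & 1 ? HIGH : LOW);  // set segments
  }
  digitalWrite(DIGIT_PINS[position], LOW);    // enable just this digit
  delay(2);                                   // brief dwell; ~125 Hz refresh, no visible flicker
  digitalWrite(DIGIT_PINS[position], HIGH);   // disable before moving to the next digit
}

void loop() {
  int n = displayValue;
  for (byte pos = 0; pos < 4; pos++) {   // walk from the rightmost digit leftward
    showDigit(3 - pos, n % 10);
    n /= 10;
  }
}
```

Each pass through loop() lights the four digits one after another for about 2 ms apiece, and persistence of vision does the rest.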

Yeah, next to physics majors, EE students got very little respect at my little Catholic university. It was (and still is) basically a liberal arts college and a showcase for the University of Dayton Flyers NCAA Division I Basketball Team. But I did get a good education there. It just took me ten years instead of the usual four to graduate. That was the down-side. The up-side was UD paid for my tuition since I was a full-time employee. My only major expense was textbooks, so I graduated without any school-loan debt and only used my GI Bill benefits during the last year or two of study.

There was a war going on (in Vietnam) when I started matriculating at UD in 1968, and I thought I was "doing my part" to support our troops by foregoing the government handout. What an idiot I was! My GI Bill benefits were not even a drop in the bucket compared to what was spent on that war in just one day... or maybe one minute. It was expensive in more ways than just money spent.

Some of my younger co-workers had draft deferments, because UDRI was performing work on "essential defense contracts." I was, of course, immune to the draft having served my four years in the Air Force. I could work anywhere without fear of losing a draft deferment. Back in those days, a draft deferment was like a gold "get out of war free" card. I stayed for the free tuition and then left a year after graduating.

You have maybe heard of working your way up from the mail room, just to get "in" with a good company and "learn the ropes" so to speak? Well, that does not apply in academia. Didn't matter how many degrees I might have gotten, I started there as an electronics technician and that's how faculty, staff, and fellow workers perceived and treated me... even after I graduated. I spent the rest of my career in the "real world" after leaving UDRI instead of in an "ivory tower," always as an employee of government contractors. Well, some people would say I just traded one tower for another, but I did have a lot of fun and (most of the time) a lot bigger budget and salary. Yeah, I'm a war monger, but I never made it big like Daddy Warbucks.:rolleyes:


When I got my first personal computer, a clone of the IBM XT, a few years after graduation (BEE, 1978, UD), Excel was only a gleam in Bill Gates' eye. The biggie spreadsheet program that was taking the business world by storm was Lotus 1-2-3. This was heavily copied and distributed, despite NOT being freeware or shareware, because its "copy protection" consisted of an easily hacked floppy disk that was used to install Lotus on a PC hard disk. Sometime in the 1980s, while Microsoft was developing Windows and MS-DOS was still running most PC software, the expression "DOS ain't done 'til Lotus won't run" became a mantra at Microsoft. It was many years later that I finally "upgraded" to Windows XP Professional, running on an Intel Pentium IV CPU with enough disk space and dynamic random-access memory to make running Excel practical.

I never trusted spreadsheets, especially other people's spreadsheets, because there was no easy way to verify the accuracy of their results. It's a real PITA to discover (and test) the underlying formulas. I did purchase an early version of MathCad and was very disappointed to find I couldn't import its nicely formatted formulas into my Corel word processor. What a waste of money that was. In the meantime, my programmable HP calculator did almost all my math chores until it was stolen from the saddle bag of my motorcycle one afternoon. I never did replace it, but learned to "get by" with just a Casio fx-115W scientific calculator, which was not programmable but a helluva lot less expensive. If I want programmatic ability, I will write a program for my PC... probably in Microsoft Visual BASIC 6.0 since it is now free.:D
 