Maker Pro

getting 16 bit from 8 bit ADC in software

bazhob

Hello there,

Unfortunately I'm not too much into electronics, but for a
project in Computer Science I need to measure the output
from an op-amp integrator circuit with a precision of 16 bits
using a standard 8-bit ADC. The integrator is reset every
500 ms; within that interval we have a ramp rising from 0 V
up to a maximum of about 5 V, depending on the input signal.

The paper describing the original experiment (which I'm trying to replicate)
contains the following note: "A MC68HC11A0 micro-controller operated
this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."

Can someone please point me to more information about this
technique of doubling accuracy in software by dividing the
measured interval by 256? For a start, what would be the
technical term for that?

Thanks a lot!
Toby
 
Tim Wescott

bazhob said:
-snip-
Can someone please point me to more information about this
technique of doubling accuracy in software by dividing the
measured interval by 256? For a start, what would be the
technical term for that?

* Your instructor is cruel and evil, or doesn't really
understand what he's asking, or he really cares about
you and is sneaky to boot.

* A 256-step to 65536-step increase in accuracy is not
a doubling of accuracy; it is an increase of 256 times.

* You probably aren't increasing the accuracy by anything
close to 256 times.

* You may be increasing resolution by 256 times, though,
and sometimes that's good enough --
http://www.wescottdesign.com/articles/sigmadelta.html

You can make a pretty good ADC by integrating and timing how long it
takes to get to a limit (search on "single-slope ADC"). You can improve
this by integrating up and then down again, timing each segment, and
doing some math (search on "dual-slope ADC").

You're being asked to build something akin to a single-slope ADC. You
should be able to get pretty good resolution by sampling your ADC every
one or two milliseconds then doing a linear curve fit to get your
"enhanced" data.

Note that while you are most certainly going to increase resolution and
accuracy, your accuracy is limited by a heck of a lot more than your
resolution is; you can probably count on getting a good honest 16 bits
of resolution, and probably significantly enhanced nonlinearity, but you
will bump into the other things that limit your accuracy.

You can investigate your resolution enhancement by simulating the thing
in a good math package like MatLab, MathCad, Maple, SciLab, etc. I
highly recommend this. Verifying accuracy means understanding all of
the accuracy drivers in the ADC and investigating their effect on
the accuracy of your result. This is a worthwhile goal if you want
to make a thesis out of it.
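
A simulation of the resolution you actually get needn't be fancy; a toy
Monte Carlo like the one below (same assumed 2 ms sampling as the sketch
above, plus a couple of millivolts of noise I invented) is enough to put
a number on it:

import numpy as np

rng = np.random.default_rng(0)
lsb = 5.0 / 256.0
t = np.arange(0.0, 0.5, 0.002)          # assumed 2 ms sampling over 500 ms

def estimate_final(true_final, noise_rms=0.002):
    """Quantise a noisy ramp to 8 bits and fit a line to recover the endpoint."""
    ramp = true_final * t / 0.5 + rng.normal(0.0, noise_rms, t.size)
    readings = np.clip(np.round(ramp / lsb), 0, 255) * lsb
    slope, offset = np.polyfit(t, readings, 1)
    return slope * 0.5 + offset

true_vals = rng.uniform(0.5, 4.5, 2000)          # random "unknown" endpoints
errors = np.array([estimate_final(v) - v for v in true_vals])

print(f"RMS error     : {errors.std()*1e3:.3f} mV")
print(f"in 8-bit LSBs : {errors.std()/lsb:.3f}")
# Effective resolution: how many bits would a quantiser with this RMS error have?
print(f"effective bits: {np.log2(5.0 / (errors.std()*np.sqrt(12))):.1f}")

None of this says anything about accuracy, only about the spread of the
estimator under the assumptions you feed it.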
 
Clifford Heath

bazhob said:
"A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."

This can, at most, increase signal accuracy by a factor of 16
(four bits), being the square root of 256. And that's assuming
a number of things about the behaviour of the converter.

To increase accuracy to 16 bits, you need to take at least 256*256
samples, assuming a random distribution of quantisation errors.
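
A quick numerical check of that square-root rule (toy sketch, assuming
independent, uniformly distributed quantisation errors):

import numpy as np

rng = np.random.default_rng(1)
q = 1.0                                        # one 8-bit LSB, arbitrary units
err = rng.uniform(-q/2, q/2, (20_000, 256))    # 256 independent errors per trial
single = err[:, 0].std()                       # RMS error of one reading
averaged = err.mean(axis=1).std()              # RMS error after averaging 256 readings
print(single / averaged)                       # ~16, i.e. sqrt(256): four extra bits, not eight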

Clifford Heath.
 
Joerg

Hello Toby,
... "A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."

Have the authors validated that claim with some hardcore measurements,
such as detecting and restoring a signal that was, say, 13 or 14 dB
below 5Vpp?

Regards, Joerg
 
Tim Wescott

Clifford said:
This can, at most, increase signal accuracy by a factor of 16
(four bits), being the square root of 256. And that's assuming
a number of things about the behaviour of the converter.

To increase accuracy to 16 bits, you need to take 256*256 samples
at least, based on random distribution of quantisation errors.


It's not just quantization, though -- there should be timing information
available from the slope; merely averaging isn't all there is to it.
 
mike

Tim said:
-snip-
You're being asked to build something akin to a single-slope ADC. You
should be able to get pretty good resolution by sampling your ADC every
one or two milliseconds then doing a linear curve fit to get your
"enhanced" data.

I thought I understood until I read this.
If the signal is stable and the A/D is stable, you should get the SAME
reading every time??? To get improved ACCURACY or RESOLUTION, don't you
first need a system that's stable to much better than the quantization
interval, and then to perturb the input with a signal of known statistics?

If you're counting on system instability or (uncontrolled) noise to do
the deed, you're just collecting garbage data. yes? no?
mike
 
John Perry

mike said:
Tim said:
bazhob said:
Hello there,

...
The paper of the original experiment (which I'm trying to replicate)
contains the following note: "A MC68HC11A0 micro-controller operated
this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."

I don't know whether the author was being sloppy with his terminology or
just didn't know what he was talking about, but this is altogether wrong
as stated.

Under certain circumstances you can get 16-bit _precision_ by averaging
many readings from an 8-bit ADC, but you can never get _accuracy_
better than your reference, no matter what tricks you try.

Now, if you're allowed to add a reference known accurate to that level,
or you can add a 16-bit _accurate_ DAC, and you can add an amplifier to
amplify your differences, and, finally, your signal is stable long
enough that you can get through all this process before it changes, you
can eventually cobble together a measurement to that accuracy.

Note the long list of "if's". Especially the signal stability. That
alone can make it impossible to get even a measurement to 16-bit
_precision_, much less 16-bit _accuracy_.

Some of the other comments are also appropriate.

John Perry
 
Jonathan Kirwan

bazhob said:
Can someone please point me to more information about this
technique of doubling accuracy in software by dividing the
measured interval by 256? For a start, what would be the
technical term for that?

Aside from the other comments, you might look at the technique described for the
Burr Brown DDC101, I believe. It might add another thing to consider.

Jon
 
Tim Wescott

mike said:
Tim said:
bazhob said:
-snip-

I thought I understood until I read this.
If the signal is stable and the A/D is stable, you should get the SAME
reading every time??? To get improved ACCURACY or RESOLUTION, don't you
first need a system that's stable to much better than the quantization
interval, and then to perturb the input with a signal of known statistics?

No, because the input signal is being applied to an integrator, so you
should see a monotonically increasing ramp. The rest is weird science.
If you're counting on system instability or (uncontrolled) noise to do
the deed, you're just collecting garbage data. yes? no?
mike

In this case it's neither uncontrolled nor noise. You can, however,
count on random noise to increase the effective resolution of an ADC as
long as its PDF is wide enough. I've done this very successfully in
control systems that need high resolution and short-term repeatability,
but that don't need terribly high accuracy -- so you can take a 16-bit
ADC that's really only _accurate_ to 15 bits or so, and get 18 or 19
bits of _resolution_ out of it by oversampling like mad and averaging
the hell out of it. Note that the accuracy doesn't change -- only the
resolution.
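
A minimal sketch of that effect, using the 8-bit converter from this
thread rather than the 16-bit example, and assuming Gaussian dither of
about one LSB riding on the input (all numbers made up):

import numpy as np

rng = np.random.default_rng(2)
lsb = 5.0 / 256.0                 # 8-bit ADC spanning 0..5 V
true_v = 2.34567                  # a DC input sitting between two codes

def convert(n):
    """n conversions of the same input with roughly 1 LSB RMS of noise on it."""
    noisy = true_v + rng.normal(0.0, lsb, n)
    return np.clip(np.round(noisy / lsb), 0, 255) * lsb

print(f"single reading   : {convert(1)[0]:.5f} V")
print(f"average of 65536 : {convert(65536).mean():.5f} V (true {true_v} V)")
# With this much dither the long average typically lands within a small
# fraction of a millivolt of the true value -- the resolution improves,
# but any gain or reference error would be averaged right along with it,
# so the accuracy does not.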
 
CBarn24050

mike said:
this reset signal [...] and performed 8-bit A/D conversion on the
integrator output. A signal accuracy of 16 bits in the integrator
reading was obtained by summing (in software) the result of integration
over 256 sub-intervals."

It's not possible the way you described it.
If the signal is stable and the A/D is stable, you should get the SAME
reading every time??

Exactly, you would need to add a small signal, a 1 LSB sawtooth, and sample over
the sawtooth period. Getting a 16-bit absolute reading is very, very difficult,
just about impossible, from a simple setup like this even if you had a real
16-bit ADC.
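
Roughly what that looks like in a toy model (one full period of an
assumed 1 LSB peak-to-peak sawtooth on an otherwise ideal converter):

import numpy as np

lsb = 5.0 / 256.0
true_v = 2.3400                                   # DC input between two 8-bit codes
n = 256                                           # samples over one sawtooth period
saw = (np.arange(n) / n - 0.5) * lsb              # 1 LSB peak-to-peak sawtooth dither
codes = np.round((true_v + saw) / lsb)            # 8-bit conversions of input + dither
print(np.clip(codes, 0, 255).mean() * lsb)        # mean recovers the value to well under 1 LSB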
 
I thought I understood until I read this.
If the signal is stable and the A/D is stable, you should get the SAME
reading every time??? To get improved ACCURACY or RESOLUTION, don't you
first need a system that's stable to much better than the quantization
interval, and then to perturb the input with a signal of known statistics?

The usual name for the perturbing signal is "dither". There is a fair
amount of literature on the subject if you can find it. My "Comment on
'Noise averaging and measurement resolution'" in Review of Scientific
Instruments, volume 70, page 4734 (1999) lists a bunch of papers on the
subject.
If you're counting on system instability or (uncontrolled) noise to do
the deed, you're just collecting garbage data. yes? no?

IIRR random noise isn't too bad as a dither source - there is an ideal
distribution, but Gaussian noise is pretty close.

My own experience of getting 16-bit accuracy out of a quasi-quad-slope
integrator-based ADC was that it took a while, but I had to find out
about "charge soak" in the integrating capacitor the hard way, and also
started out relying on the CMOS protection diodes to catch signals
spiking outside the power rails.

Switching to a polypropylene capacitor and discrete Schottky catching
diodes solved both those problems. A colleague of mine once built a
20-bit system based on a similar integrator, using Teflon (PTFE)
capacitors and a feedback system that minimised the voltage excursions
across the integrating capacitor in a much more intricate and expensive
unit.
 
Nicholas O. Lindan

bazhob said:
Unfortunately I'm not too much into electronics, but for a
project in 'Computer Science'

The blind leading the blind, I think it is called.
I need to measure the output from an Opamp/integrator
circuit with a precision of 16-bits using a standard 8-bit ADC.

'Precision' - meaning what? 1/2 bit of noise in the reading?
"A MC68HC11A0 micro-controller operated this reset signal [...]
and performed 8-bit A/D conversion on the integrator output.
A signal accuracy of 16 bits in the integrator reading was
obtained by summing (in software) the result of integration
over 256 sub-intervals."

If this was written by someone at your school you may want to transfer
schools.

It is possible to oversample with a V/F or other integrating
A/D to increase _resolution_: accuracy is out the window.

To pick up one usable bit of resolution you will have to
increase the measuring interval by 4. Homework: why?
Hint: the readings have to be independent.

To get from 8 bits to 16 bits will require a 4^8 increase in the
integration period, or 65,536 readings.

This is a very profound area of inquiry. Talk to someone
in signal processing in the EE department.
 
Nicholas O. Lindan

CBarn24050 said:
Exactly, you would need to add a small signal, a 1 LSB sawtooth, and sample over
the sawtooth period.

This happens automatically if you use an asynchronous V/F converter: if there
are, say, 4.5 V/F periods in the sampling interval, then half the time the reading
is 4 and half the time it is 5. Since the reading is noise-based, you have to
measure 4x to drop the noise by 2x and pick up an extra bit [of resolution].
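
A toy model of those fractional counts, assuming the start phase of each
gate interval is random:

import numpy as np

rng = np.random.default_rng(3)
true_count = 4.5                           # V/F periods per gate interval
# Each reading counts whole periods; the random start phase decides 4 vs 5.
phases = rng.uniform(0.0, 1.0, 4096)
readings = np.floor(true_count + phases)   # mixture of 4s and 5s
print(readings[:8])                        # individual readings are only 4 or 5
print(readings.mean())                     # the long average converges on 4.5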
 
Rich Grise

bazhob said:
-snip-
Can someone please point me to more information about this
technique of doubling accuracy in software by dividing the
measured interval by 256? For a start, what would be the
technical term for that?

You could surprise the living s*** out of everybody and look up
"half-flash", "semi-flash" or "sub-ranging" ADCs. That's what it
actually sounds like the perfesser is after. ;-)

Good Luck!
Rich
 
Tim Wescott

Guy said:
Tim Wescott wrote:
The rest is weird science.

You make it sound like that's a bad thing.

:)

No, weird science is fun and sometimes lucrative.
 