John said:
That's an idiotic statement, at the very least.
Last week Cornell sent me an email offering to license an invention
that digitizes a signal only when the level crosses bit boundaries, so
it outputs low-rate samples when the signal isn't changing much.
Seems sort of silly to me, given real-world issues like noise and
signal processing.
The patent isn't worth anything - I used that technique when I
programmed the PDP-8 to handle the experimental data for my Ph.D. work.
I knew I was looking at a monotonic decay, so I had the computer sample
the data until there was a significant change in voltage, work out the
average of all the samples taken since the last value had been stored,
and then store that average (and the time).
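
For anyone who hasn't met the trick, here's a rough sketch of that
sample-average-store-on-change loop in C. The threshold and the
simulated decay are illustrative guesses, not the original PDP-8
program:

#include <stdio.h>
#include <stdlib.h>

#define THRESHOLD 4   /* "significant change" in ADC counts - assumed */

/* Simulated ADC: a monotonic decay standing in for the real signal. */
static int read_adc(void)
{
    static double level = 2048.0;
    int sample = (int)level;
    level *= 0.999;               /* decay a little each sample */
    return sample;
}

int main(void)
{
    long sum = 0;                 /* running sum since the last store */
    int count = 0;
    int last_stored = 2048;
    int stored = 0;

    for (int t = 0; t < 5000; t++) {
        int v = read_adc();
        sum += v;
        count++;
        if (abs(v - last_stored) > THRESHOLD) {
            int avg = (int)(sum / count);  /* average since last store */
            printf("t=%5d  value=%4d  (%d samples averaged)\n",
                   t, avg, count);
            last_stored = avg;
            sum = 0;
            count = 0;
            stored++;
        }
    }
    printf("%d stored points from 5000 raw samples\n", stored);
    return 0;
}

Where the decay is slow, hundreds of raw samples collapse into a single
stored (value, time) pair - which is the whole point of the scheme.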
The initial samples were summed over 2^n intervals (double precision)
and divided by 2^n by shifting, which dealt with the noise. I adjusted
n by hand, but a high-pass filter would have extracted the noise,
allowing "n" to be set automatically.
Much the same idea was employed in the Cambridge Instruments Electron
Beam Tester (1989-91), but it wouldn't be useful in a patent case, since
it wasn't publicly documented.
The digital signal processing was all done in 100k ECL, and all the
sample intervals were restricted to powers of two to avoid the need
for a proper multiplier.
Worked fine in both applications - and in 1968 it meant that I could get
my data into the 3k of 12-bit memory left available after I'd loaded my
900-word program. The Ph.D. was deposited in the Melbourne University
library in 1970, so Cornell's patent is probably so much waste paper.