Help comparing two FFT approaches - UPDATE

Discussion in 'Electronic Design' started by RobertMacy, Dec 20, 2013.

  1. RobertMacy

    RobertMacy Guest

    Thank you EVERYBODY for replying. Gave me food for thought and confirmed
    earlier efforts.

    Finally, I found absolutely NO difference between the results obtained
    from doing one very long packet and from averaging together the SAME
    information in smaller packets!

    What had prevented earlier confirmation of that fact was... I made a
    mistake implementing the technique, which led me to observe a 'slight'
    difference. Sorry about all the brouhaha.

    Now, in an ACTUAL implementation with ACTUAL signals, the difference has
    dropped to nearly numerical accuracy. We're talking less than 0.001 ppm
    difference! Which is good, because that is about what the difference
    should be.
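
    For anyone else making this comparison, here is a minimal numpy sketch
    of one way to set it up. The packet length, tone frequency, and noise
    level are illustrative, not from the thread; note the long FFT has K
    times the bin resolution, so compare the signal bin and the noise floor
    rather than bin by bin:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 1000, 2           # packet length and packet count (illustrative)
    f0 = 50 / N              # tone placed exactly on a bin of the short FFT

    t = np.arange(K * N)
    x = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(K * N)

    # Approach A: one long FFT over all K*N samples (power per bin)
    long_psd = np.abs(np.fft.rfft(x)) ** 2 / (K * N)

    # Approach B: average the power spectra of the K short packets
    short_psd = np.mean([np.abs(np.fft.rfft(x[i*N:(i+1)*N])) ** 2 / N
                         for i in range(K)], axis=0)

    print("signal bin  (long, averaged):",
          long_psd[int(f0 * K * N)], short_psd[int(f0 * N)])
    print("noise floor (long, averaged):",
          np.median(long_psd), np.median(short_psd))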

    However, from the excellent responses I learned some new techniques AND
    got a suggestion to look at the data and make certain it is contiguous!
    Small dropouts and gaps would cause some horrific effects.
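
    As a concrete version of that contiguity check, something like the
    following will flag dropped samples before they poison the averaging.
    The timestamp array and tolerance are my own assumptions, not from the
    post:

    import numpy as np

    def check_contiguous(timestamps, fs, tol=0.5):
        """Flag gaps: consecutive capture timestamps should differ by ~1/fs."""
        dt = np.diff(timestamps)
        gaps = np.flatnonzero(np.abs(dt * fs - 1.0) > tol)  # units of samples
        if gaps.size:
            print(f"{gaps.size} gap(s); first after sample {gaps[0]}")
        return gaps.size == 0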


    One idea came to mind during all this:

    Is there any advantage to 'slipping' the packets?

    An example: assume you have two 1000-sample packets containing known
    signals you're looking for, buried in white noise. You can do two FFTs
    and average them, or do one long FFT, and get identical results. Or: do
    an FFT on a 1000-sample window, slip one sample, do an FFT on the next
    1000-sample window, slip another sample, and so on, ending up averaging
    1000 FFTs in a special way. Would that yield any improvement? Thoughts?
    Anybody tried that?

    The idea is that coherent signals keep adding their energy, but the white
    noise could be destroyed by its own randomness.

    Or is it just a case of an averaging process reaching its limits: with
    two times 1000 sample points, you can't get better than that?
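
    For anyone who wants to try the slipping idea numerically, here is a
    sketch under my own assumptions (1000-sample packets, a tone in white
    noise, and averaging of power spectra, which the post doesn't spell
    out):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000
    x = (np.sin(2 * np.pi * 0.05 * np.arange(2 * N))
         + 0.5 * rng.standard_normal(2 * N))

    # Slip an N-point window one sample at a time and average all the
    # overlapping power spectra.
    slipped = np.mean([np.abs(np.fft.rfft(x[k:k+N])) ** 2
                       for k in range(N + 1)], axis=0) / N

    # Reference: average of the two non-overlapping packets.
    plain = np.mean([np.abs(np.fft.rfft(x[i*N:(i+1)*N])) ** 2
                     for i in (0, 1)], axis=0) / N

    print("max relative difference:",
          np.max(np.abs(slipped - plain) / plain))

    Intuitively, an on-bin tone only phase-shifts between windows, so its
    magnitude spectrum doesn't change, and the overlapped noise spectra are
    highly correlated, so the extra averages gain little; that matches the
    follow-up post below.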
     
  2. RobertMacy

    RobertMacy Guest

    Further note: shifting one sample at a time and then averaging yields
    EXACTLY the same result as if you did one pass, sigh.
     
  3. Here is an image (20 meters), decoded via 8-bit 11k-rate zero crossing
    (no DFT/FFT at all):
    http://webpages.charter.net/jamie_5/1.jpg

    Here:
    http://webpages.charter.net/jamie_5/2.jpg
    This one was 22k 16-bit, using an FFT.

    The problem: with the FFT/IFFT math I was using, higher sampling rates
    inserted artifacts that didn't seem to be related to Nyquist issues; at
    least I didn't see any in all the tests I ran. Low sampling rates
    perform much better and also make the code much quicker to process.

    The only thing I could come up with was a problem with the complex
    numbers, where things started to drop off into the vapor. Maybe I
    should revisit that some day...

    I used buffer chunks and overlapped them.
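
    For reference, averaging overlapped, windowed chunks like this is
    essentially Welch's method (scipy.signal.welch is the library version).
    Spelled out in numpy it looks roughly like the following; the 50%
    overlap and Hann window are my choices, since the post doesn't say
    which were used:

    import numpy as np

    def welch_psd(x, nfft=1024, overlap=0.5):
        """Average the windowed power spectra of overlapping chunks."""
        hop = int(nfft * (1 - overlap))
        win = np.hanning(nfft)
        chunks = [x[i:i+nfft] * win
                  for i in range(0, len(x) - nfft + 1, hop)]
        return np.mean([np.abs(np.fft.rfft(c)) ** 2 for c in chunks], axis=0)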

    Jamie
     