
Noise in CMOS image sensor

Discussion in 'Electronic Design' started by TD, Sep 27, 2005.

  1. TD

    TD Guest

    Hello, I would like to know how to estimate the noise on CMOS image
    sensors. We have a custom CMOS image sensor designed/manufactured by a
    company, and we have designed a proto board for hosting the sensor with
    A/D converter, clock, power supply, etc., which is interfaced to an
    Analog Devices FPGA/FIFO board to read out the data to the computer. My
    question is: how do I characterize/estimate noise on the sensor?
    Thanks much...

  2. TD

    TD Guest

    We reconstructed a couple of images from the sensor, and we seem to see
    the effect of alternating brighter and dimmer pixels. What are the
    other noise sources?
  3. Joerg

    Joerg Guest

    Hello TD,
    It's been a long time since I designed a CCD camera, but what you most
    likely see is the difference between adjacent readout columns. There
    will be "gain" differences as well as offsets. That can be corrected
    either in analog or after the ADC. I did it in analog, but this was in
    the 80's, about the time the Rolling Stones came back.

    Then there is the dark current, which is temperature dependent. There
    is also IR sensitivity, which the manufacturers tried their best to
    muffle.

    Most important are squeaky clean row and column clocks with not a speck
    of jitter. This also means very quiet power rails. The real kicker was
    to know when to sample and then do a proper integrate&hold with tons of
    dynamic range. The only thing I found adequate for that task was a
    quad-diode switch driven via a nicely symmetrical toroid transformer.
    Which of course you couldn't buy back then. This got me about 15dB or so
    more dynamic range out of one of the "stone-age" sensors than the data
    sheet said.

    Regards, Joerg
  4. KoKlust

    KoKlust Guest

    I did designs with a CMOS image sensor.

    The pure 'pixel noise' can be calculated as the RMS value of a number of
    readings of the same pixel in a certain readout mode, under certain
    lighting conditions.

    Be careful to skip the first image(s) after powerup or initialization, use a
    repeatable control sequence, start in the (complete) dark, and get rid of
    'dead' pixels before you do the calculation.
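    The calculation described above can be sketched as follows (a minimal
    NumPy sketch with a synthetic frame stack standing in for real captures;
    array names and the dead-pixel threshold are illustrative):

```python
import numpy as np

# Stack of repeated readings of the same scene: shape (n_frames, rows, cols).
# Here a synthetic stand-in: 0.5 V mean with 3 mV RMS temporal noise.
# With real data, skip the first frame(s) after power-up, as noted above.
rng = np.random.default_rng(0)
frames = 0.5 + 0.003 * rng.standard_normal((200, 32, 32))

# Per-pixel temporal noise: RMS deviation over repeated readings.
pixel_noise = frames.std(axis=0, ddof=1)

# Drop dead/stuck pixels (which read a constant value) before averaging.
alive = pixel_noise > 1e-6
mean_noise_mV = 1000 * pixel_noise[alive].mean()
print(f"mean temporal noise: {mean_noise_mV:.2f} mV")
```

    Starting from complete darkness, as suggested, keeps photon shot noise
    out of this figure, so it isolates the sensor/readout contribution.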

    Noise is usually specified as a number of electrons (RMS). Sometimes
    manufacturers only specify the SNR in dB.

    I agree completely with Joerg about the cleanliness of the control signals.
    Also, optimal signal scaling (using a variable-gain amplifier (VGA) and
    automatically adapting the shutter speed) can be used to optimize the
    SNR.

    So that is all about 'temporal noise'. Because of variations between pixels,
    there is also 'spatial' noise in the image. It is the same for each frame,
    so you may be able to correct for these effects by analog or digital
    (onchip/offchip) processing.

    Spatial noise types:
    - variation of the dark response from pixel to pixel (dark signal
    non-uniformity, DSNU)
    - variation of the offset from pixel to pixel (fixed pattern noise, FPN)
    - variation of the sensitivity from pixel to pixel (photo response
    non-uniformity, PRNU)
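    These spatial components can be estimated from frame-averaged dark and
    uniformly-lit images (a rough sketch with synthetic data; the signal
    levels and variation figures are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Frame-averaging many images removes temporal noise; what remains is spatial.
dark_avg = 0.010 + 0.002 * rng.standard_normal((32, 32))  # averaged dark frames
flat_avg = 0.800 + 0.016 * rng.standard_normal((32, 32))  # averaged flat-field frames

# DSNU: pixel-to-pixel variation of the dark response.
dsnu_mV = 1000 * dark_avg.std(ddof=1)

# PRNU: pixel-to-pixel sensitivity variation, relative to the mean signal.
signal = flat_avg - dark_avg
prnu_pct = 100 * signal.std(ddof=1) / signal.mean()

print(f"DSNU: {dsnu_mV:.2f} mV, PRNU: {prnu_pct:.1f} %")
```

    Because these maps are the same for every frame, they are exactly the
    quantities a per-pixel offset/gain correction would use.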

    Does this help you?

    Best regards,

  5. TD

    TD Guest

    Thanks for your replies. It's been really helpful.

    I did a calculation of the "pixel noise"/temporal variation on a pixel
    and found it to be around 2-3 mV. We used a 12-bit A/D converter for
    digitizing, and so a 9- or 10-bit A/D seems right with these noise
    levels... Our voltage swing is 2 V.
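    That back-of-the-envelope arithmetic can be written out (a sketch; the
    2.5 mV figure just splits the stated 2-3 mV range):

```python
import math

swing = 2.0      # analog voltage swing, V
noise = 0.0025   # measured temporal noise, V RMS (midpoint of 2-3 mV)

# Resolution at which one LSB is roughly the size of the noise floor:
useful_bits = math.log2(swing / noise)
print(f"useful bits: {useful_bits:.1f}")  # about 9.6
```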

    I also estimated the DSNU. One thing I am not able to understand: when
    I take a data set under completely dark conditions, I see that every
    other pixel shows 0. I thought there would always be some noise in the
    sensor/readout circuitry leading to a non-zero value. The alternate
    pixels are around 3 mV.

    Also, can you explain the variation of sensitivity from pixel to pixel
    and of offset from pixel to pixel? I see variations in the pixel values
    under uniform lighting conditions, making neighboring pixels brighter
    and dimmer.

    thanks again,
  6. Joerg

    Joerg Guest

    Hello TD,
    A 10 bit converter won't cut the mustard here. You need at least 12
    bits. An honest and detailed data sheet will show you the effective
    number of bits (ENOB) versus frequency. Many converters drop almost two
    bits when being used at more than half the maximum clock frequency. So
    from a noise perspective that 10 bit converter might really behave like
    8.5 bits or worse. I think TI has a nice fast 16 bitter.
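    The ENOB relation Joerg refers to is the standard one derived from
    measured SINAD (a sketch, not tied to any particular converter; the
    62 dB figure is an illustrative assumption):

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from measured SINAD: (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# An ideal 12-bit converter corresponds to 6.02 * 12 + 1.76 = 74.0 dB SINAD;
# one that only delivers ~62 dB near its maximum clock behaves like ~10 bits:
print(f"{enob(62.0):.1f} effective bits")
```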
    Zero is not normal. There is always some noise, and it should certainly
    not be this non-uniform. Could the array be damaged?
    Offsets and variations are normal, see my previous post. If you observe
    neighboring pixels of an illuminated area increase in value even though
    they weren't illuminated that is usually caused by charge bleeding into
    their cells. Often this is called "blooming". Mostly that happens when
    you exceed the maximum charge in a pixel cell. I guess Shakespeare would
    have said "the bucket overfloweth".

    It's hard to guess what really happens on your array because none of us
    knows how it's laid out. If you were at liberty to post the architecture
    that would help a lot.

    Regards, Joerg
  7. TD

    TD Guest

    Hello Joerg, Thanks again for your suggestions.
    We are reading out 4 channels of data from the CMOS sensor
    corresponding to columns that are interdigitated, and we are reading
    out the data row-wise. When I view the data, I see 0 digitized values
    on only one of the channels (so it may have been some specific columns
    in the array). But we tried imaging a number of different objects, and
    the reconstructed image looks very good except for FPN. So the array
    definitely doesn't seem to be damaged. Aha, I think I may have found
    out why it happens.

    The A/D converter digitizes differential input from -1 to 1 V, and
    hence anything below -1 V is reported as 0. I think what may have
    happened here is we have an offset control for the analog output swing;
    probably and hopefully it's swinging from -1.2 to 0.8 or -1.1 to 0.9
    instead of -1 to 1 V, and hence the ADC reports these as 0s. (I also do
    not remember the digitized value reaching the 4095 full-scale code,
    always a little lower.) But I have to see how I can control the offset
    and verify this....
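    That hypothesis is easy to check from the raw codes: a swing shifted
    below the ADC's -1 V rail piles codes up at 0, while the lost headroom
    at the top keeps the data from ever reaching full scale (a synthetic
    sketch; the -1.2 to 0.8 V swing is just the guess from the post above):

```python
import numpy as np

rng = np.random.default_rng(2)
# Analog output swinging -1.2 to 0.8 V into an ADC that spans -1 to +1 V:
volts = rng.uniform(-1.2, 0.8, size=100_000)
codes = np.clip(np.round((volts + 1.0) / 2.0 * 4095), 0, 4095).astype(int)

frac_zero = (codes == 0).mean()     # pile-up at the bottom rail
frac_full = (codes == 4095).mean()  # nothing reaches full scale
print(f"codes at 0: {frac_zero:.1%}, at 4095: {frac_full:.1%}")
```

    With a centered swing, both fractions would be near zero; a large
    pile-up at code 0 together with an unreachable top code matches the
    observations above.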

    But I see this pattern occurring even when the pixels are not
    saturated. I see this effect at almost all light levels.
    I do not have the entire architecture and I am not allowed to post what
    I have due to proprietary reasons.

  8. Joerg

    Joerg Guest

    Hello TD,
    That would explain it. Offset controls for CCDs are a very touchy
    subject. They usually aren't meant to ease the life of the guy who has
    to design the interface. In my CCD days, about 20 years ago, I used a
    Philips NXA sensor. It was very particular about any of its DC levels.
    Again, offset and transfer factor (often called gain) deviations are
    part of life with CCDs. Not just between columns but also rows if the
    readout occurs via more than one row at the bottom. In my case it was
    three rows. I don't remember how many columns I had. I think it was four
    like on your array.

    Offsets can be measured. Usually (hopefully...) the manufacturer has
    provided a covered blank pixel for each column. Then you can detect and
    clamp on that value, which will restore the DC bias for each column.

    Transfer function deviations are more difficult. Basically you'd have to
    calibrate for that, using the cal values as coefficients to scale the
    readout values.
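    Both corrections described above can be sketched together (illustrative
    names and values; the covered-pixel readings and the flat-field
    exposure stand in for whatever calibration data the real sensor
    provides):

```python
import numpy as np

rng = np.random.default_rng(3)
rows, cols = 16, 4
col_offset = rng.uniform(0.00, 0.05, size=cols)  # per-column DC offsets
col_gain = rng.uniform(0.90, 1.10, size=cols)    # per-column transfer factors

scene = rng.uniform(0.2, 0.8, size=(rows, cols))
raw = scene * col_gain + col_offset              # what the readout delivers

# Offset: clamp on the covered (dark) reference pixel of each column,
# which sees zero light and therefore reads pure offset.
dark_ref = col_offset.copy()
deoffset = raw - dark_ref

# Gain: scale each column by a coefficient calibrated against a known
# uniform exposure (here 0.5 V), applied to the readout values.
flat = 0.5 * col_gain                            # flat-field, offset removed
corrected = deoffset * (0.5 / flat)

print(f"max residual: {np.abs(corrected - scene).max():.1e}")
```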
    That's understandable. But you as the circuit designer must have the
    entire architecture. I don't know how else you could achieve optimum
    performance.

    Assuming you get raw and unsampled pixel output from the array: Do you
    have a proper integrate and hold circuit? That makes a huge difference
    in uniformity and also in dynamic range. Resist the temptation to just
    kind of add the row outputs together and send them through some RC.
    That's what the app notes for my array had originally suggested. The
    difference between their sample camera and the new one was stunning.

    Regards, Joerg