
Multiple CCD signal to single, summed signal

Discussion in 'General Electronics Discussion' started by PlanetHunter, Oct 28, 2017.

  1. PlanetHunter


    Oct 28, 2017
    Hello everyone,

    I am an advanced amateur astronomer from Greece, and together with two other people I plan to build our own observatory. It will use several telescopes acting as one, through the combination of all the CCD cameras into a single output. So I would like to hear from some electronics experts, if possible, as I have some questions waiting to be answered.

    A little description first: we plan to use six 11-inch Schmidt-Cassegrain telescopes, all with the same focal ratio, and at the focus of each of the six optical tubes there will be a CCD camera. That is six CCDs in total, all identical, in order to keep the same image scale and field of view. Something like the Dragonfly Telescope, which uses many lenses acting as one (actually giving a larger effective aperture that resolves much more detail than a single, monolithic optical tube with its CCD camera) through the combination of the CCDs.

    The question is: how can we combine the outputs of the CCD cameras so they act as one super-CCD? Do we have to convert each camera's output into a transmission line and then combine the transmission lines together so the signal is summed into one? If so, how can we do that, technically? If it can be done, would we have to make (or buy) a device that does the summing of the signal for every exposure we take? And of course, if such a device exists (or can be constructed), it would need to be connected to the PC, in order to control all the CCDs as one through it, effectively gathering the light of a 26.9-inch diameter telescope (sqrt(6) × 27.9 cm ≈ 68.3 cm) and, we hope, resolving at least 2.45 times more detail than a single 11-inch tube.
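    [Editor's note: the effective-aperture arithmetic above can be checked in a couple of lines of Python. This is just the poster's own calculation made explicit; total collecting area of N identical apertures equals that of one aperture sqrt(N) times the diameter.]

```python
import math

# Six 11-inch (27.9 cm) apertures: the combined collecting area equals
# that of a single aperture sqrt(6) times the diameter of one tube.
n_scopes = 6
single_diameter_cm = 27.9  # 11 inches

effective_diameter_cm = math.sqrt(n_scopes) * single_diameter_cm
effective_diameter_in = effective_diameter_cm / 2.54

print(f"{effective_diameter_cm:.1f} cm ({effective_diameter_in:.1f} in)")
# prints "68.3 cm (26.9 in)"
```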

    To better explain what I was thinking, I created this in GIMP:

    Thanks in advance.

    Attached Files:

  2. davenn

    davenn Moderator

    Sep 5, 2009
    Hi there
    welcome to EP :)

    As an active imaging astronomer, I think you are going about this the hard way. It would be much easier to take the actual images generated by each scope and just stack them, as the rest of us do. I don't really see any advantage in doing what you are suggesting. Rather, you would be looking at some very complex electronics, beyond what the average electronics hobbyist could achieve.

    There are some very good stacking programs out there: PixInsight, Deep Sky Stacker and Nebulosity 3, to name a few.
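    [Editor's note: the stacking approach suggested here is, at its core, just a pixel-wise combine of registered frames. A minimal NumPy sketch, assuming the sub-frames are already calibrated and aligned (the dedicated stackers named above also handle registration and outlier rejection):]

```python
import numpy as np

def stack_frames(frames, method="mean"):
    """Combine a list of aligned 2-D frames into one stacked image."""
    cube = np.stack(frames)            # shape: (n_frames, H, W)
    if method == "mean":
        return cube.mean(axis=0)
    if method == "median":             # robust against satellites / cosmic rays
        return np.median(cube, axis=0)
    raise ValueError(f"unknown method: {method}")

# Toy example: three noisy 4x4 "frames" of a flat 100-count field.
rng = np.random.default_rng(0)
frames = [100 + rng.normal(0, 10, (4, 4)) for _ in range(3)]
stacked = stack_frames(frames)
```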

    Here is one reference I found to what you have in mind...

  3. PlanetHunter


    Oct 28, 2017
    Thank you for your reply to my post. I think that taking the images from each telescope and adding them together will be just like taking the images from one scope, so in practice we will not achieve bigger-aperture results. Going that way just means "you have more telescopes, so you acquire more images in less time", and the stacking result will be pretty much like that of one telescope.

    The Dragonfly telescope team actually achieves its results by connecting the CCDs to Mac minis; over an Ethernet connection the data are transferred to a PC running Linux, and the synchronization and summing of the signal is done through a Python script written by the professors who use it. I can't write such a script because I don't know any programming at all.
    Also, if you sync the CCDs, the effective light-gathering speed increases. The Dragonfly team has detected extremely dim magnitudes, I think down to 23 or so, and they use 48 lenses (and 48 CCDs) at f/2.8, so their effective focal ratio, as they report in the papers released two years back, becomes 2.8 divided by the square root of 48, i.e. a very fast f/0.404 (they use 48 STF-8300 CCD cameras).
    Such a result can be achieved only if the CCDs are synchronized and every sub-exposure is automatically summed with the ones taken at the same time by the other scopes, before the CCDs move on to the next sub-exposure.
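    [Editor's note: the synchronize-then-sum loop described above can be sketched in Python. Everything here is hypothetical: `expose()` stands in for whatever camera-driver call a real system would make, and a `threading.Barrier` is a crude software substitute for proper hardware triggering.]

```python
import math
import threading
import numpy as np

N_CAMERAS = 6
SHAPE = (4, 4)  # tiny fake sensor for illustration

# Dragonfly-style effective speed quoted above: f/2.8 lenses, 48 of them.
effective_f = 2.8 / math.sqrt(48)   # ≈ 0.404

def expose(cam_id, rng):
    """Hypothetical stand-in for a camera SDK call; returns a fake frame."""
    return 100 + rng.normal(0, 10, SHAPE)

def run_sub_exposure():
    """Trigger all cameras 'simultaneously', then sum the sub-frames."""
    barrier = threading.Barrier(N_CAMERAS)
    frames = [None] * N_CAMERAS

    def worker(i):
        rng = np.random.default_rng(i)
        barrier.wait()              # crude software sync before each sub
        frames[i] = expose(i, rng)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_CAMERAS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return np.sum(frames, axis=0)   # one summed sub-exposure

summed = run_sub_exposure()
```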

    Thanks for giving me this link, I will keep this in mind.

    P.S. I have MaxIm DL, TheSky 6, Cartes du Ciel, ASCOM, Nebulosity 4.0 and Astroart 4.0, and I did photometry back in 2007 and 2008, measuring the exoplanet WASP-2b with my LX90. The main purpose of what we plan to do is planet hunting.
  4. davenn

    davenn Moderator

    Sep 5, 2009
    Well, more images in the same amount of time ;)

    Say six scopes, so six times more images for each exposure session.
  5. Enry Og

    Enry Og

    Jan 11, 2017
    Perhaps putting the question to TV broadcasting or gaming people might bring further info/ideas.

    Multiple cameras (TV studio or outside broadcast) need to be synchronized to a master signal (genlock). If not, each time a different camera is switched in there will be a temporary loss of picture stability until the viewing device re-establishes its own sync.

    Also, if you have six independent video signals multiplexed and piped down a single cable, the bandwidth requirement on the multiplexer, cable and receiver could be significant and costly; I suppose six times the requirement for just one channel. I assume you want HD.
  6. kellys_eye


    Jun 25, 2010
    I would have thought NOT, as you're conflating the principle of phased arrays for radio signals with the DC levels produced by a CCD element.

    Would it not be a case of simply taking the raw CCD element signal(s) and summing them before the internal signal processing? You would then take that resultant signal and process it through just ONE of the active CCD processors.

    You will, of course, have to synchronise the cell clocks and gating.

    Sounds like de-constructing a single CCD camera and adding 'gates' and 'summing' amplifiers before re-constructing the signal for post-processing purposes. Whether the individual camera can deliver such signals in software depends on the device, but I reckon a hardware modification is the only real solution.

    Is this technique a method to reduce the amount of exposure time? I'm guessing it is, as there are limits to how long a CCD can remain 'open' before thermal degradation sets in...

    Might therefore be easier to work on liquid helium cooling than multi-CCD phasing.

    I am, however, mystified at how you get 'something from nothing', as the inability of ONE CCD device to resolve low light levels isn't 'cured' by adding multiple signals together, is it? The sum of all zeroes is still zero. If that were the case, you could just take the signal from ONE cell and multiply (amplify) it as much as you wanted! Background noise notwithstanding, of course.

    Of course, the wiring between CCD devices and the use of such low signal levels introduce problems of their own, but perhaps pre-emphasis and subsequent de-emphasis could cover that?
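    [Editor's note: on the 'something from nothing' point, the usual statistical argument is that summing N frames grows the signal in proportion to N while independent noise only grows as sqrt(N), so SNR improves by sqrt(N); amplifying one frame scales signal and noise equally, giving no gain. A toy NumPy check with purely illustrative numbers:]

```python
import numpy as np

# Summing N frames with independent noise: signal grows N times, noise
# only sqrt(N) times, so SNR improves by sqrt(N). Amplifying ONE frame
# by N scales signal and noise equally, so SNR is unchanged.
rng = np.random.default_rng(42)
signal, noise_sigma, n_frames = 5.0, 10.0, 100
n_pix = 100_000

one_frame = signal + rng.normal(0, noise_sigma, n_pix)
summed = sum(signal + rng.normal(0, noise_sigma, n_pix) for _ in range(n_frames))
amplified = n_frames * one_frame

snr = lambda x, true_signal: true_signal / x.std()
print(snr(one_frame, signal))            # ≈ 0.5
print(snr(summed, n_frames * signal))    # ≈ 5.0, i.e. sqrt(100)x better
print(snr(amplified, n_frames * signal)) # ≈ 0.5, no improvement
```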
    Last edited: Nov 4, 2017