Simone Winkler
- Jan 1, 1970
Hello!
I'm designing a kind of digital camera, and I'm using an OmniVision OV8610 CMOS image sensor for it. The surrounding hardware is an FPGA, a microcontroller, and other components used for the camera.
What I don't know is how to interface the camera. In the OV8610 datasheet I can't really find sufficient information about the digital video formats and so on.
I read at beyondlogic.com that, because of the clocking of the data stream coming out of the camera, a large amount of CPU resources is needed and there is no way to detect errors, so one possibility is to use an FPGA and write the data directly to RAM, which can then be memory-mapped. But how can I do this?
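To make my question concrete, here is roughly how I picture the FPGA's capture logic, written as a C simulation. The signal behavior (data valid on each PCLK edge while HREF is high, frames delimited by VSYNC) is my assumption based on typical parallel CMOS sensor interfaces, not something I have confirmed in the OV8610 datasheet:

```c
#include <stdint.h>
#include <stddef.h>

/* One sample of the sensor's output bus, as the FPGA would see it
 * on a PCLK rising edge. Signal names (HREF, VSYNC, D[7:0]) are
 * assumptions based on typical OmniVision parallel interfaces. */
typedef struct {
    uint8_t href;   /* high while a line of valid pixel data is output */
    uint8_t vsync;  /* pulses between frames */
    uint8_t data;   /* 8-bit pixel data bus */
} bus_sample_t;

/* Capture one frame into buf: wait for VSYNC to mark a frame
 * boundary, then store every data byte for which HREF is high,
 * until the next VSYNC. Returns the number of bytes written. */
size_t capture_frame(const bus_sample_t *samples, size_t n,
                     uint8_t *buf, size_t buf_len)
{
    size_t i = 0, written = 0;

    while (i < n && !samples[i].vsync) i++;   /* wait for frame start */
    while (i < n && samples[i].vsync) i++;    /* wait for VSYNC to drop */

    for (; i < n && !samples[i].vsync; i++) { /* until next frame */
        if (samples[i].href && written < buf_len)
            buf[written++] = samples[i].data;
    }
    return written;
}
```

Is this the right idea, i.e. the FPGA just clocks bytes into RAM whenever HREF is active?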
Which clock does the camera need, and what are HREF, FODD, VSYNC, and PWDN_C?
Do I get the Y and UV values byte by byte, and do I then have to assemble them myself in the right order?
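For the Y/UV part, this is how I currently picture the byte ordering in a YUV 4:2:2 stream, assuming a YUYV ordering (Y0 U0 Y1 V0 per pair of pixels, with U and V shared between the two pixels). The actual ordering on the OV8610 may differ or be register-selectable, so this is only my guess:

```c
#include <stdint.h>
#include <stddef.h>

/* De-interleave a YUV 4:2:2 byte stream into separate planes,
 * assuming the order Y0 U0 Y1 V0 for each 4-byte group (two
 * pixels). n_bytes must be a multiple of 4. */
void deinterleave_yuv422(const uint8_t *stream, size_t n_bytes,
                         uint8_t *y, uint8_t *u, uint8_t *v)
{
    for (size_t i = 0; i < n_bytes / 4; i++) {
        y[2 * i]     = stream[4 * i];     /* Y of first pixel       */
        u[i]         = stream[4 * i + 1]; /* U shared by both       */
        y[2 * i + 1] = stream[4 * i + 2]; /* Y of second pixel      */
        v[i]         = stream[4 * i + 3]; /* V shared by both       */
    }
}
```

Is that roughly the assembly I would have to do in software, or does the sensor support other output arrangements?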
Thanks,
Simone