In Part I of this blog, we looked at the reasons why your SBIG CCD camera doesn’t require a super-fast interface that races along the data highway into your computer. The first reason was Cable Length; now we’re going to look at the second reason: the Sensor Readout Rate.
Sensor Readout Speed Limit – How fast will it go?
There’s an old joke about the auto mechanic charging a lot of money if you want your sports car to go fast, and even more money for good brakes to stop really fast too.
When it comes to cars or cameras, speed is limited by the slowest part of the system. In a car, speed is limited by the engine, aerodynamics, roads, driver, and the police. In a camera, it’s usually the image sensor that limits the top speed.
Let’s review how the camera works to understand this design factor. Cameras perform three steps:
- Expose the sensor to light
- Digitize the pixels
- Download the numerical data to the computer
Still image cameras, such as those used for deep-sky imaging, complete the exposure first, then begin the second step: digitizing the pixels through the analog-to-digital converter (ADC). The ADC converts each pixel’s electrical charge (an analog voltage) to a binary number (typically 16 bits for a CCD, or 10/12/14/16 bits for a CMOS APS device). The data is stored in a memory buffer for subsequent download. In a modern camera, the download is initiated by software running on the host computer, and the data is brought down in chunks over the interface.
CCD sensors generally have 1 to 4 readout amplifiers in the corners of the chip. These are the “pipes” that feed each pixel’s electrical charge to the ADC; the car analogy would be the exhaust from every cylinder going out a single tailpipe. The charge is shifted along the row to the readout amplifier in the corner until every pixel in that row has been read; then the rows are clocked down one step, and the process repeats until all remaining rows have been read.
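The row-by-row readout described above can be sketched in a few lines of Python. This is a simplified single-amplifier model; the function name and the tiny sensor dimensions are purely illustrative:

```python
# Simplified sketch of full-frame CCD readout order with a single
# corner amplifier (illustrative only -- real sensors add overscan,
# binning, and sometimes multiple readout channels).

def ccd_readout_order(rows, cols):
    """Return (row, col) coordinates in the order a single-amplifier
    CCD would digitize them."""
    order = []
    for row in range(rows):       # clock one row into the serial register
        for col in range(cols):   # shift pixels toward the corner amplifier
            order.append((row, col))
    return order

# A tiny 3x4 "sensor": the first row is read out pixel by pixel,
# then the rows shift down and the next row follows.
print(ccd_readout_order(3, 4)[:4])  # → [(0, 0), (0, 1), (0, 2), (0, 3)]
```

Every pixel takes its turn through the same amplifier, which is exactly why the pixel clock, not the computer interface, sets the speed limit.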
Sensors like the KAF-16803 used in the STX-16803 camera have a digitization rate of 1.8 MHz, which translates to 9.3 seconds to digitize all 16.8 million pixels with a 16-bit number. That’s a peak data rate of about 29 Megabits/second. A fast camera like the STF-8050 (F for Fast) has a KAI-08051 interline CCD that tops out at about 480 Megabits/second, which is as fast as USB 2.0 goes. So there really isn’t much point in having an interface that moves data faster than the sensor can be digitized.
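Here is a quick back-of-the-envelope check of those KAF-16803 figures, assuming a single readout channel running flat out at the 1.8 MHz pixel clock:

```python
# Back-of-the-envelope readout arithmetic for the KAF-16803 example:
# 16.8 million pixels, 1.8 MHz pixel clock, 16 bits per pixel.

PIXELS = 16.8e6          # total pixel count
PIXEL_CLOCK_HZ = 1.8e6   # digitization rate (pixels per second)
BITS_PER_PIXEL = 16      # 16-bit ADC output

readout_seconds = PIXELS / PIXEL_CLOCK_HZ
data_rate_mbps = PIXEL_CLOCK_HZ * BITS_PER_PIXEL / 1e6

print(f"Readout time: {readout_seconds:.1f} s")        # → Readout time: 9.3 s
print(f"Peak data rate: {data_rate_mbps:.1f} Mbit/s")  # → Peak data rate: 28.8 Mbit/s
```

Even at full tilt, the sensor produces data at well under a tenth of USB 2.0’s 480 Mbit/s ceiling.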
What about USB 3.0, 3.1, 3.2 and 4.0?
Well, do we really need the 40Gbit/s speed of USB 4.0 when the fastest we could get the data off the sensor is so comparatively slow? There’s really no benefit in the case of a still image camera that is doing multi-minute exposures.
A bit like a 320 km/h supercar sitting in your parking spot most of the time. USB 3.0 can come in handy for larger sensors and fast frame rates, but the yellow caution flag comes out for its distance limitation.
As a percentage of the exposure time for a still image, download time is almost negligible. For example, if you’re doing a 10-minute exposure on the 9.1-megapixel Aluma 814, digitization and download add about 2 seconds to the run time: a 600-second exposure takes perhaps 602 seconds to get into the computer. That’s a 0.3% slowdown.
Put another way, if you had a perfect 8-hour night, 48 exposures would cost just an extra 96 seconds. This is an endurance race; it’s not about the fastest lap time.
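The overhead arithmetic above fits in a few lines; the 2-second figure is the approximate per-frame digitize-and-download time from the Aluma 814 example:

```python
# Download overhead for long-exposure imaging, using the numbers above:
# ~2 seconds of digitize + download per 600-second exposure.

EXPOSURE_S = 600     # one 10-minute exposure
OVERHEAD_S = 2       # approximate per-frame digitize + download time
NIGHT_S = 8 * 3600   # a perfect 8-hour night

overhead_pct = OVERHEAD_S / EXPOSURE_S * 100
frames = NIGHT_S // EXPOSURE_S
extra_seconds = frames * OVERHEAD_S

print(f"Per-frame slowdown: {overhead_pct:.1f}%")  # → Per-frame slowdown: 0.3%
print(f"{frames} frames cost an extra {extra_seconds} s")  # → 48 frames cost an extra 96 s
```

Halving the download time with a faster interface would save well under a minute over the whole night.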
Interface speed is (almost) irrelevant
So, does USB speed matter? When we think about real-world performance and 100% reliability, USB 2.0 is plenty fast for deep sky Astro-photographers or for people getting photometric data to measure stellar brightness or follow up on exoplanet transits.
The answer is clear: for most CCD sensors, USB 2.0 goes the distance, and gives you 67% more cable length. Slow down and enjoy the ride!