I’m often asked how CMOS Active Pixel Sensors differ from CCD sensors in terms of image calibration. For me, as a camera designer and a user, it’s been something of a learning process—not all CMOS sensors behave the same way. There are some rather surprising differences compared to CCD sensors. The basic principles are largely the same, but the details can be very different.

I’ll first discuss the differences from a technical standpoint. If you aren’t interested in the nitty-gritty technical details, skip to Part II for some specific recommendations on your calibration process.

Part I – What’s Different Between CMOS and CCD?

Both types of sensors detect light the same way, but the readout process is vastly different.

CCD sensors are simple analog devices running at relatively slow speeds (typically 1 to 20 MHz). They have very few active transistors, possibly as few as two, just to read out the pixels in one corner of the sensor. Some sensors have four readout amplifiers, one at each corner, to speed up readout. Some have limited additional onboard electronics to handle substrate bias or other basic functions.

The electronics that transfer the pixels to the readout are all external to the sensor; they strobe electrodes inside. Likewise, the electronics that process and convert the sensor output to digital numbers are also external. The designer is responsible for the external circuitry, such as selecting the analog-to-digital converter, power supply, amplifiers, and other items needed for a reliable, low-noise, and accurate camera.

CMOS APS sensors are quite the opposite. Instead of a handful of transistors, there are millions. These transistors run at very high speeds, often well over 100 MHz. There are transistors at every pixel. They amplify and switch photoelectrons onto internal wires. Along the sides of the silicon sensor itself (often top and bottom), there are large sections full of electronics – typically eight or more sets top and bottom that include: double-correlated samplers, multiplexers, analog-to-digital converters, drive electronics, high-speed data encoding and transfer electronics, and output line drivers.

Why does this matter? I’ll explain.

Frame Cycle
CCDs don’t have a frame cycle – a repeating process in which image frames come out one immediately after another at a fixed time interval, as is typical for video. Instead, the camera flushes the pixels, waits a while for photoelectrons to accumulate, and reads them out. High-speed versions of CCD sensors can do video rates; making them do longer exposures is as simple as waiting longer between flush and readout.

Most CMOS sensors are fundamentally designed as high-speed video sensors, with some tricks for longer exposures. As such, they output data every frame cycle. The frame cycle time (or frame time for short) can be tens to hundreds of milliseconds depending on the size of the sensor, the readout mode used, and the detailed electronics design of the specific sensor.

Rolling shutter sensors reset the lines of pixels one after the other down the chip. They then read the lines one after the other, with a slight delay between the reset and the readout—this is your exposure time. This runs continuously if the exposure is shorter than the frame time.
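As a rough illustration, the staggered reset/read schedule of a rolling shutter can be sketched in Python. The timing numbers here are made up for the example; real line periods and frame times depend on the specific sensor and readout mode.

```python
# Illustrative timing model of a rolling shutter (hypothetical numbers).
# Each line i is reset at t = i * line_period, then read out at
# t = i * line_period + exposure; the gap between reset and read IS
# the exposure time, and it is the same for every line.

def rolling_shutter_times(n_lines, line_period_us, exposure_us):
    """Return (reset_time, read_time) in microseconds for each line."""
    times = []
    for i in range(n_lines):
        reset = i * line_period_us
        read = reset + exposure_us
        times.append((reset, read))
    return times

# Example: 3000-line sensor, 10 µs per line, 5 ms exposure.
times = rolling_shutter_times(3000, 10.0, 5000.0)
frame_time_us = 3000 * 10.0  # 30 ms to sweep all lines once
print(times[0])    # (0.0, 5000.0)
print(times[-1])   # (29990.0, 34990.0)
```

Note that the last line is still exposing after the first line has already been read out; as long as the exposure is shorter than the frame time, this sweep simply repeats continuously.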

To take a long exposure, the sensor skips cycles. The electronics are prevented from reading out the sensor for a certain number of frames. The sensor will typically go through the motions of reading out without doing anything, but more advanced models can optionally partially power down during this time.

It isn’t hard to see that the behavior of the sensor could be very different when operating for continuous frames versus skipping frames to take a long exposure. It’s possible for different makes or models of camera using the same sensor to behave very differently – the camera designer has many choices they can make.

Amp Glow
Camera designers—and users—must constantly cope with the well-known problem of “amp glow.” This is the built-in electronics generating areas of glow within the sensor itself. However, you may not know that this glow’s source is very different between CCD and CMOS sensors.

CCD sensors have a small number of transistors, but they are biased at relatively large voltages. Typically, 15V is used for the readout transistors. The transistors act as tiny, inefficient light-emitting diodes (LEDs) at those voltage levels. They may not generate much light, but they do it right inside the chip, resulting in a glow in the corner where the readout amplifier is located (and, depending on the sensor design, possibly in other areas too). The solution is to temporarily reduce the voltage during integration while the photons accumulate. You can’t turn it off entirely without disabling the sensor, but you can set it low enough to prevent significant LED emission. The voltage is turned back up a few seconds before the readout starts, and everything returns to normal.

CMOS sensors have millions of transistors operating at low voltages but at high speeds. The low voltage means they don’t generate LED effects. Unfortunately, operating at high speed means they generate heat—lots of heat. That heat results in hot spots where the dark current is higher. There’s some extra heat at every pixel, but it’s far more pronounced in the readout electronics at the sensor’s top and bottom edges (or sides).

Managing this heat is difficult. If you have a series of short, one-frame exposures, the heat generated by the onboard readout electronics is very significant. On the other hand, if you take a long exposure and the sensor design allows you to turn things off during that time, the chip cools down during integration but heats back up a bit during readout. This cycle doesn’t make calibration impossible if it’s consistent, but when you change exposure time, you change the heat pattern.

A key implication is that you can’t scale dark frames to match the exposure like you can with a CCD. When you change the exposure, the amp glow pattern changes. Some modern sensor designs have improved characteristics, which helps mitigate the obvious glow, but it can still adversely impact calibration.
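To make the contrast concrete, here is the classic CCD-style dark-scaling formula as a short Python sketch (NumPy, toy numbers). It assumes dark signal grows linearly with exposure time above a fixed bias level – exactly the assumption that a CMOS sensor’s exposure-dependent glow pattern violates.

```python
import numpy as np

def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Classic CCD-style dark scaling: assumes the dark signal above
    the bias level grows linearly with exposure time."""
    return master_bias + (master_dark - master_bias) * (t_light / t_dark)

# Toy example: uniform dark current of 2 ADU/s on a 100 ADU bias.
bias = np.full((4, 4), 100.0)
dark_300s = bias + 2.0 * 300.0                 # 700 ADU everywhere
scaled_60s = scale_dark(dark_300s, bias, t_dark=300.0, t_light=60.0)
print(scaled_60s[0, 0])  # 220.0
```

This works only when every pixel’s dark signal scales by the same factor; a glow pattern that changes shape with exposure time cannot be corrected this way, which is why matched-exposure darks are needed for CMOS.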

Bias Stability
Many CMOS sensor models automatically adjust the bias level on every exposure. This allows the manufacturer to ensure maximum sensor dynamic range on long exposures where dark current increases the apparent background bias level.

That’s a good idea for a consumer DSLR camera but a bad idea for a scientific sensor! You want the ultimate stability at the bias level. If you add a small constant to any calibration frame, flat-field calibration will become inaccurate – it’s simple math. [(A + Offset) / (B + Offset) is not the same as A / B!]
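A quick numeric illustration of that math, with made-up ADU values:

```python
# A stray, un-removed bias offset breaks flat-field division.
A, B = 1000.0, 2000.0    # true pixel signals (ADU)
offset = 50.0            # small un-removed bias shift

true_ratio = A / B                        # 0.5
shifted = (A + offset) / (B + offset)     # ~0.5122
error_pct = 100.0 * (shifted - true_ratio) / true_ratio
print(round(shifted, 4), round(error_pct, 2))  # 0.5122 2.44
```

A mere 50 ADU shift turns a 2:1 ratio into a roughly 2.4% calibration error – far larger than the sub-percent accuracy flat-fielding is supposed to deliver.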

Some manufacturers allow the camera designer to turn this function on or off; others do not. One sensor I’ve been working on in recent weeks defaults off, but the manufacturer strongly recommends that you turn it on. That is advice I am not following! Other manufacturers may recommend something different.

The practical impact is that temperature and exposure time changes could impact the bias level. So if you’re wondering why your flat-fields aren’t working perfectly, this is possibly the cause.

The solution? First, take all your frames at the same sensor temperature. That includes light frames, flat-field frames, and dark frames. Try to make everything as consistent as possible; in extreme cases, that might even mean matching your flat-field frame exposure times to your light frames—which could be very difficult to achieve.

Thermal Regulation
CCD sensors have minimal on-chip heat generation. However, because of the active electronics, CMOS active pixel sensors are very much the opposite—larger sensors can make several watts of heat. In addition, they generate much more heat when reading out than when they are idle (or integrating).

The temperature of a CMOS sensor will always increase when the readout is active. That is unavoidable, but the key thing is consistency. Don’t run a series of 100 back-to-back 0.1-second exposures to focus, and immediately start running long exposures (dark or light). Let the temperature of the sensor re-stabilize first. Also, bear in mind that some camera manufacturers may not understand the subtleties of temperature regulation with these devices. I’m going to some lengths in my designs to ensure as stable a temperature as possible, but I don’t know what other manufacturers are doing. Even with careful camera design, the sensors’ variable thermal generation effects make it impossible to be perfect.

The best advice I can give is to wait a while for the sensor to cool down after focusing/centering before you start doing science images. Or better yet, start imaging and throw away any early frames contaminated by extra heat. This is better because you know the camera is in a stable operating mode, generating the same amount of heat in the same pattern.

Thermal Regulation, Again!
How do you accurately regulate the temperature of a CMOS sensor? In practice, this can be a challenge. Some sensors have onboard temperature sensors, but they’re located with the readout electronics. As a result, the pixels themselves can be a couple of degrees cooler. When it comes to image calibration, that’s a lot! I always recommend +/-0.1C regulation for accurate calibration.

What about a thermistor on top of the cold finger? That seems reasonable, but the sensor generates heat – a small amount right at the pixels and a larger amount at the readout electronics. The reality is that the pixels can be warmer than the cold finger by a couple of degrees.

You don’t need to know the exact temperature as long as it’s consistent. That said, the temperature difference between the sensor and your measuring device will change depending on whether you’re doing long or short exposures. So, you may need to wait for the sensor temperature to stabilize after changing exposure times.

An even more pernicious problem can present itself – ambient temperature coupling. You can sometimes see that changes in the heat sink temperature affect the sensor temperature slightly. The camera says it’s the same, but it’s not. Depending on the sensor’s dark current levels, this might not be a significant problem.

We’ve gone to great lengths to ensure stability over ambient temperature on some of our camera designs, including using copper cold fingers to minimize temperature differences and, in some cases, advanced algorithms to compensate for known effects.

My recommendation: verify. Try doing a set of dark frames at the same sensor temperature but in different ambient temperature conditions at least 10C apart. Generate master darks for both and subtract them from each other. Better yet, make two sets at each temperature to compare subtracting master frames at identical ambient temperatures versus subtracting master frames at different temperatures. Either way, make sure you get a nice clean frame with no extra dark current noise and no offset. I will cover this in more detail in the next section.
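The master-dark comparison above can be sketched in a few lines of Python with NumPy. The frame sets here are synthetic stand-ins; with real data you would load your two sets of dark frames instead.

```python
import numpy as np

def residual_stats(darks_a, darks_b):
    """Median-combine two sets of dark frames into masters and subtract.
    A clean result has near-zero mean (no offset) and a standard
    deviation consistent with read noise (no extra dark-current noise)."""
    master_a = np.median(np.stack(darks_a), axis=0)
    master_b = np.median(np.stack(darks_b), axis=0)
    residual = master_a - master_b
    return float(residual.mean()), float(residual.std())

# Synthetic sanity check: two statistically identical sets should
# leave only a small residual with no systematic offset.
rng = np.random.default_rng(0)
set_a = [100 + rng.normal(0, 5, (64, 64)) for _ in range(16)]
set_b = [100 + rng.normal(0, 5, (64, 64)) for _ in range(16)]
mean, std = residual_stats(set_a, set_b)
print(f"mean offset {mean:.3f} ADU, residual noise {std:.3f} ADU")
```

With real frames taken at different ambient temperatures, a nonzero mean indicates a bias/offset shift, and an elevated standard deviation indicates dark-current structure that didn’t subtract out.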

Part II – Calibration Recommendations

Validate Your Hardware
Different CMOS APS cameras will behave differently, even with the same sensor. The camera designer can change the operating mode of the sensor, often in subtle ways, and the thermal design will be different. Further, the noise characteristics of CMOS cameras can vary, as the sensors are exceedingly sensitive to tiny power supply fluctuations. Be sure that your camera is operating to the manufacturer’s specifications.

To ensure the highest quality data, you need to validate your hardware. If you find some hardware limitations, you can adapt to them.

Here are some recommendations of things to check:

First, take sets of flat-field frames at several ADU levels – say 10%, 30%, and 50% of full well – and build a master flat from each set. Now, pick one flat image and try calibrating it with the 10% master. Then try again with the 30% master, and again with the 50% master, and see whether that affects the uniformity of the result. If the results look different, you probably need flats at a lower ADU percentage.

Next, compare flat-fields taken at the same ADU level but with different exposure times. Make a master flat for one set, then try calibrating images from both sets. They should work the same; if not, there’s a drift based on exposure time.
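A simple way to put a number on “uniformity of the result” is the fractional RMS of the calibrated flat. This is a sketch with NumPy; the helper names and the 10%/30%/50% masters are placeholders for your own data.

```python
import numpy as np

def calibrate_flat(raw_flat, master_flat):
    """Divide one raw flat by a normalized master flat; the result
    should be flat to within noise if calibration is working."""
    norm = master_flat / np.median(master_flat)
    return raw_flat / norm

def uniformity(image):
    """Fractional RMS deviation from flatness - smaller is better."""
    return float(image.std() / image.mean())

# Compare the same raw flat calibrated with masters made at different
# ADU levels (hypothetical arrays loaded from your own frames):
#   u10 = uniformity(calibrate_flat(raw, master_10pct))
#   u30 = uniformity(calibrate_flat(raw, master_30pct))
#   u50 = uniformity(calibrate_flat(raw, master_50pct))
# If u30 or u50 is noticeably worse than u10, suspect nonlinearity
# at the higher ADU levels.
```

The same figure of merit works for the exposure-time comparison: calibrate frames from both sets with one master and see whether the uniformity differs.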

If you find some issues, here are some mitigations you can try:

Basic Calibration
Keep it simple – same basic math as for CCDs.

Output Image = (Light – Dark) / Normalized (Flat – Flat-Dark)
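That equation translates directly into a few lines of NumPy. This is a minimal sketch assuming matched master frames as 2-D arrays; the uniform toy values are just for illustration.

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark):
    """Basic calibration: (Light - Dark) / Normalized(Flat - FlatDark).
    All inputs are 2-D arrays of matching shape (e.g. master frames
    taken at the same sensor temperature and matched exposures)."""
    flat_corr = flat - flat_dark
    flat_norm = flat_corr / np.mean(flat_corr)   # normalize to unity mean
    return (light - dark) / flat_norm

# Toy example with uniform frames (ADU):
light = np.full((4, 4), 1200.0)
dark = np.full((4, 4), 200.0)
flat = np.full((4, 4), 30000.0)
flat_dark = np.full((4, 4), 1000.0)
print(calibrate(light, dark, flat, flat_dark)[0, 0])  # 1000.0
```

Normalizing the corrected flat to unity mean preserves the overall signal level of the light frame while removing the pixel-to-pixel response variations.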

Advanced Techniques

Modern CMOS APS cameras do have advantages over the venerable CCD imagers. For example, modern CMOS sensors read out faster, often have lower read noise, and may cost less at the same performance level. They aren’t perfect, though; for some applications, high-performance CCD sensors are still superior. This is especially true for spectroscopy (due to their analog binning capabilities) and for infrared imaging applications.

As we have seen, scientific imaging with CMOS cameras has potential pitfalls compared to the more mature and well-understood CCD camera technology. This is particularly true when it comes to image calibration. These pitfalls can be caused by limitations in the sensors themselves, but they can also depend on the exact details of the camera design and its firmware. Also, be aware that no matter how good the camera design is, the inherently variable heat generation in the sensors can have an impact. Finally, don’t go by other people’s opinions on Internet forums – you really need to get some hard numbers to understand your camera’s performance.

At Diffraction Limited, we are working hard to produce the most stable, reliable cameras on the market today. Our powerful imaging software, MaxIm DL Pro, is designed to produce clean, well-calibrated, and stacked images with the least effort.

For more information, ask me a question on our forum at https://forum.diffractionlimited.com/.

Doug George
President/CEO and Lead Engineer
Diffraction Limited