Diffraction Limited’s line of high-performance, or “scientific grade,” CCD and CMOS cameras is designed to operate under extremely low-light conditions with high sensitivity, low noise, and good linearity and dynamic range. Let’s discuss some of these factors in detail, and why they matter when choosing a detector.
Detection performance of a camera is all about signal-to-noise ratio. In other words, more signal and/or less noise provides better performance. The ability of a sensor to detect light is measured by quantum efficiency (QE), the percentage of incoming photons that are recorded. Since visible-light sensors all use silicon as the detector, differences in QE between models depend entirely on the physical design of the sensor.
Astronomical cameras have three major sources of noise: read noise, dark current noise, and photon shot noise. The combination of all independent noise sources determines your overall noise floor, but it is important to note that they add root-sum-square. That affects how you optimize your overall system for the best performance.
The dynamic range of the sensor is also an important factor, especially when making photometric measurements. Dynamic range is essentially the ratio between the lowest detectable signal and the saturation point of the detector, that is, the point at which adding more light does not produce a higher output.
Root Sum Square
If you have three sources of noise contributing to your overall noise floor, they don’t simply add up; they add root-sum-square. Let’s say your three noise sources produce noise levels of 3, 2, and 8. The total noise is √(3² + 2² + 8²) ≈ 8.8. You can see that this isn’t that much bigger than 8… the 3 and 2 don’t matter very much! The key takeaway is that unless two noise sources are almost the same, only the largest one really matters. There’s no point in trimming the bushes at the end of the airport runway; you need to chop down the tallest tree.
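As a quick illustration, here is the same root-sum-square combination written as a few lines of Python, using the hypothetical noise levels from the example above:

```python
import math

def total_noise(*sources):
    """Combine independent noise sources by root-sum-square."""
    return math.sqrt(sum(s * s for s in sources))

# The three hypothetical noise levels from the example above
print(round(total_noise(3, 2, 8), 1))  # 8.8 -- barely larger than the biggest source alone
```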
This can impact your choice of equipment, or how you use that equipment. It’s an optimization task, which we will discuss in greater detail below. First, we need to talk about those noise sources.
Photon Shot Noise
Photon shot noise is inherent to the physics of light – something that is out of the control of the camera designer. Light itself is inherently a statistical phenomenon, as it is transmitted in tiny packets of energy called photons. These photons are generated randomly, but for a particular source they are generated at a certain average rate. When you increase the amount of light, ironically you also increase the amount of photon shot noise. Mathematically, the noise is the square root of the number of photons. Fortunately, as the source gets brighter the signal increases faster than the noise, but this added noise still has an important impact.
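To see why the signal outruns the shot noise, here is a small sketch with purely hypothetical photon counts; the only assumption is the square-root relationship described above:

```python
import math

# Shot noise equals the square root of the photon count, so SNR = N / sqrt(N) = sqrt(N).
# Hypothetical photon counts: 100x more light gives only 10x more noise, so SNR improves 10x.
for photons in (100, 10_000, 1_000_000):
    noise = math.sqrt(photons)
    print(f"{photons:>9} photons: noise = {noise:6.0f}, SNR = {photons / noise:6.0f}")
```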
Quite often the limit of performance is determined by the shot noise from background sky glow, which can be either natural (atmospheric sky glow or moonlight) or artificial (light pollution). Sometimes filters can be used to control this source of noise; for example, by using narrowband filters you can isolate targets that emit light at specific wavelengths and block the rest of the sky background. Locating the instrument in a prime dark location is of course highly effective but potentially not practical due to logistics and/or funding. The ultimate solution is to locate the instrument above the atmosphere, but that is of course by far the most expensive option.
Since the signal grows faster than the noise, increasing the overall sensitivity of the system generally improves signal-to-noise ratio. Even if your source isn’t much brighter than the background level, increasing both still improves your ability to detect the faint source. The push to build ever-larger telescopes has been driven by the need to detect fainter light, as has the development of better detectors.
It should be noted that sky background can also contribute to noise in a different way. Your flat-field calibration frames will never be perfect; they will also contain noise. Because flat-field correction scales the whole image, any error in the flat field is multiplied by the full signal – including a high sky background – and becomes noise in the final image. When imaging targets that are much fainter than the sky background, flat-fielding error can end up being the ultimate performance limit.
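A short sketch, with purely hypothetical numbers, shows why this matters: the error introduced by an imperfect flat field scales with the total signal (sky plus target), not just the target.

```python
# Hypothetical numbers: a faint target sitting on a bright sky background.
sky = 10_000.0      # sky background signal per pixel, in electrons (assumed)
target = 50.0       # target signal above the sky, in electrons (assumed)
flat_error = 0.01   # 1% residual error in the flat field (assumed)

# The flat-field error scales with the *total* signal, not just the target.
error = (sky + target) * flat_error
print(f"Target: {target:.0f} e-, error introduced by the flat: {error:.0f} e-")
# Here a 1% flat error (~100 e-) is twice the size of the target signal itself.
```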
In the early days of astrophotography, the detector of choice was film emulsion. Film QE is in the 3% range, so there was a lot of room for improvement. The first CCD detectors were built in 1969, and improvements over time eventually resulted in QE approaching 100%. Similarly, CMOS sensors started out as poor cousins but eventually achieved very high QE performance. Modern detectors are nearly perfect from a sensitivity point of view. Historically, the decades-long pause in building larger telescopes between the Palomar Hale 200″ telescope and instruments such as the Keck Observatory was in part due to the rapid development in CCD detectors. Once the sensors started approaching near-perfection, the only way to improve performance was to build even larger telescopes.
As a general takeaway, a higher QE camera is equivalent to having a larger telescope, and bigger is always better.
Read Noise
Whenever you make a measurement with any instrument, it will give a slightly different reading every time. This is caused by noise sources in the device itself.
Every time a photon is detected, a photoelectron is produced. Electrons are accumulated in the pixel, and the total charge is subsequently read out by an amplifier, filtered, corrected for initial conditions (a process known as correlated double sampling), then converted to a digital number by an analog-to-digital converter. Typical CCD sensors have read noise levels ranging from 5 to 20 electrons. Some modern CMOS sensors achieve read noise levels in the 1 to 3 electron range.
If your sky background is high, then read noise generally doesn’t matter. Photon shot noise dominates. You can take short exposures and stack them, which will produce essentially the same result as a single long exposure. Shorter exposures can be useful if your tracking is relatively poor and/or you are not using an autoguider. Short exposures are also useful for bright targets that would otherwise saturate the detector.
On the other hand, for narrowband imaging when you have low sky background and low photon counts, read noise becomes important and long exposures will improve performance. With longer exposures, though, dark current noise also starts to become important.
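A rough sketch of this trade-off, using hypothetical sky, dark current, and read noise figures, shows why sub-exposure length matters so little under a bright sky and so much under a dark, narrowband one:

```python
import math

def total_noise(total_s, subs, sky_rate, dark_rate, read_noise):
    """Noise (in electrons) for `subs` equal sub-exposures covering `total_s`
    seconds, given sky and dark current rates in e-/pixel/s (all hypothetical)."""
    t_sub = total_s / subs
    var_per_sub = (sky_rate + dark_rate) * t_sub + read_noise**2
    return math.sqrt(subs * var_per_sub)

# Hypothetical numbers: 1 hour total, 5 e- read noise, 0.02 e-/s dark current
for sky_rate in (5.0, 0.05):          # bright broadband sky vs dark narrowband sky
    one_long = total_noise(3600, 1, sky_rate, 0.02, 5)
    sixty_short = total_noise(3600, 60, sky_rate, 0.02, 5)
    print(f"sky {sky_rate} e-/s: 1 x 3600 s -> {one_long:.0f} e-, "
          f"60 x 60 s -> {sixty_short:.0f} e-")
```

With the bright sky the stacked short exposures cost only a few percent in noise; with the narrowband sky the read noise dominates and the stacked result is far noisier than the single long exposure.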
Dark Current Noise
Dark current is a noise source generated in the detector itself, with thermal vibration randomly knocking electrons loose – and these are indistinguishable from photoelectrons. If you are taking 1-second exposures, dark current is generally not significant. For longer exposures, dark current starts to become a problem.
Imaging cameras designed for science applications generally cool their sensors to reduce dark current. Depending on the specific sensor design, dropping the temperature by 5 to 7 degrees C will cut the dark current in half. Some sensors have inherently greater dark current, such as deep depletion CCD sensors used for infrared imaging. These sensors require more powerful cooling, and major observatories often use liquid nitrogen for that purpose. Other sensors produce relatively low dark current even at room temperature, and only require modest cooling for good long-exposure performance. The most cost-effective method of cooling sensors utilizes solid-state Peltier-effect devices, also known as thermoelectric coolers (TECs). In addition to the heat extracted from the sensor, the TEC also generates its own heat. This combined heat must be removed from the camera, or else the camera will heat up, resulting in no net cooling. Typically, this heat is removed using either a heat sink and fan or a liquid circulation system. Well-designed cameras like the SBIG Aluma series have protection against overheating; you don’t want your camera to be badly damaged simply because the cooling fan jammed.
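As a rough sketch of how strongly cooling pays off, assume a hypothetical sensor whose dark current halves every 6 degrees C of cooling and produces 0.5 e-/pixel/s at 25 degrees C (both numbers are assumptions, not specifications for any particular camera):

```python
import math

def dark_current(temp_c, ref_rate=0.5, ref_temp=25.0, halving=6.0):
    """Approximate dark current in e-/pixel/s, assuming it halves for every
    `halving` degrees C of cooling. All parameters are hypothetical."""
    return ref_rate * 2 ** ((temp_c - ref_temp) / halving)

for t in (25, 0, -20):
    dark = dark_current(t) * 600   # electrons accumulated in a 600 s exposure
    print(f"{t:>4} C: {dark:7.1f} e- of dark signal, ~{math.sqrt(dark):.1f} e- of dark noise")
```

The dark current noise printed here is simply the square root of the accumulated dark signal, as discussed below.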
It is worth noting that dark current is one area where CCD sensors have an architectural advantage over CMOS sensors. The silicon behaves the same in either type of sensor, but CMOS sensors have transistors at every pixel, plus typically 8 or 16 analog-to-digital (A/D) converter subsystems and output driver electronics across the top and bottom of the sensor. This means CMOS sensors have millions of transistors, and these transistors all generate heat. The extra heat not only creates extra dark current, but also makes the sensor more difficult to cool since that extra heat must be pumped out of the camera. In comparison, many CCD sensors have just two transistors in one corner!
Since CCD sensors generally run at higher voltages, the amplifier voltage must be reduced during integration to reduce amplifier glow. If they remain fully powered, the transistors act like tiny LEDs in the corner of the sensor. CMOS sensors run at very low voltages and do not have this issue. They do have a different source of amplifier glow: heat generation. This heat is generated right at the pixels and in the readout electronics at the top and bottom. It can be partially mitigated by powering down large parts of the sensor during integration. During readout all the transistors must be turned back on, which results in a large spike in heat output. As a result, typical CMOS sensors will get as much as 10 degrees C warmer when running a fast series of short exposures.
Dark current presents two problems. The first is that every pixel is a little different; some generate more, others generate less. The second is that dark current is a random phenomenon just like light – dark current is produced at a constant average rate at a given temperature, but in a random fashion. When you have greater dark current, you also have more dark current noise. Again, the noise is the square root of the number of thermal electrons produced.
This variation in dark current between pixels produces hot and cold spots in a fixed pattern, which has a major impact on the image if not corrected. This fixed pattern can be easily corrected by subtracting a dark frame, which is an exposure taken under identical conditions but with no light present. Many cameras such as the SBIG Aluma AC4040 include mechanical shutters for this purpose; others such as the SBIG Aluma STC-7 will use a blank in the filter wheel. Competing cameras often require the user to cap the telescope, which prevents easy automation.
A common misconception is that subtracting a dark frame reduces noise. It does the opposite – it adds noise. From a random noise perspective, subtraction is the same as addition, and the noise adds root-sum-square. Dark correction removes the fixed pattern from the image, but subtracting a single dark frame adds 41% more noise to the image. It should also be noted that hot pixels have more dark current noise than their neighbours – this cannot be corrected.
A further problem is the fact that dark current is extremely sensitive to sensor temperature. For dark subtraction to work effectively, the sensor must be at the exact same temperature for both frames. For this reason, scientific imaging cameras are designed to regulate the sensor temperature. For good results, the usual target is to regulate better than +/- 0.1 degrees C. CMOS cameras often experience a temperature transient when you change exposure conditions, so it is often best to throw out a few frames initially to give some time for the camera to stabilize the sensor temperature.
Although subtracting a dark frame adds noise, it is essential to good results. Fortunately, it is possible to mitigate the extra noise simply by stacking dark frames. Taking an average of 16 dark frames will reduce the added noise by a factor of 4. The root-sum-square noise addition works in your favor here; with enough stacked dark frames, the extra noise will be negligible in the final output image.
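A small simulation (hypothetical noise level, pure noise frames with no signal) makes both effects easy to verify – subtracting one dark adds about 41% noise, while subtracting a 16-frame master dark adds very little:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 10.0                                   # per-frame random noise in ADU (hypothetical)
shape = (512, 512)

light = rng.normal(0.0, noise, shape)          # light frame (target signal omitted)
single_dark = rng.normal(0.0, noise, shape)    # a single dark frame
master_dark = rng.normal(0.0, noise, (16, *shape)).mean(axis=0)   # average of 16 darks

print(f"{np.std(light - single_dark):.1f}")    # ~14.1 ADU: sqrt(2) x 10, i.e. 41% more noise
print(f"{np.std(light - master_dark):.1f}")    # ~10.3 ADU: the master dark adds very little
```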
Optimizing Noise Performance
You may want to integrate for 3 hours to get sufficient signal-to-noise ratio for a certain target, but it is generally not practical to take exposures this long. One reason is sensor saturation; in some cases, the dark current alone will saturate the sensor. In other cases, bright stars or even the sky background can saturate. In addition, the longer the exposure the greater the risk of imaging problems such as drift, clouds, wind gusts, a passing airplane, satellite trails, cosmic ray strikes, poor guiding, user error, etc.
The universal approach to solving all these problems is to take a series of shorter exposures and stack them. Using stacking algorithms such as Sigma-clipping or Standard Deviation (SD) Masking will very effectively remove outliers such as cosmic ray strikes and satellite trails. Stacking also affords you the opportunity to reject frames with bad characteristics, such as guiding errors or momentary clouds. It also allows you to reject images impacted by a decline in seeing due to changing weather or increasing air mass as the target sets. Diffraction Limited’s software, MaxIm DL, has a Stack command that can automatically grade images for seeing and guiding issues, and you can set a threshold to automatically reject bad images. You can also adjust the threshold and watch how many images will be rejected as a result.
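The sketch below shows the basic idea behind sigma-clipped stacking. It is a simplified single-pass version using a robust (median/MAD) noise estimate, not the exact algorithm used by MaxIm DL or any other package:

```python
import numpy as np

def sigma_clip_stack(frames, sigma=5.0):
    """Mean-combine a stack of frames, rejecting any pixel more than `sigma`
    robust standard deviations from the per-pixel median (one clipping pass)."""
    frames = np.asarray(frames, dtype=float)
    median = np.median(frames, axis=0)
    robust_std = 1.4826 * np.median(np.abs(frames - median), axis=0)  # MAD estimate
    clipped = np.ma.masked_where(np.abs(frames - median) > sigma * robust_std, frames)
    return clipped.mean(axis=0).filled(np.nan)

# Hypothetical stack of 8 subs with a simulated cosmic-ray hit in one frame
rng = np.random.default_rng(1)
subs = rng.normal(100.0, 5.0, (8, 64, 64))
subs[3, 10, 10] += 5000.0                      # cosmic-ray spike in one sub
print(sigma_clip_stack(subs)[10, 10])          # ~100 -- the spike was rejected
```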
Some optimization tips:
- For narrowband imaging you want to be dark current limited. You may want to use an exposure as long as 30 minutes or more. Of course, you don’t want the dark current to consume your dynamic range, so for some cameras this may be the effective limit on exposure duration. In other situations, the other potential problems mentioned above might limit you to 10 minutes.
- For high sky background situations, you will be photon shot noise limited. This typically would suggest using shorter exposures to limit bright target saturation, guiding errors, airplanes, etc.
- High sky background images are also more impacted by flat-fielding error. Your target may be less than 1% brighter than the sky background; obviously even a small flat-fielding error will rapidly become your limiting factor for detecting faint sources. In this case spend more effort getting good flat-fields. Consider using sky flats.
- Some targets inherently have extremely large dynamic range. An obvious example is the M42 complex in Orion – the Trapezium stars saturate on even short exposures, while capturing the faint extended detail requires long exposures. Consider stacking a large number of short exposures. This target may benefit from taking exposures of different durations and combining them in processing.
- For shorter exposures with low background, the system can become read noise limited. The simple solution is to take longer exposures.
- Lower f/ratios (or larger pixels) allow each pixel to see more sky. This makes the instrument more sensitive to diffuse sources such as nebulosity, but also increases the sky background noise contribution for each pixel.
This all suggests running some tests – take exposures of different lengths amounting to the same overall exposure time, stack, and compare the results. If you can adjust your instrument’s focal length, and therefore how much sky each pixel sees, then it may be useful to test different instrument configurations as well.
In general, it is best to choose the shortest exposure duration that does not significantly degrade your signal-to-noise ratio, since that also gives you a greater advantage from removing bad images.
Estimating your Signal-to-Noise Ratio
It is possible to calculate your signal-to-noise ratio if you have information on your camera, telescope, observing location and conditions. Calculations are unlikely to provide the perfect result given the number of variables involved, but they are extremely useful in identifying what factor is limiting your imaging performance. As an example, you can easily determine the impact of longer or shorter sub-exposures.
A handy calculator spreadsheet is available on our web site at https://cdn.diffractionlimited.com/down/snr.xls
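For those who prefer code to spreadsheets, here is a heavily simplified per-pixel sketch of the same kind of calculation. Every rate in it is a hypothetical placeholder, so treat it as a starting point for your own numbers rather than a substitute for a full calculation:

```python
import math

def snr(target_rate, sky_rate, dark_rate, read_noise, sub_s, subs):
    """Simplified per-pixel version of the standard SNR ('CCD') equation.
    Rates are in electrons per second per pixel; all numbers here are hypothetical."""
    signal = target_rate * sub_s * subs
    variance = (target_rate + sky_rate + dark_rate) * sub_s * subs + subs * read_noise**2
    return signal / math.sqrt(variance)

# The same 1 hour total under dark narrowband skies, split into different sub lengths
for sub_s, subs in ((60, 60), (300, 12), (1800, 2)):
    print(f"{subs:>2} x {sub_s:>4} s: SNR = {snr(0.05, 0.02, 0.01, 3, sub_s, subs):.1f}")
```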
Dynamic Range
Every detector has a noise floor and a saturation level. Any object below the noise floor will not be detected; any object above the saturation point will be chopped off at that level.
In MaxIm DL you can easily measure your noise floor using the Image Info Palette. The Standard Deviation is your noise measurement. Typically, an object is considered “detected” when its peak is 3X the noise floor. For the purposes of calculating dynamic range, we look at the ratio between the noise floor and the saturation level. Suppose your 16-bit camera has a pedestal level of 1000 ADU (average response to zero light) and a noise floor of 16 ADU. Your effective dynamic range will be (65535-1000) / 16 = 4033:1.
It is possible to increase dynamic range through stacking. Consider taking a single 1-hour exposure versus 16 exposures adding up to 1 hour. The shorter individual exposures will allow you to record an object 16X brighter without saturating, while the noise floor increases only 4X. Doing this effectively increases your dynamic range by 4X. If your read noise floor is still below other noise sources, it’s a win-win.
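The arithmetic is easy to check with a short sketch using the numbers from the example above:

```python
import math

def dynamic_range(saturation_adu, pedestal_adu, noise_adu, subs=1):
    """Effective dynamic range for a stack of `subs` equal sub-exposures:
    headroom grows with subs, while the noise floor grows as sqrt(subs)."""
    return (saturation_adu - pedestal_adu) * subs / (noise_adu * math.sqrt(subs))

# Numbers from the example above: 16-bit camera, 1000 ADU pedestal, 16 ADU noise floor
print(round(dynamic_range(65535, 1000, 16)))       # ~4033:1 for a single exposure
print(round(dynamic_range(65535, 1000, 16, 16)))   # ~16134:1 -- 4x better with 16 subs
```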
A related concept, which is often misunderstood, is Bit Depth. A 16-bit camera can encode numbers from 0 to 65535, but that doesn’t mean the dynamic range is 65535:1. In practice it is much less than that.
A sensor with small pixels has an inherently limited dynamic range, simply because the pixels can’t hold very many electrons. If a pixel can hold 25,000 electrons and has a noise floor of 7 electrons, that’s a dynamic range of about 3500:1. You might think you could use a 12-bit converter, which can encode 0 to 4095, but you need sampling finer than the noise itself. If the A/D converter’s quantization is too crude it will add noise to the system. In practice you need to sample the noise with a couple of bits, so an ideal converter for that sensor would be 14-bit. Back in the day, manufacturers would throw in a 16-bit converter because it added only about $10 to the cost of a CCD camera and eliminated any potential customer confusion. With CMOS sensors the huge number of transistors required to implement an array of 16-bit converters is very expensive, so it’s far more important now than ever to optimize the bit depth.
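A short sketch, assuming an ideal converter spanning the full well and the usual LSB/√12 quantization-noise model, shows why roughly 14 bits is enough for the hypothetical 25,000 electron / 7 electron sensor described above:

```python
import math

def effective_noise(full_well_e, read_noise_e, bits):
    """Read noise after A/D quantization, assuming the converter spans the full
    well. Quantization adds roughly one LSB / sqrt(12) of noise, root-sum-square."""
    lsb = full_well_e / 2**bits              # electrons per ADU step
    q_noise = lsb / math.sqrt(12)
    return lsb, math.sqrt(read_noise_e**2 + q_noise**2)

# Small-pixel sensor from the example above: 25,000 e- full well, 7 e- read noise
for bits in (12, 14, 16):
    lsb, total = effective_noise(25_000, 7.0, bits)
    print(f"{bits}-bit: {lsb:.2f} e-/ADU, effective read noise {total:.2f} e-")
```

At 12 bits the quantization step (about 6 e-) adds a measurable amount of noise; at 14 bits and beyond the effect is negligible.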
Many CMOS sensors have 12-bit converters. As you can see from our example above, many of those would benefit from 14-bit converters. Instead, the user is offered a choice via a gain setting: either get the best available read noise and clip the signal well short of saturation, or get the full dynamic range at the expense of more noise. Or use some compromise in the middle. That is why the gain setting is there – the choice all depends on how you want to optimize your imaging. Understanding whether you got the correct compromise would require some data and some math.
Some CMOS sensors offer 16-bit outputs, though usually this is achieved through High Dynamic Range (HDR) techniques. This involves using two 12-bit converters with different gain settings. One converter is optimized for the lowest read noise, and the other is adjusted to record the full well depth of the pixels. The two outputs are then combined into a single 16-bit number. Some sensors also support taking two exposures back-to-back with different durations, to achieve the same effect.
An alternative to HDR is to use a 12-bit converter and stack. Some cameras include built-in stacking; for example, some SBIG cameras have the StackPro feature. These cameras can automatically add together up to 16 sub-exposures to produce a single image. Adding sixteen 12-bit images together produces a 16-bit image, and the only cost is a modest increase in read noise. Fortunately, the read noise for modern CMOS sensors is often very low. If a sensor has just 2 electrons of read noise, stacking 16 images results in a read noise figure of 8 electrons, which is comparable to many CCD sensors. As a direct comparison, an SBIG STC-428-P with a Sony IMX428 CMOS sensor with 16X stacking in High Gain mode has 7.6 electrons read noise and 80,000 electrons full well, whereas a KAF-8300 CCD sensor has a read noise of 9.3 electrons and full well of 25,500 electrons.
Conclusion
There is a lot you can do to optimize the performance of your imaging system. One of the simplest things is to optimize your exposure time to match your equipment and observing conditions. If your camera has adjustable gain settings or different operating modes, that is also worth exploring. A lot can be determined through calculation; however, some simple experimentation and comparison of the results is often the best way forward.