The Principles Behind Digital Camera Technology | Andor

Digital Camera Fundamentals


In recent years, light measurement has evolved from a dependence on traditional emulsion-based film photomicrography to one where electronic images are the medium of choice. The image recording device is one of the most critical components in many experiments, so understanding how light images are recorded, and the choices available, can enhance the quality of light measurement data. This guide aims to provide an understanding of the basics of light detection and to help in selecting a suitable detector for specific applications. High performance digital cameras can be defined by a number of variables. Each of these variables is discussed in detail in subsequent sections, but a brief description is included here for convenience.

Scientific digital cameras come in four primary types based on the sensor technology they use: CCD, EMCCD, CMOS and ICCD cameras. The different cameras and their various architectures have inherent strengths and weaknesses, and these are covered in depth.

The most common scientific camera, the Charge Coupled Device (CCD) camera, comes in three fundamental architectures: Full Frame, Frame Transfer and Interline format. The strengths and weaknesses inherent to each architecture are also covered in depth.

The spectral response of a camera refers to the detected signal response as a function of the wavelength of light. This parameter is often expressed in terms of the Quantum Efficiency (hereinafter in this document referred to as QE), a measure of the detector's ability to convert incident photons into electronic charge, expressed as a percentage of the total number of incident photons.
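As a minimal illustrative calculation (the photon and electron counts below are hypothetical, not figures for any real sensor), QE is simply the fraction of incident photons converted into photoelectrons:

```python
def quantum_efficiency(electrons_generated, incident_photons):
    """Quantum efficiency: the percentage of incident photons
    that are converted into photoelectrons by the detector."""
    return 100.0 * electrons_generated / incident_photons

# Hypothetical example: 9,200 photoelectrons from 10,000 incident photons
print(quantum_efficiency(9200, 10000))  # 92.0 (% QE)
```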

The sensitivity of a camera is the minimum light signal that can be detected; by convention this is equated to the light level falling on the camera that produces a signal just equal to the camera's noise. Hence the noise of a camera sets an ultimate limit on its sensitivity. Digital cameras are therefore often compared using their noise figures. Noise derives from a variety of sources, principally:
Read Noise: inherent output amplifier noise
Dark Noise: thermally induced noise arising from the camera in the absence of light (can be reduced by lowering the operating temperature)
Shot Noise (Light Signal): noise arising out of the stochastic nature of the photon flux itself

It is often overlooked that the light signal has its own inherent noise component (also known as Shot Noise), which is equal to the square root of the signal. Another noise source which is often overlooked is the excess noise that arises from the camera's response to the light signal, which is known as the Noise Factor.
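Because these noise sources are statistically independent, they add in quadrature. A minimal sketch of that combination, assuming hypothetical values for signal level, read noise and dark current:

```python
import math

def total_noise(signal_e, read_noise_e, dark_current_e_per_s, exposure_s,
                noise_factor=1.0):
    """Combine independent noise sources in quadrature (all in electrons).
    Shot noise is the square root of the signal; the noise factor models
    any excess noise in the camera's response to light (1.0 = none)."""
    shot = math.sqrt(signal_e)
    dark = math.sqrt(dark_current_e_per_s * exposure_s)
    return math.sqrt((noise_factor * shot) ** 2 + dark ** 2 + read_noise_e ** 2)

# Hypothetical example: 100 e- signal, 5 e- read noise,
# 0.01 e-/pixel/s dark current, 10 s exposure
print(round(total_noise(100, 5, 0.01, 10), 2))
```

Note that with a signal of 100 electrons the shot noise (10 e-) already dominates the read noise in this sketch, which is why shot noise cannot be ignored at moderate light levels.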

Dynamic Range is a measure of the maximum and minimum intensities that can be simultaneously detected in the same field of view. It is often calculated as the maximum signal that can be accumulated, divided by the minimum signal which in turn equates to the noise associated with reading the minimum signal. It is commonly expressed either as the number of bits required to digitise the associated signals or on the decibel scale.
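As a worked sketch of that calculation, using hypothetical full-well and read-noise figures, the ratio can be converted to both the decibel scale and the number of bits needed to digitise it:

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Return dynamic range as (ratio, decibels, bits to digitise it).
    The ratio is maximum accumulable signal over the read noise floor."""
    ratio = full_well_e / read_noise_e
    decibels = 20.0 * math.log10(ratio)
    bits = math.ceil(math.log2(ratio))
    return ratio, decibels, bits

# Hypothetical sensor: 100,000 e- full well, 10 e- read noise
ratio, db, bits = dynamic_range(100_000, 10)
print(ratio, db, bits)  # 10000.0 80.0 14
```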

A camera's ability to cope with large signals is important in some applications. When a CCD camera saturates it does so with a characteristic vertical streak pattern, called Blooming. This section explains the effect and how it can be compensated for.

A camera's signal-to-noise ratio (commonly abbreviated S/N or SNR) compares the incoming light signal with the various inherent or generated noise levels; it is a measure of the variation of a signal that indicates the confidence with which the magnitude of the signal can be estimated.
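Under the same simplifying assumptions as before (all values hypothetical, quantities in electrons), SNR follows directly from the signal and the combined noise:

```python
import math

def snr(signal_e, read_noise_e, dark_noise_e):
    """Signal-to-noise ratio, with shot noise taken as sqrt(signal)
    and the independent noise terms combined in quadrature."""
    noise = math.sqrt(signal_e + dark_noise_e ** 2 + read_noise_e ** 2)
    return signal_e / noise

# Hypothetical: 400 e- signal, 5 e- read noise, 2 e- dark noise
print(round(snr(400, 5, 2), 1))
```

In the shot-noise-limited regime (read and dark noise negligible), SNR reduces to the square root of the signal, so quadrupling the light only doubles the SNR.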

Digital cameras have finite minimum regions of detection (commonly known as Pixels) that set a limit on the Spatial Resolution of a camera. However, spatial resolution is also affected by other factors, such as the quality of the lens or imaging system. The limiting spatial resolution is commonly determined from the minimum separation required for discrimination between two high contrast objects, e.g. white points or lines on a black background. Contrast is an important factor in resolution, as high contrast objects (e.g. black and white lines) are more readily resolved than low contrast objects (e.g. adjacent gray lines). The contrast and resolution performance of a camera can be incorporated into a single specification called the Modulation Transfer Function (MTF).
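As a simple illustration of the pixel-size limit alone (the pixel size below is hypothetical, and real resolution also depends on the optics), the Nyquist criterion requires at least two pixels per line pair:

```python
def nyquist_limited_resolution(pixel_size_um):
    """Smallest resolvable line-pair spacing (in micrometres) set by the
    sensor alone: the Nyquist criterion needs two pixels per line pair."""
    return 2.0 * pixel_size_um

# Hypothetical 6.5 um pixels
print(nyquist_limited_resolution(6.5))  # 13.0 um per line pair
```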

The Frame Rate of a digital camera is the fastest rate at which subsequent images can be recorded and saved. Digital cameras can read out sub-sections of the image, or bin pixels together, to achieve faster readout rates; therefore two frame rates are typically quoted: the full-frame readout rate and the fastest possible readout rate.
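A rough estimate of how sub-region readout and binning raise the frame rate, under the simplifying assumption that readout time dominates (sensor size and readout rate below are hypothetical):

```python
def frame_rate(width_px, height_px, readout_rate_px_per_s, binning=1):
    """Approximate frames per second, assuming readout time dominates
    (exposure and overheads ignored). Binning n x n reduces the number
    of pixels digitised by a factor of n**2."""
    pixels_read = (width_px * height_px) / (binning ** 2)
    return readout_rate_px_per_s / pixels_read

# Hypothetical 1024 x 1024 sensor read out at 10 Mpixel/s
print(round(frame_rate(1024, 1024, 10e6), 2))              # full frame
print(round(frame_rate(1024, 1024, 10e6, binning=2), 2))   # 2x2 binned: ~4x faster
```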

All cameras exhibit blemishes to some degree, which affect the reproduction of the light signal. This is due to several variables, including:
Gain variations across the sensor
Regional differences in noise
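Gain variations of this kind are commonly corrected in software by flat-field correction. A minimal sketch, assuming a dark frame and a flat-field reference frame are available (the pixel values are hypothetical, and frames are represented as simple lists):

```python
def flat_field_correct(raw, dark, flat):
    """Per-pixel flat-field correction for gain variations across a sensor.
    raw, dark and flat are equal-length lists of pixel values; the
    dark-subtracted flat frame is normalised to its mean so the overall
    signal level is preserved."""
    flat_sub = [f - d for f, d in zip(flat, dark)]
    mean_flat = sum(flat_sub) / len(flat_sub)
    return [(r - d) * mean_flat / fs for r, d, fs in zip(raw, dark, flat_sub)]

# Hypothetical 4-pixel example: a uniform scene imaged with uneven gain
print(flat_field_correct(raw=[110, 90, 100, 100],
                         dark=[10, 10, 10, 10],
                         flat=[110, 90, 100, 100]))  # [90.0, 90.0, 90.0, 90.0]
```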

EMCCD cameras are a relatively new type of camera which allows high sensitivity measurements to be taken at high frame rates. The operation and properties of these cameras are outlined.

Intensified CCD cameras combine an image intensifier and a CCD camera and are inherently low light cameras. In addition the image intensifier has useful properties which allow the camera to have very short exposure times. The operation and properties of these cameras are outlined in this section.

In this section a detailed comparison between CCD, EMCCD and ICCD cameras is presented and the applications suited to each camera are highlighted.
