Top 10 Imaging Innovations
Physics World article - April 2012
Breakthroughs in scientific imaging routinely find their way into a wide variety of applications, enhancing scientists’ understanding of fields as diverse as drug discovery, astronomy, medical diagnostics, materials characterization and cancer research. With a user base spanning scientific, biomedical and industrial research, it should come as no surprise that innovation in scientific imaging appears in many guises, be it a manufacturing breakthrough that yields next-generation camera technologies or a neat bit of software magic that opens up new possibilities for an end user’s microscopy or spectroscopy system. Here, a team of product specialists from Andor Technology, a leading manufacturer of scientific digital cameras and related imaging systems, offers its take on the game-changing technologies and applications that are shaping the future of scientific imaging.

The company, which was founded in 1989 as a spin-off from the physics department at Queen’s University Belfast in Northern Ireland, now employs some 300 staff in 16 locations around the world. Floated on the London Stock Exchange’s Alternative Investment Market in 2004, it claims to be the fastest-growing maker of high-performance digital cameras.
The sCMOS sensor
Conventional imaging technologies, such as charge-coupled devices (CCDs), tend to be good at one thing but not another. With CCDs, for example, it is possible to achieve excellent signal-to-noise ratios, but because the frames have to be read out one at a time, the technique cannot operate at high speed. Conversely, when CCDs are pushed to faster frame rates, resolution and field of view are sacrificed. One breakthrough imaging technology that can outperform most scientific imaging devices on the market today is scientific CMOS (sCMOS). Based on a new generation of CMOS design and process technology, sCMOS sensors have been developed to overcome the drawbacks of traditional CMOS image sensors. The result is that sCMOS devices offer high sensitivity, fast frame rates, extremely low noise, high resolution and a large field of view all at the same time – perfect for high-fidelity quantitative scientific measurements.

Andor has used sCMOS in its flagship Neo camera, which has a 5.5-megapixel sensor with pixels just 6.5 μm across, offering a very large field of view and high resolution. It has an exceptionally low read noise of as little as 1 electron RMS (the standard unit in this context) without amplification, even at high speeds of 30 frames per second, making it better than the best CCDs. It can also operate at up to 100 frames per second, with a read noise of 1.4 electrons RMS. Thanks to a unique dual-amplifier architecture, which splits the sensor into two independently readable halves while allowing each pixel to be sampled simultaneously by both high- and low-gain amplifiers, the device also has a much higher dynamic range than a CCD with similarly small pixels.
Moreover, the image can be read out in two ways that are normally mutually exclusive: in “rolling” mode, where different lines of the array are exposed at different times as the read-out “wave” sweeps through the sensor, and in “global” mode, where each pixel begins and ends its exposure at the same time.
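To see how sampling each pixel through both amplifiers extends dynamic range, here is a minimal Python sketch that merges the two gain channels into one value per pixel. The function name, saturation threshold and 16× gain ratio are illustrative assumptions, not Andor’s actual on-board algorithm:

```python
import numpy as np

def combine_dual_gain(high_gain, low_gain, gain_ratio, saturation=4000):
    """Merge simultaneous high- and low-gain samples of each pixel.

    Below saturation, the low-noise high-gain sample is used; saturated
    pixels fall back to the low-gain sample scaled by the gain ratio.
    (Illustrative only -- real sCMOS cameras do this on-board.)
    """
    high = np.asarray(high_gain, dtype=float)
    low = np.asarray(low_gain, dtype=float)
    return np.where(high < saturation, high, low * gain_ratio)

# Example: one pixel within range, one saturated in the high-gain channel
high = np.array([1200.0, 4095.0])
low = np.array([75.0, 800.0])
print(combine_dual_gain(high, low, gain_ratio=16.0))  # [ 1200. 12800.]
```

The merged frame keeps the high-gain channel’s low read noise for dim pixels while recovering bright pixels from the low-gain channel, which is the essence of the dual-amplifier dynamic-range trick.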
Hitting the target: photostimulation
Photostimulation describes a range of new microscope techniques in which light of a specific wavelength is targeted onto a sample or substrate, while imaging it at the same time. The light typically comes from a laser, a light-emitting diode or a more conventional lamp, depending on the active-illumination (AI) technique and the power and wavelength required. It can be targeted onto a spot as small as the diffraction limit allows, with larger areas or patterns created through steering optics or a digital micromirror device. The targeted sample or substrate can be either biological (such as tissue) or non-biological (such as a semiconductor), with ultraviolet light used for high-energy applications, such as marking and micromachining, and longer, visible wavelengths for biological applications. AI is routinely used in biology to study various properties of cells, such as the role of calcium in cell-to-cell communication, while industrial applications include repairing liquid-crystal displays, and inspecting or analysing semiconductors that have failed.
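As a rough guide to how tightly such light can be targeted, the diffraction-limited (Airy) spot diameter follows d ≈ 1.22 λ/NA. A quick illustrative calculation (the wavelengths and numerical aperture below are assumed for demonstration, not taken from the article):

```python
def airy_spot_diameter(wavelength_nm, numerical_aperture):
    """Diffraction-limited spot diameter (Airy disk): d = 1.22 * lambda / NA."""
    return 1.22 * wavelength_nm / numerical_aperture

# 355 nm UV vs 532 nm visible light through an assumed NA 0.5 objective
print(round(airy_spot_diameter(355, 0.5)))  # 866 (nm)
print(round(airy_spot_diameter(532, 0.5)))  # 1298 (nm)
```

The shorter UV wavelength focuses to a markedly smaller spot, which is one reason UV is favoured for fine marking and micromachining work.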
Multiple benefits: EMCCD cameras
Introduced onto the scientific market by Andor in 2000, the electron-multiplying CCD (EMCCD) camera was a big leap forward in creating a device that is both ultrasensitive and ultrafast. EMCCD cameras use an on-chip amplification mechanism called “impact ionization”, which boosts a signal by multiplying even single-photon events well above the read-noise floor. It works by harnessing a process called “clock-induced charge” that occurs naturally in CCDs, whereby a free electron in a material’s conduction band has enough energy to create another electron–hole pair. EMCCDs make this process, which is normally thought of as a source of noise, more likely in two ways. First, the initial electron is given more energy by clocking the charge with a higher voltage. Second, the EMCCD has hundreds of cells in which impact ionization can occur; although the chance of amplification in any one cell is small, over the whole register of cells the probability is very high. The probability of charge multiplication also increases as the device is cooled and its voltage is raised, such that an EMCCD camera can achieve a gain of 1 at 20 V but several thousand at 25–50 V.

The EMCCD is sensitive enough to detect individual photons – even at speeds of more than 11 000 frames per second in the case of Andor’s new iXon Ultra 897 device. The technique is therefore ideal for demanding ultralow-light measurements, such as pinpointing individual molecules or counting single photons. The iXon Ultra 897 can take data at frame rates 60% faster than its predecessor, with best-in-class sensitivity among scientific digital cameras, while allowing the user to capture and view data in terms of either electrons or incident photons.
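The cascade through the multiplication register is commonly modelled by giving each of the N cells the same small ionization probability p, so the mean gain is G = (1 + p)^N. A short sketch with illustrative numbers (the 536-cell register and 1.5% per-cell probability are assumptions chosen for demonstration):

```python
def em_gain(p_per_stage, n_stages):
    """Mean electron-multiplying gain of a register of n_stages cells,
    each with impact-ionization probability p_per_stage: G = (1 + p)^n."""
    return (1.0 + p_per_stage) ** n_stages

# Even a ~1.5% chance per cell, compounded over 536 cells,
# yields a mean gain of roughly 3000
print(round(em_gain(0.015, 536)))
```

This compounding is why a tiny, voltage-tunable per-cell probability is enough to lift single-photon events far above the read-noise floor.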
Dark powers: thermoelectric cooling
One problem that plagues all photosensitive devices is the current that flows through them even when no photons are entering the device. Known as “dark current”, it is caused by electrons and holes being randomly generated in the device, which are then swept along by the high electric field. Dark current is a bigger problem with EMCCD technology than it is for standard CCDs because the former involves amplifying any electrons – photon-generated electrons and dark electrons alike. Cooling the device can, however, reduce the current, and the best way to do this is to use “thermoelectric coolers” – small, electrically powered devices with no moving parts that are therefore convenient and reliable. These coolers are essentially heat pumps, transferring heat from their “cold” side (the CCD) to their “hot” side (the built-in heat sink). Andor has developed a system of vacuum cooling that creates an unrivalled 110 °C temperature difference between the two sides – so large that the CCD operates at –80 °C and the dark current is virtually eliminated. The camera’s performance at these temperatures is far better than at –30 °C, particularly for long exposure times, where background events come predominantly from dark current. Deep thermoelectric cooling also reduces blemishes on the image from “hot pixels” – those that have much higher dark currents than their neighbours as a result of contamination embedded in the sensor. Fortunately, the effect of hot pixels can usually be removed by taking a background image.
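A common rule of thumb – an approximation only, since the true dependence is exponential in temperature – is that dark current roughly halves for every ~6–7 °C of cooling. The sketch below uses that rule to estimate the benefit of running at –80 °C rather than room temperature (the 6.7 °C halving interval and the 1 e⁻/pixel/s starting rate are assumed values):

```python
def dark_current(rate_at_20C, temp_C, halving_interval=6.7):
    """Rough dark-current estimate: halves for every ~6.7 degC of cooling.
    (A rule of thumb, not the full exponential-in-1/kT physics.)"""
    return rate_at_20C * 2.0 ** ((temp_C - 20.0) / halving_interval)

# Cooling 100 degC, from 20 degC down to -80 degC, suppresses dark
# current by a factor of roughly 30,000 under this rule of thumb
rate = dark_current(1.0, -80.0)
print(f"{rate:.2e} e-/pixel/s")
```

Even with crude numbers, the estimate shows why deep cooling can make dark current a negligible contribution to the background.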
Lighting up life: optogenetics
One extension of the active-illumination technique mentioned above involves optically manipulating cell function, for example, to investigate behaviour control in organisms, characterize signalling pathways or test cellular network models. Broadly known as “optogenetics”, the approach was named “Method of the Year 2010” by the journal Nature Methods. It involves genetically modifying “phytochromes” – light-sensitive signalling proteins found in plants – so that they respond to different wavelengths of light, say being activated by light with a wavelength of 650 nm, but inactivated by light above 750 nm. These proteins can then be genetically integrated with subcellular components of interest. It is possible, for example, to activate and silence subcellular components using special software that finely tunes when, where and with what brightness the light falls on a sample. The technique has many applications, notably in understanding how signals pass through a system. Optogenetic tools can be applied either at the microscopic level of cells and cell networks, or at the macroscopic, whole-animal level to study conditions such as obesity.
Simple tack: laser-free microscopy
Laser-based imaging, in the form of confocal microscopy, is generally high on any imaging facility’s wish list. The primary benefit of the technique, which uses a laser to scan a slice through a specimen one point at a time, is that it creates high-contrast, high-resolution 2D or 3D images. But while laser-based imaging systems have many benefits, much of the routine imaging that they are used for can actually be carried out using cheaper and more accessible laser-free solutions. In addition to being cheaper, laser-free systems also give users a wider choice of wavelengths. The oldest of these laser-free techniques uses a wide-field “epifluorescence” microscope, in which light passes via the objective lens (rather than directly) onto the sample, is absorbed by “fluorophores” in the sample and is then re-emitted at a different wavelength. One drawback is that this light has to be converted into an image using sluggish algorithm-based deconvolution or image-processing techniques. However, faster computers mean that these algorithms can now run at high speed, while new hardware developments have led to confocal-imaging devices that can fit onto conventional wide-field fluorescence microscopes. These simple devices, such as the differential spinning disk, offer fast laser-free imaging, making epifluorescence light sources routine. Applications include fixed-cell 3D imaging and some multidimensional live imaging.
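The algorithm-based deconvolution mentioned above can be illustrated with the classic Richardson–Lucy iteration. This 1D NumPy sketch is a toy version (the signal, point-spread function and iteration count are invented for demonstration), not any particular vendor’s implementation:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30):
    """Minimal 1D Richardson-Lucy deconvolution: iteratively re-weight an
    estimate so that, when blurred by the PSF, it matches the observation."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Blur two point sources with a small Gaussian-like PSF, then restore them
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(40)
truth[12] = 1.0
truth[25] = 2.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
print(restored.argmax())  # brightest restored peak back at index 25
```

After a few dozen iterations the two smeared-out sources sharpen back towards point-like peaks, which is exactly the job the "sluggish" deconvolution step performs on real wide-field stacks.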
Sharp approach: adaptive optics
For almost two decades astronomers have been trying to obtain high-quality, “diffraction-limited” images using ground-based telescopes. Achieving this fundamental limit is not easy because the Earth’s atmosphere is turbulent, which distorts and blurs any image of the skies. One way of getting around this problem is to use “adaptive optics”, which involves measuring the distortion of light from, for example, a guide star and then using this information to correct the signal – usually by using deformable mirrors to achieve the final image. However, in recent years adaptive optics has failed to fully achieve its promise, partly because atmospheric turbulence is more complex than initially modelled. This has triggered a growing interest in alternative techniques such as “lucky imaging”, which involves using a high-speed camera to take images over short (100 ms or less) intervals, during which changes in the Earth’s atmosphere are minimal. The small fraction of exposures least affected by the atmosphere are then selected and combined to create a single image at, or near, the diffraction limit. Hybrid techniques, such as “AO lucky”, may have the best potential of all.
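The frame-selection step at the heart of lucky imaging is easy to sketch: score each short exposure for sharpness, keep the best few percent and co-add them. The peak-intensity sharpness metric and the simulated frames below are illustrative assumptions, not an astronomy-grade pipeline:

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Select and co-add the sharpest frames from a high-speed sequence.
    Sharpness is scored here by peak intensity -- a simple proxy for how
    little the atmosphere smeared each short exposure."""
    frames = np.asarray(frames, dtype=float)
    scores = frames.reshape(len(frames), -1).max(axis=1)
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return frames[best].mean(axis=0)

# 100 simulated 8x8 frames: mostly blurred noise, plus a few frames in
# which the "star" at pixel (4, 4) comes through sharply
rng = np.random.default_rng(0)
frames = rng.normal(10, 1, size=(100, 8, 8))
frames[::20, 4, 4] += 50.0                # 5 "lucky" sharp frames
result = lucky_stack(frames, keep_fraction=0.05)
print(result[4, 4] > result.mean())       # True: the star survives stacking
```

Keeping only the best 5% of exposures discards the atmospheric blur at the cost of total exposure time, which is the basic trade-off of the technique.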
New dimensions: 3D imaging
Today’s life scientists can draw on a vast array of novel imaging techniques that offer remarkable pictures of everything from single molecules to entire cells, tissues and organs. But obtaining these images often involves analysing and processing vast amounts of scientific data obtained using different techniques. Powerful image-analysis software can, however, help with such multidimensional and multimodal image sets, by guiding and aiding users through the analysis of what are often massive, multi-gigabyte datasets. If biologists are to extract the information that they are after, such software needs to be flexible, scalable and easy to use. The Imaris software from Bitplane, an Andor company, combines all of these functionalities and offers a powerful visualization and analysis suite that has already been used to analyse the small protrusions, or “spines”, through which a nerve cell receives its input signals, and to map the relationships between cellular structures.
On the up: hyperspectral imaging
Hyperspectral imaging combines quantitative 2D spatial images with a third, spectral dimension to yield a “data cube” that records a sample’s spectral signature at every pixel. The short- and mid-infrared regions of the spectrum are key drivers for this technique, with applications in areas such as forensic science, process control, recycling, and airborne agricultural and geological surveys. But the development of ultrafast and ultrasensitive detection systems, based on sCMOS or EMCCD detectors (see above), has also led to ways of screening products quickly using ultraviolet and visible light. In the food industry, for example, such techniques allow firms to check whether fruits have matured enough or whether chicken meat has been contaminated. One of the most recent developments has been seen in the medical industry, with hyperspectral devices being used to test patients for signs of malignant melanoma, thus opening the door to non-invasive, faster and cheaper screening.
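The “data cube” is simply a 3D array: two spatial axes plus one spectral axis. A brief NumPy sketch (the cube dimensions are arbitrary, illustrative values) shows the typical access patterns – a full spectrum per pixel, or a monochromatic image per spectral band:

```python
import numpy as np

# A hyperspectral "data cube": 64x64 pixels, each carrying a
# 100-point spectrum (random data standing in for measurements)
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 100))

pixel_spectrum = cube[10, 20, :]        # full spectrum at one (x, y) pixel
band_image = cube[:, :, 42]             # monochromatic image at band 42
mean_spectrum = cube.mean(axis=(0, 1))  # average spectral signature

print(pixel_spectrum.shape, band_image.shape, mean_spectrum.shape)
```

Classification tasks – ripe vs unripe fruit, healthy vs contaminated meat – then reduce to comparing each pixel’s spectrum against reference signatures.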
Super stuff: past the diffraction limit
In 1873 the German physicist Ernst Abbe discovered that the resolution of a focusing light microscope is fundamentally limited by diffraction. Although the advent of confocal and multiphoton fluorescence microscopes has helped 3D imaging, these instruments have not really been able to improve the resolution of an image. In the best case, such focusing microscopes can resolve objects no smaller than 200 nm in the x–y focal plane and only 500–800 nm along the z-axis. Unfortunately, many of the cellular organelles involved in physiologically important processes – cell-to-cell communication, growth and response to environmental signals – are smaller than 200 nm and therefore go unseen.
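Abbe’s lateral limit can be written d = λ/(2 NA), which reproduces the ~200 nm figure quoted above for visible light and a high-NA objective. A quick illustrative calculation (the wavelength and numerical aperture are assumed, typical values):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Abbe's lateral resolution limit: d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (520 nm) through an NA 1.4 oil-immersion objective
print(round(abbe_limit(520, 1.4)))  # 186 (nm) -- close to the ~200 nm limit
```

No amount of better focusing optics pushes below this figure for a given wavelength and aperture, which is exactly the barrier super-resolution methods are designed to sidestep.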
Super-resolution techniques, known by acronyms such as STORM/PALM, FIONA and SSIM, allow images to be captured with a higher resolution than the diffraction limit, enabling the user to inspect structures of interest in ever finer detail (even down to sub-micron resolution). These novel approaches involve linear and non-linear fluorescence methods as well as refined methods of point-spread-function analysis at the single-molecule level. These techniques have allowed the spatial resolution of images to be enhanced by as much as 100-fold.
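A core ingredient of the STORM/PALM family is localizing each isolated emitter far more precisely than the width of its diffraction-limited spot. The simplest localizer is an intensity-weighted centroid, sketched below on a made-up 5×5 spot (real pipelines typically fit a Gaussian to the point spread function instead):

```python
import numpy as np

def localize_centroid(image):
    """Estimate a single emitter's position from its diffraction-limited
    spot via an intensity-weighted centroid -- the simplest form of the
    localization step in single-molecule super-resolution methods."""
    image = np.asarray(image, dtype=float)
    ys, xs = np.indices(image.shape)
    total = image.sum()
    return (ys * image).sum() / total, (xs * image).sum() / total

# A toy spot whose true centre lies between pixel columns 2 and 3
spot = np.array([[0, 0, 0, 0, 0],
                 [0, 1, 2, 2, 1],
                 [0, 2, 4, 4, 2],
                 [0, 1, 2, 2, 1],
                 [0, 0, 0, 0, 0]], dtype=float)
print(localize_centroid(spot))  # (2.0, 2.5): a sub-pixel position
```

Even though the spot itself spans several pixels, its centre is recovered with sub-pixel precision; repeating this for thousands of individually blinking molecules is what builds up the final super-resolved image.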