
Hyper/Multi-Spectral Imaging merges imaging and spectroscopy in a single, powerful analytical instrument

Hyper/Multi-Spectral Imaging (collectively hereafter referred to as Spectral Imaging) merges imaging with spectroscopy in a single instrument. It is the technology of choice when solid, heterogeneous objects are examined and the analysis requires compositional information to be gathered on a point-by-point basis. From the imaging perspective, spectral imaging advances the well-known three-channel color camera by acquiring several narrow wavelength band channels (colors).

Spectroscopes and color cameras were developed independently and find an enormous number of applications in diverse fields. Spectroscopes offer dense wavelength sampling (several hundred bands) of light intensity for a single spatial point, in other words, a very detailed spectrum. Color cameras, on the other hand, record light intensity at multiple, matrix-type spatial points but in just three wavelength channels. Due to these unique characteristics (and trade-offs), spectroscopes are intended for chemical analyses, whereas color cameras serve photography and morphological analyses. The desire for advanced sensor technologies capable of combining the advantages of both spectrometers and imagers in a single instrument sparked highly competitive research toward developing spectral imaging technologies. Several innovations in the field, including the invention of electro-optic tunable filters, led to the development of research-grade spectral imaging systems, which are currently exploited in an increasing number of applications with impressive results.

Spectral imaging capitalizes on its unique combination of the advantages of both imaging and spectroscopy (high spatial and high spectral resolution) in a single instrument. The figure below illustrates the dataset collected by spectral imaging systems: a stack of images, each acquired at a narrow spectral band, which together compose the so-called spectral cube.

The spectral cube contains all the information required to calculate a full spectrum for every image pixel, enabling spatially resolved spectroscopy. In practical terms, spectral imagers acquire several dozen narrow-band spectral images and millions of spectra, with a scan lasting about one minute.
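As an illustration, the spectral cube can be thought of as a three-dimensional array indexed by (x, y, λ). The following minimal Python/NumPy sketch, with illustrative dimensions and random data standing in for a real acquisition, shows how a full spectrum is read out for any pixel:

```python
import numpy as np

# A minimal sketch of a spectral cube as a NumPy array.
# Dimensions and band count are illustrative, not from any specific instrument.
height, width, n_bands = 128, 128, 60          # 60 narrow spectral bands
wavelengths = np.linspace(400, 1000, n_bands)  # band centers in nm (VIS-NIR)

# In a real system each cube[:, :, k] would be the image acquired at band k.
cube = np.random.rand(height, width, n_bands)

# Spatially resolved spectroscopy: a full spectrum for any pixel (y, x).
spectrum = cube[100, 64, :]                    # intensity vs. wavelength at one pixel
print(wavelengths[np.argmax(spectrum)])        # wavelength of peak response
```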

By offering both spatial and spectral information, spectral imaging is an exciting analytical advance that answers commonly asked questions such as what chemical species are present in a sample, how much of each is present, and, most importantly, where they are located.

It is common practice to subdivide spectral imaging into multispectral imaging for images with a few wavebands and hyperspectral imaging for images composed of hundreds of wavebands. It should be noted, however, that there is no widely accepted cut-off value discriminating between these two “categories”, and so the term Spectral Imaging is often used to avoid misinterpretation.


Color vs. Spectral Cameras

A. Color Cameras

Photons encountering the pixels of an imaging sensor create electrons in the pixel cells (photoelectric effect), so that the number of electrons generated is proportional to the number of incident photons. The photo-electrons are then converted to voltage levels proportional to their number, providing a straightforward means to digitize and measure light intensities on a pixel-by-pixel basis.
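As a toy illustration of this conversion chain, the sketch below turns photon counts into digital intensity readings; the quantum efficiency and ADC gain values are assumptions for the example, not taken from any particular sensor:

```python
import numpy as np

# Hypothetical pixel response: photo-electrons are proportional to photons
# (photoelectric effect), then converted to a voltage and digitized.
photons = np.array([1000, 5000, 20000])   # photons hitting three pixels
quantum_efficiency = 0.6                  # assumed fraction converted to electrons
gain_e_per_dn = 2.0                       # assumed electrons per digital number

electrons = photons * quantum_efficiency            # proportional to photon count
digital_numbers = np.round(electrons / gain_e_per_dn).astype(int)
print(digital_numbers)                              # per-pixel intensity readings
```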

The photon’s wavelength information, however, is not “transferred” to the electrons; hence unfiltered imaging chips are color blind. Color and spectral imaging devices therefore employ optical filters placed in front of the imaging chip. Color imagers use either silicon Charge Coupled Device (CCD) or CMOS sensors, which are sensitive in the visible and Near Infrared (NIR) parts of the spectrum (400-1000 nm). A blocking filter is used to reject the NIR band (700-1000 nm). Color cameras capture three primary colors by means of a mosaic filter assembly disposed over the pixelized sensor array. Each pixel thus captures one primary color, and millions of colors are then generated by combining the relative primary color intensities.
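The sketch below illustrates the idea with a hypothetical Bayer-type mosaic (rows alternating R,G and G,B); the raw frame and pattern are illustrative, not a specific camera’s layout:

```python
import numpy as np

# Sketch of a mosaic (Bayer-type) filter: each pixel records one primary color.
# Pattern assumed here: rows alternate R,G / G,B (a common Bayer layout).
raw = np.random.rand(4, 4)        # raw sensor frame, one value per pixel

red   = raw[0::2, 0::2]           # R pixels: even rows, even columns
green = np.concatenate((raw[0::2, 1::2].ravel(),   # G pixels appear twice
                        raw[1::2, 0::2].ravel()))
blue  = raw[1::2, 1::2]           # B pixels: odd rows, odd columns

# Each color channel is sampled at reduced spatial resolution; camera
# electronics interpolate the missing values ("spatial color interpolation").
print(red.shape, green.shape, blue.shape)
```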


Camera electronics combine the Red, Green, and Blue (R, G, B) imaging channels into a high-quality color image, which is delivered to external devices through an analog or digital interface. Because each pixel “sees” only one primary color, three pixels are required to record the color of the corresponding area of the object, which significantly reduces the spatial resolution of the imager. This unwanted effect is partially compensated by a method called “spatial color interpolation”, carried out by the camera electronics. Color cameras emulate human vision for color reproduction and are real-time devices, since they record three spectral bands simultaneously at very high frame rates. Human vision-emulating color imaging devices describe color with three parameters (RGB values), which are easy to interpret since they model familiar color perception processes. They share, however, the limitations of human color vision: by allocating the incoming light to just three color coordinates, they miss significant spectral information. As a result, objects emitting or remitting light with completely different spectral compositions can have precisely the same RGB coordinates, a phenomenon known as metamerism. The direct impact of metamerism is the inability of color imaging systems to distinguish between materials having the same color appearance but different chemical composition. This sets serious limitations on their analytical power and consequently on their diagnostic capabilities.
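The following sketch illustrates metamerism numerically. The filter responses are contrived toy curves (not real camera sensitivities), and the second spectrum is constructed by adding a component that the three-channel projection cannot see:

```python
import numpy as np

# Metamerism demo: RGB sensing projects an N-point spectrum onto only three
# numbers, so spectra differing by a null-space component are indistinguishable.
wl = np.linspace(400, 700, 31)

def band(center, width=40.0):                 # toy band-pass filter response
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

M = np.stack([band(600), band(550), band(450)])   # 3 x 31 projection matrix

spectrum_a = band(550, 80.0)                  # a smooth reference spectrum

# Pick a direction the camera cannot see: a vector in the null space of M.
_, _, vt = np.linalg.svd(M)
null_dir = vt[-1]                             # M @ null_dir is (numerically) zero
spectrum_b = spectrum_a + 0.2 * null_dir      # physically different spectrum

print(M @ spectrum_a)                         # same RGB coordinates...
print(M @ spectrum_b)                         # ...despite different spectral content
```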

B. Spectral Cameras

Unlike images taken with standard color (RGB) cameras, Spectral Imaging (SI) information is not directly discernible to the human eye. In SI, a series of images is acquired at many wavelengths, producing a spectral cube. Each pixel in the spectral cube therefore carries the spectrum of the scene at that point. The imagery data are typically multidimensional, spanning two spatial dimensions and one spectral dimension (x, y, λ).

The figure below illustrates, for comparison, the datasets generated by color and SI systems. A color camera typically captures three images corresponding to the band-pass characteristics of the RGB primary color filters (a). Color image pixels miss significant spectral information, as it is integrated into three broad spectral bands (b). The color of a pixel can be represented as a vector in a three-dimensional “color space” having the RGB values as coordinates (c). SI systems collect a stack of images, each acquired at a narrow spectral band, which together compose the spectral cube (d). A complete spectrum can be calculated for every image pixel (e), which can alternatively be represented as a vector in a multidimensional “spectral space” (f).

Imagery data capturing and representation in color (a, b, c) and spectral (d, e, f) cameras

Spectral Imaging systems use monochrome sensors or sensor arrays, which can capture only two of the three dimensions of the spectral cube at a time. To capture the third dimension, spatial or spectral scanning is required. Depending on the method employed for building the spectral cube, Spectral Imaging devices are classified as follows: a) whiskbroom devices, where a linear sensor array collects the spectrum (λ dimension) from a single point at a time, and the two spatial coordinates (x, y) are covered by spatial scanning; b) pushbroom devices, in which a 2-D sensor array is used, one dimension of which captures the first spatial coordinate (x) and the other the spectral coordinate in each camera frame, while the second spatial coordinate (y) is captured with line (slit) scanning; c) staring (or tunable filter) devices, where a 2-D sensor array is coupled with an imaging monochromator that is tuned to scan the spectral domain, a full spectral image frame being recorded at each scanning step.
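The loop structures below sketch how each scanning mode assembles the same (y, x, λ) cube; the read_spectrum, read_line, and read_frame functions are hypothetical stand-ins for device I/O:

```python
import numpy as np

# Illustrative cube-building strategies; each alternative fills the same cube.
H, W, B = 100, 120, 40                        # y, x, and spectral dimensions

def read_spectrum(x, y): return np.random.rand(B)      # whiskbroom: one point
def read_line(y):        return np.random.rand(W, B)   # pushbroom: one slit line
def read_frame(band):    return np.random.rand(H, W)   # staring: one full image

cube = np.empty((H, W, B))

# a) Whiskbroom: spatial (x, y) scanning, full spectrum per point.
for y in range(H):
    for x in range(W):
        cube[y, x, :] = read_spectrum(x, y)

# b) Pushbroom: line scanning along y; each frame holds (x, lambda).
for y in range(H):
    cube[y, :, :] = read_line(y)

# c) Staring: spectral scanning; each step yields a full image at one band.
for band in range(B):
    cube[:, :, band] = read_frame(band)
```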

Whiskbroom and pushbroom imagers, which use spatial scanning to build the spectral cube, do not provide a live display of spectral images, since the images are computed from the spectra only after the spatial scanning of the corresponding area is complete. Staring imagers, on the other hand, tune the imaging wavelength, and the spectra are calculated from the spectral cube composed of the spectral images captured in time sequence. Compared to the other approaches, staring imagers have the advantage of displaying live spectral images, which is essential for aiming and focusing.

Spatial scanning SI instruments were originally developed for remote sensing applications. These instruments are typically carried by a platform, such as a satellite or airplane, and have been used extensively for Earth resources monitoring. In these applications the movement of the platform provides the spatial scanning mechanism, and the spectrally scanned lines are stitched together to construct spectral images of the scene. The spatial scanning concept has been transferred to biomedical imaging, and especially to laser microscopy. A typical microscope operating on the spatial scanning principle is the confocal microscope: the sample is spatially scanned with a laser, and the resulting fluorescence emission spectrum is collected for each point by a detector after passing through the confocal aperture of the microscope.

Staring devices, or full frame spectral imagers (tunable filter and snapshot), are more effective in accommodating both static and moving scenes. In static scene applications in particular, no external mechanical scanners are required. This simplifies setups and maximizes portability, thus extending hyperspectral imaging to the vast number of applications addressed with conventional full frame cameras.


Analysis of Spectral Imaging data

A major advantage of acquiring two-dimensional spectral information from a series of complete images, as opposed to acquiring point information, is that both spectral and spatial information can be depicted and displayed comprehensively with the aid of so-called “thematic maps”. A thematic map may be the outcome of a classification or unmixing process; it is constructed by assigning different artificial colors or gray shades to pixel clusters belonging to different classes.

Usually a “training set” of spectra is collected for each class, with the size of the set depending on the complexity of the material under investigation. A classification process usually involves assigning each pixel to a predefined class. Classes should be exhaustive, meaning that they should include not only the detection targets but also all the other distinct categories present in a scene. Generally speaking, a sufficient number of different categories reflecting the complexity of the material should be defined in order for all the pixels to be classified correctly. Another important issue when collecting spectra for training is that classes must be separable in terms of the available spectral features. Low separability may be due to high similarity between the spectral signatures of different classes, which will lead to poor results regardless of the efficiency of the classifier.
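As a minimal sketch of such pixel-wise classification, the example below uses the spectral angle as the similarity measure (one common choice) with class-mean spectra standing in for a training set; real pipelines typically use richer classifiers:

```python
import numpy as np

# Assign each cube pixel to the class whose mean spectrum is most similar,
# measuring similarity as the angle between spectra (smaller angle = closer).
def spectral_angle(s, ref):
    cos = (s @ ref) / (np.linalg.norm(s) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(cube, class_means):
    H, W, _ = cube.shape
    labels = np.empty((H, W), dtype=int)
    for y in range(H):
        for x in range(W):
            angles = [spectral_angle(cube[y, x], m) for m in class_means]
            labels[y, x] = int(np.argmin(angles))   # nearest class wins
    return labels

cube = np.random.rand(64, 64, 40)                    # stand-in spectral cube
class_means = [np.random.rand(40) for _ in range(3)] # from a "training set"
thematic = classify(cube, class_means)               # per-pixel class labels
```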

Spectral mapping facilitates the direct visualization of the various image clusters corresponding to the identified classes. When more than two classes are present, it is more convenient to represent them using a pseudocolor scale, with each color indicating an area within the sample having the same or similar spectral characteristics.
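A thematic map of this kind can be rendered by indexing an arbitrary pseudocolor palette with the per-pixel class labels, as in this brief sketch (the palette and label image are illustrative):

```python
import numpy as np

# Render a thematic map: assign an artificial color to each class label.
palette = np.array([[255, 0, 0],      # class 0 -> red
                    [0, 255, 0],      # class 1 -> green
                    [0, 0, 255]],     # class 2 -> blue
                   dtype=np.uint8)

thematic = np.random.randint(0, 3, size=(64, 64))   # stand-in label image
thematic_map = palette[thematic]                    # (64, 64, 3) pseudocolor image
```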