DC-16 - Nature of Multispectral Image Data


A multispectral image comprises a set of co-registered images, each of which captures the spatially varying brightness of a scene in a specific spectral band, or electromagnetic wavelength region. An image is structured as a raster, or grid, of pixels. Multispectral images are used as a visual backdrop for other GIS layers, to provide information that is manually interpreted from images, or to generate automatically derived thematic layers, for example through classification. The scale of multispectral images has spatial, spectral, radiometric and temporal components. Each component of scale has two aspects, extent (or coverage), and grain (or resolution). The brightness variations of an image are determined by factors that include (1) illumination variations and effects of the atmosphere, (2) spectral properties of materials in the scene (particularly reflectance, but also, depending on the wavelength, emittance), (3) spectral bands of the sensor, and (4) display options, such as the contrast stretch, which affect the visualization of the image. This topic review focuses primarily on optical remote sensing in the visible, near infrared and shortwave infrared parts of the electromagnetic spectrum, with an emphasis on satellite imagery.

Author and Citation Info: 

The latest version of the entry "Nature of Multispectral Image Data" may be cited as:

Warner, T. A. (2017). Nature of Multispectral Image Data. The Geographic Information Science & Technology Body of Knowledge (3rd Quarter 2017 Edition), John P. Wilson (ed.). DOI: 10.22224/gistbok/2017.3.1

This entry was published on July 17, 2017.

This Topic is also available in the following editions: DiBiase, D., DeMers, M., Johnson, A., Kemp, K., Luck, A. T., Plewe, B., and Wentz, E. (2006). Nature of multispectral image data. The Geographic Information Science & Technology Body of Knowledge. Washington, DC: Association of American Geographers. (2nd Quarter 2016, first digital)

Topic Description: 
  1. Definitions
  2. Significance of multispectral imagery in GIS&T
  3. Digital imagery
  4. The four components of scale in multispectral imagery
  5. Interpreting multispectral imagery

 

1. Definitions

pixel: The fundamental unit of an image. It is represented by a single digital number (DN) for each of the image’s spectral bands. Pixel is a compound word derived from picture and element.
 
spectral band: The name given to the individual electromagnetic wavelength regions for which a separate DN value is recorded for each pixel. For example, a natural color image has three spectral bands, in the blue, green and red wavelength regions.  
 
raster: The grid of pixels that make up an image. 
 
multispectral images: Images comprising multiple spectral bands. Many multispectral sensors have four bands (such as the original Landsat Multispectral Scanner, launched in 1972). A more recent multispectral sensor, WorldView-3, launched in 2014, has 16 spectral bands.
 
radiance: A measure of radiant electromagnetic energy per unit time, unit area, and unit solid angle (typically watts per square meter per steradian). Radiance as measured by a satellite-borne sensor changes over time as illumination and atmospheric properties vary, even if the underlying scene does not vary.
 
reflectance: A unitless ratio of the reflected electromagnetic energy, divided by the illuminating energy. Conversion from radiance to reflectance is not simple, because remote sensors cannot directly measure illuminating energy at the ground, and furthermore, the atmosphere both absorbs and contributes radiance. Nevertheless, reflectance is of great interest in remote sensing, since it is a property of the surface itself, and does not change if the illumination or atmosphere changes.
 
 
 
2. Significance of multispectral imagery in GIS&T

Multispectral imagery is one of the key sources of raw geospatial data for Geographic Information Systems and technology (GIS&T) (Lillesand, Kiefer, & Chipman, 2014). Broadly, multispectral imagery is used in three different ways in a GIS:
  1. Imagery is perhaps most commonly used as a visual backdrop, providing a geographic context for display of other geospatial data, such as vector polygons.
  2. Imagery is also used for manual interpretation, in which analysts visually identify objects or features, and use this information to update previously generated GIS layers, or to generate entirely new layers.  Normally, analysts do this by digitizing features on the computer monitor.
  3. Multispectral imagery can also be used to generate new GIS layers through a wide variety of automated image analysis methods, such as transformations or classifications. For more information on this topic, see topic D-18-Algorithms and processing.

 

3. Digital imagery

A remotely sensed image of the earth is, to put it simply, a picture. However, understanding how that picture is produced requires some understanding of light, which is a form of electromagnetic energy (Lillesand et al., 2014). A defining characteristic of electromagnetic energy is wavelength. For example, blue light has a relatively short wavelength compared to green and red light. Wavelength is measured in micrometers (μm) or nanometers (nm; 1000 nm = 1 μm). For example, the visible region spans 0.4 – 0.7 μm. In the region beyond what our eyes can see, the near infrared (NIR) spans 0.7 – 1.4 μm, and the shortwave infrared (SWIR) spans 1.4 – 2.5 μm (note, though, that there is some inconsistency in the remote sensing literature regarding the boundary between NIR and SWIR). One interesting issue is that, although different hues have different associated wavelengths, human perception of color is inherently subjective and varies from person to person.


Figure 1. Multispectral data can differentiate features on the earth’s surface or in the atmosphere. These Landsat Thematic Mapper (TM) images of a mountainous region in Washington State, USA illustrate why it is important to consider the spectral properties of the features of interest when choosing which spectral bands to use from within the multispectral data. The image on the left is a standard simulated natural color image, with visible TM bands 3 (red), 2 (green), and 1 (blue) displayed as red, green, and blue (RGB), respectively. In this image, snow and clouds are both bright and therefore not spectrally separable. The image on the right displays the shortwave infrared (SWIR, band 5), near infrared (NIR, band 4), and red (band 3) bands as RGB. In this image, clouds are still bright, but the snow has a distinctive cyan color. This is because snow has a very distinctive contrast between a bright response in the visible and NIR and strong absorption in the SWIR. Thus, the snow is characterized by high NIR and red values, which are displayed here as green and blue, and together make cyan.

The remote sensing model provides a useful conceptual framework for understanding how an image is produced.  When a source of electromagnetic energy, such as the sun, illuminates a scene, the electromagnetic energy travels through the atmosphere, is reflected by the scene, and then travels again through the atmosphere, to the imaging sensor. The sensor focuses incoming light on a detector, or more often, an array of detectors, in order to map variations in energy across the scene. The particular wavelength interval that the detector responds to determines the spectral band that the image represents.  For example, a sensor that measures red energy produces a red band image.  A multispectral image comprises separate, co-registered images, one for each spectral band. 

The fundamental structure of a digital image is a grid, where each grid cell is termed a pixel. A convenient conceptualization for remote sensing is that an image grid can be projected onto the ground, so that each pixel represents light coming from a specific, rectangular, non-overlapping ground area. The value associated with each pixel is termed the pixel’s digital number or DN, and this determines the brightness of that pixel in the image. In a multispectral image, each pixel has a separate DN associated with each spectral band.
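As a concrete (if minimal) illustration, a multispectral image is commonly held in software as a three-dimensional array, with one DN per pixel per band. The sketch below assumes Python with NumPy; the image dimensions and band count are arbitrary placeholders:

```python
import numpy as np

# A hypothetical 4-band multispectral image: 100 rows by 120 columns,
# stored as one 8-bit DN per pixel per band (axis order: row, col, band).
rows, cols, bands = 100, 120, 4
image = np.zeros((rows, cols, bands), dtype=np.uint8)

# The DN for the pixel at row 10, column 20 in the third band:
dn = image[10, 20, 2]

# All four DNs for that pixel, i.e., its (coarse) spectrum:
pixel_spectrum = image[10, 20, :]
```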

It is important to appreciate that the DN values in their raw form are merely relative values, not calibrated to physical units. However, satellite image vendors are increasingly providing data calibrated to radiance (i.e. energy), or even reflectance.  The difference between radiance and reflectance data is that the former varies with illumination, for example time of day, while the latter is an inherent property of the surface material and therefore is very useful for many quantitative applications, such as monitoring change in a scene over time.

 

4. The four components of scale in multispectral imagery

One of the most important advances in remote sensing over the last three decades has been the proliferation of multispectral data sources.  Choosing between these different data sources is a key step in any remote sensing project.  In particular, it is important to match the scale of the imagery to the scale of the phenomenon of interest, otherwise misleading results may be obtained (Warner, 2010b). 

In the context of remotely sensed images, there are four different types of scale (Warner, Nellis & Foody, 2009): spatial, spectral, radiometric and temporal, each of which is discussed in more detail below. Scale has two broad components: grain (or resolution) and extent (or coverage). Engineering constraints mean that there are inherent trade-offs, such that, generally, an increase in one type or component of scale can only be achieved at the expense of another. Thus, for example, sensors that produce data with fine spatial detail usually do not have many spectral bands.

4.1 Spatial scale

Spatial scale is the most conceptually straightforward type of scale. Aspects of spatial scale include both the image extent and the ground sampling distance, a measure of the grain. It is important to note that, though ground sampling distance and pixel size are often used as shorthand for image spatial resolution, these terms are not synonymous. For example, an image with 30 m pixels will not necessarily resolve objects 30 m in size.

Although it was emphasized above that the pixel is the fundamental unit of the raster image, there is considerable current research into the unmixing of pixels, in which the proportions of various cover types within each pixel are estimated (Wang, Shi, Diao, Ji, & Yin, 2016). A related topic is super-resolution mapping, which attempts to predict the spatial locations of the unmixed components within the image (Nasrollahi & Moeslund, 2014).
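To make the idea of unmixing concrete, a simple linear mixing model treats each pixel’s spectrum as a weighted sum of pure "endmember" spectra, and estimates the weights (cover fractions). The sketch below is one minimal approach using non-negative least squares; the endmember spectra are invented for illustration, and operational unmixing methods are considerably more sophisticated:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember reflectance spectra in four bands;
# columns are cover types (vegetation, soil, water), rows are bands.
endmembers = np.array([
    [0.05, 0.20, 0.04],   # blue
    [0.08, 0.25, 0.03],   # green
    [0.04, 0.30, 0.02],   # red
    [0.45, 0.40, 0.01],   # NIR
])

# A mixed pixel: 60% vegetation, 30% soil, 10% water.
pixel = endmembers @ np.array([0.6, 0.3, 0.1])

# Non-negative least squares yields fraction estimates >= 0; normalizing
# them to sum to one is a common simplifying final step.
fractions, _ = nnls(endmembers, pixel)
fractions /= fractions.sum()
print(fractions)  # approximately [0.6, 0.3, 0.1]
```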

4.2 Spectral scale

Spectral scale encompasses the number, location and spectral width of the spectral bands. A panchromatic sensor produces just a single spectral band image, normally by integrating over a wide range of wavelengths (for example, about 200 nm), with a relatively high spatial resolution (Warner, 2010a). Multispectral bands usually each cover a somewhat narrower range of wavelengths than a panchromatic band (for example, on the order of 70 nm, though there is considerable variation), but with a larger pixel size. There is no fixed number of bands in a multispectral image: there may be as few as three, or as many as ten or more. Hyperspectral instruments, also known as imaging spectrometers, typically have hundreds of spectral bands, each of which is usually 10 nm or narrower. The key attribute of a hyperspectral sensor is that the narrow bands are contiguous, allowing the generation of a detailed graph of reflectance as a function of wavelength, known as a spectral reflectance curve (van Leeuwen, 2009).

Though this review focuses primarily on the reflected visible, NIR and SWIR regions, multispectral thermal sensors, which are designed to measure spectral emittance in the 3 – 12 μm region, are also important for some applications, such as geological mapping and estimating surface temperature. In addition, multispectral microwave and radar data are also valuable sources of information about earth features.

The atmosphere does not provide perfect transmission of electromagnetic energy. For example, constituents of the atmosphere, such as water, absorb, scatter and even re-radiate electromagnetic energy preferentially at certain wavelengths.  Atmospheric windows are regions where the atmosphere provides sufficiently high transmission that the ground can be observed effectively from space. Multispectral sensors designed to monitor the earth’s surface generally only have bands in the atmospheric windows, although sensors designed to study the atmosphere may have bands located in the regions of high absorption.

4.3 Radiometric scale

Radiometric scale includes the number of gray levels that a sensor can potentially differentiate. As an example, Landsat 5 Thematic Mapper data were quantized to 8 bits (256 gray levels), whereas the more recently launched Landsat 8 Operational Land Imager uses 12 bits (4,096 gray levels). The magnitude of the radiance a sensor is designed to measure is also important. For example, the Visible Infrared Imaging Radiometer Suite (VIIRS) includes a band designed to image the distribution of nighttime light, and is therefore optimized for measuring low radiance levels (Elvidge, Baugh, Zhizhin, Hsu, & Ghosh, 2017).
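The relationship between quantization depth and gray levels is simply two raised to the number of bits, as the short sketch below illustrates (the radiance values are placeholders, not from any real sensor):

```python
import numpy as np

for bits in (8, 12):
    print(bits, "bits ->", 2 ** bits, "gray levels")   # 256 and 4096

def quantize(radiance, radiance_max, bits):
    """Map radiance in [0, radiance_max] onto integer DNs in [0, 2**bits - 1]."""
    scaled = np.clip(radiance / radiance_max, 0.0, 1.0) * (2 ** bits - 1)
    return np.round(scaled).astype(np.uint16)

print(quantize(np.array([0.0, 37.5, 75.0]), 75.0, 12))  # [   0 2048 4095]
```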

4.4 Temporal scale 

Satellite imagery is normally acquired repetitively, either on a regular basis in the case of most nadir-viewing satellites, such as Landsat, or on an ad-hoc, opportunistic basis in the case of satellites that can be programmed to acquire images at an oblique angle (i.e., are pointable), such as commercial high-resolution satellites like WorldView-3. The temporal resolution of a satellite system is determined by the type and frequency of its orbit, the field-of-view or swath width of the sensor, and the latitude of the scene. Satellite data archives tend to have frequent acquisitions, due to regular orbits and the historical emphasis on archiving satellite data. In comparison, airborne systems tend to offer greater flexibility in acquisition timing, and the historical archive, at least in some locations, extends back much further in time. An important source of free digital airborne imagery is the US Department of Agriculture’s National Agriculture Imagery Program (NAIP).

The frequency with which imagery is, or can be, acquired may be an important attribute in determining whether imagery can be obtained at times appropriate for a particular project. In addition, the temporal length of the available image archive may be key for studies of landscape change. For this reason, Landsat (Roy et al., 2014) and NOAA Advanced Very High Resolution Radiometer (AVHRR) (D'Souza, Belward & Malingreau, 2013) data sets are often key information sources for global change studies, because both satellite programs have data extending over several decades.

 

5. Interpreting multispectral imagery

The four main factors that affect image brightness levels within different spectral bands are discussed below.

5.1 Illumination variation and effect of the atmosphere 

Images are in many cases provided in raw DN form, where brightness values are simply relative measurements of the energy falling on the detector. For displaying an image, or for routine classification, raw DN data are usually perfectly adequate. However, for some analysis methods, and for comparisons over time, the confounding effects of illumination and atmospheric variation need to be removed through conversion of the raw DNs to reflectance. Many modern sensors allow conversion from DN to radiance through a simple linear scaling. Converting radiance to estimated reflectance is a challenging task, though increasingly satellite data vendors provide data as estimated reflectance, saving the analyst a difficult step. Routine reflectance products usually do not remove local illumination variation due to the slope and aspect of the topography, though there are methods that attempt to do so. Local illumination variation can in fact be one of the major controls on brightness variations in an image.
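A minimal sketch of this two-step conversion is shown below. The gain, offset, solar irradiance, Earth-Sun distance, and sun elevation are placeholder values; in practice they come from the image metadata. The result is top-of-atmosphere (not surface) reflectance, since no atmospheric correction is applied:

```python
import numpy as np

gain, offset = 0.01, -0.1        # placeholder radiance rescaling coefficients
d = 1.0                          # Earth-Sun distance (astronomical units)
esun = 1500.0                    # placeholder solar exoatmospheric irradiance
sun_elev = np.radians(45.0)      # solar elevation angle

dn = np.array([[5000.0, 8000.0], [12000.0, 20000.0]])

# Step 1: simple linear scaling of raw DN to at-sensor spectral radiance.
radiance = gain * dn + offset

# Step 2: at-sensor radiance to top-of-atmosphere reflectance.
# Note that cos(solar zenith angle) = sin(solar elevation angle).
reflectance = (np.pi * radiance * d ** 2) / (esun * np.sin(sun_elev))
```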

5.2 Spectral properties of the materials in the scene

An understanding of the spectral properties of common land cover materials is key to interpreting multispectral imagery (van Leeuwen, 2009). Although there is information in the overall brightness of a pixel, generally the most useful spectral signal is the contrast between brightness values of different spectral bands. Much of remote sensing is predicated on the assumption that different materials have different characteristic spectral curves, and that this information can be used to identify the materials in the scene. 

An example of a material with a distinctive spectral reflectance curve is vegetation. Vegetation has strong absorption from pigments in the visible, strong reflection and transmission from the internal cellular structure of leaves in the NIR, and absorption from water in the SWIR. Soils, on the other hand, typically have smoother spectral reflectance curves, with the overall albedo generally a function of organic matter and moisture content. Anthropogenic surfaces tend to vary greatly in spectral properties, especially if the material is painted or has some surface coating.
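The red/NIR contrast of vegetation described above underlies widely used band-ratio indices such as the normalized difference vegetation index (NDVI). The sketch below uses invented reflectance values purely for illustration:

```python
import numpy as np

# Hypothetical red and NIR reflectance for three pixels:
# vegetation, soil, and water, respectively.
red = np.array([0.04, 0.20, 0.03])
nir = np.array([0.45, 0.30, 0.02])

# NDVI exploits vegetation's strong red absorption and NIR reflection.
ndvi = (nir - red) / (nir + red)
print(ndvi)  # roughly 0.84 (vegetation), 0.2 (soil), -0.2 (water)
```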

5.3 Spectral bands of the sensor

In a multispectral image, DN values in each band are a result of the spectral response function of that band (i.e. what wavelengths the detector is sensitive to) convolved with the signal falling on the detector.  Multispectral sensors can only approximate the spectral curve because spectral bands generally have low spectral resolution, and are usually not contiguous.  For this reason, there has always been great interest in hyperspectral imagery, in the hope that the many fine spectral bands of hyperspectral data will allow more reliable estimation of spectral reflectance curves, and thus greater separability of classes than is possible with multispectral data.  Nevertheless, for differentiating broad classes of materials like vegetation, soil, water, snow, etc., multispectral data have been found to be very successful. Sensors that have bands in a wide range of wavelengths, including across the visible, NIR and SWIR offer the most potential in this regard.  
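The convolution of a spectral response function with the incoming signal can be sketched as a response-weighted average, as below. The step-function spectrum and Gaussian response are invented for illustration; real response functions are published by sensor operators:

```python
import numpy as np

# Fine-grained wavelengths (micrometers) and a vegetation-like spectrum
# that steps from dark in the visible to bright in the NIR.
wavelengths = np.linspace(0.4, 1.0, 601)
spectrum = np.where(wavelengths > 0.7, 0.45, 0.05)

# A Gaussian spectral response function for a hypothetical red band
# centered at 0.65 um.
response = np.exp(-0.5 * ((wavelengths - 0.65) / 0.03) ** 2)

# The band value is the response-weighted average of the spectrum.
band_value = (np.trapz(response * spectrum, wavelengths)
              / np.trapz(response, wavelengths))
```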

5.4 Display options

If data are displayed visually, for example as a color composite image on a computer monitor, then the options used in generating the display are crucial. Because the human eye perceives three color dimensions, red, green and blue, a maximum of three individual bands can be displayed at one time as a simple color composite. However, the bands used to generate the color composite do not have to match the visible region, and so the more generic term for such an image is false color composite. A standard false color composite is produced by displaying the NIR, red and green bands as red, green and blue, respectively. In general, though, bands should be selected to highlight the materials of interest. For example, since snow has very different reflectance in the NIR and SWIR, while a cloud is bright in both bands, snow and cloud can generally be differentiated in a false color composite that incorporates NIR and SWIR bands.
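Constructing a false color composite amounts to assigning three band arrays to the red, green, and blue display channels. A minimal sketch follows, with random arrays standing in for real co-registered bands already scaled to 0-255:

```python
import numpy as np

# Stand-ins for real co-registered band arrays, scaled to 0-255.
shape = (100, 100)
nir = np.random.randint(0, 256, shape, dtype=np.uint8)
red = np.random.randint(0, 256, shape, dtype=np.uint8)
green = np.random.randint(0, 256, shape, dtype=np.uint8)

# Standard false color composite: NIR, red, green -> R, G, B.
false_color = np.dstack([nir, red, green])   # shape (rows, cols, 3)
# Display with, e.g., matplotlib: plt.imshow(false_color)
```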

A contrast stretch is usually necessary in order to scale image values to the full range of display device values. This is because the radiometric range of the sensor does not always match that of the display device. Furthermore, even if the two scale ranges were to match, a default mapping of the DN values to the display device brightness values will not necessarily provide the most effective enhancement of the patterns of interest.
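One common form is a linear percentile stretch, sketched below; the 2%/98% cutoffs are a conventional choice rather than a fixed rule:

```python
import numpy as np

def linear_stretch(band, low_pct=2, high_pct=98):
    """Clip a band at the given percentiles and rescale to the 0-255 display range."""
    lo, hi = np.percentile(band, [low_pct, high_pct])
    scaled = (np.clip(band, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

# Example: a 12-bit band that occupies only part of its 0-4095 range.
band = np.random.randint(800, 2200, (100, 100))
display = linear_stretch(band)
```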

 

References: 

D'Souza, G., Belward, A. S., & Malingreau, J. P. (Eds.). (2013). Advances in the use of NOAA AVHRR data for land applications (Vol. 5). Dordrecht, The Netherlands: Kluwer Academic Publishers.

Elvidge, C. D., Baugh, K., Zhizhin, M., Hsu, F.-C., & Ghosh, T. (2017). VIIRS nighttime lights. International Journal of Remote Sensing (in press). DOI: 10.1080/01431161.2017.1342050

Lillesand, T., Kiefer, R. W., & Chipman, J. (2014). Remote sensing and image interpretation. Hoboken, NJ: John Wiley & Sons.

Nasrollahi, K., & Moeslund, T. B. (2014). Super-resolution: A comprehensive survey. Machine Vision and Applications, 25(6), 1423-1468. DOI: 10.1007/s00138-014-0623-4

Roy, D. P., Wulder, M. A., Loveland, T. R., Woodcock, C. E., Allen, R. G., Anderson, M. C., ... & Scambos, T. A. (2014). Landsat-8: Science and product vision for terrestrial global change research. Remote Sensing of Environment, 145, 154-172. DOI: 10.1016/j.rse.2014.02.001

van Leeuwen, W. J. (2009). Visible, near-IR, and shortwave IR spectral characteristics of the terrestrial surfaces. In T. A. Warner, M. D. Nellis, & G. M. Foody (Eds.), The SAGE Handbook of Remote Sensing (pp. 33-50). London, UK: SAGE.

Wang, L., Shi, C., Diao, C., Ji, W., & Yin, D. (2016). A survey of methods incorporating spatial information in image classification and spectral unmixing. International Journal of Remote Sensing, 37(16), 3870-3910. DOI: 10.1080/01431161.2016.1204032

Warner, T. A., Nellis, M. D., & Foody, G. M. (2009). Remote sensing scale and data selection issues. In T. A. Warner, M. D. Nellis, & G. M. Foody (Eds.), The SAGE Handbook of Remote Sensing (pp. 3-17). London, UK: SAGE.

Warner, T. A. (2010a). Panchromatic imagery. In B. Warf (Ed.), SAGE Encyclopedia of Geography. London, UK: SAGE Publications.

Warner, T. A. (2010b). Remote sensing analysis: From project design to implementation. In J. D. Bossler, R. B. McMaster, C. Rizos, & J. B. Campbell (Eds.), Manual of Geospatial Sciences (2nd ed., pp. 301-318). London, UK: Taylor and Francis.

Learning Objectives: 
  • Describe the basic data format of a multispectral image in terms of pixels, rasters, and DN values.
  • Explain the four components of scale in the context of remote sensing: spatial scale, spectral scale, radiometric scale, and temporal scale, and differentiate between the grain and extent aspects of scale.
  • Describe the concept of a spectral band in the context of multispectral imagery.
  • Differentiate between panchromatic, multispectral and hyperspectral imagery.
  • Explain what a spectral reflectance curve is, and why it is central to remote sensing image interpretation.
  • Identify major factors that determine image brightness variations in a scene, and explain the role of each factor.
Instructional Assessment Questions: 
  1. Draw the remote sensing model as a cartoon, and label the key components, including source illumination, transmitting energy, target, and detector. 
  2. Define the terms spectral band, panchromatic band, multispectral bands, and hyperspectral bands.
  3. Create a table that summarizes the characteristics of Landsat 8 Operational Land Imager data in terms of the four components of scale.
  4. Why would a 30 m object not necessarily be visible in an image with 30 m pixels? Is there a circumstance where objects smaller than 30 m might be identified within a scene? (Hint: consider the contrast ratio between the object of interest and the background.)
  5. Each pixel in a raster image is assumed to represent the light coming from a specific, rectangular, non-overlapping ground area.  What could blur an image, making this model only an approximation?
  6. Draw and label spectral reflectance curves for vegetation, soil, and water.  Use the spectral curves to explain why a useful band combination for visualizing multispectral data is to display one band each from the visible, NIR and SWIR.
Additional Resources: