By Arpit Shah

Imaging Spectroscopy - Into the World of 'Hyperspectral'

Updated: Jun 11, 2023

The investigative aspect of Remote Sensing is, I suppose, what draws enthusiasts to this field: scanning imagery to extract something meaningful - be it objects, materials or processes.


In this article, you will learn about a powerful form of imagery - Imaging Spectroscopy, known in common parlance as 'Hyperspectral Imaging'. I'll first summarize imaging fundamentals before delving into the subject matter, and end with an elaborate video walkthrough that offers practical insights into the processing operations involved.


Imagery can be captured using two mechanisms: a) a Passive sensor, which captures radiation reflected from an object, the illumination source being sunlight, or b) an Active sensor, which carries its own source of illumination and captures its reflected radiation (examples: camera flashlight, radar's radio waves).


Now, the natural-colored photograph that we routinely see is captured by a passive sensor and is depicted in three bands - RGB (Red, Green, Blue) - which constitute what are known as the primary colors. Using a particular blend / combination of the primary colors, one can reproduce almost all the colors within the visible spectrum, i.e. radiation wavelengths roughly between 400 and 700 nm (nanometers), which the human eye can detect. One can use these images to detect a variety of objects, materials & processes, as we all do in some way, every day.


Alongside the visible spectrum (RGB), satellites carrying passive sensors can also capture wavelengths slightly beyond it, such as Near-Infrared, Short-Wave Infrared, Ultra-Blue etc. Sunlight contains these wavelengths - we just cannot see them with the naked eye.


Imagery captured by such satellites is called Multispectral imagery (comprising reflected wavelengths from the visible spectrum plus slightly beyond on either side). This type of imagery typically contains 10+ bands of wavelength ranges (e.g. 200-350 nm, 500-600 nm etc.). The more bands an image has, the more information we have at our disposal during processing & analysis.

Electromagnetic Spectrum Infographic (Visible Spectrum is just a small range within)
Figure 1: Electromagnetic Spectrum Infographic (Visible Spectrum is just a small range within); Source: NASA ARSET

To display imagery on a screen, we have to use Band Combinations. In simple words, Band Combinations are 'settings' to visualize imagery in a certain color combination (wavelength combination). To analyze an image in multiple ways, we deploy various Band Combinations, i.e. Band Manipulation. This is done to highlight / suppress certain imagery characteristics so that we can move towards identifying what we have set out to find.


Band Combinations are set using three channels in Multispectral imagery. To visualize an image in natural colors (RGB mode), the Band Combination would entail Red occupying Channel 1, Green Channel 2, and Blue Channel 3. The Red wavelength would lie in a particular band range which we'd have to select, and likewise for the other two wavelengths (colors).


For example, in Sentinel-2 satellite imagery: Red occupies Band 4, Green Band 3 & Blue Band 2. (Bands are not standardized across all satellites.)

Think of a luggage lock - Inserting 4, 3, 2 as the combination would unlock the suitcase for you giving you access to the natural colored photograph within.
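
In code, this 'combination' is just an ordered stacking of bands into display channels. Below is a minimal sketch using NumPy; the function name is my own and the tiny synthetic arrays stand in for real Sentinel-2 band rasters:

```python
import numpy as np

def natural_color_composite(b4_red, b3_green, b2_blue):
    """Stack Sentinel-2 bands 4, 3, 2 into the three display
    channels (R, G, B) and scale reflectance to 0-255 for display."""
    stack = np.dstack([b4_red, b3_green, b2_blue]).astype(float)
    # Normalize each channel independently to the 0-255 display range.
    for i in range(3):
        ch = stack[..., i]
        lo, hi = ch.min(), ch.max()
        stack[..., i] = 0 if hi == lo else (ch - lo) / (hi - lo) * 255
    return stack.astype(np.uint8)

# Tiny synthetic "bands" (reflectance values) standing in for real rasters.
b4 = np.array([[0.30, 0.10], [0.20, 0.40]])
b3 = np.array([[0.25, 0.12], [0.18, 0.35]])
b2 = np.array([[0.20, 0.15], [0.16, 0.30]])
rgb = natural_color_composite(b4, b3, b2)
print(rgb.shape)  # (2, 2, 3): rows x cols x the three display channels
```

Entering the bands in a different order into the same three channels is all it takes to produce a false-color rendering instead.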

Read this article to know more about some of the commonly used multispectral band combinations in Sentinel-2 imagery, and how manipulating the bands helps us detect specific surface characteristics such as moisture content, healthy or unhealthy vegetation, geological features etc., which are not discernible in common optical imagery captured within the visible spectrum.

 

In my previous articles, I've analyzed multispectral imagery to assess blast damage, detect seaweed, detect a glacial fault, map waterbodies, map crop types, map forest fires and visualize pollution. You can read some of these to learn more about satellite imagery and its multispectral nature.


Please note that Multispectral imagery is different from SAR - Synthetic Aperture Radar Imagery.

SAR (radar imagery) has several notable advantages over Multispectral imagery (optical imagery), primarily because it carries its own source of illumination - radio waves from an active sensor.


You can read my Imagery analytics work using SAR satellite imagery published on this blog from this link.

 

Some of you may wonder-


Q: Why are there 3 channels in Multispectral imagery?


A: To mimic the human eye, whose retina contains 3 classes of cone photo-receptors, adept at sensing Long, Medium and Short wavelengths respectively. Red has the longest wavelength, followed by Green, while Blue has the shortest. Hence R, G & B are input in the 3 channel slots respectively to visualize the image in natural color mode. To understand the concept of channels better, you can read these informative articles - 1 & 2.


Q: In how many ways can we visualize a Multispectral image?


A: In the three channels, we can input (based on some of the ways I know and have used before): 1) any of the wavelength bands available, 2) only one or two bands, leaving the remaining channel(s) empty, 3) a new value derived using Band Maths (for example, Band 1 + Band 8), or 4) any variation combining options 1), 2) & 3). So there are certainly a multitude of ways (band combinations) to analyze an image. The skill lies in identifying the method which gives us the best chance to detect the particular object, material or process.
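
Options 2) and 3) above can be sketched in a few lines of NumPy. The arrays below are hypothetical stand-ins for real band rasters:

```python
import numpy as np

# Hypothetical single-band arrays (reflectance) from a multispectral scene.
band1 = np.array([[0.10, 0.20], [0.30, 0.40]])
band8 = np.array([[0.50, 0.40], [0.30, 0.20]])

# Option 3) Band Maths: derive a new layer to feed into a display channel.
derived = band1 + band8          # the article's example, Band 1 + Band 8
print(derived)

# Option 2) Single-band visualization: feeding the same layer into all
# three channels yields a greyscale rendering.
greyscale = np.dstack([derived] * 3)
print(greyscale.shape)  # (2, 2, 3)
```

Any arithmetic expression over bands (ratios, differences, sums) can serve as a derived layer in the same way.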


Q: Which factors determine the ideal method we should use for our Multispectral imagery analytics study?


A: The most important parameter is how our subject of interest (material's surface in most cases) interacts with the passive source of illumination - sunlight. However, this is but one aspect of the complete process involved.


To elaborate, there are three aspects to take into consideration: a) how the illumination radiation interacts with particles while entering the atmosphere, b) how it interacts with our subject of interest (matter) on the earth's surface, and c) how the reflected radiation interacts with atmospheric particles en route to the spaceborne satellite sensor.


For a) and c), we have to factor in the extent of radiation absorption and radiation scattering, as depicted in the visual below-

Radiation Transfer Characteristics
Figure 2: Radiation Transfer Characteristics; Source: HYPERedu, EnMAP education initiative

For aspect b), we have to factor in the following parameters - 1) Reflective properties of the object (whether the surface is rough or smooth), 2) Geometric Effect of reflection (angle of illumination and of reflection) and 3) Bio / Geo / Chemical characteristics of the object under consideration (factors such as moisture content, mineral properties, size of object etc.).


Researchers study this interaction between illumination and atmosphere & between illumination and matter extensively to determine the best possible ways to analyze the imagery and extract the desired information. Their work helps in generating validated methods of band combinations and enhanced imagery post-processing methodologies.


The three parameters pertaining to aspect b) mentioned above can be better understood from the figures below-

1)

Reflective Properties of an Object
Figure 3: Reflective Properties of an Object; Source: HYPERedu, EnMAP education initiative

2)

Geometric Effects of Radiation Interaction
Figure 4: Geometric Effects of Radiation Interaction; HYPERedu, EnMAP education initiative

3) Bio / Geo / Chemical properties

These are the characteristics of the substance which influence how illumination radiation interacts with it. For example, some bio/geo/chemical properties of soil are moisture, organic content, mineral composition, grain size etc. Similarly, some properties of vegetation which influence how illumination interacts with it are moisture, chlorophyll content, species, phenology etc.


To show you an example, below is the band combination (Infrared) commonly used to determine the presence of vegetation. You'll know straightaway from looking at the image that this is not a natural color image - band manipulation has been done to emphasize chlorophyll content (more red means more chlorophyll, implying denser vegetation).


P.S.- Band 8 was input in Channel 1, Band 4 in Channel 2, & Band 3 in Channel 3 of this Sentinel-2 Imagery to visualize the imagery in this specific manner.
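
A closely related band-math derivation, widely used alongside such infrared composites to quantify vegetation, is NDVI (Normalized Difference Vegetation Index), computed from Band 8 (NIR) and Band 4 (Red). The sketch below uses synthetic reflectance values purely for illustration:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    For Sentinel-2 that is (Band 8 - Band 4) / (Band 8 + Band 4).
    Values near +1 indicate dense healthy vegetation; bare soil and
    water fall near zero or below. `eps` guards against division by zero."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Synthetic pixels: a vegetated pixel reflects strongly in NIR,
# a water pixel reflects weakly everywhere.
b8 = np.array([0.45, 0.02])   # NIR reflectance
b4 = np.array([0.05, 0.03])   # Red reflectance
print(ndvi(b8, b4))  # vegetation pixel ~0.8, water pixel slightly negative
```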

Infrared Band Combination
Figure 5: Infrared Band Combination; Source: GISGeography.com
 

I felt it was important to explain imaging fundamentals in as much detail as I have (halfway into this article). This should help readers understand the concept behind Imaging Spectroscopy (Hyperspectral Imaging) with more clarity, and how it differs from Multispectral Imaging.


Let's jump into it, beginning with its definition, as described in the introductory course on Hyperspectral Imaging by EO College -

"Imaging spectroscopy refers to imaging sensors measuring the spectrum of solar radiation reflected by Earth surface materials in many contiguous wavebands, on the ground as well as air or spaceborne.../...there are up to hundreds of reflectance bands that allow detection and quantification of materials based on the shape of the spectral curve. "

Contrary to what you may perceive, it is important to highlight that the topic of this article - Imaging Spectroscopy - is not a new technology. In fact, the first imaging spectrometers (devices used to capture hyperspectral imagery) became operational as far back as 1982. However, these were installed on research flights (airborne) to capture footage over relatively small areas at select locations; I suppose only a handful of researchers would have had access to the readings to conduct further investigations. Only after spectrometers were launched onboard satellites in the 2000s did the technology and the usage of its output become mainstream.


As with any new technology, later versions iron out the original's flaws and benefit from the growth of the ecosystem around the concept, and so it is with Imaging Spectroscopy. Since 2019, multiple new variants of spaceborne hyperspectral sensors have been launched and newer algorithms developed, promising to usher in a new era of mapping in the quest for a finer understanding of the geochemical, biochemical and biophysical properties of the Earth's surface and atmosphere.

 

Q: So how is Hyperspectral imagery superior to Multispectral imagery?


A: Each surface material has a unique spectral signature (consider it a fingerprint of sorts) - a graph showing the energy reflectance values at various wavelength ranges, i.e. it indicates how the surface has interacted with the sun's illumination. (This phenomenon is explained in more detail in the next section.)


The narrower the range of wavelengths in a band, the finer its spectral resolution. Spectral resolution determines how granularly a material's reflectance readings (i.e. its spectra) are categorized - so finer spectral resolution implies a more granular categorization of reflectance readings. This is one of the defining traits of Hyperspectral Imaging.
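
As a rough back-of-the-envelope illustration (the band counts and sensing range below are hypothetical, chosen only to show the arithmetic): dividing the same sensing range into more bands yields narrower bands, i.e. finer spectral resolution.

```python
def avg_band_width_nm(low_nm, high_nm, n_bands):
    """Average wavelength range covered by each band when a sensing
    range of [low_nm, high_nm] is split into n_bands equal bands."""
    return (high_nm - low_nm) / n_bands

# Hypothetical figures: the same 400-2400 nm range divided into a
# multispectral-like vs a hyperspectral-like number of bands.
multispectral = avg_band_width_nm(400, 2400, 12)    # ~167 nm per band
hyperspectral = avg_band_width_nm(400, 2400, 200)   # 10 nm per band
print(round(multispectral), hyperspectral)
```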


Additionally, from the image below, you will gather that while multispectral bands are segregated into a certain number of non-contiguous blocks, hyperspectral imagery has contiguous bands which give a continuous reading of the reflected radiance throughout the 'visible spectrum and slightly beyond' range. This uninterrupted view (the many stacked bands form what is known as the Hyperspectral Cube) is another defining trait of Hyperspectral imagery. It is of much value in applications which require researchers to detect objects from subtle differences in the reflectance signal (more on this in the coming sections).


To summarize, higher spectral resolution & uninterrupted readings from contiguous bands are the factors that make Hyperspectral imaging more suitable for certain applications than Multispectral imaging.

Multispectral v/s Hyperspectral Imagery Spectrum Comparison
Figure 6: Multispectral v/s Hyperspectral Spectrum Comparison; Source: Edmundoptics.com

However, please note that more information is not always good information. In truth, it is a double-edged sword. Because the bands are so closely knit in a Hyperspectral imagery product, the non-useful bands generate significant noise during the processing phase. This is very problematic for researchers and entails significant correction / post-processing to filter it away. It goes to show that sometimes less information is better, as it means less choice, less noise & more clarity.
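
A common first processing step, therefore, is to discard low-quality bands. The sketch below shows one deliberately simple approach - dropping bands whose mean-to-standard-deviation ratio over the scene falls below a threshold. Real workflows use more sophisticated noise estimates; the threshold and synthetic data here are illustrative only:

```python
import numpy as np

def drop_noisy_bands(cube, snr_threshold=5.0):
    """Discard hyperspectral bands whose signal-to-noise proxy
    (scene mean / scene std) falls below a threshold.
    `cube` has shape (rows, cols, bands)."""
    means = cube.mean(axis=(0, 1))
    stds = cube.std(axis=(0, 1)) + 1e-12
    keep = (means / stds) >= snr_threshold
    return cube[..., keep], np.flatnonzero(keep)

rng = np.random.default_rng(0)
clean = 0.5 + 0.01 * rng.standard_normal((8, 8, 3))   # high-SNR bands
noisy = 0.05 + 0.5 * rng.standard_normal((8, 8, 2))   # low-SNR bands
cube = np.concatenate([clean, noisy], axis=2)
filtered, kept = drop_noisy_bands(cube)
print(kept)  # indices of the retained high-SNR bands
```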


Also, operationally, the cost of Hyperspectral data is high, and processing & interpreting it is more difficult than Multispectral data. However, with more spaceborne missions and better algorithms being developed, one expects these deficiencies to ease out in time.

 

Q: Can you elaborate spectral signature in more detail, though?


A: The concept of a spectral signature may be confusing to some. To explain using a simple example - imagine a mixture comprising 5 substances. We expose the mixture to a temperature range of -150 to +150 degrees Celsius and note down the exact temperature at which each substance changes its state of matter. Let's call the chart below a temperature signature spectra of the mixture.


Can you point out which substance is H2O (Water) from the depiction below?

Fictitious Example of Temperature Signature of a Mixture
Figure 7: Hypothetical Example of Temperature Signature of a Mixture; Source: Mapmyops

A: Substance C is H2O (Water). Most of you would have identified it very quickly because its temperature signature is very well known: H2O exists in a solid state below 0 degrees Celsius, a liquid state between 0-100 degrees Celsius and a gaseous state above 100 degrees Celsius.


A spectral signature is the same concept - just that instead of temperature, we see how objects respond to illumination radiation across wavelengths measured in nm (nanometers).
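
This 'fingerprint matching' idea can be made concrete with the spectral angle, a standard similarity measure between two reflectance curves (the basis of the well-known Spectral Angle Mapper classifier). The reference library and measured pixel below are hypothetical values, invented for illustration:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Spectral angle (radians) between two reflectance curves;
    a smaller angle means the curve shapes match more closely."""
    s = np.asarray(spectrum, dtype=float)
    r = np.asarray(reference, dtype=float)
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference signatures sampled at a few wavelengths.
library = {
    "healthy_vegetation": [0.05, 0.08, 0.45, 0.40],
    "water":              [0.06, 0.04, 0.02, 0.01],
}
measured = [0.06, 0.09, 0.43, 0.38]  # reflectance of an unknown pixel
best = min(library, key=lambda name: spectral_angle(measured, library[name]))
print(best)  # the unknown pixel matches the vegetation signature
```

Because the angle depends on curve shape rather than overall brightness, it is relatively robust to illumination differences - one reason this measure is popular for signature matching.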


For example, in the figure below, you can see the spectral signatures of different types of vegetation. At lower wavelengths, the leaf pigment reacts to illumination in clearly distinct ways depending on the type of vegetation, i.e. healthy, stressed or dry. Healthy vegetation absorbs more illumination and reflects less, resulting in a low reflectance output - which is natural when you think of it: healthy plants tend to absorb sunlight for photosynthesis.


The same principle applies to the vegetation's water content, which responds uniquely at higher wavelengths of illumination - dry and stressed vegetation reflect significantly more illumination than healthy vegetation does.

Reflectance Spectra of Green, Stressed & Drying Vegetation
Figure 8: Reflectance Spectra of Green, Stressed & Drying Vegetation; Source: HYPERedu, EnMAP education initiative

To drive home the point, it is easier to delineate various types of vegetation from Hyperspectral imagery because it captures information in contiguous bands, i.e. we can see the spectral response of the object across the wavelength range of the 'visible spectrum & slightly beyond'.


When it comes to Multispectral imagery, the spectral signature view does not have a seamless spectrum range - for example, the imagery product may capture readings only between 400 and 500 nm, then from 750 to 950 nm, then from 1100 to 1250 nm, and so on. Now imagine attempting to delineate the three categories of vegetation by interpreting the reflectance readings between 750 & 950 nm and between 1100 & 1250 nm in Figure 8 above. It would be tremendously complicated, as the three categories (healthy, stressed and dry) do not show a tangible difference in reflectance in these two wavelength ranges. Hence, the availability of reflectance readings across a contiguous spectrum is a major benefit of a Hyperspectral product: it lets us understand how the material (in our case, vegetation) responds to illumination across the complete wavelength spectrum, giving us a better chance to demarcate the categories of vegetation and interpret their characteristics more finely.
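
The point can be simulated numerically: if two materials differ only in a narrow absorption feature, sparse multispectral bands that straddle the feature miss the difference entirely, while a contiguous spectrum retains it. The spectra and band positions below are fabricated purely for illustration:

```python
import numpy as np

# Two hypothetical contiguous spectra (400-2400 nm, 10 nm steps) that
# differ only in a narrow absorption feature around 1000 nm.
wavelengths = np.arange(400, 2401, 10)
spec_a = np.full(wavelengths.shape, 0.40)
spec_b = spec_a.copy()
feature = (wavelengths >= 980) & (wavelengths <= 1020)
spec_b[feature] -= 0.15   # narrow reflectance dip only substance B has

# "Multispectral" sampling at a few discrete bands that miss the feature.
ms_bands = [450, 550, 650, 850, 1600, 2200]
ms_a = np.interp(ms_bands, wavelengths, spec_a)
ms_b = np.interp(ms_bands, wavelengths, spec_b)

print(np.abs(spec_a - spec_b).max())   # contiguous view: the dip is visible
print(np.abs(ms_a - ms_b).max())       # sparse bands: the difference vanishes
```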


Reflectance Spectra of Open & Coastal Water
Figure 9: Reflectance Spectra of Open & Coastal Water; Source: HYPERedu, EnMAP education initiative

In contrast, refer to Figure 9, which shows the spectral signature properties of Open & Coastal Water. Because water reflects only a tiny amount of illumination, and only at lower wavelengths (400 nm to 750 nm), Multispectral imagery is sufficient to detect and distinguish these two types of water: their differing spectral signatures are captured in a narrow wavelength range (400 nm to 600 nm) and remain similar or static across the rest of the spectrum. This is very unlike the previous example of vegetation categories, whose spectral signatures were complex & intertwining across the spectrum range, and where Hyperspectral imaging was truly necessary for delineation.


I must emphasize that the spectral reflectance signature is a combination of multiple factors (reflective, geometric & biogeochemical) and should not be attributed to a single factor or seen in isolation.


To elaborate visually, below is a video demonstration which shows how vegetation readings are impacted by three properties (2 of them biogeochemical & 1 geometric) and how the spectra respond to various iterations of them.

Interesting, right?

 

Just as a higher-resolution camera on our phone implies cleaner and sharper images, not all Hyperspectral imagery products are equivalent. Our requirements and the material's characteristics determine the type of Hyperspectral imagery product best suited for us and / or the scanning system we'd have to deploy to generate a suitable one.


Some Hyperspectral data is best captured using ground-based sensors in field / laboratory settings (for objects with very complicated / intertwining spectral reflectance signatures, under high-resolution settings), whereas other Hyperspectral data may be better captured by spaceborne sensors onboard satellites (wider coverage, lower resolution, much lower cost, suitable for objects with less complicated spectral reflectance signatures). In between comes Hyperspectral imagery captured from airborne sensors onboard research flights (less coverage than spaceborne but at a higher resolution and higher cost, suitable for objects with moderately-to-highly complex spectral reflectance signatures).


That being said, please note that the concept of 'resolution' in reflectance-based imagery captured via satellite or aerial sensors is not as simple as the mobile phone camera resolution terminology we are used to.


The sensor resolution of satellite / aerial modes of acquisition actually comprises four distinct aspects-


a) Spectral Resolution - the range of illumination wavelengths to which the sensor is sensitive (from 400 nm to 2400 nm, for example) and how finely that range is divided into bands


b) Spatial Resolution - this is most like the mobile phone resolution we are all familiar with: the measure of the smallest feature which the sensor can detect (30 metres, 5 cm etc.)


c) Temporal Resolution - how quickly the sensor can observe the same location again. Spaceborne sensors are onboard satellites which revolve around the planet. For certain earth observation requirements, it is important to have datasets captured at fairly regular intervals to maintain continuity and prevent extensive distortion. Think of a use case which requires frequent hyperspectral data capture during the monsoons (a rice-growing patterns study, for example). A higher temporal resolution is important here because monsoon cloud cover will inevitably prevent the sensor from obtaining accurate readings; higher temporal resolution implies a greater chance for the sensor to get accurate reflectance reading sets over the region under rice cultivation despite the monsoons.


d) Radiometric Resolution - this relates to the quality / strength of the sensor in capturing the reflected signals accurately, i.e. with limited noise. Please note that this does not refer to the scanning system deployed by the sensor (there are various methods to scan objects for obtaining best results - see the video here if you wish to know more).


As you would imagine, we can't have the best of all worlds in an ideal Hyperspectral sensor. There are trade-offs to consider, and the study requirements determine the preferred sensor configuration. A higher spatial resolution requirement means we'd have to capture imagery over our area of interest more slowly, i.e. settle for lower temporal resolution; as a result, we'd opt for an airborne sensor rather than a spaceborne one. Conversely, a higher temporal resolution requirement implies settling for lower spatial resolution, as we'd require quick coverage at more frequent intervals - something which only the satellite mode of imagery acquisition provides us with. Similarly, there are trade-offs between spectral and spatial resolution and among other pairings of resolutions as well.

 

Next, I would particularly like you to see how Hyperspectral imagery is captured using on-ground techniques (in the field and in a laboratory setting). This should give you very interesting insights into the importance placed on the correct 'process' of capturing imagery data, which is as important as, if not more than, the technology involved.


(Source of Videos: HYPERedu, EnMAP education initiative)


a) Field Setting


b) Laboratory Setting


Another laboratory-based Hyperspectral imagery capture video can be viewed here. If you would like to see a video on capture using airborne sensors, you can view it here. Please note that many studies or applications require a combination of Hyperspectral data from multiple sources (field & satellite, or laboratory & airborne, for example) to accurately validate the findings & derive the desired output. Again, this goes to show the importance of method in such high-tech maneuvers.

 

Q. So what are the applications of Hyperspectral Imagery? Where can it be used?


A. In many ways, you can treat Hyperspectral imagery as a superior alternative to Multispectral imagery for complex Earth Observation projects in general, be it Land Monitoring (crop classification, mineral identification), Coastal / Water Monitoring (coral reefs, ocean color), Disaster Mapping (drought, deforestation, volcanic activity etc.) or Atmospheric Monitoring (weather patterns, pollution, the ozone layer).


The concept of Hyperspectral imaging can be (and has been) successfully extended to other fields of work, such as the bio-medical sector, where spectroscopy is deployed as a useful, non-invasive measure to detect malignant, diseased cells. This is possible because certain illumination wavelengths penetrate the human skin, after which the principles of reflectance readings / spectral signatures are applied for detection.


To read more about the Earth Observation use-cases though, refer to the document Land Management & Coastal & Water Applications. To read about the Hyperspectral Missions and Data availability, refer to the document here.


You would realize that the concept of Hyperspectral imaging is highly technical. For the sake of being concise, I couldn't delve deeper into several aspects - not that I know all of them fully well either. However, should you be interested in learning this exciting field, feel free to enroll in EO College's MOOC - Beyond the Visible - Introduction to Hyperspectral Remote Sensing - or refer to the webinars by NASA ARSET. I'm certain you'll love the learning experience, and I credit them for contributing to mine as well.


Before you go, if you liked what you've read about Hyperspectral imaging so far - you'd love watching this recorded video which gives a practical demonstration of Hyperspectral imagery processing, visualization & analysis. Do have a look and drop in your feedback in the comment section on YouTube.

Thanks for Watching!

 

ABOUT US


Intelloc Mapping Services | Mapmyops is engaged in providing mapping solutions to organizations which facilitate operations improvement, planning & monitoring workflows. These include but are not limited to Supply Chain Design Consulting, Drone Solutions, Location Analytics & GIS Applications, Site Characterization, Remote Sensing, Security & Intelligence Infrastructure, & Polluted Water Treatment. Projects can be conducted pan-India and overseas.


Several demonstrations for these workflows are documented on our website. For your business requirements, reach out to us via email - projects@mapmyops.com or book a paid consultation (video meet) from the hyperlink placed at the footer of the website's landing page.


Regards,


Mapmyops | Intelloc Mapping Services
