Arpit Shah

Extracting Water Body Footprint from Optical and Radar Satellite Imagery

Updated: Nov 20

INTRODUCTION


Those who have read my pre-2022 posts on Satellite Imagery Analytics, or news articles on the same topic, would have seen only the final output along with my or the journalist's interpretation of it. The processing steps involved are generally hidden from view. After all, it makes more sense to look at a piece of art and admire its beauty or spot its flaws than to watch an artist painstakingly prepare it. However, if you have seen those soothing art-creation videos floating around on Social Media, you'll know that there is a lot of joy to be derived from watching the process of creating art as well.

There is much to like about processing Satellite Imagery, largely because it is 'explorative' in nature - like an archaeological expedition, one keeps digging and uncovers artefacts along the way.

In this post, you will gain some insight into the process of performing Satellite Imagery analysis for a 'Water Body Footprint extraction' workflow. I will share a pictorial depiction of the steps involved for two types of Satellite Imagery - Optical and Radar. Along the way, you will also get a feel for how a popular Imagery Analytics software - ESA's SNAP - works.

 


 
Figure 1: Electromagnetic Spectrum Infographic. Source: EO College

An Optical Satellite Imagery dataset lets you visualize the study area much as you would see it with your own eyes from an aerial vantage point - a photographic view of the scene - although there are other useful spectral band combinations in which the dataset can be visualized as well. The Satellite's sensor captures light (energy) reflected off surface features on Earth, originally emitted by a passive source - predominantly the Sun.


Radar Satellite Imagery (also known as Microwave Remote Sensing), on the other hand, is built from the reflectance of Microwave energy - invisible to the human eye - off surface features on Earth. The energy is originally emitted by an active source, the Microwave transmitter onboard the Satellite itself, and the echoes are captured by the Satellite's receiver.


Notice, from the infographic in Figure 1 above, that a certain range of Microwaves can penetrate the Earth's atmosphere just as Visible Light does. Satellites make use of this energy range, emitting Microwaves and receiving the reflectances that are subsequently used to create images of the Earth's surface.


Both Optical and Radar Satellite imagery have their own set of features, advantages and disadvantages.


In this post, I will demonstrate the Water Body Footprint Extraction workflow over a cross-section of Masuria in northern Poland. This region is famous for its Lake District (over 2,000 lakes!). Just as with Building Footprints, an accurate Water Body Footprint layer (preferably extracted in an automated manner) is vital for multiple Geo-workflows such as Flood Mapping, Water Level Mapping, Tidal Mapping, River Course Mapping, Glacial Studies etc.


Just imagine how difficult it would be to plot the contours of a Water Body with cartographic precision from just an Aerial Image or a Topographic Survey. Extracting it from Satellite Imagery, in comparison, saves a lot of time, effort and money.


To process Satellite Imagery datasets, I first need to obtain them. Imagery Acquisition requires a little know-how - you can obtain insights into the types and means of doing so from this post of mine. Simply stated, one needs to know the characteristics of the desired Imagery, its resolution, the time period over which the Imagery must be captured in order to study the phenomenon, where and how to acquire it and, of course, the cost of acquisition. In this post, I have used Optical and Radar Imagery datasets released by ESA's Copernicus network of Earth Observation Satellites. These are free and accessible to registered users.

 

Workflow 1: Extracting Water Body Footprint from Optical Imagery

Figure 2 - This cloud-corrected Sentinel-2 Optical Satellite Imagery over the Masurian Lake District in Poland was acquired in March 2018 and is visualized here in Natural Color mode
 
Figure 3 - MSI False-color Infrared RGB Visualization of Optical Imagery

Here, the Optical Satellite Imagery is being visualized in False-color Infrared mode. Visualizing the Imagery in different modes (using the spectral bands in the dataset) helps one to understand and interpret the data better. For example, this False-color Infrared visualization helps to distinguish Water Bodies (blue) better from Vegetation (red) in the figure above.

 
Figure 4 - Pre-processed Masks representing MSI Land/Water Visualization, Cloud, Snow and Water pixels respectively

Satellite Imagery can be acquired raw or pre-processed. Acquiring pre-processed Imagery, when available and relevant, saves computing resources and time. The three pre-processed masks in the figure above (excluding the image on the top left) contain white pixels representing cloud, snow and water cover, as estimated by the pre-processing algorithms deployed by the Satellite Service (ESA Copernicus). These masks help one interpret the Imagery better and exclude pixels from subsequent analysis which are not relevant for the study or whose processing could lead to inaccurate results (cloud and snow pixels in our case).
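As an aside, if such masks are exported from SNAP as rasters aligned to the scene, excluding the flagged pixels from further analysis is a small operation in Python. A minimal sketch, assuming hypothetical file names and a numpy/rasterio environment (this is not part of the author's SNAP-based workflow, just an illustration):

```python
import numpy as np
import rasterio

# Hypothetical file names - the masks are assumed to be exported from SNAP
# as single-band rasters aligned pixel-for-pixel with the scene.
with rasterio.open("cloud_mask.tif") as src:
    cloud = src.read(1).astype(bool)
with rasterio.open("snow_mask.tif") as src:
    snow = src.read(1).astype(bool)
with rasterio.open("B03_green.tif") as src:
    green = src.read(1).astype("float32")

# Exclude cloud and snow pixels from further analysis by setting them to NaN
green[cloud | snow] = np.nan
```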

 
Figure 5 - Resampled Optical Imagery at 10m resolution visualized in MSI Land/Water RGB mode

Just as we often tend to do in a Spreadsheet-based analysis, we need to clean and organize the Imagery dataset before proceeding to the actual analysis. The Sentinel-2 Optical Imagery used here contains 13 spectral bands captured at 3 different spatial resolutions (10m, 20m & 60m). To analyze the imagery for the Water Body Extraction workflow, however, we need to make each pixel comparable, i.e. have the same spatial resolution across the geographic extent of our study area. Therefore, I have 'Resampled' the entire Imagery dataset to 10m resolution so that the processing steps execute properly later on. Post resampling, the imagery is visualized, as depicted in the figure above, in MSI Land/Water RGB mode (using the B2, B3, B8 spectral band combination - all three are originally acquired at 10m resolution by the Sentinel-2 Satellite).
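The resampling above was done in SNAP. For readers working outside SNAP, a roughly equivalent operation on a single 20m band can be sketched with rasterio; the file name and interpolation choice below are illustrative assumptions:

```python
import rasterio
from rasterio.enums import Resampling

# Upsample a 20m Sentinel-2 band (e.g. B11) onto the 10m grid by reading it
# at twice its native pixel dimensions with bilinear interpolation.
with rasterio.open("B11_20m.tif") as src:  # hypothetical file name
    data_10m = src.read(
        1,
        out_shape=(src.height * 2, src.width * 2),
        resampling=Resampling.bilinear,
    )
    # Scale the affine transform to match the new 10m pixel size
    transform_10m = src.transform * src.transform.scale(0.5, 0.5)
```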

 
Figure 6 - Extent Subset & Band Subset of Optical Imagery

To continue cleaning and structuring the Imagery dataset, I have removed the components that are not relevant for this workflow. After all, Satellite Imagery is large - the dataset I am using here is 1 GB in size, and running analysis on the entire Imagery is computationally intensive and time-consuming. Therefore, I have 'Subset' the Imagery. Subsetting an Imagery dataset can involve a) clipping the geographic extent of the Imagery to the study area, b) removing spectral bands that are not useful for the study and/or c) removing the metadata included in the Imagery package. Performing a subset reduces the size of the Imagery dataset considerably, making the processing quicker. Now that the Imagery is cleaned and set up as depicted in the figure above (I have used both an Extent subset and a Band subset), I can proceed to perform the actual analysis to extract the Water Body Footprint.
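For illustration, the same two kinds of subsetting (extent clip plus band selection) could be sketched outside SNAP with rasterio; the file name, bounding box and band indexes below are hypothetical:

```python
import rasterio
from rasterio.windows import from_bounds

# Hypothetical bounding box of the study area, in the raster's CRS
minx, miny, maxx, maxy = 680000, 5940000, 720000, 5980000

with rasterio.open("S2_resampled_10m.tif") as src:  # hypothetical file name
    window = from_bounds(minx, miny, maxx, maxy, transform=src.transform)
    # Extent subset (clip to study area) combined with a band subset:
    # keep only the bands needed later, e.g. Green and NIR for NDWI.
    subset = src.read([3, 8], window=window)  # band indexes are illustrative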

 

Normalized Difference Water Index (NDWI) is a Remote Sensing indexing technique developed by S.K. McFeeters in 1996 which I will use to amplify certain characteristics of the pixels in the Imagery dataset in order to demarcate the footprint of Water Bodies effectively.

Figure 7: NDWI computation formula as derived by S.K. McFeeters

The formula is straightforward - basic arithmetic operations involving a couple of spectral bands - although its derivation, i.e. understanding how different types of Electromagnetic Waves interact with different types of surface features, requires some domain knowledge.
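For reference, McFeeters' index is computed from the Green and Near Infrared bands, which for Sentinel-2 correspond to B3 and B8 respectively:

NDWI = (Green − NIR) / (Green + NIR)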


Sunlight, the primary source of illumination in Optical Imagery, is composed of Visible Light (which the human eye can see) as well as Near Infrared and Short Wave Infrared energy (which it cannot). The fundamental point for our surface feature delineation workflow (refer to Figure 8 below) is that while Water reflects Visible Light roughly as much as other common Land surfaces do, it reflects almost no Near Infrared (NIR) energy. Hence, Water pixels appear very dark in the NIR band of the Imagery dataset. In contrast, Soil, Vegetation and other Land surfaces typically reflect NIR in significant quantities, and hence their pixels appear very bright in the Optical Satellite Imagery dataset.


The bottom line is that surface features can have drastically contrasting reflective properties when exposed to different ranges of the Electromagnetic Spectrum. The NDWI formula helps 'amplify' the already large contrast in reflectivity between Land and Water pixels at the Green and NIR wavelengths, which subsequently makes it convenient to classify whether a pixel is Water or not. A cluster of Water pixels then forms the footprint of a Water Body, which I will proceed to isolate and extract.

Figure 8: Reflectance Properties of Water, Soil & Green Vegetation to Visible, NIR and MIR waves. Source RUS Copernicus

To summarize, the NDWI output tells me whether the Near Infrared reflectance of a pixel exceeds its Green (Visible Light) reflectance or not. If it does, both the numerator and the resulting output turn negative, implying the pixel is a Land feature. On the other hand, if the Green reflectance of a pixel exceeds its Near Infrared reflectance, both the numerator and the resulting output remain positive, implying it is a Water feature.

 
Figure 9 - NDWI Output (left) and Reclassification of Pixels (right)

The image on the left in Figure 9 above is the NDWI output computed on the pre-processed Sentinel-2 Optical Imagery. The white pixels have positive NDWI values and can be classified as Water, whereas the dark pixels have negative NDWI values and can be classified as Land. In the image on the right, I have 'reclassified' the pixels by running an equation in the Band Maths tool - a blanket change that converts all the dark pixels (NDWI value < 0) in the NDWI output to jet black pixels (i.e. zero / null value). This is because the Land cover type does not matter in this workflow - I am only interested in the cluster of white pixels, which represent Water. Thus, by performing the reclassification, I isolate the Land pixels out of the dataset.
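The author performed these two steps with SNAP's Band Maths tool; as a rough equivalent, here is a minimal numpy sketch (the array names and helper function are illustrative assumptions, not part of the original workflow):

```python
import numpy as np

def ndwi_water_mask(green, nir, eps=1e-6):
    """green, nir: float arrays of the same shape (e.g. the B3 and B8 subsets)."""
    ndwi = (green - nir) / (green + nir + eps)     # McFeeters' NDWI
    # Reclassification: keep only positive NDWI (Water); null out Land pixels
    water_only = np.where(ndwi > 0, ndwi, np.nan)
    water_mask = ndwi > 0                          # boolean Water / not-Water layer
    return ndwi, water_only, water_mask
```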


The NDWI technique is not without flaws - for example, when Water Bodies are situated close to Urban Infrastructure, the reflectance emanating from the latter affects the reflectance of the former, i.e. it adds noise - refer to Page 23 in this document. As you will observe, a few adaptations of NDWI have been developed which enhance a user's ability to demarcate Water pixels more precisely across varying surface environments.


Lastly, I have extracted and saved the white Water pixels as a separate Mask layer in the SNAP software, and will use it later to compare against the Water Pixels Mask layer extracted from the Radar Imagery analysis over the same study area, as demonstrated in the next workflow.

 

Workflow 2: Extracting Water Body Footprint from Radar Imagery


One advantage of Microwaves, the energy range whose reflectances are used to form Radar Imagery, is that they are largely unaffected by the Earth's atmosphere - for example, they can penetrate cloud cover because their wavelengths are far longer than the water droplets that make up clouds. Also, as the Microwaves are emitted by the Satellite's own transmitter, i.e. an 'Active' source of illumination, Imagery can be captured even during night hours (as is indeed done in practice). This makes Radar Imagery very useful in workflows where utilizing Optical Imagery is either ineffective or impossible.


The way Radar Imagery is processed also differs significantly from Optical Imagery, and I will elaborate on the procedure now. Also note that while I was able to extract the Water pixels using just a single Optical Satellite Imagery dataset in the previous workflow, I will use five Radar Satellite Imagery datasets acquired between October and December 2020 to identify and extract the Water pixels in this workflow. The rationale for doing so is covered later in this post.

 

This is what a single Radar Imagery dataset over the same study area (Masurian Lake District in Poland) looks like:

Figure 10 - Sentinel-1 Radar Imagery visualized over the Masurian Lake District in Poland, as captured in October 2020.

In the depiction above, I have already performed the first pre-processing step - 'Subsetting' the Imagery to match the study area extent over the Masurian Lake District that I used in the previous Optical Imagery workflow. This will allow me to perform a like-for-like comparison between the two Imagery outputs later.


Interpreting Figure 10 above is difficult, as all one can see is a splattering of salt-and-pepper pixels. Nonetheless, now that you are familiar with the Optical Imagery output from the previous workflow, you will be able to discern the location of the large Water Bodies in the image - wherever there is a conglomeration of dark / black pixels.


'Why are the Water pixels dark?', you may wonder. It is because the Microwaves emitted from the Satellite's transmitter that hit a Water surface reflect 'away' from the Satellite, i.e. they do not return to the Satellite's receiver. Hence, the pixels corresponding to Water surfaces have minimal reflectance values and are visualized as dark / black pixels. In contrast, Microwave energy reflects back towards the Satellite in significant quantities when it interacts with Land surfaces (such as Vegetation, Barren Land and Urban Infrastructure). Thus, the greater the reflectance from a surface back towards the satellite (technically known as Backscatter), the whiter the corresponding pixel appears in the Radar Imagery dataset. It goes without saying that some knowledge of how Microwaves interact with various surface features aids immensely during Imagery analysis.

 
Figure 11 - Applying Orbit Files preprocessing step

My next pre-processing step is to apply the latest Orbit File metadata to the five Radar Imagery datasets. A Satellite captures data as it moves along its orbit, and knowing the Satellite's precise position at the time each reflectance was received helps attribute the reflectance values accurately (also known as geocoding) to the corresponding pixels in the Imagery dataset. This step is typically needed because precise Orbit Files take some time to be prepared by the Satellite Service Provider; they may not be included in the Radar Imagery package one downloads initially, particularly if the acquisition is very recent.

 
Figure 12 - Pre (left) & Post (right) Thermal Noise Removal

My next pre-processing step is Thermal Noise Removal. The Satellite's receiver that captures the reflected Microwaves also generates its own background energy, which interferes with the reflectance readings and gets incorrectly added to the corresponding pixel values (making them brighter than they should appear). I would like to avoid this, as it would hamper the correct classification of surface features - crucial for this workflow. You will notice that certain sections of the Thermal Noise-corrected image (on the right in the figure above) are visibly darker than the uncorrected version: the receiver's background noise has been removed from the affected pixels.

 
Figure 13 - Pre (left) & Post (right) Radiometric Calibration

The next pre-processing step is Radiometric Calibration. Radiometric Calibration converts the digital numbers recorded by the Satellite's receiver into physical backscatter values of the surface features on Earth, which are then assigned to the corresponding pixels in the Imagery dataset. To negate the radiometric bias inherent in the digital numbers, constant offsets and other technical corrections may need to be applied. While this step is not critical for qualitative studies, it is indispensable if one's objective is to perform quantitative analysis and comparison between Imagery datasets captured by the same satellite across multiple timelines or by different satellites at the same timeline - the former is what I am doing in this workflow, and hence this step becomes mandatory.


Also, just for your knowledge, each pixel in our Synthetic Aperture Radar (SAR) Imagery dataset acquired by Sentinel-1 satellite captures surface information in 5m by 20m spatial resolution i.e. each pixel covers 100 sq.m. of area.

 
Figure 14 - Terrain-corrected Output

Next, I have applied another pre-processing step known as Terrain Correction. Data captured by the Satellite is, by default, in Radar Geometry form and not in a Geographic Projection (i.e. how surfaces appear on a Map). The orientation of the Imagery and the distances between objects are distorted as a result.


Map Projections are an entire topic in themselves, so I will not delve into that. However, if you observe the Terrain-corrected output in Figure 14 above closely, you will notice that the Imagery has been flipped and is now narrower than the input Radar Imagery (depicted in Figure 10). Geographically speaking, the Terrain-corrected Imagery is now comparable to Figure 9, the final processed output from the Optical Imagery workflow.


To summarize, the Terrain-corrected image now resembles what we see on a regular 2D map in terms of a) the relative distances between objects and b) north being oriented towards the top of the image. Originally, in its raw form, the Imagery dataset was arranged the way the Satellite received the information along its orbit, i.e. in Radar Geometry.
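In SNAP's GUI each of these pre-processing steps is a menu operation; the same chain can also be scripted through SNAP's Python interface (snappy, named esa_snappy in recent SNAP releases). A minimal sketch for a single Sentinel-1 GRD product might look like the following - the operator names follow the SNAP operators used above, but the file names and parameter choices are illustrative assumptions, not the author's exact settings:

```python
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

# Read a single Sentinel-1 GRD product (hypothetical file name)
product = ProductIO.readProduct('S1A_IW_GRDH_20201005.zip')

# Apply-Orbit-File -> ThermalNoiseRemoval -> Calibration -> Terrain-Correction
orbit = GPF.createProduct('Apply-Orbit-File', HashMap(), product)
denoised = GPF.createProduct('ThermalNoiseRemoval', HashMap(), orbit)

cal_params = HashMap()
cal_params.put('outputSigmaBand', True)           # output sigma0 backscatter
calibrated = GPF.createProduct('Calibration', cal_params, denoised)

tc_params = HashMap()
tc_params.put('demName', 'SRTM 3Sec')             # DEM used for geocoding
terrain_corrected = GPF.createProduct('Terrain-Correction', tc_params, calibrated)

ProductIO.writeProduct(terrain_corrected, 'S1_preprocessed', 'BEAM-DIMAP')
```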

 

Instead of repeating the same pre-processing steps on each of the remaining four Radar Imagery datasets - which would take considerable time and increase the chance of human error - I will automate the workflow using two high-utility features of the Imagery Analytics software I am using (ESA's SNAP): 'Graph Builder' and 'Batch Processing'. Essentially, at the click of a button, I can run the pre-processing steps on all five Radar Imagery datasets together and sequentially.

(For this to work, all the Imagery datasets need to share a common processing chain, which happens to be the case here.)
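As an aside, the same batch run can be reproduced outside the GUI: Graph Builder can save the processing chain as an XML graph, which SNAP's command-line Graph Processing Tool (gpt) can then apply to each dataset in a loop. A minimal Python sketch, assuming a saved graph file named preprocess_graph.xml that exposes input/output placeholders (the file and parameter names are illustrative, not the author's):

```python
import subprocess
from pathlib import Path

# Hypothetical folder holding the five Sentinel-1 scenes used in this workflow
scenes = sorted(Path("s1_downloads").glob("S1A_IW_GRDH_*.zip"))

for scene in scenes:
    output = Path("preprocessed") / (scene.stem + "_preproc.dim")
    # 'preprocess_graph.xml' is assumed to be the chain saved from Graph Builder,
    # parameterised with ${input} and ${output} placeholders.
    subprocess.run(
        ["gpt", "preprocess_graph.xml",
         f"-Pinput={scene}", f"-Poutput={output}"],
        check=True,
    )
```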

Figure 15 - Utilizing Graph Builder tool to form a Processing steps chain

Figure 16 - Batch Processing the Graph in Figure 15 for all the five Radar Imagery datasets

Figure 17 - All the five sets of Radar Imagery acquired across multiple timelines between Oct - Dec 2020 are batch-processed at once and the processed outputs are laid out side-by-side
 

The last pre-processing step is Coregistration. To the Imagery Analytics software, each batch-processed Imagery dataset is a separate entity, so the data is not yet prepared for temporal analysis, i.e. a like-for-like comparison of the same pixels across timelines. Therefore, I have to coregister the images to form a stack, converting all the datasets into a single Imagery product that facilitates pixel-to-pixel comparison.

Figure 18 - Coregistered product with the five Radar Imagery pre-processed datasets can be seen under the Product Explorer view

Notice in Figure 18 above that each Imagery dataset is stored as an individual band in the stacked product. This allows me to perform the actual analysis i.e. the Band Maths processing step, next.

 
Albert Einstein is supposed to have famously remarked along these lines: 'If I had an hour to solve a problem I'd spend 55 minutes thinking about the problem and 5 minutes thinking about the solution.'

I find this particularly applicable to Satellite Imagery workflows. The actual analysis takes little time to perform, but the pre-processing steps are time-consuming and need to be done in an error-free manner to ensure that the data is set up correctly.

Figure 19 - RGB Visualization of a different Coregistered Radar Imagery Product comprising datasets stacked from three timelines

To deviate from the workflow for a bit, Figure 19 above depicts a multi-temporal Radar Imagery product visualized in RGB mode - I have merged 3 Radar Images into one stacked product. While the stacked Imagery looks colorful, this is not a natural-color photographic rendition. What you are seeing is a single image which captures the changes in reflectance on the same surface across the 3 timelines: the brightest pixels and the black pixels represent surfaces whose reflectance has not changed over the three timelines, green pixels represent surfaces whose reflectance changed between the first and the second timeline, and blue pixels represent surfaces whose reflectance changed between the second and the third timeline.


Isn't this a useful way to observe surface-level changes at a granular level!


For example, these kinds of stacked temporal Imagery visualizations can be used to obtain insights about whether and when agricultural harvesting has occurred: Water Body reflectances tend to remain the same across the three timelines (black pixels), whilst changes in Vegetation due to harvesting produce a marked change in reflectance in the subsequently acquired Imagery (green or blue pixels, depending on when the harvesting occurred).
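For illustration, here is a minimal numpy sketch of how such an RGB composite could be assembled from three coregistered backscatter arrays, one per date; the percentile stretch is an illustrative choice, not necessarily what SNAP's RGB window applies:

```python
import numpy as np

def stretch(band, lo=2, hi=98):
    """Percentile-stretch a band to the 0-1 range for display."""
    p_lo, p_hi = np.nanpercentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo), 0, 1)

def temporal_rgb(date1, date2, date3):
    # Each date goes into one colour channel; unchanged surfaces appear
    # grey/white/black, while changes show up as coloured pixels.
    return np.dstack([stretch(date1), stretch(date2), stretch(date3)])
```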

 
Figure 20 - Mean Averaging the Coregistered Radar Imagery Stack into a sixth band depicted on the bottom-right

Let me return to the workflow. Here I have performed the first analysis step - Mean Averaging - i.e. I have averaged the data values at the same pixel locations across all five Imagery datasets and written the result into a sixth Imagery band, visualized on the bottom right in Figure 20 above.


Why did I do this? Despite all the pre-processing steps, the Imagery datasets still carry inherent noise from a variety of sources, which slightly inflates the data values. By applying Mean Averaging across the timelines, I can balance out these unwanted variations so that the noise is tempered and I get a smoother, evened-out Imagery stack. You may not be able to discern the smoothing in Figure 20 above; however, in full-screen mode in the software, the effect is very pronounced.


This is why I had opted for Multi-temporal analysis using five Radar Imagery Datasets - doing so helps me to balance out any noise and outliers in the reflectance values thereby facilitating more precise data analysis and interpretation.
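In numpy terms, this Mean Averaging step amounts to a single reduction over the coregistered stack. A minimal sketch with placeholder data (the real bands would come from the coregistered SNAP product):

```python
import numpy as np

# Hypothetical coregistered stack: five dates of backscatter on the same grid
stack = np.random.rand(5, 1000, 1000).astype("float32")  # placeholder data

# Temporal mean - the "sixth band" that tempers speckle and outliers
mean_band = np.nanmean(stack, axis=0)
```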
 
Figure 21 - Pixel Intensity of Coregistered Multi-temporal Radar Imagery Stack visualized using a Histogram

Having smoothed the Coregistered Imagery stack, all that remains now is to extract a Water Pixels mask, as I demonstrated in the previous workflow involving Optical Imagery.


For this, I need to delineate Land pixels from Water pixels, and I will do it using a method called Thresholding. However, as I discovered, this step is not very straightforward for my coregistered stacked Imagery.




Just look at the Histogram in Figure 21 above - the pixel intensities are clustered so close to each other! There is very little one can do to delineate Land and Water surface features from these values.


So how do we threshold the Imagery properly in order to delineate Water pixels from Land? Don't worry, there is a smart solution to this challenge:

 
Figure 22 - Log-scaling the Pixel Intensity Values for Precise Thresholding

Observe the image on the right in Figure 22 above. The Black pixels appear darker whereas the White pixels appear brighter.


What has happened here? I have log-scaled the pixel intensity values, i.e. converted them from a linear scale to a logarithmic (decibel) scale: dB = 10 * log10(linear intensity).


Essentially, converting the linear reflectance (backscatter) values to a logarithmic scale spreads out the tightly clustered intensities, so that the Water and Land pixels separate into two distinct clusters. This will help me conveniently delineate the Water Body pixels from Land.
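A minimal numpy sketch of this conversion (the epsilon guards against taking the log of zero-valued pixels; applying it to the temporally averaged band is an assumption on my part):

```python
import numpy as np

def to_db(linear_intensity, eps=1e-10):
    """Convert linear backscatter intensity values to decibels."""
    return 10.0 * np.log10(np.maximum(linear_intensity, eps))

# e.g. applied to the temporally averaged band from the previous step:
# mean_band_db = to_db(mean_band)
```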

 
Figure 23 - Histogram of Log-scaled Intensity Values of the Multitemporal Coregistered Imagery Stack. Twin Peaks allows for better classification

Thresholding to delineate two types of surface features works well when the Histogram view of an Imagery dataset is clearly bi-modal, i.e. has twin peaks. In Figure 23 above, you can see the Histogram view of the 'log-scaled' pixel intensities of our multi-temporal coregistered stacked product - it clearly shows two distinct peaks. Since we know that Water Bodies have lower intensity values than Land, I can reasonably conclude that pixels with an intensity value below 23.54 correspond to Water Bodies, whilst pixels with an intensity value above 23.54 correspond to non-water, i.e. Land, pixels.
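The threshold of 23.54 was read off the valley between the two histogram peaks. For readers who want an automated alternative for a bimodal histogram, Otsu's method is a common substitute technique; the sketch below uses scikit-image and is an illustration, not what was done in SNAP:

```python
import numpy as np
from skimage.filters import threshold_otsu

def water_mask_from_db(db_band):
    # Otsu picks the threshold that best separates a bimodal distribution
    threshold = threshold_otsu(db_band[np.isfinite(db_band)])
    # Water backscatter is darker (lower dB), so Water lies below the threshold
    return db_band < threshold, threshold
```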


Logarithms are useful after all! 😁

 
Figure 24 - Delineating Water Pixels from Land and subsequently, nullifying all the Land Pixels

Using the threshold of 23.54, I have used the Band Maths tool in SNAP to delineate Water Bodies from Land features and subsequently, as in the Optical Imagery workflow, reclassified all the Land pixels as null values - as depicted in the image on the right of Figure 24 above. This final output is our Water Mask of the Masurian Lake District in Poland, extracted from Radar Imagery.

 

COMPARING OPTICAL IMAGERY OUTPUT WITH RADAR IMAGERY OUTPUT


The final step is to compare both the Water Masks, extracted from Optical Satellite Imagery and Radar Satellite Imagery respectively, on a Geographic Information System.

Figure 25 - Comparison of Optical Imagery Water Body Output with that of Radar Imagery

The Red pixels are Water pixels identified using Optical Satellite Imagery alone whereas the Blue pixels represent those Water pixels identified using Radar Satellite Imagery alone. The Water pixels commonly identified across both the types of Satellite Imagery datasets (which most pixels are) are colored in Violet.
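A minimal numpy sketch of how such a comparison composite can be derived from the two masks (the placeholder arrays below stand in for the actual Optical and Radar water masks, which would be read from the exported layers):

```python
import numpy as np

# Placeholder boolean masks on the same grid (real masks would be read from file)
optical_water = np.random.rand(1000, 1000) > 0.7
radar_water = np.random.rand(1000, 1000) > 0.7

only_optical = optical_water & ~radar_water     # red pixels in Figure 25
only_radar = radar_water & ~optical_water       # blue pixels
both = optical_water & radar_water              # violet pixels

# Simple agreement measure: overlap relative to the union of both masks
agreement = both.sum() / (optical_water | radar_water).sum()
```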

In my view, the reasons behind the minor deviations in classification are: a) the Optical Satellite Imagery over the study area was acquired in spring, whereas the Radar Satellite Imagery was acquired during the autumn / winter season, so there are bound to be some changes in the water landscape between the two timelines (which are also more than a year apart); and b) since the processing methods differ for the two Imagery types, small variations in the detected output are normal. By and large, the detections from both Imagery types are similar.
But is the detected Water Body Footprint accurate? See for yourself from the slider below-

A PowerPoint slide comparing both the Imagery types (Optical and Radar) can be found here.


Slider depicting both the Water Body Footprint Masks overlaid on Google Map Basemap.


Haven't the Water Bodies been classified with a high degree of accuracy?


I hope you had fun learning about the process involved in performing Satellite Imagery Analytics - something I had promised at the beginning of this long post. Thank you for reading!

 

ABOUT US


Intelloc Mapping Services | Mapmyops.com is based in Kolkata, India and engages in providing Mapping solutions that can be integrated with Operations Planning, Design and Audit workflows. These include but are not limited to - Drone Services, Subsurface Mapping Services, Location Analytics & App Development, Supply Chain Services, Remote Sensing Services & Wastewater Treatment. The services can be rendered pan-India, some even globally, and will aid an organization to meet its stated objectives especially pertaining to Operational Excellence, Cost Reduction, Sustainability and Growth.


Broadly, our area of expertise can be split into two categories - Geographic Mapping and Operations Mapping. The Infographic below highlights our capabilities.

Mapmyops (Intelloc Mapping Services) - Range of Capabilities and Problem Statements that we can help address

Our 'Mapping for Operations'-themed workflow demonstrations can be accessed from the firm's Website / YouTube Channel, and an overview can be obtained from this flyer. I am happy to address queries and respond to documented requirements. Custom Demonstrations, Training & Trials are facilitated only on a paid basis. Looking forward to being of service.


Regards,

Much thanks to EU Copernicus, RUS Copernicus & QGIS for the training material and software. A video output using older imagery can be viewed here.
