
From Point to Plot - Processing LiDAR datasets

Arpit Shah


Figure 1: Laser Beam in a Lab Environment. Image Source: news.mit.edu

INTRODUCTION


Laser beams are fascinating, aren't they? Focused & Incisive. A higher form of Intelligence, perhaps.

You can fight with one in a movie. Dance to its neon hue at shows. Shine it at the night sky to see if it reaches the clouds or beyond. Or, at the very least, flash it at sportspersons to distract them in crunch match situations😁. It feels liberating that one can use and admire the applications of a technology without knowing how it is generated or how it interacts with its surroundings.


Light amplification by the stimulated emission of radiation. That is Laser for you - I was unaware till I sat down to pen this post. And being monochromatic, directional and coherent is why a laser beam feels different from ordinary visible light - it has a single wavelength (and hence a single colour), travels as a narrow beam, and its waves from the same source are always in synchrony. No wonder these high-intensity beams evoke a strong feeling...

...and feelings do matter. Laser pulses can feel the bare earth, the terrain and our surroundings comprising natural & built-up features in ways that many other modes of illumination fall short of.
Figure 2: A LiDAR Point Cloud would look similar, albeit denser. Image Source: Brecht Denil on Unsplash

And this is what LiDAR (Light Detection & Ranging), a remote sensing technique which uses laser as an active mode of illumination, takes advantage of - the sensor can emit pulses at very high rates (around 150 kHz) and obtain dense point returns (up to 150 points per square foot). Upon stitching these reflections into a Point Cloud, one can generate high-resolution, three-dimensional digital models of the area of interest.
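For readers who like to poke at the data directly, here is a minimal, illustrative sketch (in Python, assuming the open-source laspy library and a placeholder file name) of how one can open a point cloud and gauge its density-

```python
# A minimal, illustrative sketch using the open-source laspy library.
# "survey.las" is a placeholder file name, not part of the workflows below.
import laspy
import numpy as np

las = laspy.read("survey.las")

# Every record is one laser return: X, Y, Z coordinates plus metadata such as
# the return number, the total returns of that pulse, and a class code.
xyz = np.column_stack([las.x, las.y, las.z])
print("Total returns:", len(las.points))

# Rough point density over the tile's footprint (returns per square metre,
# assuming the coordinates are in metres).
mins, maxs = las.header.mins, las.header.maxs
area = (maxs[0] - mins[0]) * (maxs[1] - mins[1])
print("Approximate density:", len(las.points) / area, "points per sq. m")
```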

Tip: In case you have an ancestral home and worry that it will be demolished, and along with it your fond memories of the place will be lost forever - get it LiDAR-scanned so that it can be recreated in the future from the point cloud! If the Notre-Dame Cathedral can benefit from it, so can you.

 

Those who are familiar with my recent posts will know that I tend to share elaborate context before diving into the central subject matter. In case you'd like to skip the background, here are the section hyperlinks to the workflows-


- Extracting 3D Building Footprint

- Extracting 3D Roof Forms (extension to the previous workflow)

- Classifying Power Lines using Deep Learning 


In case you wish to see the demonstrations (recommended), here's a compilation-

Video 1: Narrated one-hour video on processing LiDAR Data - three workflow demonstrations compiled.

Video Timestamps


00:05 - Case Details


00:19 - Caselet 1 - Extracting 3D Building Footprint from LiDAR Imagery

00:23 - C1 - Workflow 1 : Setting up & exploring the dataset

03:43 - C1 - Workflow 2 : Classifying the LiDAR Imagery Dataset

10:44 - C1 - Workflow 3: Extracting Buildings Footprint

14:12 - C1 - Workflow 4: Cleaning up the Buildings Footprint

17:25 - C1 - Workflow 5: Extracting 'Realistic' 3D Building Footprint


20:47 - Caselet 2 - Extracting 3D Roof Forms from LiDAR Imagery

20:51 - C2 - Workflow 1 : Setting up the Data & Creating Elevation Layers

30:16 - C2 - Workflow 2 : Creating 3D Buildings Footprint

33:54 - C2 - Workflow 3 : Checking Accuracy of Building Footprints & Fixing Errors


42:06 - Caselet 3 - Classifying Power Lines using Deep Learning model on LiDAR Dataset

42:10 - C3 - Workflow 1 : Setting up and Exploring the Dataset

46:23 - C3 - Workflow 2 : Training the DL Classification Model using a Sample Dataset

51:31 - C3 - Workflow 3 : Examining the Output of the Sample-Trained DL Classification Model

53:27 - C3 - Workflow 4 : Training the DL Classification Model using a Large Dataset

58:12 - C3 - Workflow 5 : Extracting Power Lines from the LiDAR Point Cloud Output


59:46 - Summary Note & Contact Us


Much thanks to Esri's Learn ArcGIS team for preparing the tutorial and developing the methodology.

 

Laser is a form of amplified radiation and the pulses are discharged at rapid rates. A LiDAR sensor operates in the near-infrared, visible and ultraviolet regions of the electromagnetic spectrum - the same regions spanned by solar radiation, albeit at a much higher intensity and pulse rate. The wavelength a LiDAR sensor utilizes depends on the application - Topographic Surveys on land typically use Near-infrared (NIR) radiation because it reflects strongly off land surfaces and vegetation, while Bathymetric Surveys that measure seafloor elevation make use of Green light, as it is able to penetrate water with relative ease.

Figure 3: LiDAR sensor emits radiation in the Near-infrared, Visible & Ultraviolet range of the electromagnetic spectrum. Source: Adapted from NASA ARSET

With LiDAR, one is able to generate high-resolution, three-dimensional Elevation Models of the area of interest. Elevation (the Z value) is what transforms a 2D image into 3D and lends context to the surface. The density of LiDAR point returns and the precision of their Z values determine the accuracy of the 3D rendition.


Elevation Models come in various types - two of the commonly used ones are-


a) DSM - Digital Surface Model - As LiDAR sensors are placed above the surface, the first point returns contain information on how the laser pulses interacted with natural or built-up objects above the bare earth, wherever such features exist. A DSM is a 3D representation of the surface with all natural and built-up features intact (refer to the left image in Figure 4 below). 3D modelling of over-ground assets such as buildings, bridges, solar panels and power lines entails generating and utilizing this all-important type of Elevation Model.


b) DEM - Digital Elevation Model - A substantial share of the emitted laser pulses do not stop at the top-most object or feature. Rather, they proceed to interact with the bare-earth surface before returning to the LiDAR sensor - this takes marginally longer than the first return - and it is this collection of last returns which is used to build the Digital Elevation Model or DEM. Essentially, it is a 3D rendition of the surface stripped of all natural and man-made structures over it (refer to the right image in Figure 4 below).
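To make the first-return / last-return distinction concrete, here is a minimal sketch (again assuming the laspy library and a placeholder file name) of how the two subsets that eventually feed a DSM and a DEM can be separated-

```python
# A minimal sketch: separating first and last returns prior to gridding.
# File name is a placeholder; noise removal and ground filtering are omitted.
import laspy
import numpy as np

las = laspy.read("survey.las")
return_no = np.asarray(las.return_number)
total_returns = np.asarray(las.number_of_returns)
z = np.asarray(las.z)

first = return_no == 1                 # tops of canopy, roofs, wires -> DSM
last = return_no == total_returns      # mostly bare earth -> DEM (after
                                       # ground classification in practice)

print("First-return mean elevation:", z[first].mean())
print("Last-return mean elevation:", z[last].mean())
# Each subset is then interpolated onto a regular grid (see the note on
# Interpolation later in this post) to produce the respective raster.
```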

Figure 4: Digital Surface Model / DSM (left) and Digital Elevation Model / DEM (right). Source: An Introduction to LiDAR for Archaeology - AOC Archaeology Group 2015

This specific ability to capture point returns from the bare earth is what makes the DEM useful at archaeological sites where historical remnants have been overlaid by newer features with the passage of time. For example, in the DSM view in Figure 4, you will not be able to observe the prehistoric ramparts and ditches in Shropshire, England - they are obscured by vegetation.


Figure 5: Height of a tree can be derived by subtracting the first return from the last (fourth) return. Source: Geospatial Romania

If not for techniques such as LiDAR or GPR, archaeologists would have to strip the site of vegetation to a great extent before getting a whiff of the discovery underneath, which is evident from the DEM view in Figure 4. And it is not just the existence of such remnants - one can also identify their extent and nature. As you can imagine, this information is highly valuable for planning and scheduling the digging work, resulting in cost-effective operations.


There are several other applications and sectors which utilize these high-resolution bare-earth elevation models, such as Roads and Highway construction, Railway projects, Forestry (Figure 5), offshore Wind Turbines and Landslide / Deformation studies.
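Figure 5 above also hints at a handy derivation used in forestry: the height of vegetation (or of any structure) is essentially the DSM minus the DEM. A minimal sketch of that subtraction, assuming the rasterio library and placeholder file names, could look like this-

```python
# A minimal sketch: height above ground = DSM - DEM.
# "dsm.tif" and "dem.tif" are placeholder, co-registered rasters.
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dem.tif") as dem_src:
    dsm = dsm_src.read(1).astype("float32")
    dem = dem_src.read(1).astype("float32")
    profile = dsm_src.profile            # reuse georeferencing for the output

height = dsm - dem                       # canopy / structure height model
height[height < 0] = 0                   # clamp small negative artefacts

profile.update(dtype="float32")
with rasterio.open("height.tif", "w", **profile) as dst:
    dst.write(height, 1)
```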


A variant of the DEM, called the Digital Terrain Model or DTM, is also utilized for specific workflows such as Shoreline Analysis.


There are four distinct modes to acquire LiDAR data. The laser-emitting sensor can be installed-


a) Spaceborne i.e. placed on Satellites

Figure 6: Satellite-based LiDAR. Source: intechopen.com

b) Airborne i.e. placed on Aircraft and Drones

Figure 7: Airborne LiDAR. Source: researchgate.net

c) Stationary Terrestrial i.e. placed stationary at surface-level (on-ground or perched)

Figure 8: Stationary Terrestrial LiDAR. Source: Earth Observatory of Singapore, NTU

d) Mobile Terrestrial i.e. placed on automobiles and multi-terrain vehicles (Trivia: the iPhone 12 comes equipped with LiDAR and here's how it helped a blind person navigate.)

Figure 9: Mobile Terrestrial LiDAR depiction. Source: Geospatial World & Counterpoint Research respectively

There are multiple other ways to classify LiDAR operations- explore a few of them here.

 

PROCESSING LiDAR DATA


LiDAR output isn't as refined as it appears in Figure 4. The raw output looks more like this-

Video 2: LiDAR raw output is just a dense cluster of point returns

As you will have observed in Video 2 above, raw LiDAR acquisitions are just a dense cluster of point returns (laser reflections), technically known as a Point Cloud. That being said, the high density by itself is not sufficient to create a seamless depiction of the surface as seen in Figure 4, which raises the question - how did the transformation occur?


The magic lies in estimating values for the gaps in the geospatial dataset through a statistical technique known as Interpolation. Know more about it and some of the methods involved here.
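As a toy illustration of the idea (and not the exact algorithm used by the GIS software), here is a sketch that uses SciPy's griddata to estimate elevations on a regular grid from scattered point returns-

```python
# A minimal sketch of interpolating scattered returns onto a regular grid.
# Real workflows use more robust methods (IDW, TIN, kriging, binning).
import numpy as np
from scipy.interpolate import griddata

# Stand-in data: x, y, z would normally come from (ground) point returns.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
z = np.sin(x / 15) + np.cos(y / 20)            # illustrative elevations

# Build a 1 m grid over the extent and estimate elevation at every cell.
gx, gy = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0))
dem = griddata((x, y), z, (gx, gy), method="linear")

print(dem.shape)   # a continuous surface, gaps filled by interpolation
```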

Processing LiDAR data is just as interesting as acquiring it - the Point Cloud can be refined to generate surface elevation models and to detect and classify natural or built-up features on or over it.

Allow me to demonstrate LiDAR data processing for you through these three interesting workflows-

 

Caselet 1 - Extracting 3D Building Footprint from LiDAR data


Building Footprint is a dataset which contains geospatial information of the built-up infrastructure in the area of interest. I have demonstrated the utility and application of this dataset in the Rooftop Solar Potential, Line-of-Sight and Automated Features Extraction posts previously - it can also be used in multiple other workflows involving Urban Planning and Risk Management.


Here, I will demonstrate the extraction of a 3D Footprint of the Buildings from within the LiDAR Point Cloud using the powerful GIS software - ArcGIS Pro. Broadly, the process involves first filtering out the less important portions of the raw output (ground point returns, noise) so that what is left behind is the Point Cloud over the top section of buildings and built-up infrastructure (why just the top section? Because the LiDAR data has been acquired from an Airborne platform).


Thereafter, I will set the footprint-generation parameters in a geoprocessing tool and extract individual building shapes with length, breadth & height dimensions (X, Y, Z) from the Point Cloud. I will then use the ground point returns to generate a Digital Elevation Model and pair this high-resolution DEM with the generated footprint in order to create a more realistic-looking digital twin of the study area.
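For those who prefer scripting to clicking, the same chain can be approximated with ArcGIS Pro's arcpy site package. The sketch below is rough and untested - the geoprocessing tools named do exist in the 3D Analyst / Conversion toolboxes, but the paths and parameter values are illustrative assumptions and should be verified against the tool documentation-

```python
# A rough, untested sketch of the Caselet 1 chain in arcpy (ArcGIS Pro).
# Paths and parameter values are illustrative assumptions.
import arcpy

arcpy.CheckOutExtension("3D")
lasd = r"C:\data\study_area.lasd"        # placeholder LAS dataset

# 1. Classify ground returns, then building returns, within the point cloud.
arcpy.ddd.ClassifyLasGround(lasd, "STANDARD")
arcpy.ddd.ClassifyLasBuilding(lasd, "2 Meters", "4 SquareMeters")

# 2. Keep only ground points (ASPRS class 2) and rasterize them into a DEM.
arcpy.management.MakeLasDatasetLayer(lasd, "ground_lyr", class_code=[2])
arcpy.conversion.LasDatasetToRaster(
    "ground_lyr", r"C:\data\dem.tif", "ELEVATION",
    "BINNING AVERAGE LINEAR", "FLOAT", "CELLSIZE", 1)

# 3. The building (class 6) points are then fed to the footprint-extraction
#    and regularization tools demonstrated in the video.
```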

Video 3: Using LiDAR Point Cloud to extract 3D Building Footprint

While this workflow is mostly automated, one also needs to manually inspect the output, iterate the parameters, and edit the defective building shapes. Overall, the processing chain is wholesome - blending technology with human ingenuity.


Slider 1: Raw LiDAR data versus Processed Output

 

Caselet 2 - Extracting 3D Roof Forms from LiDAR Data


In this workflow, besides repeating the data visualization and digital elevation model generation from a LiDAR Point Cloud over another study area, I will demonstrate the use of a geoprocessing tool to extract just the Roof Forms from the Building Footprint. This dataset can be used by a Local Government / Municipality, for example, to understand the level of infrastructure development in a neighbourhood. Going beyond the extraction, I will additionally apply statistics - a Root Mean Square Error (RMSE) analysis - in order to assess the accuracy of the extracted Roof Forms.


'What will the Roof Form elevations be compared to, in order to assess their accuracy?'


Recollect that LiDAR acquisitions contain first-return information, with the help of which one can generate a Digital Surface Model - a layer containing dimensional data of the surface with all the natural and built-up features over it. I will use this layer to statistically assess the derived Roof Form elevations. I will also demonstrate the use of editing tools within the GIS software to manually repair a couple of Roofs with high RMSEs (marked in red and orange in Slider 2 below).
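For reference, the RMSE itself is a simple computation: the square root of the mean of the squared differences between the extracted roof elevations and the corresponding DSM values. A minimal sketch with illustrative numbers-

```python
# A minimal sketch of the RMSE check: roof-form elevations versus the DSM.
# The arrays below are illustrative, not from the actual study area.
import numpy as np

dsm_elev = np.array([12.4, 15.1, 9.8, 21.0])    # DSM sampled at each roof
roof_elev = np.array([12.1, 15.6, 9.9, 20.2])   # extracted roof-form elevation

rmse = np.sqrt(np.mean((roof_elev - dsm_elev) ** 2))
print(f"RMSE = {rmse:.2f} m")   # larger values flag roofs needing manual repair
```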

Video 4: Using LiDAR Point Cloud to extract 3D Roof Forms

Commonly-used processing chains, such as this one involving LiDAR data, are often pre-packaged by Esri, the GIS software developer. This makes it very convenient for users who can run sequential steps in a semi-automated manner, saving time and reducing the chances of errors and omissions.


Slider 2: Raw LiDAR data versus Processed Output

 

Caselet 3 - Classifying Powerlines using Deep Learning on LiDAR Data


Power Transmission Infrastructure, by virtue of being critical for residential and industrial purposes, needs to be routinely inspected for damages and obstructions (natural or otherwise). I had received an actual project requirement for this very workflow during the coronavirus pandemic - the prospect wanted to be automatically alerted, by applying Deep Learning on Drone acquisitions, whether there was any vegetation growing around the powerline or if a hawker had set up his stall underneath it. I wish I had known this workflow at that point in time!


I will apply Esri's Deep Learning model (previously utilized in this post) to detect and classify the LiDAR point cloud, with the intention of identifying those point returns which have interacted with the powerlines (not the transmission towers or any other built-up or natural features). I've tried to understand the technicalities involved and the geoprocessing parameters in depth so that I can explain them clearly during the demonstration (I have even used a proprietary 'sunflowers-in-a-park' analogy😊).


In case you would like to understand the fundamental concept of Deep Learning - Artificial Neural Networks - here's a lucid video explainer.


The processing chain in this workflow entails training a Deep Learning model on a cross-section of the LiDAR point cloud which has already been classified (be it as powerline or some other built-up or natural feature). Then, I will test the trained model's efficacy on another cross-section of the point cloud which has also been classified beforehand - this cross-section is technically known as the validation dataset - and the objective is to see how well the model has learnt, i.e. whether it is able to correctly classify the features on a new surface based on its initial training.


The proportion of actual powerline points that the model correctly identifies is technically known as recall - and you will observe from the demonstration that my Deep Learning model had a good recall rate, but it wasn't top-notch. This is because of the lenient parameters I had set in order to reduce processing time, as well as the consumer-grade 2 GB GPU I was using - the stronger the computing resources at one's disposal, the better the Deep Learning model will learn and the faster it will be able to process data.
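For clarity, here is a minimal sketch of how recall would be computed for the powerline class, using illustrative labels rather than the actual model output-

```python
# A minimal sketch of recall for the powerline class: of all true powerline
# points, how many did the model label as powerline? Labels are illustrative.
import numpy as np

# 1 = powerline, 0 = everything else, for a handful of point returns
truth = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 1])
predicted = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 1])

true_positives = np.sum((truth == 1) & (predicted == 1))
false_negatives = np.sum((truth == 1) & (predicted == 0))
recall = true_positives / (true_positives + false_negatives)
print(f"Recall = {recall:.2f}")   # 5 of the 6 powerline points found -> 0.83
```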


In order to highlight this aspect, I have used another Deep Learning model in the latter half of the video for validation purposes - one that was trained using an industrial-grade 24 GB GPU. As anticipated, the recall was much better (refer to Slider 3 below) - most of the points were classified accurately.

Video 5: Using Deep Learning Algorithm to identify Powerlines from LiDAR Point Cloud

Slider 3: Raw LiDAR data versus Processed Output

One can't help but be mesmerized by the prowess of Deep Learning (a subset of Machine Learning) and imagine the vast number of applications where it can be used, in isolation or in conjunction with Artificial Intelligence and other complex technologies, to solve real-life problems.

 

CONCLUSION

Figure 10: LiDAR point cloud and the 3D Building Footprint which was generated using it

I hope you enjoyed reading this post and had a chance to watch the video demonstrations. It took me a while to prepare everything - slowly, steadily and lazily over a period of three months. I first came to know about LiDAR upon reading this article, which highlighted how the technology helped discover sites of archaeological relevance. Some time back, I had even received a project enquiry from an architectural firm which wanted to LiDAR-map some of the Hindu temples in the state of Karnataka, India, with the objective of unravelling new facets of the design and structure of these ancient places of worship - an opportunity I wish had surfaced now, when my firm is equipped to execute such projects. Here are the top 5 applications of LiDAR - feel free to reach out with your LiDAR data acquisition or processing requirements.

 

ABOUT US


Intelloc Mapping Services | Mapmyops.com is based in Kolkata, India and engages in providing Mapping solutions that can be integrated with Operations Planning, Design and Audit workflows. These include but are not limited to - Drone Services, Subsurface Mapping Services, Location Analytics & App Development, Supply Chain Services, Remote Sensing Services and Wastewater Treatment. The services can be rendered pan-India, some even globally, and will aid an organization in meeting its stated objectives, especially those pertaining to Operational Excellence, Cost Reduction, Sustainability and Growth.


Broadly, our area of expertise can be split into two categories - Geographic Mapping and Operations Mapping. The Infographic below highlights our capabilities.

Mapmyops (Intelloc Mapping Services) - Range of Capabilities and Problem Statements that we can help address

Our 'Mapping for Operations'-themed workflow demonstrations can be accessed from the firm's Website / YouTube Channel and an overview can be obtained from this flyer. Happy to address queries and respond to documented requirements. Custom Demonstrations, Training & Trials are facilitated only on a paid basis. Looking forward to being of service.


Regards,


Mapmyops | Intelloc Mapping Services
