
Discussion: Lidar and vegetation mapping


Apr 12, 2012 11:57 PM # 
TheInvisibleLog:
The available descriptions of vegetation mapping with lidar seem to rest on the assumption that lower vegetation height equates with slower vegetation. Has anyone developed a reliable method of detecting thicker understory beneath a forest canopy?
Apr 13, 2012 12:15 AM # 
igor_:
So far, in my limited experience, I've been making a grid from all the last returns and rendering a half-meter resolution relief image with OL Laser, then creating a mask from that by manually painting over it in GIMP and exporting into OCAD using a custom tool.
Apr 13, 2012 1:30 AM # 
Greg_L:
You can find plenty of articles publishing various ways to estimate near-ground vegetative characteristics for forestry and ecological purposes. Almost all bin the number of data points within a given 3D cube (sometimes called a voxel) or other shape above the calculated bare-earth (ground) surface. But it's bound to be tricky, and to depend on plenty of factors including the overall point density, the vegetative density, and the exact nature of the vegetation in a given area. Results obtained in one habitat may have little applicability in another.

Anyway, it's not lower vegetation height that may correlate with "slower" vegetation; it's the density of the near-ground (0 - 2 meters) vegetation that may correlate, at least for some terrains.
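As a rough sketch of that counting step - assuming the returns have already been normalized to height above bare earth and loaded into numpy arrays (the names and thresholds here are illustrative, not from any particular tool) - the binning might look like this:

import numpy as np

def band_density(x, y, h, cell=5.0, hmin=0.0, hmax=2.0):
    """Count returns with hmin <= height < hmax in each cell x cell bin."""
    keep = (h >= hmin) & (h < hmax)
    xi = ((x[keep] - x.min()) // cell).astype(int)
    yi = ((y[keep] - y.min()) // cell).astype(int)
    nx = int((x.max() - x.min()) // cell) + 1
    ny = int((y.max() - y.min()) // cell) + 1
    counts = np.zeros((ny, nx))
    np.add.at(counts, (yi, xi), 1)   # accumulate point counts per bin
    return counts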
Apr 13, 2012 1:37 AM # 
TheInvisibleLog:
"Anyway, it's not lower vegetation height that may correlate with "slower" vegetation; it's the density of the near-ground (0 - 2 meters) vegetation that may correlate, at least for some terrains."
That has been my approach so far. I have selected the returns above ground and less than 2 metres and taken the density of these. Being highly uninformed in this area, I wondered if there was a better approach.
Apr 13, 2012 1:50 AM # 
Greg_L:
Other than varying your parameters until they give results closest to what a good O'map of the area shows, so you can then apply those settings to a similar but as-yet-unmapped area, I'm not aware of any better approach using (aerial) LiDAR on its own.

It is possible to combine aerial LiDAR in various ways with other datasets, including infrared data, and laser scanning itself is also possible horizontally using side-scanning systems to get density measurements. I'm not aware of anyone who's gone to that much trouble for an O'map, however.
Apr 13, 2012 2:36 AM # 
eddie:
I select all returns between 0.5-2.5 m above bare earth and then count the number (measure the density of those returns) in a 6x6 m spatial bin. I've played a bit with the bin size - it's just a trade-off between S/N and resolution, and will depend some on your post spacing. Then I resample back to my original grid spacing - usually 1 m/pix - and use a green color scale (white to green, tuned to roughly match the O colors of corresponding density). I write this image to a .bmp and use it manually as a template - it's more useful in the field than in the armchair. Trying to draw in areas from the template alone is less fruitful except for the highest-density blobs, but it's quite nice to have while field-checking. There is an example, with an existing stereophoto-produced O-map to compare to, on the last few pages of this .ppt presentation. Most of the understory under canopy here is Mt. Laurel, which is a broadleaf evergreen shrub.
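A minimal sketch of this workflow, assuming height-normalized points already in numpy arrays. The 6 m bin, the 0.5-2.5 m band and the white-to-green ramp follow the description above; the replication-based resampling and the image writing (matplotlib instead of a .bmp writer) are simplified stand-ins:

import numpy as np
import matplotlib.pyplot as plt

def green_template(x, y, h, cell=6.0, out_res=1.0, fname="veg_template.png"):
    keep = (h >= 0.5) & (h <= 2.5)                    # understory band
    xi = ((x[keep] - x.min()) // cell).astype(int)
    yi = ((y[keep] - y.min()) // cell).astype(int)
    nx = int((x.max() - x.min()) // cell) + 1
    ny = int((y.max() - y.min()) // cell) + 1
    counts = np.zeros((ny, nx))
    np.add.at(counts, (yi, xi), 1)                    # returns per 6x6 m bin
    factor = int(cell / out_res)
    img = np.kron(counts, np.ones((factor, factor)))  # back to ~1 m pixels
    plt.imsave(fname, np.flipud(img), cmap="Greens")  # white -> green ramp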
Apr 13, 2012 2:48 AM # 
TheInvisibleLog:
Thanks for that. The binning might be what I need.
Apr 13, 2012 8:37 PM # 
blegg:
Thanks for sharing that ppt eddie, a nice collection of examples.
Apr 15, 2012 11:22 PM # 
Tundra/Desert:
Eddie, did you try the kernel density estimator? It kinda takes the guesswork out of the bin size. You can then evaluate the density function on a grid of your choosing.
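For anyone wanting to try that, a minimal sketch using scipy's Gaussian KDE, which picks a bandwidth automatically (Scott's rule) - that is the "no bin-size guesswork" part. Here x and y are assumed to be the coordinates of the understory returns; the evaluation grid is deliberately coarse because gaussian_kde is slow on full tiles:

import numpy as np
from scipy.stats import gaussian_kde

def kde_density(x, y, res=2.0):
    """Evaluate a 2-D Gaussian KDE of the return positions on a res-metre grid."""
    kde = gaussian_kde(np.vstack([x, y]))             # bandwidth via Scott's rule
    gx = np.arange(x.min(), x.max(), res)
    gy = np.arange(y.min(), y.max(), res)
    xx, yy = np.meshgrid(gx, gy)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)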
Jun 12, 2013 11:35 AM # 
Terje Mathisen:
I have worked on this problem for a while now. As Eddie writes, you have to work with sufficiently large bins so that you get significant numbers of returns.

As my sampling bin I use a weighted 10-meter circle, i.e. the central part counts more than the periphery.

The best estimator I've found for actual vegetation classes is to split the returns into ground/low/medium/high and use the relative percentage in each class as the discriminator.

To refine this I started surveying: as I found areas in the forest that deserved a specific classification (white forest, dark green, green stripes, etc.), I noted down the location and the class and put these into a reference file.

The reference file (if it exists) is read in and used as the norm: For each spot in the terrain I pick the reference classification that best matches the current distribution.

This results in a very noisy image, so I also run a second low-pass filter which for each area picks the majority-vote winner, i.e. the most common classification.
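A much-simplified sketch of that classification step: per cell, compare the fractions of returns in the ground/low/medium/high bands against surveyed reference signatures, take the closest match, then apply a majority filter. The reference values and the 3x3 window below are placeholders, not the actual parameters from the article:

import numpy as np

# placeholder reference signatures: class -> (ground, low, medium, high) fractions
REFS = {
    "white":        (0.55, 0.05, 0.10, 0.30),
    "green_stripe": (0.35, 0.25, 0.20, 0.20),
    "dark_green":   (0.15, 0.45, 0.25, 0.15),
}

def classify(fractions):
    """fractions: (ny, nx, 4) per-cell band fractions -> (ny, nx) class names."""
    names = list(REFS)
    sigs = np.array([REFS[n] for n in names])                        # (k, 4)
    dist = np.linalg.norm(fractions[..., None, :] - sigs, axis=-1)   # (ny, nx, k)
    return np.array(names)[dist.argmin(axis=-1)]

def majority_filter(labels):
    """3x3 majority vote to suppress single-cell noise."""
    out = labels.copy()
    ny, nx = labels.shape
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            vals, counts = np.unique(labels[i-1:i+2, j-1:j+2], return_counts=True)
            out[i, j] = vals[counts.argmax()]
    return out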

I recently gave a presentation about this at the Norwegian Mapping conference and wrote an article about it which includes links to all the source code:

http://tmsw.no/mapping/basemap_generation.html
Jun 12, 2013 1:32 PM # 
Tundra/Desert:
Bins vs. kernels

If I were to throw ideas out there without doing any actual work, I'd say this (better math people, correct me): the problem is that you don't know exactly which characteristics of the lidar returns correspond to vegetation density as a human interprets it.

So, make some educated grouping first, e.g. returns 0 to 0.5 m, returns 0.5 m to 1.5 m, returns 1.5 m to 2.5 m, returns greater than 2.5 m. Calculate the kernel density for each set, for an intelligently chosen normal kernel, say 10 m or so in size.

Then, you are trying to get at a (hopefully) linear combination of the four or so densities that represents runnability. So do least-squares with your human-surveyed runnability data. The more surveyed data, the better the fit. You can also go nonlinear, etc.
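A quick sketch of that fit: the per-band densities at the surveyed spots form the design matrix and the hand-assessed runnability is the target. The array names and the intercept column are assumptions about how one might set this up, not a tested workflow:

import numpy as np

def fit_runnability(band_densities, runnability):
    """band_densities: (n_spots, n_bands) KDE values at surveyed spots;
    runnability: (n_spots,) surveyed values. Returns per-band weights + intercept."""
    A = np.column_stack([band_densities, np.ones(len(runnability))])
    coefs, *_ = np.linalg.lstsq(A, runnability, rcond=None)
    return coefs   # apply to the full density grids to predict runnability everywhere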

My intuition says there will be less need for multiple smoothings this way, and small features are less likely to be missed. Maybe next year I'll try this myself.
Jun 12, 2013 8:39 PM # 
blegg:
I have had reasonable success turning tree-canopy DEMs into vegetation maps that can distinguish forest from chaparral from bare ground. I did this using the FIJI trainable segmentation algorithm (now supplanted by the Advanced Weka Segmentation algorithm).

This particular algorithm only works on raster data (like DEMs), but it's quite effective.

This "trainable segmentation" is not so different from the linear combination analysis that T/D is suggesting. What is interesting though, is that the range of pixel characteristics it looks at are not just intensity, but various "texture" properties such as local gradient, variance, etc...

The cool part is, the operator doesn't have to choose which texture properties are most important. You simply give the computer a list of texture features to consider and select a few pixels that you "know the answer" for. The computer then decides how to segment the image most effectively.

This can provide MUCH better segmentation than simple intensity thresholds, and can really cut down on the noise problems that Terje described. I'm sure that similar approaches could be developed for processing LIDAR data. I believe the newest version can process multi-channel data (RGB), and so by feeding it a proper selection of raster data, you could do some really nice segmentation.
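This isn't the FIJI/Weka plugin itself, but an analogous sketch of the idea: derive a few texture features (local gradient, variance, smoothed height) from a canopy-height raster, train a classifier on a handful of hand-labeled pixels, and predict the rest. The feature choice and the random-forest classifier are illustrative assumptions:

import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def features(chm):
    """Per-pixel features from a canopy-height raster: height, gradient, local variance, smoothed height."""
    gy, gx = np.gradient(chm)
    grad = np.hypot(gx, gy)
    local_var = ndimage.generic_filter(chm, np.var, size=5)
    smooth = ndimage.uniform_filter(chm, size=5)
    return np.dstack([chm, grad, local_var, smooth]).reshape(-1, 4)

def segment(chm, labeled_idx, labels):
    """labeled_idx: flat indices of hand-labeled pixels; labels: their classes."""
    X = features(chm)
    clf = RandomForestClassifier(n_estimators=100).fit(X[labeled_idx], labels)
    return clf.predict(X).reshape(chm.shape)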
Jun 12, 2013 9:02 PM # 
igor_:
I've been using a threshold-based approach, probably similar to Terje's, with 5 m radius point-height histograms followed by 10 m radius class histograms, using smooth weighting depending on the distance. I don't have much time to implement anything more involved or ML-based, really. The point-height histograms take the longest to compute; otherwise it is okay - no manual work/clicking from lidar to OCAD, for an okay first-approximation vegetation map, a great improvement over what I used to do.
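A small sketch of the distance weighting mentioned here - points near the cell centre count more than points near the edge of the search radius. The linear falloff and the band edges are just one plausible choice, not the actual parameters used:

import numpy as np

def weighted_height_histogram(dists, heights, radius=5.0, bins=(0, 0.5, 1.5, 2.5, 40)):
    """dists: distances from the cell centre; heights: heights above bare earth."""
    w = np.clip(1.0 - dists / radius, 0.0, None)       # 1 at the centre, 0 at the radius
    hist, _ = np.histogram(heights, bins=bins, weights=w)
    return hist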
Jun 12, 2013 9:21 PM # 
Jagge:
The problem I had was not having any points at 1-5 meters at all, even where it was dense bush all the way. There were so many branches that there weren't enough gaps, so the interesting elevation range ended up being a no-return dead zone, described here:
https://www.e-education.psu.edu/lidar/book/export/...

What I did was a formula/algorithm for counting part of the higher hits if the forest roof is low enough:

http://sphotos-g.ak.fbcdn.net/hphotos-ak-ash3/p480...

So here the dark blue points are counted in but the light blue points are not. At the bottom of the image there is a point cloud visualization of the problem. This approach fixes it well enough, or at least partly.

I also compare the hits inside my pseudo-voxel against the hits below a certain elevation (~ground) to determine how green it is - sort of points in my voxel against points that got through my voxel. Somehow this gave me a more balanced result between green under tall trees and just low green forest.
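A minimal sketch of that ratio, with made-up band limits: greenness as the share of returns that stopped in the understory band relative to the returns that made it below it:

import numpy as np

def greenness(h, lo=0.5, hi=2.5, ground=0.3):
    """h: heights above bare earth for the returns inside one cell/voxel column."""
    in_band = np.count_nonzero((h >= lo) & (h <= hi))
    through = np.count_nonzero(h < ground)              # roughly, what got through
    return in_band / max(in_band + through, 1)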

The remaining problem is deciduous forest vs. evergreen. Scanning is done when there are no leaves, so the evergreen easily dominates the green, and to get some deciduous green, the evergreen part easily turns too green. I have played with intensity and such, but it becomes too scientific if I need to detect forest types first and then detect green with different parameters based on that.
Sep 11, 2014 7:13 PM # 
Greg_L:
A quite useful review of the LiDAR extension available for Global Mapper has just been published here.

Since Global Mapper is one of the more affordable GIS data processing tools, seeing how to classify vegetation features (as well as other features like buildings) semi-automatically from LiDAR data using this extension is one more step towards being able to generate more useful foot-O basemaps or even "almost-ready-to-use" O'maps for sports like MTBO ...
