Has anyone had success recently with the lastools-based batch system put together by Terje Mathison?
Described here: http://tmsw.no/mapping/basemap_generation.html
A few years old now. I am trying to evaluate it against other approaches (such as Karttapullautin) as an aid in basemap production for a couple of current projects. It is partially working, but fails to generate the classified tiles. The end of the process produces the following:
Classified tiles generated, ready to generate contours, slope/cliff/dem and vegetation
Usage: gen-dxf-iso.pl lazfile|file_with_list_of_laz_files [output_file]
Usage: gen-slope-asc.pl lazfile|file_with_list_of_laz_files [output_file]
Could Not Find C:\data\orientering\tiles_classified\tile*.asc
Usage: gen-laz-veg.pl lazfile|file_with_list_of_laz_files output_file
Usage: veg2dxf.pl giffile|file_with_list_of_gif_files
Could Not Find C:\data\orientering\tiles_classified\tile*.gif
Could Not Find C:\data\orientering\tiles_classified\tile*.gfw
I have tried with a few different laz files from different areas and projects with the same result. It says the classified tiles were generated, but the output folder is empty. Any insights appreciated. cheers,
Interested here as well. So commenting to keep it in the current discussion list.
I have been reworking this process significantly, mostly in order to handle new higher-density lidar scans: The new adaptive process uses 256x256 tiles (with a 32 m buffer), but then splits those tiles into 128x128 or even 64x64 when the original scans are very high density.
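A rough sketch of what such an adaptive split could look like (the density thresholds and function name here are illustrative, not the pipeline's actual values):

```python
# Illustrative sketch of adaptive tile sizing: pick a smaller tile
# edge length when the scan's point density is high. The thresholds
# are made-up examples, not the real pipeline's values.
def pick_tile_size(points, area_m2):
    """Choose a tile edge length (m) from the scan's point density."""
    density = points / area_m2      # points per square meter
    if density > 20:
        return 64                   # very high density scans
    elif density > 5:
        return 128
    return 256                      # default tile size
```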
In your case I'm guessing this is a PATH issue, i.e. in addition to the LAStools binaries you must also make sure that all the .pl files are either in the local directory or somewhere in the PATH, and you must also check that Windows is set up to treat .pl files as executable.
Thanks Terje. I was assuming you had probably worked further on this system since 2013.
I had all the files, scripts etc in the same directory, so I don't think it's that. I will check the .pl executable issue.
Right now we still have 30-40 cm snow in the woods here, but in a week or 2 it will be mapping season again...
Maybe changes in the LAStools command line options are making it fail. At least the clip option changed at one point and made KP fail, before I made it try the other syntax too if the first try failed.
Thanks for the feedback ideas. Not sure what to try next... I am running Windows 7, and the Perl scripts are all correctly associated on the system as far as I can see. In terms of possible PATH issues, I added .pl to the path environment variable, but this made no difference. The batch process 'ran' (as before), but produced no classified tiles. I was using a single 1 km2 laz tile for this test.
A single LAZ tile?
My batch process used to depend on first having lastile split the input area into multiple (in both x & y direction) tiles, but since the default was 250x250 m that should have been OK.
Do you get a tiles_raw directory with a bunch of tile_xxx_yyy.laz files?
Ditto for tiles_classified?
Is the link above the correct place to download the latest tool set?
@JimBaker: Yes, I just checked: The zip posted there is from Nov 15th 2016, it should contain a consistent set of files from my previous pipeline.
The new version is not complete, most crucially it is completely missing a ground point search, i.e. it depends on having already classified input files like we're currently getting in Norway.
I get the tiles_raw directory with 250x250m tiles. They all seem fine when opened and examined etc.
The classified tiles directory is created, but is empty. No files.
Tried this with laz files from several areas and sources and always the same result.
Rob H: If you will setup something like TeamViewer on your PC and text me the link and a suggested time (From now until 22:00 Oslo time) I can try to connect to your machine and then we can figure out together why it isn't working for you.
My cell phone is +47-91728852, or you can email me at terje.mathisen (at) tmsw.no
Thanks I appreciate the offer. I will be in touch. Assuming we figure it out, I/we can post the learnings back here.
Rob & I were able to connect and we found the problem in ~10 minutes:
It turned out that when Rob had downloaded the lastools.zip package a year ago either his download was broken or he accidentally omitted one of the critical binaries from that package when he unpacked it, i.e. las2txt.exe.
Both my batch pipeline and Karttapullautin use las2txt to convert a binary las/laz file into text so it becomes easy to parse, and in Rob's case, with the program missing, he never got any output data.
I guess I should invest in a little up front checking in my script to make sure that all the critical parts are available!
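Such an up-front check is only a few lines; here is an illustrative sketch (the REQUIRED list is my guess at the critical binaries, not a definitive inventory):

```python
# Sketch of an up-front sanity check: verify the critical LAStools
# binaries are actually reachable before starting a long batch run.
# The REQUIRED list is illustrative, not the pipeline's real inventory.
import shutil

REQUIRED = ["las2txt", "lastile", "lasground_new", "las2las"]

def missing_tools(required=REQUIRED):
    """Return the subset of required binaries not found on the PATH."""
    return [t for t in required if shutil.which(t) is None]
```

A batch driver could call missing_tools() first and abort with a clear message, instead of "succeeding" while silently leaving tiles_classified empty.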
Big thanks to Terje for helping troubleshoot this. So - the learning is to check/refresh your lastools package when trying this batch process. It's a cool system... many output files to manage, so if you have a big project this may become challenging? But I especially like the fact that the output vegetation data is in vector form, and so easily manipulated in whatever mapping package you prefer.
Maybe Rob H, when trying Karttapullautin, moved las2txt.exe to another folder (the instructions suggest that, even if it's not necessary; it only needs to be in the PATH). And that's why it was missing.
But I especially like the fact that the output vegetation data is in vector form, and so easily manipulated in whatever mapping package you prefer.
Is it now? That's a new feature since I last updated my copy of Terje's package plus Lastools. I'll have to go grab the latest version for next time I make a basemap. Maybe by then ground point detection will be working for the version he's now working on.
BTW, Terje, I used your tools to make the basemaps for both areas that will be used in the QOC-hosted US Classic Champs in November. Also a new map that I fieldchecked last summer in Gatineau Park, north of Ottawa.
@jtorranc: Great to hear that you're getting some use out of my sw, after the race I'd appreciate a (link to) a copy of the finished map!
I wrote my raster-to-vector vegetation conversion sw quite early, but the first versions were not stable enough/required too much "hand-holding" for me to include any of them in the mapping.zip package.
The current version is still not quite general, in particular it depends on knowing that the initial raster vegetation bitmap will always use a 2x2m cell size.
To get from raster to vector I first process the image and create individual 2m long line segments along all cell boundaries with different colors on each side, with a virtual white boundary around the current tile.
This list of segments is indexed twice, by each of the end points.
Next I join those 2m horizontal & vertical line segments together into polylines, but only for junctions where exactly two segments meet, so at the end I have a bunch of polylines and a set of junction points where 3 or 4 of the lines meet.
I.e.:

    foreach junction point (x,y) in the index:
        if (nr of lines meeting here == 2) then
            remove both lines from the index
            merge them into a single polyline
            stuff the endpoints of that polyline back into the index

Repeat until there are no more junctions to merge.
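For illustration, here is a rough Python sketch of that merge loop (my own reconstruction of the idea, not the actual Perl code):

```python
# Sketch of the junction-merge step: polylines indexed by both of
# their endpoints; wherever exactly two polylines meet, join them.
# True junctions (3 or 4 lines) and dead ends are left alone.
from collections import defaultdict

def merge_at_junctions(polylines):
    """polylines: list of point lists, e.g. [[(0,0),(2,0)], [(2,0),(2,2)]]."""
    lines = {i: list(p) for i, p in enumerate(polylines)}
    index = defaultdict(set)                  # endpoint -> set of line ids
    for i, p in lines.items():
        index[p[0]].add(i)
        index[p[-1]].add(i)
    changed = True
    while changed:
        changed = False
        for pt in list(index):
            # refresh: keep only lines that still exist and still end here
            ids = [i for i in index[pt]
                   if i in lines and pt in (lines[i][0], lines[i][-1])]
            index[pt] = set(ids)
            if len(ids) != 2:
                continue                      # junction (3+) or dead end
            a, b = ids
            pa, pb = lines.pop(a), lines.pop(b)
            if pa[-1] != pt:
                pa.reverse()                  # orient a to end at pt
            if pb[0] != pt:
                pb.reverse()                  # orient b to start at pt
            lines[a] = pa + pb[1:]
            index[lines[a][0]].add(a)
            index[lines[a][-1]].add(a)
            changed = True
    return list(lines.values())
```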
At this stage I also record the color (vegetation type) used for each patch.
The next step is to low-pass filter each polyline while keeping the endpoints fixed, i.e. I round off all 90 degree corners so that for instance a stair-step pattern will turn into a 45 degree slope.
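A simple way to get that effect is a neighbour-averaging pass that pins the endpoints; this sketch is illustrative, not the filter the pipeline actually uses:

```python
# Sketch of low-pass filtering a polyline while keeping its endpoints
# fixed: each interior vertex moves toward the average of its two
# neighbours, so a stair-step pattern relaxes toward a 45-degree slope.
def smooth_polyline(points, passes=1, weight=0.5):
    pts = [list(p) for p in points]
    for _ in range(passes):
        out = [pts[0]]                        # first endpoint fixed
        for i in range(1, len(pts) - 1):
            ax = (pts[i - 1][0] + pts[i + 1][0]) / 2.0
            ay = (pts[i - 1][1] + pts[i + 1][1]) / 2.0
            out.append([pts[i][0] + weight * (ax - pts[i][0]),
                        pts[i][1] + weight * (ay - pts[i][1])])
        out.append(pts[-1])                   # last endpoint fixed
        pts = out
    return [tuple(p) for p in pts]
```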
Finally I go through the list of vegetation patches and determine which polylines enclose each patch so that I can output a single area object for each of them. At this stage I must also determine if I have any holes or islands, i.e. a green patch in the middle of an open yellow area, and if so, record this as a hole in the surrounding vegetation type. (BTW, this last part required several rounds of back & forth with the OCAD developers in order to come up with an encoding which follows the DXF standard and which OCAD can understand.)
What about compatibility with Open Orienteering Mapper, which I suspect will become the predominant O mapping tool (due to a number of advantages)?
That already works in the development version of OOM. I'm advocating OOM + open lidar and either local community vector data or OpenStreetMap for the topo stuff, in order to have a totally free mapping solution.
Off topic, did they solve the problems of importing shp files?
The GDAL module which is what allows dxf area and areas with holes to be imported should also handle Shape.
@Terje - I'll have to find a link for the map in Gatineau Park, which was used last fall. Not sure there's RouteGadget but maybe it's in someone's DOMA. In the meantime, I neglected to mention I also used your software to make a basemap of a state park here in MD, which we just held an event on without doing any field checking beyond what I recorded with my GPS during two visits to the park to design the courses - this was practical since the map is lousy with charcoal burning platforms, which show up beautifully in the slope raster your software produces. You can see the map, at least the portion used this past Sunday (there's a lot more of the park, mostly to the west and north) at http://qocweb.org/routes/cgi-bin/reitti.pl?act=map...
@jtorranc: This one, I suppose?
I've started wondering if the near future of much O mapping is lidar plus GPS trails, no other field work. Were the boulders captured using lidar? How accurate and complete were they? (I've noticed the cliffs on my lidar basemap are quite complete.)
The map that jtorranc made worked without fieldwork largely because of the nature of the vegetation, which was a mix of incredibly open hardwood forest with almost unlimited visibility, and well-bounded mountain laurel, which showed up well enough to be pretty accurate on the map. (There was also greenbriar in spots, which wasn't mapped, and it would have been a bit better if it was, but the courses generally stayed away from it.) And then there was rock all over the place which you pretty much ignored.
It still could prove useful to create navigation sport maps without thousands of dollars or hundreds of hours of investment. If I know what a map includes and what it doesn't, I can generally adapt. Variety in terrain is useful. Our current cost model makes that hard.
Oh, definitely useful. It wasn't good enough for a championship, but it was as good as a lot of local meet maps created by volunteers that I've run on. It probably wouldn't work well in some terrain, but in a lot of places it would be just fine, I think.
I'm amazed at how good the contours, cliffs and vegetation (yellow and light green, but not vertical light green) are for the area I'm mapping using a lidar base. Very little change needed; no change would be adequate. Trails and power line were easy to put on by GPS. It's the boulders that are proving time consuming. If I just punted those, the club would have a nearly instant map. I'll probably soldier on, but the effort for a feature that's mostly too common to be terribly useful is making me think whether I should just map most of the boulder areas as boulder field, and get a map out with far less effort. Given how much free lidar Colorado has, I wonder how many nearly instant maps it could have. I'd be happy to orienteer in a new area even if the boulders were missing.
@JimBaker: If you have access to high-density lidar you might be able to locate many of those boulders as small dot knolls on the base map my sw generates. I have seen that previously (from Colorado), as well as boulder fields that turn up as green stripes because the lidar looks like low brush.
A glance at ortho photos might suffice to convert a bunch of these to the proper object type.
I'll try running your tools. I recall that you processed the lidar for my current project, and it showed a few boulders (out of hundreds). Aerial photos do show more of the rock, but not how high of course, and thus show a mix of mappable and not.
The main problem is that boulders normally have exactly the same lidar footprint as a small round bush, and such a bush is almost certainly too insignificant to show on the map. :-(
Very large boulders however can show up as ground points which I then convert to a dot knoll since the area is too small for a form line contour.
I have considered doing multiple lasground_new runs, with hand-tweaked parameters to avoid interpolating the ground underneath boulders, but this will also lead to all dense bushes turning up at the same time. It is possible that lidar intensity would be able to differentiate between a naked rock surface and leaves, but not if the rock is (partly) moss-covered.
Do bushes not have a sufficient subsequent reflection beneath them? I thought that vegetation was distinguished by multiple reflections, the last being the ground? Are bushes that dense? (In much Colorado terrain, there are few dense bushes more than knee high except in swamps, though there can be deadfall or logging, so perhaps different heuristics would work?)
Small bushes (maybe somewhat species-dependent?) try to keep all the leaves on the surface, so with maybe 2 pulses per sq meter you only get a couple of hits, and seldom any secondary returns.
Get up to 5 (or 10-20) ppm and you can start to delineate the actual rock surface, while even very dense vegetation will tend to get a few secondary returns and/or develop a somewhat fuzzy surface.
My data has ten per square meter, so maybe something could be done? There's an enormous amount of Colorado lidar with seven to ten per sq m, so this could help us enormously.
Assume it's all rock and no bushes. The fieldwork would be a lot easier if you were just deleting bushes than if you had to locate and add rock. (For that terrain, with few bushes.)
Jim, do you have a link to a typical sample of the ~10 p lidar? I can try to see if a manual tweak of the lasground_new parameters make it capable of locating these boulders as very small but steep dot knolls.
NDVI on 4-band aerials also might help find big boulders.
I've checked the first area with default settings, I can see the boulders easily on Google Earth, and at least some of them turn up as vegetation points in the 10 points per sq m dense lidar.
For this particular type of open forest I would probably just use ortho photos to identify the exact placement of each boulder, after first jogging through the area with a GPS watch and marking each interesting block with a split time.
Boulders are patches without ground-classified points but instead last returns 0-6 m above ground. So if you plot those out you get a pretty good guesstimate layer for boulder mapping.
For an instant orienteering run I sometimes, with Karttapullautin, plot those last returns with a buffer, then override them with ground points with a buffer, and draw the remainder as purple with black outline.
With appropriate buffers based on cloud density (and expected boulder sizes) the result isn't that bad, if one is prepared to see some bushes mapped in purple. Not sure whether it makes any sense here with the settings I used.
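For illustration, the heuristic could be sketched like this in Python (the cell size, the height band and the ground class code are my assumptions, not Jagge's actual KP settings):

```python
# Sketch of the boulder heuristic: flag grid cells that contain last
# returns 0-6 m above ground but no ground-classified points.
# Cell size and the exact height band are illustrative choices;
# class 2 is the conventional ASPRS code for ground.
from collections import defaultdict

GROUND = 2

def boulder_candidates(points, cell=2.0):
    """points: iterable of (x, y, height_above_ground, class, is_last)."""
    has_ground = defaultdict(bool)
    has_low_last = defaultdict(bool)
    for x, y, hag, cls, is_last in points:
        key = (int(x // cell), int(y // cell))
        if cls == GROUND:
            has_ground[key] = True
        elif is_last and 0.0 < hag <= 6.0:
            has_low_last[key] = True
    return sorted(k for k in has_low_last if not has_ground[k])
```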
Is anyone aware of a similar pipeline running under Linux?
The LAStools binaries run fine under Wine and I started converting Terje's batch file to bash scripts, but the Perl code keeps crashing on me. I was under the impression that perl scripts were supposed to be relatively platform independent, but apparently that's not the case.
I suppose I could bite the bullet and learn Perl, but figured I'd ask here first in case someone has already done the heavy lifting of doing this under Linux.. Thanks in advance..
Jagge, what are the Karttapullautin settings that you used to do that? Or was it special code?
I did a Karttapullautin to Linux conversion a couple of years ago (older version), and it wasn't too hard. Mostly path slash issues, which could be fixed in the code to be 'either way'. I did it so I could run in a cluster and really crank out the area. I've been planning on a redo, for some time.
Some basemap generation points are outlined on this site including how to get karttapullautin working on Linux under the "Ubuntu 16.04 Setup" section: https://xc-racer2.duckdns.org/OABC/basemap-generat...
I know the guy who did the necessary change for Linux functionality, but he was uncomfortable posting the modified pullautin.pl directly without Jagge's permission since it is his code. If Jagge says that he is okay with it being redistributed, he is happy to provide the altered code.
For Terje's tools, he ran into some issues with piping LAStools' output into the Perl scripts. After he got Karttapullautin working, he didn't really try that hard. He'd be willing to try again, but will probably wait for Terje to release the updated version...
@cmorse & @dbakker:
My perl code is indeed intended to be platform-agnostic, but there might still be an issue with the way I am using "las2txt -stdout" to pipe the output directly into my scripts:
Since all the LAStools binaries have to run under wine, pipe operations might be impacted? If so it should be trivial to split the operation into two, first sending las2txt output to a file and then reading that file instead of the pipe.
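The two-step version could look roughly like this (las2txt's -i/-o/-parse options are standard LAStools switches, but the wrapper functions and the parse string are illustrative):

```python
# Sketch: instead of piping "las2txt -stdout" straight into the script
# (which can misbehave under Wine), write to a file and read it back.
# The -i/-o/-parse switches are standard las2txt options; the wrapper
# functions and the "xyzc" parse string are illustrative.
import subprocess

def build_cmd(lazfile, outfile, use_wine=True):
    prefix = ["wine"] if use_wine else []
    return prefix + ["las2txt", "-i", lazfile,
                     "-parse", "xyzc", "-o", outfile]

def read_points(txtfile):
    # newline=None enables universal newlines, so CRLF output from the
    # Windows binary parses the same as LF
    with open(txtfile, newline=None) as f:
        return [line.split() for line in f if line.strip()]

def las_to_points(lazfile, outfile="tmp_points.txt", use_wine=True):
    subprocess.run(build_cmd(lazfile, outfile, use_wine), check=True)
    return read_points(outfile)
```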
Here is the message I got from my friend:
There were two issues I came across with the perl scripts running on linux (apart from having to append "wine " before each of the lastools commands).
1) The paths weren't being parsed properly, so removing any instances of "s/\//\\/g;" was necessary
2) Newlines weren't being parsed properly with las2txt piping, so you have to explicitly open the files with crlf line endings (see http://perldoc.perl.org/perlport.html#Newlines)
Otherwise, the perl code should just "work", presuming the lastools binaries are in Wine's PATH (which differs from Linux's) - easiest if you just put them in the same folder as the perl code.
Here is a zip file which contains all of the Perl files as well as a diff file for anyone interested in seeing exactly what was changed: https://1drv.ms/f/s!AmVFm0Oq9tQFlxmJGlXSXsebWbPv
Unless you have enough understanding of what's going on (which in my case comes from things I haven't done in many years), "s/\//\\/g;" is comically difficult to read. (For the uninitiated, the letter V does not appear in there, it's seven consecutive slashes and backslashes.)
Perl on Windows seems to accept unix style forward slashes for paths, so it might be possible to change my scripts to be effectively agnostic, I'll take a look at that.
Similarly for the newlines: I'm assuming the issue was that las2txt.exe outputs Windows style CRLF line endings, while perl on unix defaults to only expect a single LF.
I use the perl chomp() command to get rid of the line terminator, but I guess I might need to also open the pipe as an explicit 't'(ext) file? I could of course do a global remove of any CRs...
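If the pipe is kept, the "global remove of any CRs" amounts to a CRLF-aware chomp; a one-line sketch in Python rather than Perl:

```python
# Sketch of the "global CR removal" alternative: strip a trailing CRLF,
# lone LF, or lone CR from each line read from the Windows binary's
# output, unlike Perl's chomp, which by default only removes the
# platform's own line terminator.
def chomp(line):
    return line.rstrip("\r\n")
```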
David, Thank you very, very much for the linux conversions. Just ran a quick test, and aside from needing to massage my input tile through las2las, it looks like everything worked very nicely... Pulled into OOM and everything lines up with the OSM reference and orthos...
And more importantly, thanks to Terje for writing the code in the first place, looking forward to playing with it some more..
How does OCAD12's own lidar processing compare to the methods discussed here? Anyone have experience?
OCAD 11 & 12 went from a toy to something you can use at least for small and intermediate-sized projects, and possibly large ones as well:
The key change they made was to go to tile-based processing since this meant that the processing time was linear in the project size instead of quadratic.
The algorithm they use internally for contour generation is still big O(n*n) for each internal tile, but since n is now fixed to their tile size, it doesn't blow up on larger projects.
BTW, using tiles was my third suggestion after evaluating their source code/algorithms back around the ocad 10/11 change-over: My first and second suggestions would have made the actual contour processing at least an order of magnitude faster, even for a small tile. (In my testing I found that I could generate 1m contours for a 10 sq km plot in about 6 seconds, but in my batch scripts I simply offload that step to one of the LAStools binaries.)
Besides the processing speed, what functionality does it have? What control does the user have over the contour generation? Does it support processing for cliffs, vegetation density or any other symbols? Does it support generating a shaded relief image?
What is the workflow and ease-of-use like?
Does it support processing for cliffs, vegetation density or any other symbols? Does it support generating a shaded relief image?
Yes to all the specifics above. As to other symbols, my experience is it gives you more dot knolls than actually exist and also a few cup depressions (slightly fewer, IIRC, than are actually out there but I haven't used it on any terrain where small depressions and pits are common).
What control does the user have over the contour generation?
I'll be interested in Terje's answer. I found it easy to manipulate the contour interval for index contours, main contours, and formlines. I haven't tried delving deeper to see what hard-coded parameters there might be that affect the appearance of the contours - smoothed versus in all their jagged glory.
What is the workflow and ease-of-use like?
Lacking an agreed standard, all I'll say is it's pretty easy to use, including use of the various things in LAStools to prepare your LiDAR data, provided you have a basic level of comfort with doing things from a command line.
ETA: Sorry, didn't read attentively enough/far enough back and thought pi was asking about Terje's process. I'm only somewhat familiar with OCAD capabilities through OCAD 11, when it could do contours, a slope image, and, IIRC, cliffs, but not vegetation. I ought to take the trial of OCAD 12 for a spin myself just to be aware where it stands.
Depending on the format/coordinate system/datum of whatever LiDAR data you can get your hands on, my experience with OCAD 11 makes me think you might be well advised to use LAStools or QGIS or something to get your data in all metric units, UTM/WGS84 before doing anything with OCAD, but maybe OCAD 12 is a lot better at all that than 11.
Now I'm confused. Ocad's own lidar processing surely is not done through command line? Or do you mean that I first have to pre-process the data with LAStools to be able to use Ocad's own processing?
@pi: If you are NOT comfortable working from the command line, my batch tools are probably not what you want to start with. :-(
Jarkko's Karttapullautin is also basically a Perl script process, but he has made it a little bit more feasible to point and click, as long as you are perfectly happy with his defaults.
If you do it all inside OCAD then you basically get contours plus digital elevation models which can then give an indication of slope angles as well as some vegetation hints but I have not looked seriously at this for a few years:
My approach when I want to create a new map is to go to the LiDAR download site (https://hoydedata.no/ here in Norway) and zoom/scroll my way to the area, select a rectangle around it and ask for a download.
When I get that 5 min to a few hours later I create a new directory for the project, typically named the same as the map to be, and then unpack the LAZ files into a subdir 'laz'.
At this point I start the map creation process from the command line:
and then I just wait anywhere from a couple of minutes to 24 hours for all the processing to happen.
At the end I open a new OCAD (or OpenOrienteeringMapper!) project and import the contours (hvaler_contours.dxf) and the first vegetation estimate (hvaler_vegetation.dxf) and load the cliff and slope background raster images (hvaler_cliff.gif & hvaler_slope.gif).
The final part consists of getting access to government topo data, here in Norway that is preferably in the form of SOSI files, but OpenStreetMap data can also be useful.
Thanks again Terje.
Well, I am very comfortable with command line, but not all mappers are computer engineers...
I'm just trying to get an idea how Ocad's built in lidar processing compares to these various command line options. Is it easier to use through a graphical user interface and does it have all the functionality I get with LAStools/Kartapullautin/OL Laser etc?
pi, you can try out OCAD 12 for free and see for yourself how its lidar function works.
I have no idea what a command line is, nor am I able to master pullautin. I have been using ver 11 for a while with lidar of CT. I can easily get contours of whatever interval I choose. Hill shading of the DEM provides an indication of cliffs, knolls, stone walls, charcoal platforms, trails and roads. Very useful to create a rough base map before venturing out. The walls and platforms were an unexpected pleasant surprise and have been accurate. Some of the walls, though, are just hints of long-ago bumps when sought in the field. As always, best to check before finalizing.
I am using a much more sophisticated vegetation model than any of the others, with the possibility to provide feedback and redo the processing in multiple stages in order to get significantly better classifications.
I also provide these objects as true ocad/oom vector objects instead of just a green raster.
Since starting this thread I see the topic has morphed around a bit but it's all interesting.
Maybe I can get some help here on a somewhat related topic.
I am having no luck using dxfmerge in Karttap. and cannot find any detailed guidance on where the files need to be in relation to the exe when using this command, folder names etc. I have nine 1 sq km tiles for which I want to merge the dxf contour files. Each tile ends up with the contour dxf file named identically (when each tile is run as a separate operation). If I run in batch mode, the individual vector files do not seem to be saved/generated. When I import the individual contour data there is overlap in the buffer areas and it is very time consuming to try and sort out. Any guidance appreciated.
I'm not on windows and not running this particular software, but would it be easier/cleaner to merge your lidar data for the 9 tiles, then generate a single dxf contour file with no overlaps?
You threw me with the "Karttap." abbreviation. I'm used to calling it "KP".
In the pullauta.ini file, set the flag to 1 to save temp files. You do NOT need to save the temporary folders unless you're doing other things.
Look in the readme.txt for how to merge the files. I never want to see an image without depressions, so I do three commands only, in the folder with the pullauta.exe executable:
pullauta pngmergedepr 1
If you didn't have the save files flag set to 1, then set it, and rerun the batch. If it was already set, you should be able to just run the commands above.
I remembered another important detail.
If you want to rerun a batch, you need to delete certain finished png files in the out directory. You can delete "out" outright, or rename it out_old or something. If you've changed the number of input lidar files, also delete the files with "list" in their filename in the main directory (with pullauta.exe). I typically sort by date, and delete all the new files (except pullauta.ini) when I rerun a batch.
I also do something I'm sure the hardcore computer folks will disapprove of. I have an unused KP installation in a KP template directory, and I just copy that directory and paste it into my "map name" working directory. Then I copy the single laz file and paste it into my working copy of my KP directory, *or* I create an "in" directory and paste laz files into that. That way the extra files in the KP directory don't mess up later runs of different areas. It's wasteful of disk space and isn't elegant, but it's quick and doesn't cross-contaminate stuff. I can also see from the file dates if an older project used an older version of KP.
That's exactly what I do too, @cedarcreek (complete re-copy of the important bits). Those files aren't big, so what the heck. I have a cleanup command that cleans all that out, if needed, too. And I do like that I can keep track of what version I'm using that way, too.
I have some small areas it would be neat to map, and I have been looking around for a solution for Linux. Being a Perl hacker, it is interesting to see it being used. :-)
I'd like to help in the area of making it more accessible. I can help turn the scripts into modules and get them packaged in shape for CPAN. Also, I'd warmly recommend using App::Cmd for it: https://metacpan.org/pod/distribution/App-Cmd/lib/...
It would be easier to have the code on github or something too.
I suppose most of these are yours, Terje, do you want to do this?
@KjetilK: I'd be happy for anyone who would like to work on making my sw more useful, so please go for it!
Great, Terje! Just to be sure that I'm not doing it wrong: What is the canonical source of your scripts, and have you included a license with them? If not, may I suggest that you write something like "under the same terms as Perl itself"?
cedarcreek: thanks for the tips. On a new project I got the merge command to work. I had not noted before that when running KP in batch mode, the names given to the files are different than if running one file/tile at a time. I think this was the source of my problem. I was assembling files generated by individual runs and trying to merge them.
cmorse: I presume if multiple laz tiles were merged before processing this would also work, but have not tried this. The merged file would be very large, and I'm not sure if this would cause memory issues. cheers.
KP automatically does bounding box processing (this is lastools' term), so the contours are seamless. There is seldom a reason to merge into one big file.
OL-Laser definitely has issues with big tiles. I've got 6GB of RAM, but OL-Laser is 32-bit, so it can't handle files bigger than 2GB (I believe). When laz files get above 60-80 MB, I have trouble processing them. Typically, tiling to 1km tiles works. I've had to go to 500m tiles several times, for really good lidar. One set of flood plain lidar I processed was so dense with points I had to thin it 90% (saving 1 point in 10) to get a reasonable tile I could process. (I've forgotten the tile size.) KP can go bigger, well over 100MB per tile, but it also has issues for really big files. KP is so good for tiles, there's no reason to merge into one big file.
A lot of the DEM algorithms I use in QGIS don't work well for tiled data. I definitely need to figure out how to work with tiles in QGIS. Right now, I tend to make DEMs/Grids with larger pixels---instead of 1m x 1m, I use 2m x 2m, 3m x 3m, or bigger for really big stuff. I've done 6m x 6m and 10m x 10m and gotten decent results, but for things like hydrology, very rough contours of big areas, and for bad lidar (few ground points). I tend to make DEMs in OL-Laser and save the grid as an ".asc" file. It's the first extension I tried that worked---I think there are other options that save as smaller file sizes.
I have had issues with lidar tiles that don't fill the tile. I think sometimes the tiles are aligned to true north (like WGS 84 in lat-long), but the data are UTM with a cant to true north. This means the lidar data has four "wedges" of no data in the lat-long tile. If you've got this, it's best to retile into UTM tiles. (In lastools (and other tiling tools I've tried), you can just retile using existing tiles---you don't need to merge to a big file first.)