I'm new-ish to mapping and OCAD (v2020.5.7), and just upgraded my PC to an i9-9900K CPU with 64 GB RAM and a solid-state drive, on an ASUS Z390-P motherboard.
I've been running the DEM Import Wizard on a large merged LiDAR file (22 km^2, 5.6 GB) and find it excruciatingly slow: 20 minutes or more to process just main and index contours, and 45 minutes to an hour to process all contours plus vegetation height, vegetation base, hill shading, etc., on this file.
While all of this is going on, my CPU utilization never gets past 10%, and total memory used is under 10GB (15%). I've been chasing the various fixes the internet offers for "slow computer, low usage," etc., to no avail. I've fiddled with the BIOS, XMP, fan settings, etc., etc., as much as I dare.
- Is this just an unreasonably large amount of data to try to jam through OCAD's DEM Import Wizard (regardless of the low usage information)?
I know that I can clip the big LiDAR file to a smaller rectangle specified at the beginning of the import process. This would exclude about 18% of the area, but would leave the slow-processing, low-resource-utilization question unanswered.
Grateful for any suggestions.
Thank you.
22 km^2 is a lot of data. I usually do 4 km^2 chunks (15 minutes or more each) and take a break from the computer to do something else.
Not familiar with OCAD import wizard in general, but it sounds as if it might be single-threaded. The i9-9900K is 8-core with 16 threads when hyperthreading is enabled.
You could try disabling hyperthreading in the BIOS - this will reduce your logical core count to 8 (from 16, assuming hyperthreading is enabled) but does mean that the remaining 8 cores will run something like 50% faster. What hyperthreading does is split a physical core into two logical cores, which is useful when running multiple applications at once. If you're doing heavy single-core processing, then it is detrimental.
I agree: the symptoms sound like the wizard being single-threaded is the reason. Disabling hyperthreading will just make the displayed CPU usage twice as high; it will not actually run any faster. Hyperthreading just tricks the OS into believing there are 16 cores instead of 8. I think the main advantage of hyperthreading is that core time is assigned at a lower level, where switching threads creates less overhead, so it gives a little advantage when you have more active threads running in parallel than you have cores.
Maybe you can tile the data and process the tiles; then it might process them in parallel and run much faster.
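To illustrate the tiling idea, here's a minimal Python sketch (not OCAD's actual code): split the extent into fixed-size tiles and hand them to a worker pool. The `process_tile` function is a placeholder for whatever per-tile work (contour generation, etc.) the real tool would do.

```python
# Hypothetical sketch: tile an extent and process tiles in parallel.
from multiprocessing import Pool

def make_tiles(minx, miny, maxx, maxy, size):
    """Return (minx, miny, maxx, maxy) boxes covering the extent."""
    tiles = []
    x = minx
    while x < maxx:
        y = miny
        while y < maxy:
            tiles.append((x, y, min(x + size, maxx), min(y + size, maxy)))
            y += size
        x += size
    return tiles

def process_tile(box):
    # Placeholder for real per-tile work (e.g. contour generation);
    # here it just returns the tile's area.
    minx, miny, maxx, maxy = box
    return (maxx - minx) * (maxy - miny)

if __name__ == "__main__":
    tiles = make_tiles(0, 0, 1000, 1000, 200)  # 200 m tiles over 1 km x 1 km
    with Pool(8) as pool:                      # one worker per physical core
        results = pool.map(process_tile, tiles)
    print(len(tiles))  # 25 tiles
```

With independent tiles like this, a single-threaded inner step can still keep all cores busy, which is exactly what the wizard's 10% CPU usage suggests is missing.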
Jagge is almost certainly correct. Tiling and multi-processing/multi-threading is badly needed.
I helped OCAD with their original contour-generation code, which was orders of magnitude slower than what you see now as soon as the project got into the 10 km^2 range. The fix they settled on was to process small tiles but keep the original O(n^2) (or even worse) algorithm: as long as the tile size is small enough, it still works OK. At that point in time OCAD used more than an hour for just contour generation while my own code needed less than 10 seconds, because I had a far better data structure which only needed to visit each triangular cell once, and only for the contours that would actually pass through it.
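The single-pass idea above can be sketched roughly like this (my assumption of the general technique, not Terje's actual code): visit each triangle once, skip contour levels outside its elevation range, and interpolate crossing points only on edges the level actually straddles.

```python
# Sketch: single-visit contour extraction for one triangle of a TIN.
def triangle_contour_segments(tri, levels):
    """tri: three (x, y, z) vertices; returns [(level, p1, p2), ...]."""
    segs = []
    zmin = min(v[2] for v in tri)
    zmax = max(v[2] for v in tri)
    for level in levels:
        if not (zmin <= level <= zmax):
            continue  # this level cannot cross the triangle
        pts = []
        for i in range(3):
            (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
            if (z1 - level) * (z2 - level) < 0:  # edge straddles the level
                t = (level - z1) / (z2 - z1)     # linear interpolation
                pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        if len(pts) == 2:  # degenerate cases (vertex on level) ignored here
            segs.append((level, pts[0], pts[1]))
    return segs

tri = ((0, 0, 10), (10, 0, 20), (0, 10, 20))
print(triangle_contour_segments(tri, [15]))
# → [(15, (5.0, 0.0), (0.0, 5.0))]
```

Since each triangle is touched once and only candidate levels are tested, total work scales with the number of triangles plus the number of emitted segments, rather than quadratically with the dataset.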
For my own maps, and the 25+ TB I have processed for https://mapant.no/, I use 8-12 cores at all stages, split the input data into 200x200 m tiles, and basically max out both the 8-core/16-thread CPU and the 2 TB flash SSD which I use for all the temporary data.
I realized afterward that I was basically asking a computer question and not strictly an OCAD question. Many thanks for each of your replies! You saved me from blindly trying the (pointless) Windows Repair that Microsoft recommended, and nobody wants to do that if it can be avoided.
The two apps I've used for this size of processing are OCAD and R. They are both single-threaded.
I cleverly (cough, hack) used R to take the lon/lat of my desired rectangle of LiDAR, convert it to the appropriate projection, select from among the 6,800 tiles (in one county's set) the dozen or so that contributed to the desired rectangle, and then merge them into a single file for DEM import.
I'll re-write the code so that it selects and then copies the necessary tiles to a folder, for easy tile-by-tile selection and DEM import. If that doesn't help, I'll try disabling hyperthreading.
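The tile-selection step described above boils down to a bounding-box intersection test. A minimal Python sketch (tile names and extents are made up for illustration; the real workflow reprojects the lon/lat rectangle first):

```python
# Sketch: keep only tiles whose bounding box overlaps the area of interest.
def intersects(a, b):
    """Axis-aligned bounding-box overlap; boxes are (minx, miny, maxx, maxy)."""
    return a[0] < b[2] and a[2] > b[0] and a[1] < b[3] and a[3] > b[1]

def select_tiles(tiles, aoi):
    """tiles: {filename: bbox}; aoi: bbox of the desired rectangle."""
    return [name for name, bbox in tiles.items() if intersects(bbox, aoi)]

tiles = {
    "tile_000_000.laz": (0, 0, 1000, 1000),
    "tile_001_000.laz": (1000, 0, 2000, 1000),
    "tile_000_001.laz": (0, 1000, 1000, 2000),
}
print(select_tiles(tiles, (800, 800, 1200, 1200)))
# all three tiles overlap this 400 m square around (1000, 1000)
```

The selected filenames can then be copied to a working folder, or merged, exactly as the post describes.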
Many thanks again, stay well!!
@galen, you really need to look into using the proper tools, i.e. LAStools!
lasinfo to pick up header info from each tile and store in a DB table
Then when you want to generate a map:
SELECT * from laztable WHERE minx >= ? AND miny >= ? AND maxx <= ? AND maxy <= ?
gets you the relevant tiles, which you save to a text file 'tiles.lst'
lasindex -cores 8 -lof tiles.lst
lastile -cores 8 -lof tiles.lst -tile_size 200 -refine 2500000 -odir tiles -olaz -o tile -keep_xy minx miny maxx maxy
lastile -cores 8 -refine_tiles 2500000 -i tiles\tile*.laz -olaz
Now you can start to generate proper classification, using lasheight & lasclassify, then you generate contours with las2iso.
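Terje's DB-table idea can be sketched in a few lines of Python with SQLite. The table and column names below mirror the query in his post but are otherwise illustrative; in practice the extents would come from parsing `lasinfo` output for each tile.

```python
# Sketch: store per-tile extents (from lasinfo) in SQLite, then query
# for the tiles inside a requested map rectangle.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE laztable (
    path TEXT, minx REAL, miny REAL, maxx REAL, maxy REAL)""")
con.executemany(
    "INSERT INTO laztable VALUES (?, ?, ?, ?, ?)",
    [("a.laz", 0, 0, 1000, 1000),
     ("b.laz", 1000, 0, 2000, 1000),
     ("c.laz", 5000, 5000, 6000, 6000)])

# Same containment query as in the post: tiles fully inside the rectangle.
rows = con.execute(
    "SELECT path FROM laztable "
    "WHERE minx >= ? AND miny >= ? AND maxx <= ? AND maxy <= ?",
    (-100, -100, 2500, 1500)).fetchall()
print([p for (p,) in rows])  # → ['a.laz', 'b.laz']
```

The resulting paths would then be written one per line to 'tiles.lst' and fed to the `-lof` option of lasindex/lastile as shown above.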
Nice idea with the DB for header info Terje. Thanks!
@Terje - Sverre Froyen is a big fan of LAStools as well, and he's also told me that I should get comfortable with LAStools. :-)
Wrt processing in R, I'm totally guilty of being the guy with a hammer for whom every problem looks like a nail. The lidR package has worked quite well for me to date, but I'm new at this. By default it works with 8 threads; it made no difference when I set the number of threads to 16.
@Jagge - I switched off Hyperthreading in my BIOS. It seemed much faster on the DEM Import Wizard's 1st -> 2nd screen transition, but far slower in the Summary screen. Thanks for the suggestion, though.
I tried working with the non-merged tiles (15) singly and in groups of 3 and concluded that the total time would not be reduced and that the hassle would be greatly increased.
There is another option that may help. Since the contours are the main detail you'll be using at the end of the day, and since OCAD has other good sources for vegetation boundaries, you could skip asking the program to process vegetation height, slope gradients, and the other 'nice-to-haves' and go right to the meat of the LiDAR: the contours.
If you only need contours, and have LiDAR with proper ground classification (class = 2) for the terrain surface, then you can run las2iso (or even better, the streaming blast2iso version) directly to generate the contours you need. This will be _far_ faster than OCAD or pretty much any other toolset. If you want to do it using just the free versions, though, you should first run at least a lasindex/lastile round, with the tiling set to generate ~35 m buffers around each tile: this way you avoid all visible glitches along the tile boundaries.
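The buffered-tile trick is simple to picture: each tile's processing extent is grown by the buffer so contours near a boundary also see points from the neighbouring tile, and the buffer strip is trimmed off afterwards. A tiny sketch of the extent arithmetic (the 35 m figure is from the suggestion above):

```python
# Sketch: grow a tile's extent by a buffer to avoid edge glitches.
BUFFER = 35.0  # metres

def buffered_extent(tile, buffer=BUFFER):
    """tile: (minx, miny, maxx, maxy); returns the expanded extent."""
    minx, miny, maxx, maxy = tile
    return (minx - buffer, miny - buffer, maxx + buffer, maxy + buffer)

core = (1000.0, 1000.0, 1200.0, 1200.0)  # a 200 m tile
print(buffered_extent(core))             # → (965.0, 965.0, 1235.0, 1235.0)
```

lastile handles this for you when you ask for buffered tiles; the point is simply that contour lines computed inside the core extent never sit next to a data edge.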
@Gord and Terje -
Good points, thank you both. I've been bringing in the other LiDAR-sourced info out of a habitual reluctance to say 'no' to any additional info, but it's been clear that I'm not getting much other value out of the LiDAR (at my level of processing/analysis skill, anyway). I do find the hill shading to be very helpful, though.
Wrt veg and boulders, I've found much more helpful and directly useful info from imagery (especially now that Sverre Froyen and Ron Birks have nudged me into using QGIS and Quick Map Services to get hi-res, georeferenced imagery) than from LiDAR.
There's always more to learn, though...