0

I'm trying to convert a set of tiled .la[s|z] files into one massive copc.laz file. I have dozens of files and more than one billion points to process on a regular laptop with 11 GB of free RAM. I'm using the following pipeline for 100 million points. It uses a lot of memory but is OK. But with a billion points the amount of RAM required no longer allows writing the COPC file. Is there a more efficient/correct pipeline to write COPC files with PDAL?

[
    "file_1.laz", 
    "file_2.laz", 
    "file_3.laz", 
    "...", 
    "file_n.laz", 
    {
        "type": "filters.merge"
    },
    {
        "type":"writers.copc",
        "filename":"file.copc.laz"
    }
]
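
For reference, a pipeline like this is typically saved to a JSON file and executed with the `pdal pipeline` command (the filename `merge.json` below is just an example):

```shell
# Run the merge + COPC pipeline defined above.
# Note: writers.copc must build the octree in memory, which is
# why RAM usage grows with the total point count.
pdal pipeline merge.json
```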
JRR

1 Answer


Your definition of "efficient" here is memory efficient.

Untwine spools data to a disk cache instead of holding everything in memory while sorting and building the COPC file. You can control how much disk cache it uses by selecting which dimensions are cached with the `--dims` argument.
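A minimal sketch of an Untwine invocation, assuming the tiles live in a directory called `tiles/` and that you want to keep only a subset of dimensions in the cache (exact option syntax can vary between Untwine versions, so check `untwine --help` for your install):

```shell
# Build a single COPC file from a directory of LAS/LAZ tiles.
# --dims limits which dimensions are carried through the disk
# cache, reducing the amount of temporary disk space used.
untwine --files tiles/ \
        --output_file file.copc.laz \
        --dims "X,Y,Z,Intensity,Classification"
```

Because the spatial sort happens on disk rather than in RAM, this scales to billions of points on a laptop, at the cost of temporary disk space roughly proportional to the cached dimensions.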

Howard Butler
  • Yes indeed I meant memory efficient. I was redirected to untwine in another question here: https://gis.stackexchange.com/questions/461799/apply-a-where-expression-to-a-pdal-merge-script/462272#462272. Thank you for confirming. – JRR Jun 26 '23 at 20:51