This is a follow-up question to my earlier question about the memory management of some PostGIS/PostgreSQL functions:
How do PostGIS functions handle buffers and external storage?
As suggested by the accepted answer, some PostGIS-related functions (e.g. those that rely on GEOS, or those in pgRouting)
do not have any special handling for data sets larger than what fits in memory. If you run them on such data sets, you'll just OOM the backend.
So, my question here is:
If I have a machine with lots of memory, how do I configure PostgreSQL/PostGIS so that I don't get an OOM error for larger data sets?
I see here that there are a few parameters:
work_mem, shared_buffers, maintenance_work_mem
For example, if I have a pgRouting query taking 100M worth of memory, which of the above should be set, and to what values, to ensure I don't get an OOM? (Assuming no other concurrent use of the database.)
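For context, here is a minimal sketch of how these parameters can be set; the values are assumptions for a dedicated machine with plenty of RAM and a single user, not recommendations:

```sql
-- Sketch only: values are assumptions for a dedicated box with lots of RAM
-- and no concurrent sessions. Note that work_mem applies per sort/hash node
-- per backend, so a single complex query can use several multiples of it.
ALTER SYSTEM SET shared_buffers = '8GB';        -- shared cache; needs a server restart
ALTER SYSTEM SET work_mem = '256MB';            -- per sort/hash operation
ALTER SYSTEM SET maintenance_work_mem = '2GB';  -- CREATE INDEX, VACUUM, etc.
SELECT pg_reload_conf();                        -- picks up everything except shared_buffers

-- Alternatively, raise work_mem only in the session running the heavy query:
SET work_mem = '1GB';
```

Note that, as far as I understand, the graph pgRouting builds lives in the backend process's own memory and is not bounded by any of these settings; they mainly affect the SQL parts of the query.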
The work_mem value will be used per process, per node in the query execution. I'd say it's something you have to adjust to the queries you run. Naturally, the general goal should be to limit the data to be cached to the very minimum; e.g. it doesn't make sense to load 100,000,000 edges into memory to route within a city. Rather, limit the graph to the proximity of the input, exclude foot paths when routing for cars, etc. – geozelot May 17 '19 at 11:56
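To illustrate the comment above, here is a hedged sketch of feeding pgr_dijkstra only a small subgraph; the table and column names, vertex ids, 5 km radius, and highway filter are all assumptions:

```sql
-- Sketch: route between two vertices while passing pgr_dijkstra only edges
-- near the endpoints, excluding footways for car routing, instead of loading
-- the whole network into memory.
-- Assumed tables: edges(id, source, target, cost, reverse_cost, highway, geom)
-- and edges_vertices_pgr(id, the_geom) as built by pgr_createTopology.
SELECT seq, node, edge, cost
FROM pgr_dijkstra(
    $$
      SELECT e.id, e.source, e.target, e.cost, e.reverse_cost
      FROM edges e
      WHERE e.highway <> 'footway'                -- exclude foot paths for cars
        AND e.geom && (
              SELECT ST_Expand(ST_Collect(v.the_geom), 5000)  -- ~5 km box, assumes a metric SRS
              FROM edges_vertices_pgr v
              WHERE v.id IN (1234, 5678)
            )
    $$,
    1234,        -- source vertex id (assumed)
    5678,        -- target vertex id (assumed)
    true         -- directed graph
);
```

The bounding-box filter keeps the graph that pgRouting materializes in memory proportional to the area of interest rather than to the whole table.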