From this post about binary data performance in PostgreSQL, I read that storing (large) binary data in PostgreSQL can be very slow compared to storing the same data in a file system. This seems to apply to PostGIS data such as rasters or geometries with many points, and there are many reports of slowness associated with pgRaster (e.g. "Poor performance with storing large rasters in PostGIS and visualising in QGIS", "Why is pgraster much slower?").
According to the linked post and this discussion, one reason TOAST slows down access is that it can involve compressing and decompressing data. There is also mention of using the EXTERNAL storage strategy to speed up access, since it disables compression.
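For reference, switching a column's strategy is a plain `ALTER TABLE` in PostgreSQL. A minimal sketch (the table and column names `my_rasters` / `rast` are hypothetical):

```sql
-- EXTERNAL stores the value out of line without compression;
-- the default EXTENDED compresses first, then moves the value
-- out of line only if it is still too large for the page.
ALTER TABLE my_rasters
  ALTER COLUMN rast SET STORAGE EXTERNAL;

-- Note: existing rows keep their old representation; the new
-- strategy only applies to newly written values, so a table
-- rewrite (e.g. an UPDATE of the column) is needed to convert
-- data that is already stored.
```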
Based on this, I would have imagined that compression would be disabled by default, because it prevents computing the location of a coordinate or pixel value from its index number. I guess a sequential scan has to be used (instead of random access), and the whole BLOB has to be decompressed into memory just to find an individual coordinate or pixel value.
These are just my thoughts; I don't really know much about the PostGIS implementation/design.
My questions are:
Does storage strategy have a significant impact on the speed of data access?
Is PostGIS binary data (geometry and raster) stored as EXTERNAL (uncompressed) by default?
If the impact is significant, is there a way to change the default storage strategy in PostGIS to, e.g., EXTERNAL?
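For question 2, I assume the current strategy can at least be inspected from the system catalogs. A sketch of such a check against a hypothetical table `my_rasters` (the `attstorage` codes are the standard PostgreSQL ones):

```sql
-- attstorage is a single character per column:
--   'p' = PLAIN, 'e' = EXTERNAL, 'm' = MAIN,
--   'x' = EXTENDED (the default for TOAST-able types)
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'my_rasters'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```

The same information also appears in the "Storage" column of `\d+ my_rasters` in psql.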