
Is there some way to select the page size of a Postgres database?

I know that the page size is usually 8K. I also read this email thread from 2003. I don't think pg_controldata will work, as I don't have access to the filesystem that the database is hosted on.

John

1 Answer


If you don't have access to pg_controldata, there is a simple way:

test=# SELECT current_setting('block_size');
 current_setting
-----------------
 8192
(1 row)

The manual:

The following “parameters” are read-only, and are determined when PostgreSQL is compiled or when it is installed. [...]

block_size (integer)

Reports the size of a disk block. It is determined by the value of BLCKSZ when building the server. The default value is 8192 bytes. The meaning of some configuration variables (such as shared_buffers) is influenced by block_size. See Section 19.4 for information.
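Since block_size is an ordinary (read-only) parameter, you can also read it with SHOW, or look it up in the pg_settings view alongside its description. A quick sketch (the 8192 shown is what a default build reports):

```sql
test=# SHOW block_size;
 block_size
------------
 8192
(1 row)

test=# SELECT name, setting, short_desc
test-# FROM   pg_settings
test-# WHERE  name = 'block_size';
```

All three forms (current_setting(), SHOW, pg_settings) report the same value; pg_settings is handy when you want the metadata too.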

To verify

Create a dummy table with a single small row, so that exactly one data page is allocated. Then check the size of the "main" relation fork with pg_relation_size():

test=# CREATE TEMP TABLE foo AS SELECT 1 AS id;
SELECT 1
test=# SELECT pg_size_pretty(pg_relation_size('pg_temp.foo'));
 pg_size_pretty
----------------
 8192 bytes
(1 row)

So the page size is 8 kB, which is hardly surprising, as you mentioned. The manual:

Every table and index is stored as an array of pages of a fixed size (usually 8 kB, although a different page size can be selected when compiling the server).
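Since every relation is an array of fixed-size pages, you can also express a relation's size as a page count by dividing by block_size. A small sketch, reusing the dummy table pg_temp.foo from above (the single-row table occupies exactly one page):

```sql
test=# SELECT pg_relation_size('pg_temp.foo')
test-#        / current_setting('block_size')::int AS pages;
 pages
-------
     1
(1 row)
```

current_setting() returns text, hence the cast to int before dividing.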

Erwin Brandstetter
  • +1 - but just as a matter of interest, why would one compile the server with a different page size? – Vérace Jun 06 '19 at 05:47
  • Never done this myself, but I imagine if you have large rows (that cannot be "toasted") it might save a bit of overhead. Not sure it would be worth the trouble. – Erwin Brandstetter Jun 06 '19 at 11:12
  • @Vérace A btree index entry is limited to about 1/3 of the block size. That could be one motivation to increase it. You might also hope to get better performance, but I've never seen that pan out myself. – jjanes Jun 06 '19 at 15:00
  • @ErwinBrandstetter - Thanks for that Erwin. You might be interested in a bonus that I'm offering Postgresql related - it should be right up your alley! – Vérace Jun 06 '19 at 15:03
  • 1
    One plausible reason is if you want to match underlying storage block size (and not vice versa, for one reason or another). Like RAID stripe size, or ZFS recordsize. Also compression ratio might increase, at least for indexes, if data is sorted in larger chunks before compression. – stox Nov 05 '19 at 17:08