
I'm trying to add an INTEGER DEFAULT 0 column to a very large (partitioned) table spread across two tablespaces.

Each tablespace is on a different disk (one on the C drive, the other on D). I get the following error:

ERROR: could not extend file "pg_tblspc/31575/PG_10201707211/31576/1155134.27": No space left on device
HINT: Check free disk space
SQL state: 53100

Both the C and D drives have a directory called pg_tblspc/31575/PG_10201707211/31576, but neither has a file called 1155134.27 (I guess it's a temporary file?).

The C drive has 70 GB of free space, the D drive 350 GB.

Question 1 - how can I work out which of the two disks is actually short of space?

Question 2 - how can I work out how much free space is actually required? An integer requires 4 bytes of storage; the table is partitioned monthly, each month has about 60 million rows, and the table holds 4 years of data. 4 bytes × 60 million × 12 × 4 = about 12,000 MB, or 12 GB, so according to my simple maths there should be enough space on either disk for all the extra storage required. So why does Postgres require more?

(postgres version 10.4)

The file structure on each disc is probably the same because each tablespace was created with very similar commands...

CREATE TABLESPACE fastDb OWNER dbOwn LOCATION 'c:/pgdata/fdb'; 
CREATE TABLESPACE slowDb OWNER dbOwn LOCATION 'd:/pgdata/fdb';

I have managed to answer question one simply by re-running the query (it takes hours) and monitoring free space from the OS - the D drive is the one running out of space.

I still can't work out how much space should be required. I've freed up 800 GB on the D drive and have switched to a SMALLINT field (2 bytes instead of 4), but I'm still running out of space.

ConanTheGerbil

2 Answers


Still can't work out how much space should be required. I've freed up 800Gb on d: drive and have switched to a SMALLINT field (2 bytes instead of 4) but I'm still running out of space.

PostgreSQL is a row-store database, not a column-store database. Adding a new column with a default means rewriting the entire table to add the new value to each row. For transactional purposes, the old table's data can't be deleted until the new table is completed and committed. It will also need to create new indexes, again keeping the old indexes until commit.
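To estimate how much space that rewrite needs, you can check the current on-disk size of the table and its indexes. A sketch, assuming a hypothetical parent table name my_big_table (on v10, each partition is a separate table, so sum over the children):

```sql
-- Total size of one relation including its indexes and TOAST data;
-- the rewrite needs roughly this much free space again.
SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));

-- Sum over all partitions of a v10 partitioned table:
SELECT pg_size_pretty(sum(pg_total_relation_size(inhrelid)))
FROM pg_inherits
WHERE inhparent = 'my_big_table'::regclass;
```

Note that the rewrite happens per partition as each ALTER is processed, but the old files for already-rewritten partitions are still held until the transaction commits.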

v11 added a fast-default feature: adding a new column with a constant default value can be done without a rewrite. It just stores the constant from the time the column was added as metadata, and knows to use that value when it finds a row with no entry for the new column. But on v10, you will need enough storage to rewrite the table and its indexes.
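A common v10 workaround (a sketch, not part of this answer; the table and column names are hypothetical) is to add the column without a default, which is a metadata-only change, and then set the default so it applies only to future rows:

```sql
-- Metadata-only on v10: existing rows read back as NULL, no rewrite.
ALTER TABLE my_big_table ADD COLUMN flags smallint;

-- Affects only rows inserted from now on; old rows stay NULL.
ALTER TABLE my_big_table ALTER COLUMN flags SET DEFAULT 0;

-- Optionally backfill old rows later, in batches, to spread the I/O:
-- UPDATE my_big_table SET flags = 0 WHERE flags IS NULL AND <batch condition>;
```

The trade-off is that existing rows hold NULL rather than 0 until they are backfilled.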

jjanes
  • Thanks @jjanes. There's another quick fix I missed. My ALTER "table...add column..." included a default value (zero), which was causing the full rewrite (adding the default to all the existing rows); removing the default removes the need to re-populate the old rows – ConanTheGerbil Jan 15 '23 at 09:16

Let's break the path pg_tblspc/31575/PG_10201707211/31576/1155134.27 into its parts:

  • pg_tblspc -> it's in a table space
  • 31575 -> tablespace oid references pg_tablespace.oid
  • PG_10201707211 -> Catalog Version
  • 31576 -> database oid references pg_database.oid
  • 1155134 -> storage oid references pg_class.relfilenode
  • 27 -> segment number: it's the 28th file (each at most 1 GB) of the relation (let's say a table).
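Those oids can be resolved against the system catalogs; for example, using the oids from your error message:

```sql
SELECT spcname FROM pg_tablespace WHERE oid = 31575;  -- tablespace name

SELECT datname FROM pg_database WHERE oid = 31576;    -- database name

-- Run this while connected to that database:
SELECT relname FROM pg_class WHERE relfilenode = 1155134;
```

The last query tells you which table (or index) the growing file belongs to, which in your case will be the new copy being written by the ALTER TABLE.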

You can use the pg_tablespace_location function to get the path of a tablespace. Here is an example for your case:

select pg_tablespace_location(31575);
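To go straight from the file name to the relation, there is also pg_filenode_relation (available since 9.4), which maps a tablespace oid and a filenode to the relation, when run while connected to the affected database:

```sql
SELECT pg_filenode_relation(31575, 1155134);
```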
Laurenz Albe
Sahap Asci