
I am using a PostgreSQL 9.4 database and have a table snapshots with 700+ million rows, of which I want to delete about 9 million. I have tried different queries to delete the records, but without success.

Query to delete all records for a specific camera:

DELETE FROM snapshots WHERE camera_id = 1248;

Another query to delete records between two snapshot_id values:

DELETE FROM snapshots WHERE snapshot_id >= '1248_201511010000000' AND snapshot_id <= '1248_20151130235959000';

What would be the fastest way to delete these records from a table with 700+ million rows?

azharmalik3
  • Are there any foreign keys pointing at the snapshots table? Can you post an EXPLAIN ANALYZE of a slow DELETE? – Josh Kupershmidt Dec 21 '15 at 14:59
  • "no success" is neither a valid error message nor a useful description of the problem. Start by providing relevant details for your performance question as advised here: http://dba.stackexchange.com/tags/postgresql-performance/info – Erwin Brandstetter Dec 21 '15 at 15:09
  • There is no foreign key on the table. The query takes too long to execute, and at the end it reports a connection timeout. – azharmalik3 Dec 22 '15 at 08:52

3 Answers


Other than breaking the operation into chunks, I don't think there's a silver-bullet method. This answer may come in handy.
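
A minimal sketch of the chunked approach, assuming snapshot_id is the primary key; the batch size of 50000 is an arbitrary assumption to tune, and an index on camera_id helps each batch find its rows quickly. Run the statement repeatedly from the client, so each batch commits in its own transaction, until it reports DELETE 0:

DELETE FROM snapshots
WHERE snapshot_id IN (
    -- delete at most 50000 matching rows per run
    SELECT snapshot_id
    FROM snapshots
    WHERE camera_id = 1248
    LIMIT 50000
);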


You can use something like pg_repack to keep the table clustered by camera_id or snapshot_id, whichever one is more problematic. This will make marking the rows as dead a much faster operation. It's probably better to use snapshot_id, especially if it is chronological and you're not inserting old snapshots.
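
For illustration, a sketch of the pg_repack invocation (run from the shell; it assumes the pg_repack extension is installed in the target database, and mydb is a placeholder database name):

# online reorganization of snapshots, ordered by snapshot_id
pg_repack --table=snapshots --order-by=snapshot_id mydb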

For deletes on the other column, you can use an index.
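
For example, if the table is kept clustered by snapshot_id, a sketch of indexing camera_id (the index name is a placeholder; CONCURRENTLY avoids blocking writes while the index builds, and cannot run inside a transaction block):

-- build the index without taking an exclusive lock on the table
CREATE INDEX CONCURRENTLY snapshots_camera_id_idx
ON snapshots (camera_id);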

Evan Carroll

Sometimes it is faster to create a new table and insert only the records you would like to keep.
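
A minimal sketch of that approach, assuming a short maintenance window and that the rows to drop are those for camera_id 1248 (the table names are placeholders; indexes, constraints, and grants must be recreated on the new table afterwards):

BEGIN;
-- copy only the rows to keep
CREATE TABLE snapshots_new AS
SELECT * FROM snapshots WHERE camera_id <> 1248;
-- swap the tables
ALTER TABLE snapshots RENAME TO snapshots_old;
ALTER TABLE snapshots_new RENAME TO snapshots;
COMMIT;
-- after verifying the new table:
-- DROP TABLE snapshots_old;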

alonk