Is it possible to copy data into a new table from other tables without losing progress if the process is interrupted in the middle?

I'm thinking of plpgsql code that reads data in chunks and commits after every N inserts. If the process stops in the middle, a NOT EXISTS (...) condition makes it possible to add only the missing data by running the process again.
I think this is an important pattern, and there might be direct support for it with the syntax:

INSERT INTO ... SELECT ... WHERE ... AND NOT EXISTS (...)

The copy can be interrupted by a constraint violation or by a user pausing the operation. Either way, we don't want to lose progress, and we want to be able to resume the process (with the help of the NOT EXISTS (...) guard).
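The chunked-commit idea described above can be sketched as a procedure like the one below. It is only a sketch: the table names src and dst, their columns, and the id key are placeholders, and it assumes PostgreSQL 11 or later (where COMMIT is allowed inside a procedure):

```sql
-- Hypothetical schema: copying src(id, payload) into dst(id, payload).
CREATE PROCEDURE copy_in_chunks(chunk_size int DEFAULT 10000)
LANGUAGE plpgsql
AS $$
DECLARE
  rows_copied bigint;
BEGIN
  LOOP
    -- The NOT EXISTS guard makes the procedure restartable:
    -- rows already committed into dst are skipped on a rerun.
    INSERT INTO dst (id, payload)
    SELECT s.id, s.payload
    FROM src AS s
    WHERE NOT EXISTS (SELECT 1 FROM dst AS d WHERE d.id = s.id)
    LIMIT chunk_size;

    GET DIAGNOSTICS rows_copied = ROW_COUNT;
    COMMIT;  -- keep the progress made so far, even if a later chunk fails
    EXIT WHEN rows_copied = 0;
  END LOOP;
END;
$$;
```

Run it with CALL copy_in_chunks(); outside a transaction block; rerunning it after an interruption copies only the missing rows. Note that the anti-join is re-evaluated for every chunk and gets more expensive as dst fills up; if the key is ordered, a keyset condition such as WHERE s.id > last_copied_id scales better.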
With an INSERT command, even if it's a bulk insert of many records, if an error occurs during it, then all of the changes from that command are rolled back and nothing is committed. Therefore there is nothing to resume from, and you'd want to start the same INSERT over again from the beginning. This is part of the ACID principles of a relational database system. So it's unclear what you're asking if you're seeing otherwise... are you saying your intention is to do multiple INSERTs and you want to prevent issues if that process is interrupted between INSERTs? – J.D. Dec 06 '21 at 12:36

I have an INSERT ... SELECT query that might fail after two hours. In that case I'll "fix" data integrity and want to continue the process without waiting another 2 hours. – gavenkoa Dec 06 '21 at 12:55

Seems I have to rewrite the SQL query into plpgsql and commit bulks of inserts... – gavenkoa Dec 06 '21 at 12:57

Right: with a single INSERT it's all or nothing, and you'd have to start from the beginning again if it fails in the middle; with multiple INSERTs you could have failures between them, after some of the data was committed. Just curious, how much data is being inserted?... Perhaps the reason for the slowness can be performance tuned, which may be a more favorable solution for you? – J.D. Dec 06 '21 at 13:00
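The atomicity point raised in the comments can be illustrated with a minimal session (src and dst are again placeholder names):

```sql
-- A single bulk INSERT is atomic: if any row fails a constraint,
-- the entire statement is rolled back and dst is left unchanged,
-- so there is no partial progress to resume from.
BEGIN;
INSERT INTO dst SELECT * FROM src;   -- suppose this hits a constraint error
-- ERROR: the transaction is aborted; dst has gained zero rows
ROLLBACK;
-- Progress survives only across separately committed statements,
-- which is what the chunked plpgsql approach provides.
```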