7

I have integration tests that require a test database to function.
Since tests should generally be independent, I reset the database at the beginning of each test.

I cannot use RESTORE, since some parts of the architecture (which I do not control) cache connections and will fail with a lost connection on the next call.

Right now I am creating a snapshot, then calling DELETE + INSERT on each table to synchronize the data with the snapshot. However, it takes 1 second per reset, which is way too much (150 tests = 150 seconds). I have a lot of tables, but they are almost empty, so there is no reason for it to be so slow.
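The reset described above looks roughly like this per table (a sketch with placeholder names — `dbo.Orders`, its columns, and a `snapshot` schema holding the snapshot copies are all assumptions, not the actual schema):

```sql
-- Disable FK checks so tables can be synced in any order.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;

DELETE FROM dbo.Orders;                          -- clear current data
INSERT INTO dbo.Orders (Id, CustomerId, Total)   -- re-fill from the snapshot copy
SELECT Id, CustomerId, Total
FROM snapshot.Orders;
-- (identity columns would additionally need SET IDENTITY_INSERT ... ON)

-- Re-enable constraints and re-validate existing rows.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;
```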

So how can I replace the database with its previous version in less than 1 second, without losing connections?

My next idea would be to add some kind of change tracking, since each test only affects some tables, but that would make the reset code even more complicated.

UPDATE: I added SET STATISTICS TIME ON and I am getting

SQL Server parse and compile time: 
   CPU time = 327 ms, elapsed time = 343 ms.

for my reset SP. I assume it is due to the ALTER TABLE ... NOCHECK CONSTRAINT ALL calls at the beginning of the SP. I wonder if it is possible to suppress recompilation in this case.
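If the DDL at the top of the SP is indeed what triggers the recompile, one thing worth measuring is moving those ALTER TABLE calls out of the SP's own body into dynamic SQL, for example via the undocumented `sp_MSforeachtable` helper. This is only a hypothesis to verify with SET STATISTICS TIME ON, not a confirmed fix — the DELETE/INSERT statements may still recompile because the schema of the tables they reference changed:

```sql
-- Undocumented helper: runs the command once per user table ('?' = table name).
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... DELETE + INSERT per table here ...

EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
```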

Andrey Shchekin
  • What about renaming the tables and creating new ones from the snapshot? A rename is less expensive than a delete, and you could drop the renamed tables once a day. –  Jan 24 '12 at 22:17
  • What is the fastest way to create the new ones from the snapshot? –  Jan 24 '12 at 22:24
  • Can you truncate instead of delete? –  Jan 24 '12 at 22:25
  • @Blorgbeard We need to ask whether the tables have foreign keys. –  Jan 24 '12 at 22:28
  • @AndreyShchekin A fast way is INSERT INTO ... SELECT FROM the snapshot, isn't it? That does a bulk insert. –  Jan 24 '12 at 22:30
  • @Blorgbeard Using TRUNCATE would make me drop/recreate the FKs, which I am not yet sure is worth the work, since DELETE does not seem to be the slowest part by itself. –  Jan 24 '12 at 22:45
  • @danihp INSERT ... SELECT is what I am doing right now; it does not seem fast enough, but I'll use SET STATISTICS TIME ON to see what's slowest. –  Jan 24 '12 at 22:48
  • @Andrey If you are clearing all the tables, you might be able to get away with disabling constraints. –  Jan 25 '12 at 01:14
  • @Blorgbeard Unfortunately no; TRUNCATE actually requires the FKs not to be present, rather than just disabled. Also, it seems more time is spent on query parsing than on the DELETE. –  Jan 25 '12 at 01:41

5 Answers

6

Run each of your tests inside a single transaction... at the end of each test, ROLLBACK the transaction.
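In T-SQL terms, the pattern looks like this (the INSERT is just a stand-in for whatever the test does; `dbo.Orders` and its columns are placeholders):

```sql
BEGIN TRANSACTION;

-- The test body: any INSERT/UPDATE/DELETE the test performs.
INSERT INTO dbo.Orders (CustomerId, Total) VALUES (42, 9.99);

-- Assertions run here, against the uncommitted state.

-- Undo everything the test did; the database returns to its prior state.
ROLLBACK TRANSACTION;
```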

  • Great idea if it will work for his needs! – mwigdahl Jan 24 '12 at 22:24
  • I did this in a previous project; unfortunately it has side effects: issues with transaction management in code (for example, a TransactionScope with Suppress overrides it), it does not work with FTS, and it has issues with multi-threaded tests. –  Jan 24 '12 at 22:43
  • @AndreyShchekin What about using transactions where possible, but falling back to your existing cleanup method for tests that aren't compatible with transactions? That could cut down the run time for the suite. – Michael Fredrickson Jan 24 '12 at 22:50
  • @MichaelFredrickson Unfortunately that creates a lot of work for developers, not all of whom are aware of why it is even needed. For example, component A calls component B, which calls component C, which suppresses the transaction for a certain reason; or it does an INSERT, then forks two threads to do other operations, expecting the INSERT to be complete and non-locking. All tests on component A would then have to know about that. –  Jan 24 '12 at 22:53
2

I know it's an old topic but times have changed and here is what is possible now.

  1. You could create a Docker image with the database state you need for the tests and run a new container for each test.
     Pro: you can run multiple tests in parallel, because you have more than one database.
     Pro: you can be sure you'll get the desired state.
     Con: if you don't run the tests in parallel, container startup can take a while.

  2. You could create a dump and reset the database with it before each test.
     Pro: you can be sure you'll get the desired state.
     Pro/Con: depending on the amount of data, the reset may take a while.

  3. Combine both approaches and run the tests in parallel: keep one pre-built container per thread/process/CPU and reset it with the dump between tests. Depending on the amount of data, this may be faster as well.

Depending on your setup, you could manage the containers with tools like Testcontainers if you want to make sure they are stopped afterwards.

Mariano
1

Use an SSIS package to execute an SP that deletes all the rows from each of the tables, then restores them from your snapshot source?

  • How is this different from executing the SP myself? My current DELETE and INSERT statements are already in an SP (just for convenience). –  Jan 24 '12 at 22:17
  • Running this SP takes 1 sec in Management Studio as well. –  Jan 24 '12 at 22:19
  • The difference is that after executing the SP, it can copy data from your snapshot source to your destination tables. It might not be any faster; it just automates the process. –  Jan 24 '12 at 22:33
0

If your database is just for testing, you can move to more minimally logged operations, which might speed up the DELETE and INSERT behavior substantially.

EDIT:

You wouldn't actually DELETE and INSERT if you wanted less logging on your operations. You can use TRUNCATE TABLE instead of DELETE, and SELECT INTO instead of INSERT. Unfortunately, you can't SELECT INTO a table that already exists, so you'd have to rename or drop the existing table and then SELECT INTO.
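As a sketch of the two variants (placeholder names; TRUNCATE requires that no foreign keys reference the table, and SELECT INTO creates its own target):

```sql
-- Variant A: keep the table; minimally logged clear, then re-fill.
TRUNCATE TABLE dbo.Orders;
INSERT INTO dbo.Orders SELECT * FROM snapshot.Orders;

-- Variant B: drop and recreate via SELECT INTO. Note this loses indexes,
-- constraints, and triggers, which would have to be re-created afterwards.
DROP TABLE dbo.Orders;
SELECT * INTO dbo.Orders FROM snapshot.Orders;
```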

EDIT 2:

Modified the answer to reflect the fact that TRUNCATE and SELECT INTO are not truly non-logged, although they incur less log overhead than DELETE and INSERT do.

Thanks to the folks that corrected me!

mwigdahl
  • How do I do a non-logged DELETE+INSERT in SQL Server? –  Jan 24 '12 at 22:54
  • TRUNCATE is not "non-logged" - http://sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-%281930%29-TRUNCATE-TABLE-is-non-logged.aspx I'll agree that it is LESS logged, but it is incorrect to say it is not logged. – Aaron Bertrand Jan 25 '12 at 01:05
  • As @Aaron said, there is no such thing as a non-logged user operation in SQL Server. TRUNCATE and SELECT INTO are minimally logged, meaning they can be rolled back but generate fewer log records for the work they do. – Nick Chammas Jan 25 '12 at 16:18
0

How much data across how many tables are we talking about here? Depending on the volume, doing it all in under 1 second might be a pretty tall order.

If you have 150 tests, and each test must run against the exact same set of data, you have to undo the effects of test N before you can run test N+1. However, this also implies that, as you are testing, you know the before/after state of the data. Could you incorporate something like:

  • Perform Test
  • Check Results
  • Reset the data back to the initial state by undoing only what just changed?

  • Yes I can, though it is a lot of complexity just for the sake of the tests; I would prefer a more straightforward solution if possible. This would require me to put triggers or change tracking on all tables and check it correctly. –  Jan 24 '12 at 22:50
  • Interesting comment, but it shouldn't be an answer. – Hybris95 Dec 07 '21 at 15:06
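One way to implement the "undo only what changed" idea without hand-written triggers is SQL Server's built-in change tracking (available since SQL Server 2008). A sketch with placeholder names — `TestDb`, `dbo.Orders`, and the `Id` key column are assumptions:

```sql
-- Enable tracking once, at database and then at table level.
ALTER DATABASE TestDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

-- Before a test, remember the current tracking version.
DECLARE @baseline BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

-- After the test, find which rows (by primary key) were touched,
-- so only those tables/rows need to be re-synced from the snapshot.
SELECT ct.Id, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.Orders, @baseline) AS ct;
```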