
I have a few SQL Server 2005 databases in simple recovery mode. The disk volume that holds my transaction logs had only 1 GB free. When I started a stored procedure that performs a massive delete on a single table, the log grew so much that the disk is running out of space, and the stored procedure seems to run indefinitely. How can I manage this transaction log?

maurice
    Change the delete to run in batches instead of a single execution. I usually go for deleting 10K records at a single shot. Or if your stored procedure is using a date value for the delete, then walk the delete forward in time, deleting a day of data at a time. – Jonathan Fite Dec 28 '15 at 17:07
  • I am using MSSQL 2005. Unfortunately I cannot change the way this stored procedure is run. I have already modified a parameter to delete the minimum amount of data. – maurice Dec 28 '15 at 17:09
  • Is the delete a "regular" thing you need to do, or is it just a one-off? I'm assuming as there is a SP then it's a regular thing (?) but please confirm. – seventyeightist Dec 28 '15 at 20:07
  • It's not a regular thing; it's a one-off. We need to perform a one-time bulk delete on a table that grew out of control. – maurice Dec 29 '15 at 07:41

1 Answer


You have two choices:

  1. Break the delete into chunks, as Jonathan suggested in the comments and as I have blogged about here; a sketch follows this list.

  2. Give the transaction log more space to grow to accommodate the log activity (and don't shrink it after every delete; all this does is make the next delete take longer). If you run a delete command that deletes 100K rows, the log has to be able to record all of that activity, regardless of recovery model.
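
To illustrate option 1, here is a minimal sketch of a batched delete. The table name (dbo.BigTable), the filter column (CreatedDate), the cutoff value, and the batch size are all assumptions to replace with whatever your stored procedure actually uses; the syntax is kept compatible with SQL Server 2005.

```sql
-- Minimal sketch of a batched delete (hypothetical table, column, and filter).
-- Small batches keep each transaction short, so in SIMPLE recovery the log
-- space used by a committed batch can be reused instead of growing the file.
DECLARE @BatchSize int;
DECLARE @RowsAffected int;

SET @BatchSize = 10000;   -- tune for your hardware and workload
SET @RowsAffected = 1;

WHILE @RowsAffected > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable                  -- hypothetical table
    WHERE CreatedDate < '20150101';    -- hypothetical filter

    SET @RowsAffected = @@ROWCOUNT;

    CHECKPOINT;   -- lets SIMPLE recovery mark the completed batch's log space as reusable
END;
```

And for option 2, a hypothetical example of giving the log file room to grow (the database name, logical log file name, and sizes are placeholders to adjust):

```sql
-- Hypothetical example: allow the log file to grow to 10 GB in 512 MB increments.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, MAXSIZE = 10GB, FILEGROWTH = 512MB);
```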

For a much more thorough treatment of the subject, see:

Aaron Bertrand