
I am on a team that develops a product that uses several SQL Server databases. Some of them are mostly read-only: tables do not get changed, but some views and stored procedures get changed as a way of configuring the app. These views lack indexes; they are not materialized (indexed) views. Changes to the tables only happen during upgrades and involve shipping the clients new MDFs that are attached. What steps should I perform to put these databases in a "ship ready" state?

UPDATE: Note that there is a LOT of free space in these MDFs, hence the shrink followed by re-indexing.

We are doing the following currently:

  • Databases are in simple recovery mode (and we generally recommend keeping the original copy of the MDFs as the backups anyway)
  • We make the databases owned by sa and remove all logins (the deployment process adds appropriate logins)
  • We run DBCC SHRINKDATABASE on the databases (the first three of these steps are sketched in T-SQL after the scripts below)
  • We run Ola Hallengren's Integrity and Maintenance scripts:

    EXECUTE dbo.DatabaseIntegrityCheck
        @Databases = 'DATABASE_IN_QUESTION',
        @CheckCommands = 'CHECKDB',
        @ExtendedLogicalChecks = 'Y',
        @TabLock = 'Y'
    GO
    
    EXECUTE dbo.IndexOptimize
        @Databases = 'DATABASE_IN_QUESTION',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30,
        @SortInTempdb = 'Y',
        @MaxDOP = 0
    GO
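
For reference, the first three bullets correspond to roughly the following T-SQL (a sketch; DATABASE_IN_QUESTION is a placeholder, and the exact login/user cleanup will vary per deployment - app_user is a hypothetical name):

ALTER DATABASE DATABASE_IN_QUESTION SET RECOVERY SIMPLE;
ALTER AUTHORIZATION ON DATABASE::DATABASE_IN_QUESTION TO sa;
-- Drop the application's database users/logins here, e.g. DROP USER app_user;
DBCC SHRINKDATABASE (DATABASE_IN_QUESTION);
GO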
    

Should we use other parameters for Ola's scripts? Should we take any other steps? Making the database read-only would be a pain because the views and stored procedures are often modified by the end user.

asked by Justin Dearing (last edited by Andriy M)

3 Answers


If you want to distribute a copy of your database, why don't you use LocalDB?
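
If LocalDB fits, attaching the shipped MDF could look something like this (a sketch, not from the question; the instance name and paths are assumptions):

-- Connect with: sqlcmd -S "(localdb)\MSSQLLocalDB"
-- (the default instance name varies by version; 2012 LocalDB uses "(localdb)\v11.0")
CREATE DATABASE DATABASE_IN_QUESTION
    ON (FILENAME = N'C:\Data\DATABASE_IN_QUESTION.mdf')
    FOR ATTACH;
-- If no LDF is present and the database was shut down cleanly, the log is rebuilt.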

Shrinking the data file causes very high fragmentation. Running index maintenance grows the file back out. Why are you shrinking it?

Furthermore, if you're only going to ship the mdf file, why are you using DBCC SHRINKDATABASE? SHRINKDATABASE calls DBCC SHRINKFILE for every file in the database. If you're not shipping the log file, why are you shrinking it at all? Seems like a waste of time.
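
If you only care about the data file, DBCC SHRINKFILE can at least target it alone rather than touching every file (a sketch; the logical file name is a placeholder - check sys.database_files for the real one):

USE DATABASE_IN_QUESTION;
GO
-- Shrink only the data file; a 1 MB target means "as small as possible"
DBCC SHRINKFILE (N'DATABASE_IN_QUESTION_Data', 1);
GO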

Delivering just an mdf file is not a supported method for sharing a database. There's always a risk that the file cannot be reattached. Shipping a backup or at the very least the full set of files is the best way to ship a database.

– Robert L Davis
  • LocalDB might be a good option. Can we tune max memory and such? I guess clients can reattach to a full instance if they wish to. I updated my question to point out that there is over a gig of unused space in the MDF. I believe we do ship the LDF as well and will check. – Justin Dearing Mar 14 '14 at 04:01

I'm going to assume you actually have the following goals:

  • Provide maximum performance at the clients post ALTER DATABASE X SET READ_ONLY
  • Send the minimum amount of data to the clients consistent with maximum performance

I'm also going to assume you do NOT have any of the following goals:

  • Reduce excessive amounts of work your server may have to do
  • Complete the deployment work in a short amount of time
  • Provide security for data transmission

Thus, I would recommend a few alterations. Order is absolutely critical, since we're going to first cause fragmentation, and then reduce it.

Therefore, in the following order:

  • Run whatever Agent Job you normally use to do Ola's backup routine, either Full or Diff, and note the date/time of this "Ready to Deploy" backup
    • You may continue with your normal work in your normal database once this backup is complete, while the rest of this is running.
  • Restore the backup you just took to a new database which I will call PreparingToDeploy in this answer
  • Change PreparingToDeploy so that the database is owned by sa, and remove all logins
  • Ensure PreparingToDeploy is in the SIMPLE recovery model
  • Ensure PreparingToDeploy is using CHECKSUM for PAGE_VERIFY (this and several of the following steps are sketched in T-SQL after this list)
  • Drop every table, column, view, stored procedure, and anything else not explicitly required by the clients from PreparingToDeploy
  • Verify every single table in PreparingToDeploy has a Clustered index. If not, stop here, delete PreparingToDeploy, fix that, retest, and start deploying over again.
    • You really should do this check regularly, but your clients won't care about tables they won't get.
  • Set AUTOGROWTH on PreparingToDeploy MDF files to a stupidly small fixed amount - perhaps as little as 8 to 32 MB at a time.
    • This will cause filesystem-level fragmentation, but that won't matter post-deployment, so in this rare edge case it's all right.
  • Run Ola's maintenance script (modified below) on PreparingToDeploy to reduce the total number of pages in use
  • CHECKPOINT PreparingToDeploy just in case
  • SHRINKFILE only your PreparingToDeploy MDFs to 0% free space.
    • You have just caused index fragmentation, which is horrific for performance, but we're about to fix that. The extra effort is ok in this rare edge case.
  • Run Ola's maintenance script (modified below) on PreparingToDeploy to get rid of the index fragmentation we just caused
  • Now SHRINKFILE TRUNCATEONLY the MDF files to remove any trailing free space without causing any more fragmentation
  • CHECKPOINT just in case
  • SHRINKFILE your LDF files down to the minimum size they can be for your clients to actually run with the largest amount of data they expect to have in the next N years, where N is large.
  • take a FULL backup of PreparingToDeploy
    • Ideally with SQL Server backup compression turned on to save space, if it's SQL 2008 Enterprise or SQL 2008 R2 Standard or above.
  • DROP DATABASE PreparingToDeploy
  • Ship the FULL backup of PreparingToDeploy to your clients
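
Several of the steps above, collected as a T-SQL sketch (PreparingToDeploy and the logical file names PreparingToDeploy_Data / PreparingToDeploy_Log are placeholders; check sys.database_files for the real names):

ALTER AUTHORIZATION ON DATABASE::PreparingToDeploy TO sa;
ALTER DATABASE PreparingToDeploy SET RECOVERY SIMPLE;
ALTER DATABASE PreparingToDeploy SET PAGE_VERIFY CHECKSUM;

-- Stupidly small fixed autogrowth on the data file
ALTER DATABASE PreparingToDeploy
    MODIFY FILE (NAME = N'PreparingToDeploy_Data', FILEGROWTH = 8MB);
GO

USE PreparingToDeploy;
GO
-- After the first IndexOptimize pass: shrink the data file as far as it will go
DBCC SHRINKFILE (N'PreparingToDeploy_Data', 1);
GO
-- ... run the second IndexOptimize pass here ...
-- Then release only trailing free space, without moving pages
DBCC SHRINKFILE (N'PreparingToDeploy_Data', TRUNCATEONLY);
GO
-- Size the log for the clients' expected workload (64 MB here is only an example)
DBCC SHRINKFILE (N'PreparingToDeploy_Log', 64);
CHECKPOINT;
GO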

You will, however, need to change your deployment methodology to restore the backup rather than copy MDFs around - use sqlcmd if you need to. As a bonus, your clients won't have a problem even if they're on a later version of SQL Server than you are :).
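
That restore step could be as simple as the following, run through sqlcmd or SSMS (a sketch; every path and logical file name here is a placeholder - RESTORE FILELISTONLY FROM DISK lists the real logical names):

RESTORE DATABASE DATABASE_IN_QUESTION
    FROM DISK = N'C:\Deploy\PreparingToDeploy.bak'
    WITH MOVE N'PreparingToDeploy_Data' TO N'C:\Data\DATABASE_IN_QUESTION.mdf',
         MOVE N'PreparingToDeploy_Log'  TO N'C:\Data\DATABASE_IN_QUESTION.ldf',
         REPLACE,  -- overwrites an existing database of the same name; omit if unwanted
         STATS = 10;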

Alter the call to Ola's maintenance script, for deployment purposes only, to the following. The sole purpose is to provide the maximum benefit to the client - it can take as long as it takes, you really don't care, and INDEX REBUILD provides a better end result than INDEX REORGANIZE does. Every option that might perhaps provide some benefit to the READ_ONLY client copies has been selected.

EXECUTE dbo.IndexOptimize
    @Databases = 'DATABASE_IN_QUESTION',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 1,
    @FragmentationLevel2 = 2,
    @SortInTempdb = 'Y',
    @MaxDOP = 0,
    @PageCountLevel = 9,
    @FillFactor = 100,
    @PadIndex = 'Y',
    @LOBCompaction = 'Y',
    @UpdateStatistics = 'ALL',
    @StatisticsSample = 100,
    @MSShippedObjects = 'Y',
    @Execute = 'Y';
GO

You may be able to get rid of the second fragmentation parameter entirely, of course. Test first.

– Anti-weakpasswords

You should NOT shrink your database and then perform reindexing. Shrinking is not a good practice. Refer to Mike's answer for more details.

Your reindexing script looks good. No need to change any more parameters.

When you say "making the database read-only" -- what do you actually mean? Not allowing users to modify the data?

Also, why are you shipping the MDF file? You are presumably detaching the current database, copying the MDF, and shipping that to the client - which is not a good practice.

A FULL backup with compression (if you are on SQL Server 2008 R2 - Standard Edition and up has native backup compression) is sufficient to deliver an exact copy of your database as of that particular time.
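
A minimal sketch of such a backup (the name and path are placeholders):

BACKUP DATABASE DATABASE_IN_QUESTION
    TO DISK = N'C:\Backup\DATABASE_IN_QUESTION.bak'
    WITH COMPRESSION, CHECKSUM, INIT, STATS = 10;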

– Kin Shah
  • I'll edit my question to make it quite clear that Mike's second caveat applies: "we freed a lot of space that we know we'll never ever need again."

    As far as the MDF, I agree a backup is best practice. However, that would be a hard sell to non-technical stakeholders. Also, what exactly does a backup buy in the case of a single MDF in single-user mode? It slightly complicates restores because you have to RESTORE WITH MOVE. Yes, I am personally adept at FILELISTONLY/WITH MOVE. This will be deployed dozens of times to dozens of networks. Automating the restore would be buggy.

    – Justin Dearing Mar 14 '14 at 03:52