I was hoping a career in consulting would possibly spark blog ideas!
One thing I am passionate about with SQL Server is Disaster Recovery. Having worked for a hospital during hurricane season as a DBA, I truly had some sleepless nights wondering if I could bring a second system up successfully in the event of a total disaster. Medical data is of the utmost importance in the field of health care. And I am sure everyone has the stance that “their data is the most important data”! That is why you must protect it at all costs.
Either way, as a DBA it is my job to be able to stand up a second server as soon as possible in the event of a total disaster. If it were only as easy as pushing a button, our job as DBAs would be much easier. Unfortunately, proper disaster recovery requires forethought, planning, and testing.
I have a 4-fold plan for Disaster Recovery, and I would like to discuss my thoughts on it. Part one is covered here, with the remaining parts in subsequent posts.
- Integrity Checks
- SQL Backups
- Off-Site duplicates
- Recovery Testing
1. Integrity Checks
Most everyone is aware of the DBCC CHECKDB command, and it is vital to the stability of your databases. If you are not familiar with it, then please, for the love of the SQL Gods, learn about it!
It is of the utmost importance to run these checks as often as is feasible. I typically run integrity checks once a day during non-business hours or downtime. The process is resource-intensive, which is why it needs to be done during downtime.
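As a minimal sketch of such a nightly job step (the database name here is a placeholder, not one from this post):

```sql
-- Full integrity check. NO_INFOMSGS suppresses informational
-- messages so only real errors surface in the job output, and
-- ALL_ERRORMSGS reports every error found per object rather
-- than only the first few.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```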
Now some of you are going to say, “I tried running DBCC CHECKDB on my 350 GB database and it brought my server to its knees, so I stopped doing them.” All I can say is I hope your data pages are not corrupt.
Per MSDN, DBCC CHECKDB does the following:
- Runs DBCC CHECKALLOC on the database.
- Runs DBCC CHECKTABLE on every table and view in the database.
- Runs DBCC CHECKCATALOG on the database.
- Validates the contents of every indexed view in the database.
- Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
- Validates the Service Broker data in the database.
Now that’s a whole lot of checking! Since CHECKDB performs all of these checks, we could possibly shorten each night’s run by manually executing CHECKALLOC one night, CHECKTABLE another, and CHECKCATALOG a third. It is a thought.
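Sketched out, that rotation might look like the following. The database and table names are placeholders, and note that CHECKTABLE works one object at a time, so a real job would iterate over the tables in the database:

```sql
-- Night 1: check allocation consistency for the whole database.
DBCC CHECKALLOC (N'YourDatabase') WITH NO_INFOMSGS;

-- Night 2: check the structural integrity of tables and views.
-- CHECKTABLE runs per object; in practice you would loop over
-- the tables (e.g., driven by sys.tables) in the target database.
DBCC CHECKTABLE (N'dbo.YourBigTable') WITH NO_INFOMSGS;

-- Night 3: check consistency between the system catalog tables.
DBCC CHECKCATALOG (N'YourDatabase') WITH NO_INFOMSGS;
```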
There are many options for integrity checks that can shorten the execution time. Paul Randal and Aaron Bertrand (to name a few) have written many articles about DBCC CHECKDB and how to effectively use its different options, and the other DBCC commands, to shorten the duration of an integrity check.
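One commonly cited option is PHYSICAL_ONLY, which skips the logical checks and can cut the run time considerably on a large database. A hedged example (placeholder database name):

```sql
-- PHYSICAL_ONLY limits the check to the physical structure of
-- pages and record headers (and page checksums, if the database
-- uses CHECKSUM page verification). It is much faster, but it
-- skips the deeper logical checks, so a full CHECKDB should
-- still be run periodically.
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY;
```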
The one thing to remember here is that if your database is corrupt, so are your backups! SQL backups are only copies of what is in the database; if the data pages are corrupt, so is your backup! This is why it is imperative to regularly perform integrity checks on your databases.
One of my favorite methods, because I can script it out right after Step #4 of My Methodology, is the off-loaded integrity check. If your databases are too large or too busy to check in production, then right after you have test-restored your backups (hopefully you are doing this) is an ideal time to run DBCC CHECKDB. The restored database is ideally on another server, so running the DBCC CHECKDB commands will not have any impact on users.
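A rough sketch of that off-load pattern (all file paths, logical file names, and database names below are placeholders) restores last night's full backup to a non-production server and then checks the copy:

```sql
-- On the non-production server: restore the latest full backup.
-- The paths and logical file names are placeholders to adapt.
RESTORE DATABASE YourDatabase_Verify
FROM DISK = N'\\BackupShare\YourDatabase_Full.bak'
WITH MOVE N'YourDatabase'     TO N'D:\Data\YourDatabase_Verify.mdf',
     MOVE N'YourDatabase_log' TO N'L:\Logs\YourDatabase_Verify.ldf',
     REPLACE, RECOVERY;

-- Then run the integrity check against the restored copy. This
-- both proves the backup actually restores and keeps the CHECKDB
-- load entirely off the production server.
DBCC CHECKDB (N'YourDatabase_Verify') WITH NO_INFOMSGS;
```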
In my next post, I will discuss my thoughts on SQL Backups: discussing native vs. 3rd Party and local vs. network backups.