DBCC Checks For Large Databases

David Alcock gives a couple of methods for performing consistency checks against gigantic databases:

One way to achieve this is to split up the consistency checks so that they cover smaller objects, and native functionality allows us to do just that: we can perform the checks at the table level or, if filegroups are implemented, at the filegroup level too using the DBCC CHECKFILEGROUP command.

How to go about this is pretty straightforward: take the list of tables and split them into equal(ish) groups. The groups now form a pool of objects, and within each nightly (or daily) window you perform the check on the objects in one group from the pool. This effectively spreads a database consistency check over multiple days; you avoid the impact on production activities but also ensure all objects are checked over time.
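To make the object-level option concrete, here is a rough illustration of the two kinds of checks mentioned above; the table and filegroup names (dbo.Orders, FG_Archive) are placeholders for illustration, not objects from the original post:

-- Check a single table; NO_INFOMSGS suppresses informational messages
DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Check every object stored in one filegroup
DBCC CHECKFILEGROUP ('FG_Archive') WITH NO_INFOMSGS;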

Read on for the solution.  I’m also a big fan of Minion CheckDB, which is designed to handle this type of scenario as well.
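As a minimal sketch of that pooling idea (my illustration, not code from Alcock's post), the following splits the user tables into seven roughly equal groups with NTILE and checks only tonight's group; the group count of seven and the use of the weekday to pick the group are assumptions made here for the example:

-- Assumption: seven groups, one group checked per night of the week
DECLARE @GroupCount int = 7;
DECLARE @TonightsGroup int = DATEPART(weekday, GETDATE());
DECLARE @sql nvarchar(max) = N'';

;WITH TablePool AS
(
    -- Assign every user table to one of @GroupCount roughly equal groups
    SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) AS TableName,
           NTILE(@GroupCount) OVER (ORDER BY t.object_id) AS GroupNo
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
)
SELECT @sql += N'DBCC CHECKTABLE (''' + TableName + N''') WITH NO_INFOMSGS;' + NCHAR(13)
FROM TablePool
WHERE GroupNo = @TonightsGroup;

EXEC sys.sp_executesql @sql;

Splitting by object_id keeps the groups stable between runs as long as the table list does not change; a production version would more likely balance the groups by table size rather than by object count.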

