DBCC Checks For Large Databases

David Alcock gives a couple of methods for performing consistency checks against gigantic databases:

One way to achieve this is to split up the consistency checks so that they cover smaller objects. Native functionality allows us to do just that: we can perform the checks at the table level or, if filegroups are implemented, at the filegroup level using the DBCC CHECKFILEGROUP command.
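For illustration, those targeted checks look something like the following (the table and filegroup names here are hypothetical):

-- Check a single table (and its indexes) instead of the whole database.
DBCC CHECKTABLE ('dbo.SalesOrderDetail') WITH NO_INFOMSGS;

-- Check every table allocated to one filegroup; run this in the target database.
DBCC CHECKFILEGROUP ('PRIMARY') WITH NO_INFOMSGS;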

How to go about this is pretty straightforward: take the list of tables and split them into equal(ish) groups. The groups now form a pool of objects, and within each nightly (or daily) window you perform the check on one group from the pool. This effectively spreads a database consistency check over multiple days: you avoid the impact on production activities while still ensuring that all objects are checked over time.
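Here's a minimal sketch of that bucketing approach, assuming a seven-night cycle; the NTILE split and the date-based group selection are just one way to implement it, not David's exact method:

-- Assign every user table to one of @Groups buckets, then check tonight's bucket.
DECLARE @Groups int = 7;  -- assumption: one group per night of the week
DECLARE @Tonight int = (DATEDIFF(DAY, '20000101', GETDATE()) % @Groups) + 1;
DECLARE @SchemaName sysname, @TableName sysname, @sql nvarchar(max);

DECLARE check_cursor CURSOR FAST_FORWARD FOR
    SELECT SchemaName, TableName
    FROM (
        SELECT s.name AS SchemaName,
               t.name AS TableName,
               NTILE(@Groups) OVER (ORDER BY t.object_id) AS GroupNo
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    ) AS g
    WHERE g.GroupNo = @Tonight;

OPEN check_cursor;
FETCH NEXT FROM check_cursor INTO @SchemaName, @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Bracket-quote each name part, then escape any embedded single quotes
    -- so the object name survives inside the string literal.
    SET @sql = N'DBCC CHECKTABLE (N''' +
               REPLACE(QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName),
                       N'''', N'''''') +
               N''') WITH NO_INFOMSGS;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM check_cursor INTO @SchemaName, @TableName;
END;

CLOSE check_cursor;
DEALLOCATE check_cursor;

Splitting by object count keeps the sketch simple; in practice you'd probably bucket by table size instead, so each night's window does roughly equal work.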

Read on for the solution. I’m also a big fan of Minion CheckDB, which is designed to handle this type of scenario as well.
