Press "Enter" to skip to content

Category: DBCC

Last Known Good DBCC CHECKDB In PowerShell

Rob Sewell shows off a cmdlet which runs DBCC DBINFO against each database to get the last known good CHECKDB run:

This time we get more information: the server name, database name, when the database was created, the last good DBCC CheckDb, how long since the database was created, how long since the last known good DBCC CheckDb, a status, and a Data Purity enabled flag. If you look at the image above, it shows that the DBA_Admin database has a status of “New database, not checked yet” even though it has a date for the last known good DBCC CheckDb. This is because it was restored after this server was upgraded from CTP 1.3 to CTP 1.4, and there has not yet been a DBCC CheckDb run. The system databases have a status of “CheckDb should be performed” because the last known good DBCC CheckDb is more than 7 days ago. Let’s run a DBCC CheckDb and check again.

Do read the caveats, and also check out a previous Arun Sirpal blog post on DBCC DBINFO.
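
If you’d rather pull the same information in straight T-SQL, a minimal sketch looks like this. DBCC DBINFO is undocumented, so the dbi_dbccLastKnownGood field name could change between versions; DBA_Admin is the database named in the quote above.

    -- Capture DBCC DBINFO output into a temp table and pull out the
    -- last known good CHECKDB date (undocumented command and field).
    CREATE TABLE #dbinfo
    (
        ParentObject varchar(255),
        Object       varchar(255),
        Field        varchar(255),
        Value        varchar(255)
    );

    INSERT INTO #dbinfo
    EXEC ('DBCC DBINFO (''DBA_Admin'') WITH TABLERESULTS');

    SELECT Value AS LastKnownGoodCheckDb
    FROM #dbinfo
    WHERE Field = 'dbi_dbccLastKnownGood';

    DROP TABLE #dbinfo;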


Columnstore Indexes On Cloned Databases

Parikshit Savjani has a script to update columnstore index statistics before running DBCC CLONEDATABASE:

Unlike traditional B-tree indexes, when a columnstore index is created, there are no index statistics created on the columns of the columnstore index. However, an empty stats object is created with the same name as the columnstore index, and an entry is added to sys.stats at the time of index creation. The stats object is populated on the fly when a query is executed against the columnstore index or when DBCC SHOW_STATISTICS is executed against it, but the columnstore index statistics aren’t persisted in storage. The index statistics are different from the auto-created statistics on the individual columns of columnstore indexes, which are generated on the fly and persisted in the statistics object. Since the index statistics are not persisted in storage, the cloned database will not contain those statistics, leading to inaccurate stats and different query plans when the same query is run against the database clone as opposed to the production database.

Click through for the script.
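
As a rough sketch of the idea (this is not Parikshit’s full script; the table and index names are invented, and STATS_STREAM is itself undocumented): hydrate the columnstore index’s stats object, persist the resulting blob, and then clone.

    -- Populate the in-memory stats object and emit its binary stats blob.
    DBCC SHOW_STATISTICS ('dbo.FactSales', 'NCCI_FactSales') WITH STATS_STREAM;

    -- Paste the stats_stream value from the output above to persist it:
    -- UPDATE STATISTICS dbo.FactSales (NCCI_FactSales) WITH STATS_STREAM = 0x<blob>;

    -- The clone will then carry the persisted columnstore statistics.
    DBCC CLONEDATABASE (ProdDb, ProdDb_Clone);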


Understanding DBCC OPENTRAN

Kevin Hill goes into detail on what DBCC OPENTRAN does:

I have verified that new records I inserted have been read by the log reader, AND distributed to the subscriber(s). This means that while you are seeing

Oldest distributed LSN : (37:157:3)

there is not an error…just info.

If you have non-distributed LSNs, there is something to troubleshoot in the replication process, which is way outside the scope of this post. A non-distributed replicated transaction/LSN CAN cause some huge log file growth, and needs to be investigated. If this happens frequently, use the TABLERESULTS option to log to a regular table and alert on it.

Good information here.
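
The TABLERESULTS pattern Kevin suggests looks something like the following sketch (MyDb stands in for your published database; the two-column table shape matches the standard DBCC OPENTRAN example):

    -- Capture DBCC OPENTRAN output in a table so it can be logged and alerted on.
    CREATE TABLE #OpenTranStatus
    (
        ActiveTransaction varchar(25),
        Details           sql_variant
    );

    INSERT INTO #OpenTranStatus
    EXEC ('DBCC OPENTRAN (''MyDb'') WITH TABLERESULTS, NO_INFOMSGS');

    SELECT * FROM #OpenTranStatus;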


Understanding DBCC TEC

Ewald Cress explains (but does not document!) an undocumented DBCC command:

Boring old disclaimer: What I am describing here is undocumented, unsupported, likely to change between versions, and will probably make you go blind. In fact, the depth of detail exposed illustrates one reason why Microsoft would not want to document it: if end users of SQL Server found a way to start relying on this not changing, it would hamstring ongoing SQL Server improvement and refactoring.

With that out of the way, let’s dive right into DBCC TEC, a command which can dump a significant chunk of the object tree supporting a SQL Server session. This result is the same thing that shows up within a dump file, namely the output of the CSession::Dump() function – it’s just that you can invoke this through DBCC without taking a dump (cue staring match with Kendra). Until corrected, I shall imagine that TEC stands for Thread Execution Context.

I appreciate Ewald’s ability to make sense out of the madness of database internals.
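
If you do go spelunking, note that many undocumented DBCC commands only return their output to the client session once trace flag 3604 is enabled; the parameters DBCC TEC accepts are covered in Ewald’s post rather than repeated here.

    -- Route DBCC output to the current connection instead of the error log.
    DBCC TRACEON (3604);
    -- DBCC TEC (...);  -- see Ewald's post for the undocumented parameters.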


Rolling The CHECKDB Dice

Arun Sirpal tells a harrowing story of a Recovery Pending situation:

Looks like I had open transactions while my transaction log got lost during an outage. I tried switching the database online but that failed.

Msg 5181, Level 16, State 5, Line 4
Could not restart database “FAT”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.

Accessing the database is the real challenge now.

Moral of the story: have backups and have good luck.
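
For reference, the recovery attempt the quote describes looks like this (FAT is the database name from the error messages above):

    -- Check the database state, then try to bring it back online;
    -- the ALTER below is what failed with Msg 5181 and Msg 5069.
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = 'FAT';

    ALTER DATABASE [FAT] SET ONLINE;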

Comments closed

PFS Page Repair

Paul Randal explains why DBCC CHECKDB cannot repair Page Free Space pages:

PFS pages occur every 8088 pages in every data file, and each one stores a byte of information about itself and each of the following 8087 pages. The most important piece of information it stores is whether a page is allocated (in use) or not. You can read more about PFS pages and the other per-database allocation bitmaps in this blog post.

So why can’t they be repaired by DBCC CHECKDB, when all the other per-database allocation bitmaps can?

The answer is that the is-this-page-allocated-or-not information is not duplicated anywhere else in the database, and it’s impossible to reconstruct it in all cases.

In case you’re not particularly familiar with PFS pages, Paul has a blog post from 2006 describing GAM, SGAM, and PFS pages.
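
To put the interval arithmetic to work, here is a quick sketch for finding (and, via the undocumented DBCC PAGE, inspecting) the PFS page that covers a given page, assuming the usual layout where the first PFS page in a file is page 1 and later ones fall at multiples of 8,088:

    -- Which PFS page tracks a given page in a file?
    DECLARE @page int = 26000;
    SELECT CASE WHEN @page / 8088 = 0
                THEN 1
                ELSE (@page / 8088) * 8088
           END AS pfs_page_id;        -- 24264 for page 26000

    -- Inspect a PFS page (database name and file ID are examples).
    DBCC TRACEON (3604);
    DBCC PAGE ('MyDb', 1, 1, 3);      -- database, file 1, page 1, detail level 3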


Automated Database Restoration And CHECKDB

Anthony Nocentino shows how to use dbatools to automate test restores of database backups and DBCC CHECKDB runs against the restored copies:

Requirements

  1. Automation – Complete autopilot, no human interaction.
  2. Report job status – Accurate reporting in the event the job failed, the CHECKDB failed or the restore failed.

Solution

  1. Use dbatools cmdlets for restore and CHECKDB operations
  2. Use SQL Agent Job automation, logging and alerting

So let’s walk through this implementation together.

You won’t get a turnkey solution from this blog post, but you will get a good process to follow.
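
For context, the two core steps boil down to the following T-SQL; the post drives them through dbatools cmdlets and wraps the whole thing in an Agent job, and the database names and paths here are hypothetical.

    -- Restore the backup to a verification copy, then check it.
    RESTORE DATABASE [MyDb_Verify]
        FROM DISK = N'\\backupserver\backups\MyDb.bak'
        WITH MOVE N'MyDb'     TO N'D:\Data\MyDb_Verify.mdf',
             MOVE N'MyDb_log' TO N'L:\Logs\MyDb_Verify.ldf',
             REPLACE;

    -- CHECKDB raises an error (and fails the job step) if corruption is found.
    DBCC CHECKDB (MyDb_Verify) WITH NO_INFOMSGS;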


DBCC Checks For Large Databases

David Alcock gives a couple of methods for performing consistency checks against gigantic databases:

One way to achieve this is to split up the consistency checks to cover smaller objects, and native functionality allows us to do just that: we can perform the checks at the table level or, if filegroups are implemented, at the filegroup level using the DBCC CHECKFILEGROUP command.

How to go about this is pretty straightforward: take the list of tables and split them into equal(ish) groups. The groups now form a pool of objects, and within a nightly (or daily) window you perform the check on each object in the pool. This effectively spreads a database consistency check over multiple days: you avoid the impact on production activities but also ensure all objects are checked over time.

Read on for the solution.  I’m also a big fan of Minion CheckDB, which is designed to handle this type of scenario as well.
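
As a sketch of the split approach (object and filegroup names invented), one night’s slice of the pool might look like this, run from within the database being checked:

    -- Check one filegroup...
    DBCC CHECKFILEGROUP (N'FG_Sales_2017') WITH NO_INFOMSGS;

    -- ...or a few tables from tonight's group in the pool.
    DBCC CHECKTABLE ('dbo.FactSales') WITH NO_INFOMSGS;
    DBCC CHECKTABLE ('dbo.FactReturns') WITH NO_INFOMSGS;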
