Press "Enter" to skip to content

Category: DBCC

Scaling CheckTable With Respect To CPUs

Lonny Niederstadt has a couple of posts on scaling DBCC CHECKTABLE based on degree of parallelism.  First, he looks at running the command with physical_only:

So we can use this formula when dop, elapsed_ms, and cpu_ms are known:

DOP * elapsed_ms = cpu_ms + idle_ms

That allows the 8 checktable operations to be summarized in this graph. From DOP 1 to DOP 8, the cpu_ms of the operation is extremely steady. From DOP 1 to DOP 4 there are significant decreases in elapsed time as DOP increases; after DOP 4, the reduction in elapsed time is slight. Throughout the tested range, idle_ms increased at a nearly linear rate.
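
Plugging some illustrative numbers into that formula (made-up figures, not Lonny's actual measurements) shows how idle_ms falls out of the other three values:

-- Made-up figures purely to illustrate the formula above; not Lonny's measurements.
DECLARE @dop        int    = 4,
        @elapsed_ms bigint = 30000,   -- wall-clock duration of the checktable run
        @cpu_ms     bigint = 95000;   -- CPU time summed across all parallel workers

-- DOP * elapsed_ms = cpu_ms + idle_ms, so:
SELECT idle_ms = (@dop * @elapsed_ms) - @cpu_ms;   -- 4 * 30000 - 95000 = 25000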

In his second post, he looks at full CHECKTABLE runs and not just physical_only:

So the good news for today is that checktable operations without the physical_only option scale farther/better on my test tables than checktable with physical_only. While with physical_only the scaling benefits in elapsed time are primarily seen only up to DOP 4, without the physical_only option the elapsed time benefits of increasing DOP extend at least to DOP 8.
And we saw that the shape of scalability graphs is pretty volatile 🙂 That’s largely because modest changes in elapsed time are multiplied by DOP in this calculation to arrive at the idle_ms number – and that idle_ms number is the one that changes shape most readily.

These are prologue posts to a discussion on the OLEDB wait type.
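
As a point of reference, recent builds (SQL Server 2016, and I believe 2014 SP2) let you pin the degree of parallelism for an individual check with the MAXDOP option, which makes this kind of experimentation easier. A minimal sketch, using a made-up table name:

-- Hypothetical table name; MAXDOP on DBCC commands requires a reasonably recent build.
DBCC CHECKTABLE (N'dbo.MyBigTable') WITH PHYSICAL_ONLY, MAXDOP = 4;

-- The same table, full logical checks, at a higher degree of parallelism:
DBCC CHECKTABLE (N'dbo.MyBigTable') WITH MAXDOP = 8;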

Doubly Linked Lists And Bugs

Ewald Cress explains a bug in DBCC PAGE:

Let’s start with the safety convention. The “null” of a null pointer isn’t a magic value, but in real-life implementation is simply zero, which is a perfectly valid virtual address. However, on the premise that trying to access address zero or addresses near it probably indicates a program error, the OS will map that page in such a way that trying to access it causes an access violation. This is not a bug or an accident, but a damn clever feature! Robert Love explains it very nicely over here for Linux, and it applies equally to Windows.

Now recall the convention that trying to retrieve the head or tail of an empty list will – by convention – bring you back a null pointer. When iterating, a related convention may also return a zero when you’ve gone all the way around and come back to the list head. Clearly the onus is on the developer to recognise that null pointer and not dereference it, but attempting to do so sets in motion the safety feature of an access violation, which can then be neatly caught through standard exception handling, for instance yielding a diagnostic stack dump.

Very interesting article, and also a good juxtaposition of supported, “production-safe” code versus undocumented processes.
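
If you want to poke at pages while reading along, DBCC PAGE itself is invoked roughly like this (undocumented and unsupported, so test systems only; the database, file, and page numbers below are placeholders):

-- Undocumented and unsupported; use a test database.
DBCC TRACEON (3604);                  -- route DBCC output to the client instead of the error log
DBCC PAGE (N'MyTestDb', 1, 1, 3);     -- database, file id, page id, print option (3 = most detail)
DBCC TRACEOFF (3604);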

Last Known Good DBCC CHECKDB In Powershell

Rob Sewell shows off a cmdlet to check DBCC DBINFO for each database to get the last known good CHECKDB run:

This time we get more information: the server name, database name, when the database was created, the last good DBCC CheckDb, how long since the database was created, how long since the last known good DBCC CheckDb, a status, and a Data Purity enabled flag. If you look at the image above, it shows that the DBA_Admin database has a status of “New database, not checked yet” even though it has a date for the last known good DBCC CheckDb. This is because it was restored after this server was upgraded from CTP 1.3 to CTP 1.4, and no DBCC CheckDb has been run against it yet. The system databases have a status of “CheckDb should be performed” because the last known good DBCC CheckDb is more than 7 days ago. Let’s run a DBCC CheckDb and check again.

Do read the caveats, and also check out a previous Arun Sirpal blog post on DBCC DBINFO.
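
Under the covers this reads the dbi_dbccLastKnownGood field exposed by DBCC DBINFO, so if you just want the raw value for a single database, a quick manual check looks something like this (the database name is a placeholder):

-- Undocumented output; look for the dbi_dbccLastKnownGood field in the result set.
DBCC DBINFO (N'MyDatabase') WITH TABLERESULTS;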

Columnstore Indexes On Cloned Databases

Parikshit Savjani has a script to update columnstore index statistics before running DBCC CLONEDATABASE:

Unlike traditional B-tree indexes, when a columnstore index is created, no index statistics are created on the columns of the columnstore index. However, an empty stats object is created with the same name as the columnstore index, and an entry is added to sys.stats at the time of index creation. The stats object is populated on the fly when a query is executed against the columnstore index or when DBCC SHOW_STATISTICS is executed against it, but the columnstore index statistics aren’t persisted in storage. The index statistics are different from the auto-created statistics on the individual columns of the columnstore index, which are generated on the fly and persisted in their statistics objects. Since the index statistics are not persisted in storage, the cloned database will not contain those statistics, leading to inaccurate stats and different query plans when the same query is run against the database clone as opposed to the production database.

Click through for the script.
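
For context, the cloning step itself is a one-liner – the interesting work in the linked script is getting the columnstore statistics persisted before you run it. The database names below are placeholders:

-- Creates a schema-and-statistics-only copy of the source database.
DBCC CLONEDATABASE (N'ProductionDb', N'ProductionDb_Clone');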

Understanding DBCC OPENTRAN

Kevin Hill goes into detail on what DBCC OPENTRAN does:

I have verified that new records I inserted have been read by the log reader, AND distributed to the subscriber(s).  This means that while you are seeing

Oldest distributed LSN : (37:157:3)

There is not an error…just info.

If you have non-distributed LSNs, there is something to troubleshoot in the replication process, which is way outside the scope of this post. A non-distributed replicated transaction/LSN CAN cause some huge log file growth and needs to be investigated. If this happens frequently, use the TABLERESULTS option to log to a regular table and alert on it.

Good information here.
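
If you do want to log and alert on this as Kevin suggests, the TABLERESULTS option returns the same information as a result set you can insert into a table; the database name here is a placeholder:

-- Returns the oldest active transaction (plus replication LSN details) as a result set.
DBCC OPENTRAN (N'MyDatabase') WITH TABLERESULTS, NO_INFOMSGS;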

Understanding DBCC TEC

Ewald Cress explains (but does not document!) an undocumented DBCC command:

Boring old disclaimer: What I am describing here is undocumented, unsupported, likely to change between versions, and will probably make you go blind. In fact, the depth of detail exposed illustrates one reason why Microsoft would not want to document it: if end users of SQL Server found a way to start relying on this not changing, it would hamstring ongoing SQL Server improvement and refactoring.

With that out of the way, let’s dive right into DBCC TEC, a command which can dump a significant chunk of the object tree supporting a SQL Server session. This result is the same thing that shows up within a dump file, namely the output of the CSession::Dump() function – it’s just that you can invoke this through DBCC without taking a dump (cue staring match with Kendra). Until corrected, I shall imagine that TEC stands for Thread Execution Context.

I appreciate Ewald’s ability to make sense out of the madness of database internals.

Rolling The CHECKDB Dice

Arun Sirpal tells a harrowing story of a Recovery Pending situation:

Looks like I had open transactions while my transaction log got lost during an outage. I tried switching the database online but that failed.

Msg 5181, Level 16, State 5, Line 4
Could not restart database “FAT”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.

Accessing the database is the real challenge now.

Moral of the story:  have backups and have good luck.
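
Arun covers the details in the post, but for reference, the commonly cited last-resort pattern when the log is gone and no backup exists is emergency-mode repair – with the big caveat that it rebuilds the log and can lose data (the database name matches Arun's example):

-- Absolute last resort: rebuilds the transaction log and can lose committed work.
ALTER DATABASE FAT SET EMERGENCY;
ALTER DATABASE FAT SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'FAT', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE FAT SET MULTI_USER;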

PFS Page Repair

Paul Randal explains why DBCC CHECKDB cannot repair Page Free Space pages:

PFS pages occur every 8088 pages in every data file and store a byte of information about each page they cover – the PFS page itself and the following 8087 pages. The most important piece of information stored is whether a page is allocated (in use) or not. You can read more about PFS pages and the other per-database allocation bitmaps in this blog post.

So why can’t they be repaired by DBCC CHECKDB, when all the other per-database allocation bitmaps can?

The answer is that the is-this-page-allocated-or-not information is not duplicated anywhere else in the database, and it’s impossible to reconstruct it in all cases.

In case you’re not particularly familiar with PFS pages, Paul has a blog post from 2006 describing GAM, SGAM, and PFS pages.
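
To make the layout concrete: page 1 of every data file is its first PFS page, and subsequent PFS pages follow at 8,088-page intervals (pages 8088, 16176, and so on). You can inspect one with DBCC PAGE if you're curious; the database name is a placeholder:

-- Undocumented, for inspection only: dump the first PFS page of file 1.
DBCC TRACEON (3604);
DBCC PAGE (N'MyDatabase', 1, 1, 3);   -- file id 1, page id 1 is that file's first PFS page
DBCC TRACEOFF (3604);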

Automated Database Restoration And CHECKDB

Anthony Nocentino shows how to use dbatools to automate testing restoration of database backups and running DBCC CHECKDB against these restored backups:

Requirements

  1. Automation – Complete autopilot, no human interaction.
  2. Report job status – Accurate reporting in the event the job failed, the CHECKDB failed or the restore failed.

Solution

  1. Use dbatools cmdlets for restore and CHECKDB operations
  2. Use SQL Agent Job automation, logging and alerting

So let’s walk through this implementation together.

You won’t get a turnkey solution from this blog post, but you will get a good process to follow.
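
Anthony drives this through dbatools, but each job run boils down to a restore followed by an integrity check. A stripped-down T-SQL sketch of that sequence (paths, logical file names, and database names are all placeholders; the post handles discovery, cleanup, and alerting properly):

-- Restore the most recent backup under a throwaway name on the test server.
RESTORE DATABASE RestoreCheck_AdventureWorks
FROM DISK = N'\\backupshare\AdventureWorks\AdventureWorks_full.bak'   -- placeholder path
WITH MOVE N'AdventureWorks_Data' TO N'D:\Data\RestoreCheck_AdventureWorks.mdf',
     MOVE N'AdventureWorks_Log'  TO N'L:\Log\RestoreCheck_AdventureWorks.ldf',
     REPLACE, STATS = 10;

-- Run the integrity check; any error here should fail the job and fire an alert.
DBCC CHECKDB (N'RestoreCheck_AdventureWorks') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Drop the copy once the check comes back clean.
DROP DATABASE RestoreCheck_AdventureWorks;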
