Press "Enter" to skip to content

Category: Administration

Pulling Non-Clustered Index Data

Kenneth Fisher shows how you can potentially use a non-clustered index to reconstruct corrupted data in a clustered index:

So why would you want to do this? Well, let's say for example you have a table in a database where the clustered index has become corrupted. Let's further say that no one mentioned this to you for… say, a year. (No judging!) So your only option at this point might be to use the REPAIR_ALLOW_DATA_LOSS option of DBCC CHECKDB. But when you are done, how much data has actually been lost? Can you get any of it back?

If you’ve lived a good life and are very lucky, you might recover all data this way.  Otherwise, it’s a good idea to run CHECKDB more frequently and check those backups regularly as well.
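
To make the idea concrete, here is a hypothetical sketch (the table, index, and column names are invented, not Kenneth's): a non-clustered index carries its own copy of its key, included, and clustering-key columns, so after a repair you can compare it against the clustered index to see which rows were lost.

-- Read the surviving copy of the data from the non-clustered index and find
-- key values that no longer exist in the clustered index (index id 1).
SELECT nc.OrderID, nc.CustomerID       -- columns covered by the non-clustered index
FROM dbo.Orders AS nc WITH (INDEX (IX_Orders_CustomerID))
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Orders AS c WITH (INDEX (1))
    WHERE c.OrderID = nc.OrderID
);

Kenneth's post walks through the full process; this only illustrates the general shape of the comparison.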


Identify Page Split Sources

Dennes Torres shows us how to figure out where bad page splits are occurring:

When one page becomes full and a new page is needed, this is reported as a page split, but it is a regular operation with no bad consequences for our queries. The problem happens with updates and non-sequential inserts, when the row needs to be inserted in the middle of the pages of the object and there is no space for it. SQL Server creates a new page, transfers half of the page data to the new page, and writes the row data. This creates page fragmentation, is very bad for performance, and is also reported as a page split.

We can find the bad page splits using the event sqlserver.transaction_log. This event monitors all the activity in the transaction log, so we need to use it with caution. We can filter the 'operation' field looking for the value 11, which means LOP_DELETE_SPLIT. This is the deletion of rows that happens when SQL Server moves rows from one page to another during a page split, in other words a bad page split.

It’d be nice to be able to find the particular query causing the page split, and it’d also be nice to find a less resource-intensive method of displaying this information.
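
For reference, the kind of Extended Events session Dennes describes looks roughly like this (a sketch only; the session name and the histogram source are my choices, not his exact script):

-- Capture bad page splits: transaction_log events whose operation is 11
-- (LOP_DELETE_SPLIT), bucketed by allocation unit so each split can be tied
-- back to a specific index via sys.allocation_units.
CREATE EVENT SESSION [TrackBadPageSplits] ON SERVER
ADD EVENT sqlserver.transaction_log (
    WHERE operation = 11            -- LOP_DELETE_SPLIT
)
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlserver.transaction_log',
        source_type = (0),          -- 0 = event column
        source = N'alloc_unit_id'
);
GO
ALTER EVENT SESSION [TrackBadPageSplits] ON SERVER STATE = START;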


Sys.Messages Error

Anup Warrier had an issue recently with sys.messages queries generating an error:

What do you normally expect when you run SELECT * FROM sys.messages? The query will return a row for each message_id or language_id of the error messages in the system, for both system-defined and user-defined messages.

Rather than returning the rows, the system generated an error for me:

Msg 18058, Level 17, State 0, Line 1
Failed to load format string for error 362, language id 1033. Operating system error: 317(The system cannot find message text for message number 0x%1 in the message file for %2.). Check that the resource file matches SQL Server executable, and resource file in localized directory matches the file under English directory. Also check memory usage.

Anup follows up with the answer, so if you’re running into this issue, check out his post.
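
For contrast, here is what a healthy query against the view looks like (a trivial example; the 1033 filter simply limits the output to English messages):

-- One row per message_id per language; language_id 1033 is English.
SELECT message_id, language_id, severity, text
FROM sys.messages
WHERE language_id = 1033;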


Restoring A Striped Backup

Steve Jones shows us how to restore a backup striped across multiple files:

Each of these is part of a striped backup, a piece of a backup file. To restore the backup, I need all the files to reassemble the backup. This is fairly simple, but you should be aware of how this works and how to perform a restore.

In my case, you can see I have 7 files for each database. They are the same name with an _0x at the end, with x being the number of the file.

Take Steve’s advice and script out the restore process.  That way you know how to do it next time, can start building automation scripts, and can make your life easier.
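
As a rough illustration of the syntax (the file names and paths here are placeholders, and Steve's example uses 7 stripes per database rather than 3):

-- Restore a backup striped across multiple files by listing every stripe.
RESTORE DATABASE [MyDatabase]
FROM DISK = N'D:\Backups\MyDatabase_01.bak',
     DISK = N'D:\Backups\MyDatabase_02.bak',
     DISK = N'D:\Backups\MyDatabase_03.bak'
WITH RECOVERY, STATS = 10;    -- report progress every 10 percent

If any stripe is missing, the restore fails, which is exactly why scripting and testing this ahead of time matters.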


Are Trace Flags Dead?

Brent Ozar notes that several common trace flags are getting turned into all-grown-up commands:

SQL Server 2016 does away with these unintuitive trace flags by adding new ALTER DATABASE commands:

ALTER DATABASE SET MIXED_PAGE_ALLOCATION ON (or OFF, which is the new default behavior)

ALTER DATABASE MODIFY FILEGROUP [myfilegroup] AUTOGROW_ALL_FILES (or AUTOGROW_SINGLE_FILE, which is still the default)

I think trace flags will still be around for quite some time as a troubleshooting mechanism, but I certainly prefer clearer naming (was that trace flag 1117…or 1171…or maybe…).
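
For the record, the new syntax looks like this (the database and filegroup names are placeholders; the mapping to trace flags 1118 and 1117 is the commonly cited one):

-- Replaces trace flag 1118: allocate uniform extents only (OFF is the SQL Server 2016 default).
ALTER DATABASE [MyDatabase] SET MIXED_PAGE_ALLOCATION OFF;
-- Replaces trace flag 1117: grow all files in the filegroup together.
ALTER DATABASE [MyDatabase] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;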


Get SQL Server Configuration Aliases

Chris Bell shows us how to get the list of aliases set up on a server:

It is also a pain to sit and transcribe the various alias settings to be able to rebuild them all on the next machine.

There is an export list option for the aliases on your server, which is nice and all, but there isn't a corresponding import option.

Plus you have to deal with 32- and 64-bit lists.

The very simple script below helps, since you can use it to get the details of both the 32- and 64-bit SQL Server aliases you have set up on your system.

Ready for it? It’s a long convoluted one:

If you use server aliases, you’ll want to check out this script.
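
Chris's full script is in his post; for a sense of where the data lives, the alias definitions sit in the registry and can be pulled with the undocumented xp_regenumvalues procedure (a sketch, not his script):

-- 64-bit client aliases
EXEC master.dbo.xp_regenumvalues
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo';
-- 32-bit client aliases on a 64-bit OS
EXEC master.dbo.xp_regenumvalues
     N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo';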


Why Use Instant File Initialization?

Erin Stellato shows the benefits of using Instant File Initialization in terms of file modification and database restoration:

When IFI is not enabled, the time to zero out a file and write data is a function of sequential write performance on the drive(s) where the SQL Server data file(s) are located. When IFI is enabled, creating or growing a data file is so fast that the time is not of significant consequence. The time it takes to create or grow a file varies by seconds between 15K, SSD, flash, and magnetic storage when IFI is enabled. However, if you do not enable IFI, there can be drastic differences in create, grow, and restore times depending on storage.

There’s a huge performance benefit with turning IFI on.
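
If you want to verify the setting, newer builds expose it directly (the column below exists in SQL Server 2016 SP1 and later; on older versions, check whether the service account holds the Perform Volume Maintenance Tasks right instead):

-- Shows whether the database engine service is running with IFI enabled.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';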


DBCC UPDATEUSAGE

Slava Murygin has some information on the DBCC UPDATEUSAGE command:

As you can see, UPDATEUSAGE found three problems with that table while running in “ESTIMATEONLY” mode. When it ran in “Fix” mode, it also found and fixed only three problems. The number of rows was still left unfixed.

I think it’s fair to say that this is a relatively uncommonly-used DBCC command, but it can definitely be useful in a subset of circumstances.
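
A minimal example of running it against a single table (the names are placeholders): the COUNT_ROWS option is the one that also corrects the reported row count, which plain UPDATEUSAGE leaves alone and which may be why the row count in Slava's test stayed unfixed.

-- Correct page, extent, and row counts for one table.
DBCC UPDATEUSAGE (N'MyDatabase', N'dbo.MyTable') WITH COUNT_ROWS;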


CPU Co-Stop

David Klee discusses an important hypervisor-level metric:

VMware’s CPU Co-Stop metric shows you the amount of time that a parallelized request spends trying to line up the vCPU schedulers for the simultaneous execution of a task on multiple vCPUs. It’s measured in milliseconds spent in the queue per vCPU per polling interval. Higher is bad. Very bad. The operating system is constantly reviewing the running processes and checking their runtime states. It can detect that a CPU isn’t keeping up with the others, and might actually flag a CPU as BAD if it can’t keep up and the difference is too great.

This is extremely useful information for DBAs in virtualized environments. My crude and overly simplistic answer is: don’t over-book vCPUs on hosts running important VMs like your SQL Server instances.
