Are Trace Flags Dead?

Brent Ozar notes that several common trace flags are getting turned into all-grown-up commands:

SQL Server 2016 does away with these unintuitive trace flags by adding new ALTER DATABASE commands:

ALTER DATABASE SET MIXED_PAGE_ALLOCATION ON (or OFF, which is the new default behavior)

ALTER DATABASE MODIFY FILEGROUP [myfilegroup] AUTOGROW_ALL_FILES (or AUTOGROW_SINGLE_FILE, which is still the default)

I think trace flags will still be around for quite some time as a troubleshooting mechanism, but I certainly prefer clearer naming (was that trace flag 1117…or 1171…or maybe…).
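
For reference, here is a minimal sketch of the full syntax for those two commands; the database and filegroup names are placeholders:

ALTER DATABASE [MyDatabase] SET MIXED_PAGE_ALLOCATION OFF;  -- OFF is the new default; replaces trace flag 1118

ALTER DATABASE [MyDatabase] MODIFY FILEGROUP [MyFilegroup] AUTOGROW_ALL_FILES;  -- replaces trace flag 1117; AUTOGROW_SINGLE_FILE remains the default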

Get SQL Server Configuration Aliases

Chris Bell shows us how to get the list of aliases set up on a server:

It is also a pain to sit and transcribe the various alias settings to be able to rebuild them all on the next machine.

There is an export list option for the aliases on your server, which is nice and all, but there isn’t a corresponding import option.

Plus you have to deal with 32 and 64 bit lists.

The very simple script below helps, since you can use it to get the details of both the 32 and 64-bit SQL Server aliases you have set up on your system.

Ready for it? It’s a long convoluted one:

If you use server aliases, you’ll want to check out this script.
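
If you just want a quick peek at what's in the registry without the full script, here's a rough sketch (not Chris's script) using the undocumented xp_regenumvalues extended procedure to dump both hives:

EXEC master.dbo.xp_regenumvalues
    'HKEY_LOCAL_MACHINE',
    'SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo';                -- 64-bit aliases

EXEC master.dbo.xp_regenumvalues
    'HKEY_LOCAL_MACHINE',
    'SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo';    -- 32-bit aliases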

Why Use Instant File Initialization?

Erin Stellato shows the benefits to using Instant File Initialization in terms of file modification and database restoration:

The time to zero out a file and write data is a function of sequential write performance on the drive(s) where the SQL Server data file(s) are located, when IFI is not enabled.  When IFI is enabled, creating or growing a data file is so fast that the time is not of significant consequence.  The time it takes to create or grow a file varies in seconds between 15K, SSD, flash, and magnetic storage when IFI is enabled.  However, if you do not enable IFI, there can be drastic differences in create, grow, and restore times depending on storage.

There’s a huge performance benefit with turning IFI on.
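
On builds that expose it (SQL Server 2016 SP1 and later, plus some patched earlier versions), you can confirm whether IFI is in effect without digging through the error log; a quick sketch:

SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;   -- 'Y' means the service account holds Perform Volume Maintenance Tasks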

DBCC UPDATEUSAGE

Slava Murygin has some information on the DBCC UPDATEUSAGE command:

As you can see, UPDATEUSAGE found three problems with that table while running in “ESTIMATEONLY” mode. When it ran in “Fix” mode, it also found and fixed only three problems. The number of rows was still left unfixed.

I think it’s fair to say that this is a relatively uncommonly-used DBCC command, but it can definitely be useful in a subset of circumstances.
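
If you haven't touched it in a while, the syntax is simple; a quick sketch with placeholder names:

DBCC UPDATEUSAGE (0);                                            -- current database, all objects
DBCC UPDATEUSAGE ('MyDatabase', 'dbo.MyTable') WITH COUNT_ROWS;  -- also corrects row count metadata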

Backups To URL Containers

Rob Sewell notes that Azure blob storage container names are case-sensitive:

If you use SQL Backup to URL to back up your databases to Azure blob storage, remember that for the container name, case is important

Despite all of my Linux love and C-based language tolerance, case sensitivity for filenames and development languages is a relic of a barbaric past.  Nevertheless, that’s the minefield we must traverse.
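
A minimal sketch of a backup-to-URL command, with placeholder storage account, container, and credential names; note that the container segment of the URL must match the blob container's case exactly:

BACKUP DATABASE [MyDatabase]
TO URL = 'https://mystorageaccount.blob.core.windows.net/sqlbackups/MyDatabase.bak'
WITH CREDENTIAL = 'MyAzureCredential';   -- access-key credential syntax; SAS-based credentials omit this clause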

CPU Co-Stop

David Klee discusses an important hypervisor-level metric:

VMware’s CPU Co-Stop metric shows you the amount of time that a parallelized request spends trying to line up the vCPU schedulers for the simultaneous execution of a task on multiple vCPUs. It’s measured in milliseconds spent in the queue per vCPU per polling interval. Higher is bad. Very bad. The operating system is constantly reviewing the running processes, and checking their runtime states. It can detect that a CPU isn’t keeping up with the others, and might actually flag a CPU as BAD if it can’t keep up and the difference is too great.

This is extremely useful information for DBAs in virtualized environments.  My crude and overly simplistic answer is: don’t over-book vCPUs on hosts running important VMs like your SQL Server instances.

Auto-Update Stats Threshold Change

Erik Darling points out that automatic statistics updates will happen more frequently in 2016 for large tables than in prior versions:

Here’s an abridged version of 10-20 million and 30-40 million rows, and how many modifications they took before a stats update occurred. If you follow the PercentMod column down, the returns diminish a bit the higher up you get. I’m not saying that I’d prefer to wait for 20% + 500 rows to modify, by any stretch. My only point here is that there’s not a set percentage to point to.

And, because you’re probably wondering, turning on Trace Flag 2371 in 2016 doesn’t make any difference.

This is a good change, though as Erik points out, if you’re managing very large tables, you might already have the trace flag on and so won’t see any difference.
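
For a rough sense of the difference, here's a sketch comparing the legacy threshold (500 rows plus 20% of the table) against the dynamic threshold (roughly the square root of 1,000 times the row count) that kicks in for large tables under compatibility level 130:

DECLARE @rows bigint = 10000000;                       -- placeholder row count
SELECT legacy_threshold  = 500 + 0.20 * @rows,         -- ~2,000,500 modifications
       dynamic_threshold = SQRT(1000.0 * @rows);       -- ~100,000 modifications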

DBCC Extended Checks

The SQL Server CSS team has a new post on DBCC performance improvements:

Starting with SQL Server 2016, additional checks on filtered indexes, persisted computed columns, and UDT columns will not be run by default, to avoid the expensive expression evaluation(s).  This change greatly reduces the duration of CHECKDB against databases containing these objects.  However, the physical consistency checks of these objects are always completed.  Only when the EXTENDED_LOGICAL_CHECKS option is specified will the expression evaluations be performed, in addition to the logical checks already present (indexed views, XML indexes, and spatial indexes) as part of the EXTENDED_LOGICAL_CHECKS option.


For filtered indexes, CHECKDB has also been improved to skip records that do not qualify as being indexed by the target NC index.

This is telling me that we’ll want to have two separate CHECKDB processes, one which regularly runs CHECKDB (or CHECKTABLE) and one which occasionally runs CHECKDB with EXTENDED_LOGICAL_CHECKS.
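
In practice that might look something like the following sketch (database name is a placeholder): a routine run on the usual schedule, plus an occasional deeper pass:

DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS;                             -- regular physical and logical checks
DBCC CHECKDB ('MyDatabase') WITH EXTENDED_LOGICAL_CHECKS, NO_INFOMSGS;    -- occasional run including the expensive expression evaluations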

Table Partitioning

Slava Murygin walks through partitioning:

Usually new partition ranges are added to the end of a function/scheme, but in my example I’ll demonstrate how to split a middle partition.
First we create a new filegroup, then mark it as next used within the scheme, and then split the function:

All records which are less than 1/1/2015 and greater than 1/1/2014 were moved to the new partition.

Partitioning is one of those topics I run into frequently enough to need to know it, but not frequently enough to memorize it; every time it feels like I’m starting from scratch.
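
As a refresher for next time, here is a minimal sketch of the split steps Slava describes, with placeholder names, file path, and boundary date:

ALTER DATABASE [MyDatabase] ADD FILEGROUP [FG_2014];
ALTER DATABASE [MyDatabase] ADD FILE
    (NAME = 'FG_2014_Data', FILENAME = 'D:\Data\FG_2014.ndf')
    TO FILEGROUP [FG_2014];
ALTER PARTITION SCHEME [ps_ByDate] NEXT USED [FG_2014];             -- mark the new filegroup as next used
ALTER PARTITION FUNCTION [pf_ByDate]() SPLIT RANGE ('2015-01-01');  -- add the boundary, splitting the middle partition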

System Database Restrictions

Dave Mason ran into a system database restriction regarding number of files:

One thing I recently learned is that you can’t add files to [model]. I was hoping to impact the number of files a database has when it is created. It’s common for db’s to be created with just the standard mdf/ldf pair of files. Then the db is loaded with varying amounts of data. After a certain threshold, it makes more sense to have multiple data files, based on the environment. Sure, I could add files later and “balance” the data across the files. But it would be nice to *not* have to do this; some down time may be required, and LOB data presents its own challenges.

There aren’t too many restrictions on the model database, but this is a biggie.
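
To be concrete, the kind of statement Dave is describing, which per his post fails against model, would look something like this (file name and path are placeholders):

ALTER DATABASE [model]
ADD FILE (NAME = 'model_data2', FILENAME = 'D:\Data\model_data2.ndf');   -- not allowed; you can't add files to model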
