
Category: Administration

CPU Co-Stop

David Klee discusses an important hypervisor-level metric:

VMware’s CPU Co-Stop metric shows you the amount of time that a parallelized request spends trying to line up the vCPU schedulers for the simultaneous execution of a task on multiple vCPUs. It’s measured in milliseconds spent in the queue per vCPU per polling interval. Higher is bad. Very bad. The operating system is constantly reviewing the running processes and checking their runtime states. It can detect that a CPU isn’t keeping up with the others, and might actually flag a CPU as BAD if it can’t keep up and the difference is too great.

This is extremely useful information for DBAs in virtualized environments. My crude and overly simplistic answer: don’t over-book vCPUs on hosts running important VMs like your SQL Server instances.

Comments closed

Auto-Update Stats Threshold Change

Erik Darling points out that automatic statistics updates will happen more frequently in 2016 for large tables than in prior versions:

Here’s an abridged version of 10-20 million and 30-40 million rows, and how many modifications they took before a stats update occurred. If you follow the PercentMod column down, the returns diminish a bit the higher up you get. I’m not saying that I’d prefer to wait for 20% + 500 rows to modify, by any stretch. My only point here is that there’s not a set percentage to point to.

And, because you’re probably wondering, turning on Trace Flag 2371 in 2016 doesn’t make any difference.

This is a good change, though as Erik points out, if you’re managing very large tables, you might already have the trace flag on and so won’t see any difference.
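
If you want to see how close one of your own tables is to that threshold, a quick sketch (assuming SQL Server 2012 SP1 or later; dbo.BigTable is a placeholder name) is to check the modification counter on each statistic:

-- Rows modified since each statistic on the table was last updated
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.BigTable');

Comparing modification_counter against the row count gives you the same percentage view Erik is charting.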

Comments closed

DBCC Extended Checks

The SQL Server CSS team has a new post on DBCC performance improvements:

Starting with SQL Server 2016, additional checks on filtered indexes, persisted computed columns, and UDT columns will not be run by default, to avoid the expensive expression evaluation(s). This change greatly reduces the duration of CHECKDB against databases containing these objects. However, the physical consistency checks of these objects are always completed. Only when the EXTENDED_LOGICAL_CHECKS option is specified will the expression evaluations be performed, in addition to the logical checks (indexed views, XML indexes, and spatial indexes) already present as part of that option.

For filtered indexes, CHECKDB has also been improved to skip records that do not qualify as being indexed by the target NC index.

This is telling me that we’ll want to have two separate CHECKDB processes, one which regularly runs CHECKDB (or CHECKTABLE) and one which occasionally runs CHECKDB with EXTENDED_LOGICAL_CHECKS.
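
In practice that might look like the following pair of commands, with the database name as a placeholder; the first runs on your normal schedule and the second far less often:

-- Regular run: physical checks plus the default logical checks
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- Occasional run: adds the expensive expression evaluations back in
DBCC CHECKDB (N'YourDatabase') WITH EXTENDED_LOGICAL_CHECKS, NO_INFOMSGS;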

Comments closed

Table Partitioning

Slava Murygin walks through partitioning:

Usually new Partition Ranges are added to the end of a Function/Scheme, but in my example I’ll demonstrate how to split a middle Partition.
First we create a new File Group, then make it next used within the Scheme, and then split the Function:

All records which are less than 1/1/2015 and greater than 1/1/2014 were moved to the new partition.

Partitioning is one of those topics I run into frequently enough to need to know it, but not frequently enough to memorize it; every time it feels like I’m starting from scratch.
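
For reference, the middle-of-the-range split Slava walks through boils down to something like this sketch; the database, filegroup, function, and scheme names are placeholders rather than his actual objects, and it assumes a right-range function on a date column:

-- Add a filegroup (and a file) to hold the new partition
ALTER DATABASE CurrentDb ADD FILEGROUP FG_2014;
ALTER DATABASE CurrentDb
    ADD FILE (NAME = N'FG_2014_Data', FILENAME = N'D:\Data\FG_2014_Data.ndf')
    TO FILEGROUP FG_2014;

-- Tell the scheme where the next split lands, then split the function
ALTER PARTITION SCHEME PS_Dates NEXT USED FG_2014;
ALTER PARTITION FUNCTION PF_Dates() SPLIT RANGE ('2014-01-01');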

Comments closed

System Database Restrictions

Dave Mason ran into a system database restriction regarding number of files:

One thing I recently learned is that you can’t add files to [model]. I was hoping to influence the number of files a database has when it is created. It’s common for databases to be created with just the standard mdf/ldf pair of files. Then the database is loaded with varying amounts of data. After a certain threshold, it makes more sense to have multiple data files, based on the environment. Sure, I could add files later and “balance” the data across the files. But it would be nice to *not* have to do this; some downtime may be required, and LOB data presents its own challenges.

There aren’t too many restrictions on the model database, but this is a biggie.
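
The statement Dave was presumably hoping to run is ordinary ALTER DATABASE syntax; it simply gets rejected when the target is model (file name and path below are placeholders):

-- This works against a user database, but fails against model
ALTER DATABASE model
    ADD FILE (NAME = N'model_data2', FILENAME = N'D:\Data\model_data2.ndf');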

Comments closed

Plan Handles

Andy Mallon talks about plan handles (sql_handle):

If you’ve ever looked at sys.sysprocesses or sys.dm_exec_requests (or a number of other DMVs), you’ve noticed there is a column called “sql_handle” that contains some binary gobbledygook. Books Online gives the (un)helpful definition as “Hash map of the SQL text of the request.”

“Binary gobbledygook” is probably the best description of the plan handle available.
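
The usual way to turn that gobbledygook into something readable is to feed the handle to sys.dm_exec_sql_text, along these lines:

-- Resolve sql_handle to the text of currently executing requests
SELECT r.session_id,
       t.text AS sql_text,
       r.sql_handle,
       r.plan_handle
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;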

Comments closed

Service Account Password Changes Without Rebooting

Andrea Allred shows that you can change the password of a service account without restarting an instance:

Did you know that you can change the password on the SQL Server service account that is running your SQL instance without a reboot or restart? Turns out this is true. We have a new round of password requirements, and it means that we need to change passwords on servers more often. But since we need our servers up and reboots have to be heavily planned, we needed a solution that kept us from having to restart an instance after a password change. This lovely MSDN article explains all the details, but let me give you the Cliffs Notes.

This is helpful for those max uptime scenarios where even a momentary service restart requires planned downtime.

Comments closed

Maintaining SSISDB

Ginger Grant shows us how to manage SSISDB:

A client asked me recently why he should back up the SSISDB database. While you can recreate everything inside of the SSISDB, it will take time and you will have to remember exactly how all of your variables were set. Restoring the backup reduces this problem, and having a backup allows a server to be redeployed quickly. When you do back up the database, make sure that you remember to back up the database certificate, which is created when the SSISDB is created, as you will need this to do a restore. By default, the recovery model of the SSISDB is set to Full. If the packages in SSISDB are changing minute by minute, full would make sense, but given that an SSISDB contains packages which are run on a scheduled basis, most likely the changes made are infrequent. Change the recovery model to simple.

SSISDB is a real database, just like ReportServer, so don’t neglect it just because you didn’t create it.
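
A minimal sketch of that housekeeping, with paths and the password as placeholders (the master key backup below is the encryption piece created alongside the catalog that you’ll need for a restore on another server):

-- Switch to simple recovery and take a backup of the catalog database
ALTER DATABASE SSISDB SET RECOVERY SIMPLE;
BACKUP DATABASE SSISDB TO DISK = N'D:\Backup\SSISDB.bak';

-- Back up the database master key created with the catalog
USE SSISDB;
BACKUP MASTER KEY TO FILE = N'D:\Backup\SSISDB_key'
    ENCRYPTION BY PASSWORD = N'<strong password here>';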

Comments closed

DBCC Run Frequency

Erik Darling talks about DBCC CHECKDB frequency:

The shorter the period of time you keep backups, the more often you need to run DBCC CHECKDB. If you keep data for two weeks, weekly is a good starting point. If you take weekly fulls, you should consider running your DBCC checks before those happen. A corrupt backup doesn’t help you worth a lick. Garbage backup, garbage restore. If your data only goes back two weeks, and your corruption goes back a month, best of luck with your job search.

Erik provides some good guidelines on where to begin, but as always, your answer will depend upon your particular circumstances.
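
Erik’s “check before the weekly full” ordering can be as simple as running these in sequence in a weekly Agent job (names and paths are placeholders):

-- Run the integrity check first; if corruption is reported, stop here
-- rather than overwriting an older, clean backup
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;

-- Then take the weekly full backup
BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backup\YourDatabase_full.bak'
    WITH CHECKSUM, COMPRESSION;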

Comments closed

DBCC Scales Better In 2016

The CSS SQL Engineers have a new series called “It Just Runs Faster.”  In their first post, they discuss DBCC improvements:

Internally, DBCC CHECK* uses a page scanning coordinator design (MultiObjectScanner). SQL Server 2016 changes the internal design to CheckScanner, applying no-lock semantics and a design similar to that used with In-Memory Optimized (Hekaton) objects, allowing DBCC operations to scale far better than previous releases.

Making DBCC checks significantly faster for large databases makes administration that much easier.  I’m looking forward to seeing this.  James Anderson, however, is concerned that things might be worse at the extreme low end.

Comments closed