Press "Enter" to skip to content

Author: Kevin Feasel

Text-Based Execution Plans

Erik Darling looks at the old SET STATISTICS PROFILE command:

Before you think this is to perf tuning what boxed wine is to pest extermination: it’s not. It’s another tool that has pros and cons. The plan cache is cool too, but cached plans don’t have all the information that actual plans do. You can run Traces or Profiler or Extended Events, but they all sort of have their own caveats, gotchas, and overhead. If you don’t have a monitoring tool, though, what are you left with?

Let’s take a look at what you can do with STATISTICS PROFILE, and then the (rather obvious) limitations. Here’s the setup and a simple query.

I’ll admit that outside of learning what they are, I’ve never used text execution plans.  I’ll read the XML, view the graphical results, pipe them out to SentryOne Plan Explorer (formerly SQL Sentry), etc.  But the text plans never held much allure for me.
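
If you want to kick the tires, the mechanics are minimal: switch the setting on for your session, run a query, and read the extra result set that comes back. A small sketch (the query is mine, not Erik’s setup):

    SET STATISTICS PROFILE ON;

    -- Any query will do; sys.objects is just a convenient built-in target.
    SELECT o.name, o.type_desc
    FROM sys.objects AS o
    WHERE o.type = 'U';

    SET STATISTICS PROFILE OFF;

Each statement returns an additional result set containing the text plan, along with actual row counts and execute counts per operator.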


Range Lock Deadlocks

Dmitri Korotkevitch looks at one scenario in which range locks can cause deadlocking:

The range locks are usually acquired only in SERIALIZABLE isolation level; however, there is another, pretty much undocumented case when SQL Server can use those locks. It happens even in READ UNCOMMITTED and READ COMMITTED SNAPSHOT modes when you have nonclustered indexes with the IGNORE_DUP_KEY=ON option. In that case rows with duplicated index keys would not raise an error but rather be ignored. SQL Server would not insert them into the table.

This behavior leads to very hard-to-explain cases of blocking and even deadlocks in the system. Let’s look at an example and create the table with a few rows as shown below. As you can see, the nonclustered index on the table has the IGNORE_DUP_KEY option enabled.

This is an interesting risk when using IGNORE_DUP_KEY.
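
If you want to see the silent-discard behavior itself before digging into the locking, here is a minimal sketch (table and index names are mine, not Dmitri’s):

    CREATE TABLE dbo.DupDemo
    (
        ID INT NOT NULL CONSTRAINT PK_DupDemo PRIMARY KEY,
        Val INT NOT NULL
    );

    -- IGNORE_DUP_KEY requires a unique index.
    CREATE UNIQUE NONCLUSTERED INDEX IX_DupDemo_Val
        ON dbo.DupDemo (Val)
        WITH (IGNORE_DUP_KEY = ON);

    INSERT INTO dbo.DupDemo (ID, Val) VALUES (1, 10), (2, 20);

    -- The duplicate Val = 10 raises only the warning
    -- "Duplicate key was ignored." and the row is discarded.
    INSERT INTO dbo.DupDemo (ID, Val) VALUES (3, 10);

It’s that duplicate check which drives the surprising range locks on the nonclustered index, even in isolation levels where you wouldn’t expect them.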


Foresight

Anders Pedersen shares an easily-avoidable tale of woe:

ETL.  Spec said only Address Line 1 needed to be loaded, so the developers only brought that line in (plus name, city, etc.).  Fast forward 8 years, and I get a request on my desk: “Please add Address Line 2 to import, and all tables.  Oh, and we need historical data for previously loaded files.  And for all address types”.

Groan.
No normalization in this database (which is just one of about 40 databases with SIMILAR structure, but not identical).

Read on for the damage done, as well as another example of foresight saving the day.


Review Your Process

Chris Sommer wants you to think about why you follow certain processes:

We’re still dealing with the same problems because we’re dealing with the problems in the same way.

I think it can be cultural and can propagate from the senior-level DBAs right on down to the new hires. Sometimes it’s just lack of knowledge or understanding. Sometimes it’s pure laziness, not wanting to do a deep dive and find a better solution to a recurring problem.

Here is a pretty extreme example, but I think it portrays all of these.

Given some of the things I’ve seen, I’d say his example is not at all extreme.


SSMS Grids

Riley Major airs grievances with SQL Server Management Studio’s old-timey grids:

After years of using SQL Server Management Studio (and its predecessor Query Analyzer), I’m struck by how incapable the results grids still are. Unlike Excel, you can’t sort them, you can’t filter them, you can’t search within them, and you can’t easily change their font size. In any commercial software product, grid tools are table stakes. For some reason vendors still like to run through them, but they’re never a differentiator. That’s because you can just buy a grid component and use it in your application. Even the basic grid control which came with .NET 2.0 could sort.

Click through to read more, and also check out the Trello board that Riley mentions.


Attaching Databases With CDC Enabled

Amit Banerjee notes an issue with detaching a database with Change Data Capture enabled and moving it to a new version of SQL Server:

When you detach a database with Change Data Capture enabled on SQL Server 2014 or below and attach it to a SQL Server 2016 instance, you could run into the error mentioned below while executing Change Data Capture (CDC) related procedures.

Error:

Msg 22832, Level 16, State 1, Procedure sp_cdc_enable_table_internal, Line 639 [Batch Start Line 0]

Could not update the metadata that indicates table [<schema name>].[<object name>] is enabled for Change Data Capture. The failure occurred when executing the command ‘insert into [cdc].[captured_columns]’. The error returned was 213: ‘Column name or number of supplied values does not match table definition.’. Use the action and error to determine the cause of the failure and resubmit the request.

Read on to learn more, including how to fix the issue.
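
Amit has the details on the supported resolution. One blunt fallback, if you can afford to lose the existing capture instances and their accumulated change data, is to tear down and rebuild the CDC metadata on the new instance. A hedged sketch, with placeholder database and table names, and not necessarily the fix Amit describes:

    USE TargetDatabase;  -- placeholder name

    -- Tear down CDC entirely; this drops all capture instances
    -- and their change tables, so any accumulated change data is lost.
    EXEC sys.sp_cdc_disable_db;

    -- Re-enable CDC so the metadata tables are recreated
    -- with the SQL Server 2016 definitions.
    EXEC sys.sp_cdc_enable_db;

    -- Re-create each capture instance.
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Orders',   -- placeholder table
        @role_name     = NULL;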


(Re-)Design For Today’s Needs

Andy Levy sees common problems when dealing with brownfield applications:

The primary system I deal with on a daily basis was originally developed as a DOS application and several of the above examples are drawn from it. Looking at the core tables and columns, it’s easy to identify those that began life in those early days – they all have 8-character names. Time moved on and the system grew and evolved. DOS to Windows. Windows to the web. But the database, and the practices and patterns used in the database, haven’t come along for the ride.

Data schema conversions can be hard and disruptive – you need to update your application, your stored procedures, and provide customers/users with a clean migration path. Code changes require testing. Complexity and cost grow every time you introduce changes. I get that.

There’s a lot of effort in Andy’s advice, but it’s well worth it.


Test Restores

Steve Jones implores you to test those database backups by restoring them somewhere:

What do you do? Hopefully you recognize the issue and can fix it. Maybe more importantly, you have a backup of the missing certificate.

Most people don’t deal with encryption, but you never know when your backup job might start failing, perhaps writing to a damaged file that appears to work (if you write as a device) but really isn’t capturing the backup file. Perhaps you don’t know that your backups are being written to a location and deleted a day later, but the process that is supposed to copy them to tape or a remote file share is broken.

Any number of things can happen. The point is that you want to be sure that you are actually getting usable backup files.

That means testing restores.

Read the whole thing.
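
A periodic restore test doesn’t have to be elaborate. Here is a minimal sketch with placeholder paths and logical file names (note that RESTORE VERIFYONLY alone only checks the media, not whether the database inside is healthy):

    -- Quick sanity check that the backup file is readable and complete.
    RESTORE VERIFYONLY FROM DISK = N'X:\Backups\Widgets.bak';

    -- The real test: restore it somewhere under a different name.
    RESTORE DATABASE Widgets_RestoreTest
    FROM DISK = N'X:\Backups\Widgets.bak'
    WITH MOVE N'Widgets'     TO N'X:\RestoreTest\Widgets_RestoreTest.mdf',
         MOVE N'Widgets_log' TO N'X:\RestoreTest\Widgets_RestoreTest.ldf',
         RECOVERY, STATS = 10;

    -- And confirm the restored copy is consistent.
    DBCC CHECKDB (Widgets_RestoreTest) WITH NO_INFOMSGS;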


CAP_CPU_PERCENT

Robert Davis looks at the CAP_CPU_PERCENT option in Resource Governor:

The need for this setting came about because MAX_CPU_PERCENT is not applied unless the server is busy. This could lead to a situation where queries in a low priority resource pool start running while the server is idle and are allowed to consume all the CPU they can. Then high priority queries spin up, and they can’t immediately get the CPU they need due to the low priority queries not being capped. CAP_CPU_PERCENT came along and was designed to set a hard limit that the queries in a pool could not go over even if the server is idle. For example, if you cap the CPU at 25%, the queries in the pool will not exceed 25% no matter how idle the server is.

Problem solved, right?

When the end of a section is a yes/no question, the answer is usually “no.”  Read on before this burns you.
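
For reference, the two settings sit side by side on the pool definition. A minimal sketch (pool and group names are mine):

    -- A pool whose queries should never exceed 25% CPU,
    -- even when the server is otherwise idle.
    CREATE RESOURCE POOL LowPriorityPool
    WITH (MAX_CPU_PERCENT = 25,   -- soft limit, enforced under contention
          CAP_CPU_PERCENT = 25);  -- hard ceiling, enforced always

    CREATE WORKLOAD GROUP LowPriorityGroup
    USING LowPriorityPool;

    -- Requests still need a classifier function to land in the group.
    ALTER RESOURCE GOVERNOR RECONFIGURE;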


Nothing New Under The Sun

Kevin Hill reminisces and warns:

Installation defaults that are going to bite you (not version specific, and the installer is getting better):

  • Files all on the C drive

  • One TempDB data file (improved in SQL 2016)

  • Backups on C drive

  • No automated backups

  • Allow SQL to use ALL the memory

  • Allow SQL to use ALL the CPUs

  • Builtin\Administrators group not default*

  • Compressed backup set to OFF

There’s good advice here, so read on.
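
As a taste of how a few of these get fixed after the fact, here is a hedged sketch (the memory figure is purely illustrative; size it for your own server):

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Don't let SQL Server take all the memory; leave headroom for the OS.
    EXEC sys.sp_configure 'max server memory (MB)', 28672;  -- illustrative value
    RECONFIGURE;

    -- Flip backup compression on by default.
    EXEC sys.sp_configure 'backup compression default', 1;
    RECONFIGURE;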
