
Author: Kevin Feasel

Code Coverage In SSDT

Ed Elliott continues to amaze me.  This time, he’s got a code coverage tool for T-SQL code:

If we execute this stored procedure we can monitor and show a) how many statements there are in this and also b) which statements have been called but we can’t see which branches of the case statement were actually called. If it was a compiled language like c# where we have a profiler that can alter the assembly etc then we could find out exactly what was called but I personally think knowing which statements are called is way better than having no knowledge of what level of code coverage we have.
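
To make the CASE point concrete, here is a contrived procedure (not from Ed's post): statement-level coverage can tell you the SELECT ran, but it cannot tell you which WHEN branch was evaluated, because the entire CASE expression is a single statement.

    -- Contrived example; statement coverage reports one covered SELECT no matter which branch fires.
    CREATE PROCEDURE dbo.GetOrderStatusLabel
        @StatusCode int
    AS
    BEGIN
        SELECT CASE @StatusCode
                   WHEN 1 THEN 'Open'
                   WHEN 2 THEN 'Shipped'
                   ELSE 'Unknown'
               END AS StatusLabel;
    END;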

Yet another reason to grab the SSDT Dev Pack.  By this point, I expect there to be a couple more reasons next week…


Reducing Ad Hoc Query Risk

Kenneth Fisher has some tips to reduce the risk of running ad hoc queries:

  • Make sure that this is the ONLY code in your window or that you are protected by a RETURN or SET NOEXEC ON at the top of your screen. I have this put in place by default on new query windows. This protects you from running too much code by accident.

  • Make a habit of checking what instance you are connected to before running any ad-hoc code. Running code meant for a model or test environment in production can be a very scary thing.

This is good advice.
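
As a rough sketch of that first tip (the object names are made up), the guard at the top keeps an accidental run of the whole window from executing anything below it:

    -- Guard line: if the whole window is run by accident, the statements after this are
    -- compiled but not executed. A plain RETURN; works as a guard as well.
    SET NOEXEC ON;

    -- Scratch work below; highlight just the statement you want and execute the selection.
    UPDATE dbo.Orders
    SET    Status = 'Cancelled'
    WHERE  OrderID = 12345;

    -- If the guard did fire, run SET NOEXEC OFF; before the session will execute anything again.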


TVF Actual Execution Plans

Kevin Eckart shows us how to get table-valued function execution plan details:

While the estimated gives us all kinds of information, the actual plan keeps the underlying operations hidden in favor of a Clustered Index Scan and a TVF operator. This isn’t very useful when it comes to troubleshooting performance issues especially if your query has multi-table joins to the TVF.
Thankfully, this is where Extended Events (EE) comes into play. By using EE, we can capture the Post Execution Showplan that will give us the actual full plan behind the Clustered Index Scan and TVF operators.

As Kevin notes, this extended event runs the risk of degrading performance, so don’t do this in a busy production environment.
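
If you do need to capture it, a quiet system and a tight filter help; a minimal sketch of such a session (the database name and file path are placeholders) might look like this:

    CREATE EVENT SESSION CaptureTvfPlans ON SERVER
    ADD EVENT sqlserver.query_post_execution_showplan
        (WHERE (sqlserver.database_name = N'YourDatabase'))
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\CaptureTvfPlans.xel');

    ALTER EVENT SESSION CaptureTvfPlans ON SERVER STATE = START;
    -- Run the query that joins to the TVF, review the captured plans, then clean up.
    ALTER EVENT SESSION CaptureTvfPlans ON SERVER STATE = STOP;
    DROP EVENT SESSION CaptureTvfPlans ON SERVER;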


PowerShell + SQL Server

Shawn Melton provides an introduction to various ways to interact with a SQL Server instance via PowerShell:

The most commonly known cmdlet out of this module is Invoke-Sqlcmd. This is generally thought of as a PS replacement for the old sqlcmd command-line utility, that to date is still available in currently supported versions of SQL Server. You utilize this cmdlet to execute any T-SQL query that you want against one or multiple instances. The advantage you get using Invoke-Sqlcmd over the command-line utility is the power of handling output in PS. The output from the cmdlet is created as a DataTable (System.Data.DataRow is the exact type).

This is a good overview of the different methods available.


JSON Parsing Performance

Jovan Popovic answers a question I’ve had on my mind:

One of the first questions that people asked once we announced JSON support in SQL Server 2016 was “Would it be slow?” and “How fast you can parse JSON text?”. In this post, I will compare performance of JSON parsing with JSON_VALUE function with the XML and string functions.

The short answer is, JSON parsing should be faster than XML but slower than our historical T-SQL parsing functions.
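
For reference, the three styles being compared look roughly like this (the sample values are made up):

    DECLARE @json nvarchar(max) = N'{"name":"Widget","price":19.99}';
    DECLARE @xml  xml           = N'<item><name>Widget</name><price>19.99</price></item>';
    DECLARE @csv  nvarchar(max) = N'Widget,19.99';

    SELECT JSON_VALUE(@json, '$.name')                    AS NameFromJson,   -- SQL Server 2016+
           @xml.value('(/item/name)[1]', 'nvarchar(100)') AS NameFromXml,
           SUBSTRING(@csv, 1, CHARINDEX(',', @csv) - 1)   AS NameFromString;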


Tracking Changed Data In Standard Edition

Mickey Stuewe wants to track changed data, but has to use Standard Edition:

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are an exception to this pattern are audit tables and error tables. I’ll cover why later in this article.

Four fields include CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DateTime2. CreatedOn is the easiest to populate. You can create a default on the field to be populated with GetDate().

This is a common pattern and works pretty well.  The trick is making sure that you keep that metadata up to date.
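
A sketch of the pattern (table and constraint names are hypothetical; SYSDATETIME() is a natural fit for DATETIME2 columns, though GETDATE() works too):

    CREATE TABLE dbo.Customer
    (
        CustomerID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CustomerName nvarchar(100)     NOT NULL,
        CreatedOn    datetime2(7)      NOT NULL CONSTRAINT DF_Customer_CreatedOn DEFAULT (SYSDATETIME()),
        CreatedBy    nvarchar(128)     NOT NULL CONSTRAINT DF_Customer_CreatedBy DEFAULT (SUSER_SNAME()),
        UpdatedOn    datetime2(7)      NULL,
        UpdatedBy    nvarchar(128)     NULL
    );
    -- CreatedOn/CreatedBy fill themselves in via the defaults (SUSER_SNAME() is one option;
    -- applications often supply their own user name). UpdatedOn/UpdatedBy have to be set by
    -- the application or a trigger on every UPDATE.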


Don’t Rebuild Heaps

Steve Jones notes the issues around rebuilding tables lacking clustered indexes:

What about adding a clustered index and dropping it? Nooooooo, and again, I learned something new. This causes two rebuilds of the non-clustered indexes as they are rebuilt with the cluster addition and then rebuilt when the table changes back to a heap (to get the heap locations). That’s crazy, and certainly not what we want.
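
To put the two approaches in concrete terms (object names are made up):

    -- Rebuilding the heap directly still rebuilds every nonclustered index on it,
    -- because the row locations (RIDs) change.
    ALTER TABLE dbo.MyHeap REBUILD;

    -- The add-then-drop approach is worse: the nonclustered indexes get rebuilt twice,
    -- once to switch from RIDs to clustering keys and again to switch back.
    CREATE CLUSTERED INDEX CIX_MyHeap ON dbo.MyHeap (MyHeapID);
    DROP INDEX CIX_MyHeap ON dbo.MyHeap;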

Also read Matthew Darwin’s comment, as “Don’t do X” usually has an “Except when Y” corollary.


Building A Baseline

Erin Stellato has put together a set of scripts to collect baseline stats for an instance:

The topic of baselines in SQL Server is one that I’ve had an interest in for a long time.  In fact, the very first session I ever gave back in 2011 was on baselines.  I still believe they are incredibly important, and most of the data I capture is still the same, but I have tweaked a couple things over the years.  I’m in the process of creating a set of baseline scripts that folks can use to automate the capture of this information, in the event that they do not have/cannot afford a third-party monitoring tool (note, a monitoring tool such as SQL Sentry’s Performance Advisor can make life WAY easier, but I know that not everyone can justify the need to management).  For now, I’m starting with links to all relevant posts and then I’ll update this post once I have everything finalized.

If you don’t know what “normal” looks like, you’ll have a hard time discerning whether something is wrong.  The better you understand a normal workload, the easier it is to spot issues before end users call you up.
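
As one small taste of what that capture can look like, here is a hedged sketch that snapshots wait stats into a table on a schedule (the table name is mine, not Erin's; her scripts cover far more than this):

    CREATE TABLE dbo.BaselineWaitStats
    (
        CaptureDate         datetime2(0) NOT NULL DEFAULT (SYSDATETIME()),
        wait_type           nvarchar(60) NOT NULL,
        waiting_tasks_count bigint       NOT NULL,
        wait_time_ms        bigint       NOT NULL,
        signal_wait_time_ms bigint       NOT NULL
    );

    INSERT dbo.BaselineWaitStats (wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms)
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM   sys.dm_os_wait_stats
    WHERE  waiting_tasks_count > 0;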


Talking People Out Of SA

Brent Ozar walks through one way to reduce SA account usage.

Part one:  the nice way.

In a perfect world, you’d create a minimally-privileged AD login that only has limited access to specific databases.

However, when you’ve got a toddler running with scissors and razors, sometimes you’re happy just to get the razors out of their hands first, and then you’ll work on the scissors next. One step at a time. Preferably not running.

For now, create another SQL account with DBO permissions on all of the databases involved with the application. (If you’re dealing with multiple different tenants on the same server, give them each their own SQL login.) Let them be complete owners of their databases for now.
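
That "razors first" step might look something like this (login, database, and password are placeholders):

    CREATE LOGIN AppOwnerLogin WITH PASSWORD = N'A-Strong-Generated-Password-1!';
    GO
    USE AppDatabase;
    GO
    CREATE USER AppOwnerLogin FOR LOGIN AppOwnerLogin;
    ALTER ROLE db_owner ADD MEMBER AppOwnerLogin;   -- owner of this database, nothing server-wide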

Part two:  the not-as-nice way.

Power User: “EVERYTHING IS DOWN! THE SA ACCOUNT PASSWORD ISN’T WORKING! DID YOU RESET IT?”

Me: “Of course not. You told me not to.”

Power User: “THEN WHO DID IT?”

Me: “Oh, I have no way of knowing. Anyone who uses the account can change the password with the ALTER LOGIN command. And you said everyone has it, right?”

That’s a nice account you have; it’d be a shame if something…unfortunate…were to happen to it.


Automated Database Shrinking

Chris Shaw talks about auto-shrink:

If you are new to being a Database Administrator or the Primary focus of your job is not to be a DBA you may see the benefits of shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem when doing this.

When you shrink a data file SQL Server goes in and recovers all the unused pages, during the process it is giving that space back to the OS so the space can be used somewhere else.  The downstream effect of this is going to be the fact your indexes are going to become fragmented.  This can be demonstrated in a simple test.

Friends don’t let friends auto-shrink.
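
If a database has already been set to auto-shrink, finding and turning it off is a quick fix (the database name below is illustrative):

    SELECT name
    FROM   sys.databases
    WHERE  is_auto_shrink_on = 1;

    ALTER DATABASE [YourDatabase] SET AUTO_SHRINK OFF;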
