Press "Enter" to skip to content

Day: January 18, 2016

Finding Object Counts

SQLWayne shows how to break down counts of objects by type:

And while it did the trick, I wanted, for no particular reason, to also have the total number of objects and the percentage. Again, no particular reason. It could probably be done with a window function, but that is also something I have limited familiarity with, so I decided to approach it as a CTE. And it works nicely. The objs CTE gives me a count of each object type, while the tots CTE gives me the count of all objects. By giving each CTE a column with the value of 1, it’s easy to join them together and then calculate a percentage.
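The pattern sketches out easily against sys.objects; the CTE, column, and alias names below are my own placeholders rather than SQLWayne's exact code:

-- Count each object type, the grand total, and each type's share of the total.
WITH objs AS
(
    SELECT 1 AS JoinKey, o.type_desc, COUNT(*) AS ObjectCount
    FROM sys.objects o
    GROUP BY o.type_desc
),
tots AS
(
    SELECT 1 AS JoinKey, COUNT(*) AS TotalObjects
    FROM sys.objects
)
SELECT o.type_desc,
       o.ObjectCount,
       t.TotalObjects,
       CAST(100.0 * o.ObjectCount / t.TotalObjects AS decimal(5, 2)) AS PctOfTotal
FROM objs o
    INNER JOIN tots t
        ON o.JoinKey = t.JoinKey
ORDER BY o.ObjectCount DESC;

The constant JoinKey column exists only to give the two CTEs something to join on.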

That’s one of the nicest things about SQL as a language:  you access metadata the same way you access regular data, so that technique can be used to query other data sets as well.


Code Coverage In SSDT

Ed Elliott continues to amaze me.  This time, he’s got a code coverage tool for T-SQL code:

If we execute this stored procedure, we can monitor and show a) how many statements there are in it and b) which statements have been called, but we can’t see which branches of the case statement were actually called. If it were a compiled language like C#, where we have a profiler that can alter the assembly, then we could find out exactly what was called, but I personally think knowing which statements are called is way better than having no knowledge of what level of code coverage we have.
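To make the statement-versus-branch distinction concrete, here is a made-up procedure along those lines: a statement-level tool can report that the lone SELECT executed, but not which arm of the CASE expression produced the value.

-- Hypothetical example: one coverable statement, three possible CASE branches.
CREATE PROCEDURE dbo.GetOrderStatusLabel
    @Status tinyint
AS
BEGIN
    SELECT CASE @Status
               WHEN 1 THEN 'Open'
               WHEN 2 THEN 'Shipped'
               ELSE 'Unknown'
           END AS StatusLabel;
END
GO

-- Running it once marks the statement as covered, whichever branch was taken.
EXEC dbo.GetOrderStatusLabel @Status = 1;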

Yet another reason to grab the SSDT Dev Pack.  By this point, I expect there to be a couple more reasons next week…


Reducing Ad Hoc Query Risk

Kenneth Fisher has some tips to reduce the risk of running ad hoc queries:

  • Make sure that this is the ONLY code in your window or that you are protected by a RETURN or SET EXECUTION OFF at the top of your screen. I have this put in place by default on new query windows. This protects you from running too much code by accident.

  • Make a habit of checking what instance you are connected to before running any ad-hoc code. Running code meant for a model or test environment in production can be a very scary thing.

This is good advice.
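For the first tip, a minimal sketch of the guard looks like this, assuming the intent is SET NOEXEC ON (which compiles but does not execute what follows) or a plain RETURN; the statement below the guard is a placeholder:

SET NOEXEC ON;   -- everything below is parsed and compiled, but not executed
-- RETURN;       -- alternative guard: unconditionally exits the batch here

-- Potentially dangerous ad hoc work goes below the guard.
UPDATE dbo.SomeTable
SET SomeColumn = 0
WHERE SomeId = 42;

SET NOEXEC OFF;  -- statements after this point would execute again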


TVF Actual Execution Plans

Kevin Eckart shows us how to get table-valued function execution plan details:

While the estimated plan gives us all kinds of information, the actual plan keeps the underlying operations hidden in favor of a Clustered Index Scan and a TVF operator. This isn’t very useful when it comes to troubleshooting performance issues, especially if your query has multi-table joins to the TVF.
Thankfully, this is where Extended Events (EE) comes into play. By using EE, we can capture the Post Execution Showplan that will give us the actual full plan behind the Clustered Index Scan and TVF operators.
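A bare-bones event session along those lines might look like the following; the session name, database filter, and file target are placeholders, and the event behind "Post Execution Showplan" is sqlserver.query_post_execution_showplan:

-- Capture actual (post-execution) plans, scoped to one database to limit overhead.
CREATE EVENT SESSION CaptureTvfActualPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    ACTION (sqlserver.sql_text)
    WHERE (sqlserver.database_name = N'MyDatabase')
)
ADD TARGET package0.event_file (SET filename = N'CaptureTvfActualPlans.xel');
GO

ALTER EVENT SESSION CaptureTvfActualPlans ON SERVER STATE = START;
-- Run the query that calls the TVF, inspect the captured plan, then stop promptly:
ALTER EVENT SESSION CaptureTvfActualPlans ON SERVER STATE = STOP;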

As Kevin notes, this extended event runs the risk of degrading performance, so don’t do this in a busy production environment.


PowerShell + SQL Server

Shawn Melton provides an introduction to various ways to interact with a SQL Server instance via PowerShell:

The most commonly known cmdlet out of this module is Invoke-Sqlcmd. This is generally thought of as a PS replacement for the old sqlcmd command-line utility, which to date is still available in currently supported versions of SQL Server. You utilize this cmdlet to execute any T-SQL query that you want against one or multiple instances. The advantage you get using Invoke-Sqlcmd over the command-line utility is the power of handling output in PS. The output from the cmdlet is created as a DataTable (System.Data.DataRow is the exact type).
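As a quick sketch of that workflow (the instance, database, and query are placeholders; the module is SqlServer in current versions and was SQLPS at the time of this post):

# Load the module that ships Invoke-Sqlcmd.
Import-Module SqlServer

$rows = Invoke-Sqlcmd -ServerInstance 'localhost\SQL2016' -Database 'master' `
    -Query 'SELECT name, create_date FROM sys.databases;'

# Each result is a System.Data.DataRow, so it slots straight into the pipeline.
$rows | Where-Object { $_.create_date -gt (Get-Date).AddYears(-1) } |
    Sort-Object name |
    Format-Table name, create_date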

This is a good overview of the different methods available.


JSON Parsing Performance

Jovan Popovic answers a question I’ve had on my mind:

One of the first questions people asked once we announced JSON support in SQL Server 2016 was “Would it be slow?” and “How fast can you parse JSON text?”. In this post, I will compare the performance of JSON parsing with the JSON_VALUE function against the XML and string functions.
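The shape of the comparison looks something like this; the sample document and element names are invented for illustration:

-- The three flavors under comparison: JSON_VALUE, the xml value() method, and string functions.
DECLARE @json nvarchar(max) = N'{"name": "Widget", "price": 9.99}';
DECLARE @xml  xml           = N'<product><name>Widget</name><price>9.99</price></product>';

SELECT JSON_VALUE(@json, '$.name') AS NameFromJson;

SELECT @xml.value('(/product/name)[1]', 'nvarchar(100)') AS NameFromXml;

-- Manual parsing with CHARINDEX/SUBSTRING, the pre-2016 string-function approach.
DECLARE @start int = CHARINDEX(N'"name": "', @json) + 9;
SELECT SUBSTRING(@json, @start, CHARINDEX(N'"', @json, @start) - @start) AS NameFromString;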

The short answer is, JSON parsing should be faster than XML but slower than our historical T-SQL parsing functions.
