Press "Enter" to skip to content

January 2016

Result Sets

Kenneth Fisher learns and teaches us about RESULT SETS:

Quick definition: a result set is the output of a query. It could be a one-row, one-column output or a 100+ column, million+ row output. Either way, that’s a result set. Note: you can have multiple result sets from a single object (stored procedure, function, etc.) call.

The WITH RESULT SETS clause was introduced in SQL Server 2012, and there are a couple of security-related scenarios in which it is helpful.  It also lets you rename columns in stored procedure calls, if you’re into that sort of thing.
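If you haven’t seen the syntax, here’s a minimal sketch (the procedure and column names are mine, purely illustrative):

-- The clause enforces the shape of the output and renames the columns.
EXEC dbo.GetCustomers
WITH RESULT SETS
(
    (
        CustomerID   INT,
        CustomerName NVARCHAR(100)
    )
);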


Clear The Query Store

Grant Fritchey shows how to clear the Query Store in SQL Server 2016:

While setting up example code for my presentation at SQL Cruise (which is going to be a fantastic event), I realized I wanted to purge all the data from my Query Store, just for testing. I did a series of searches to try to track down the information and it just wasn’t there. So, I did what anyone who can phrase a question in less than 140 characters should do, I posted a question to Twitter using the #sqlhelp hash tag.

You can also call sp_query_store_remove_query to remove a specific query from the Query Store.
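For reference, a quick sketch of both options (the database name and query_id value are placeholders):

-- Purge everything from the Query Store:
ALTER DATABASE [MyDatabase] SET QUERY_STORE CLEAR;

-- Or remove a single query, along with its plans and runtime stats, by query_id:
EXEC sys.sp_query_store_remove_query @query_id = 42;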


Mirrored Backups

Sean McCown talks about mirrored backups:

By mirroring backups, you’re saying that you want to back up to two locations simultaneously.  So let’s say you need to back up your DBs to a local SAN drive, but you also need to send them to another data center in case something happens to your local SAN.  The way to do that in SQL is with mirrored backups, and the syntax looks like this:

BACKUP DATABASE MyDB TO DISK = 'G:\MyDB.trn' MIRROR TO DISK = '\\DC1\MyDB.trn'

So above you can see that SQL will write both of these files at once, giving you a good amount of redundancy for your DB backups.  However, this can go wrong when your network isn’t stable or when the link to the other data center is slow, so you should only mirror backups when you can pretty much guarantee that it won’t fail or lag.  And as you can guess, that’s a heavy burden to put on most networks.  In the situation last week that spawned this blog, the network went down for something like nine hours and caused the DB’s log to not be backed up that entire time, and hence the log grew and grew.  Now you’re in danger of bringing prod down, and that’s clearly not what your backup strategy should do.
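One note on the syntax: creating a new mirrored media set requires WITH FORMAT, so a fuller sketch looks something like this (paths illustrative):

BACKUP DATABASE MyDB
    TO DISK = 'G:\Backups\MyDB.bak'
    MIRROR TO DISK = '\\DC1\Backups\MyDB.bak'
WITH FORMAT;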

Sean talks about alternatives and then talks about how they’ve gotten around the problem with Minion Backup.  If you haven’t tried Minion Backup, it is well worth your time; it’s already a great product and I use it in a production environment I support.


Finding Object Counts

SQLWayne shows how to break down counts of objects by type:

And while it did the trick, I wanted, for no particular reason, to also have the total number of objects and the percentage.  Again, no particular reason.  It might be possible with a window function, but that is also something I have limited familiarity with, so I decided to approach it as a CTE.  And it works nicely.  The objs CTE gives me a count of each object type while the tots CTE gives me the count of all objects.  By giving each CTE a column with the value of 1, it’s easy to join them together and then calculate a percentage.
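Here’s a rough sketch of that pattern as described (not SQLWayne’s exact code):

-- Each CTE carries a constant JoinKey = 1 so the single total row
-- can be joined to every per-type row.
WITH objs AS
(
    SELECT 1 AS JoinKey, type_desc, COUNT(*) AS ObjectCount
    FROM sys.objects
    GROUP BY type_desc
),
tots AS
(
    SELECT 1 AS JoinKey, COUNT(*) AS TotalObjects
    FROM sys.objects
)
SELECT o.type_desc,
       o.ObjectCount,
       t.TotalObjects,
       CAST(100.0 * o.ObjectCount / t.TotalObjects AS DECIMAL(5, 2)) AS PercentOfTotal
FROM objs o
    JOIN tots t ON o.JoinKey = t.JoinKey
ORDER BY o.ObjectCount DESC;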

That’s one of the nicest things about SQL as a language:  you access metadata the same way you access regular data, so that technique can be used to query other data sets as well.


Code Coverage In SSDT

Ed Elliott continues to amaze me.  This time, he’s got a code coverage tool for T-SQL code:

If we execute this stored procedure, we can monitor and show a) how many statements there are in it and b) which statements have been called, but we can’t see which branches of the CASE statement were actually called. If it were a compiled language like C#, where we have a profiler that can alter the assembly, then we could find out exactly what was called, but I personally think knowing which statements are called is way better than having no knowledge of what level of code coverage we have.
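A tiny example of why statement-level coverage can’t see into branches (hypothetical procedure, not from Ed’s post):

CREATE PROCEDURE dbo.GetSizeLabel
    @n INT
AS
BEGIN
    -- This is one statement as far as statement-level coverage is concerned,
    -- even though the CASE expression has three branches; a single execution
    -- "covers" it regardless of which branch was taken.
    SELECT CASE
               WHEN @n < 10  THEN 'small'
               WHEN @n < 100 THEN 'medium'
               ELSE 'large'
           END AS SizeLabel;
END;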

Yet another reason to grab the SSDT Dev Pack.  By this point, I expect there to be a couple more reasons next week…


Reducing Ad Hoc Query Risk

Kenneth Fisher has some tips to reduce the risk of running ad hoc queries:

  • Make sure that this is the ONLY code in your window, or that you are protected by a RETURN or SET NOEXEC ON at the top of your screen (see the sketch after this list). I have this put in place by default on new query windows. This protects you from running too much code by accident.

  • Make a habit of checking which instance you are connected to before running any ad hoc code. Running code meant for a model or test environment in production can be a very scary thing.
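Here’s a minimal sketch of that first guard technique (the statement below the guard is purely illustrative):

-- With a guard at the top, accidentally executing the whole window is harmless.
SET NOEXEC ON;   -- compile everything below, but execute nothing
-- RETURN;       -- alternative guard: exits the batch immediately

-- Highlight and run only the statements you actually intend to execute
-- (and remember SET NOEXEC OFF when you do).
DELETE FROM dbo.ImportantTable WHERE ImportantID = 42;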

This is good advice.


TVF Actual Execution Plans

Kevin Eckart shows us how to get table-valued function execution plan details:

While the estimated plan gives us all kinds of information, the actual plan keeps the underlying operations hidden in favor of a Clustered Index Scan and a TVF operator. This isn’t very useful when it comes to troubleshooting performance issues, especially if your query has multi-table joins to the TVF.
Thankfully, this is where Extended Events (EE) comes into play. By using EE, we can capture the Post Execution Showplan event that will give us the actual full plan behind the Clustered Index Scan and TVF operators.

As Kevin notes, this extended event runs the risk of degrading performance, so don’t do this in a busy production environment.
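For reference, a minimal session for this event might look something like the following (the session name, database filter, and target are illustrative):

-- query_post_execution_showplan captures the actual plan for every statement
-- it sees, which is what makes it expensive; filter aggressively.
CREATE EVENT SESSION CaptureActualPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    WHERE sqlserver.database_name = N'MyDatabase'
)
ADD TARGET package0.event_file (SET filename = N'CaptureActualPlans');

ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = START;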


PowerShell + SQL Server

Shawn Melton provides an introduction to various ways to interact with a SQL Server instance via PowerShell:

The most commonly known cmdlet out of this module is Invoke-Sqlcmd. This is generally thought of as a PS replacement for the old sqlcmd command-line utility, which to date is still available in currently supported versions of SQL Server. You utilize this cmdlet to execute any T-SQL query that you want against one or multiple instances. The advantage you get using Invoke-Sqlcmd over the command-line utility is the power of handling output in PS. The output from the cmdlet comes back as rows of a DataTable (System.Data.DataRow is the exact type).

This is a good overview of the different methods available.


JSON Parsing Performance

Jovan Popovic answers a question I’ve had on my mind:

One of the first questions that people asked once we announced JSON support in SQL Server 2016 was “Would it be slow?” and “How fast can you parse JSON text?” In this post, I will compare the performance of JSON parsing with the JSON_VALUE function against the equivalent XML and string functions.

The short answer is that JSON parsing should be faster than XML, but slower than the classic T-SQL string functions.
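As a quick illustration of the two approaches being compared (the sample data is mine, not Jovan’s benchmark):

DECLARE @json NVARCHAR(MAX) = N'{"customer":{"name":"Contoso"}}';
DECLARE @xml  XML = N'<customer><name>Contoso</name></customer>';

-- Extract the same scalar value from equivalent JSON and XML fragments:
SELECT JSON_VALUE(@json, '$.customer.name')               AS NameFromJson,
       @xml.value('(/customer/name)[1]', 'NVARCHAR(100)') AS NameFromXml;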


Tracking Changed Data In Standard Edition

Mickey Stuewe wants to track changed data, but has to use Standard Edition:

I use a pattern that includes four fields on all transactional tables. This (absolutely) includes lookup tables too. The two table types that are exceptions to this pattern are audit tables and error tables. I’ll cover why later in this article.

The four fields are CreatedOn, CreatedBy, UpdatedOn, and UpdatedBy. The dates should be DATETIME2. CreatedOn is the easiest to populate: you can create a default on the field to populate it with GetDate().

This is a common pattern and works pretty well.  The trick is making sure that you keep that metadata up to date.
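Here’s a sketch of the pattern on a hypothetical table; Mickey suggests GetDate() for the default, while SYSDATETIME() matches DATETIME2 precision:

CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY(1, 1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    CreatedOn    DATETIME2 NOT NULL
        CONSTRAINT DF_Customer_CreatedOn DEFAULT (SYSDATETIME()),
    CreatedBy    NVARCHAR(128) NOT NULL
        CONSTRAINT DF_Customer_CreatedBy DEFAULT (SUSER_SNAME()),
    UpdatedOn    DATETIME2 NULL,    -- populated on each update, e.g. by a trigger
    UpdatedBy    NVARCHAR(128) NULL
);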
