Press "Enter" to skip to content

Category: T-SQL Tuesday

Most-Voted Connect Items

Adam Machanic peruses the top-rated Connect items:

Magic numbers! That sounds cool. Except it’s not. It’s a horrible drain on readability and a pox on the manageability of your code. But we need them, because every database out there has “lookup tables” with specific values that we need to predicate on throughout the code base. SQL Server could solve this problem once and for all by allowing users to promote these values to named enumerators. How cool would that be?!? 220 votes, nine years.
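The pain is easy to sketch. The table and status values below are hypothetical, and the second query shows the sort of workaround we're stuck with until something like named enumerators exists:

    -- The magic-number version: what does 3 mean? Go dig through the lookup table.
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderStatusID = 3;

    -- Today's workaround: resolve the name into a variable at runtime.
    -- A promoted, named enumerator would let the engine do this for us.
    DECLARE @StatusShipped int =
        (SELECT OrderStatusID FROM dbo.OrderStatus WHERE StatusName = N'Shipped');

    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderStatusID = @StatusShipped;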

And finally, one more from Mr. Ben-Gan, a clever idea for improving TOP with the OVER clause. I love the OVER clause. I love TOP. Their offspring would be beautiful indeed… 180 voters over the past nine years have shared in this sentiment.
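As for TOP with OVER: today the standard workaround is ROW_NUMBER in a CTE, which the proposed syntax would collapse into a single clause. A sketch, with hypothetical table and column names:

    -- "TOP (1) per group" today: number rows within each group, then filter.
    WITH Ranked AS
    (
        SELECT CustomerID, OrderID, OrderDate,
               ROW_NUMBER() OVER (PARTITION BY CustomerID
                                  ORDER BY OrderDate DESC) AS rn
        FROM dbo.Orders
    )
    SELECT CustomerID, OrderID, OrderDate
    FROM Ranked
    WHERE rn = 1;  -- the most recent order per customer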

This is an interesting list.


Estimated Number Of Rows To Be Read

Rob Farley discusses a successful Connect item’s implementation:

I opened it up, and sure enough, no sign of that 7,276 value. It looks just the same as the estimated plan I just showed.

Getting plans out of the cache is where the estimated values come into their own. It’s not just that I’d prefer to not actually run potentially-expensive queries on customer databases. Querying the plan cache is one thing, but running queries to get the actuals – that’s a lot harder.

With SQL 2016 SP1 installed, thanks to that Connect item, I can now see the Estimated Number of Rows to be Read property in estimated plans, and in the plan cache. The operator tooltip shown here is taken from the cache, and I can easily see that Estimated property showing 7,276. This is shown from Management Studio because Plan Explorer doesn’t yet call out this property explicitly.
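If you want to go hunting for the new property in your own plan cache, something like the following should get you close. I'm assuming the showplan XML attribute is named EstimatedRowsRead, matching the SSMS property, so treat this as a sketch rather than gospel:

    -- Find cached plan operators exposing the new attribute (SQL Server 2016 SP1+).
    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (20)
           op.value('@PhysicalOp', 'nvarchar(60)') AS PhysicalOp,
           op.value('@EstimatedRowsRead', 'float') AS EstimatedRowsRead,
           op.value('@EstimateRows', 'float')      AS EstimateRows,
           qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    CROSS APPLY qp.query_plan.nodes('//RelOp') AS r(op)
    WHERE op.exist('@EstimatedRowsRead') = 1;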

If you’re on SQL Server 2016 SP1 (or planning to move to it), read the whole thing; this will make query tuning a bit easier, without having to run those horribly expensive queries.


Wanted: Database-Level Wait Stats

Arun Sirpal would like to see database-level wait stats:

Wait stats are my “go-to” thing: when you want to dig into performance issues, everyone knows you will probably end up using sys.dm_os_wait_stats. You cannot use this in the Azure world; you have to use a DMV that is scoped to the database level. I think it would be a nice idea to have this for the “earth”-based SQL Servers – the ability to return information about all the waits encountered by threads that executed at the database level.

The Connect item can be found at this link: https://connect.microsoft.com/SQLServer/feedback/details/2647332/sys-dm-db-wait-stats-dmv
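For reference, here's what the database-scoped DMV looks like on the Azure side; the request is essentially to bring the same view on-premises:

    -- Azure SQL Database: waits scoped to the current database.
    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_db_wait_stats
    ORDER BY wait_time_ms DESC;

    -- On-premises today, the closest you get is the instance-wide view:
    -- SELECT ... FROM sys.dm_os_wait_stats;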

I like this idea.


Wanted: Sorting Stacked Charts In Power BI

Melissa Connors would like to control stack order for a stacked chart in Power BI:

Currently, it’s displaying the stacked column in alphabetical order from the bottom up (coyote/red on the bottom, fox/black in the middle, and wolf/gray on the top). I want the ability to display it by the count for each type, not the name. In this example, I want the order of coyote, wolf, and fox. I believed that there must be a way to control the order within a stacked column, so I clicked all over the UI and fired up the Google.

Read the whole thing and vote for this Power BI idea if it’s interesting.


Wanted: Forced Parallelism

James Anderson would like a query hint to force parallelism:

I would like to see a hint that causes the optimiser to consider a parallel plan no matter the cost of the query. It’s possible to get this behaviour with trace flag 8649 but it’s unsupported by Microsoft and therefore unfit for production use.

I only tend to use query hints as a very last resort. It’s almost always better to allow the optimiser to make these decisions and continue to reevaluate them as your data changes, but sometimes a hint can be a get-out-of-jail-free card.

I’ve written about this before and so has Paul White. Paul created this Connect item for something similar. Please vote it up if you think it could help with your query tuning.

Click through for the full argument, and then hit the Connect item if you agree.
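For the curious, the unsupported workaround mentioned above looks like this. QUERYTRACEON requires sysadmin membership (or a plan guide), the trace flag is undocumented, and the table is hypothetical, so treat this strictly as a demo:

    -- Undocumented and unsupported: remove the optimizer's cost-based
    -- reluctance to pick a parallel plan (trace flag 8649).
    SELECT o.CustomerID, COUNT(*) AS OrderCount
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID
    OPTION (QUERYTRACEON 8649);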


String Or Binary Data

Lori Edwards wants to fix one of the most annoying error messages in SQL Server history:

The Connect item that I’m promoting today is Connect #339410 : Please fix the “String or binary data would be truncated” message to give the column name. This Connect item was opened in 2008 and has 1,328 upvotes. If we can create virtual realities, we should be able to add a column name to an error message. Help us, Microsoft, you’re our only hope.

This error message is incredibly annoying, particularly because the engine seems like it should know which row hit the error and in which column. We finally got information on which row caused a primary or unique key constraint violation, and that was helpful; this would be at least as helpful.
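The complaint takes two lines to reproduce; note that the error names neither the column nor the offending value:

    CREATE TABLE #t (SomeColumn varchar(5));

    -- Fails with Msg 8152, "String or binary data would be truncated."
    -- Which column? Which row? The message won't say.
    INSERT INTO #t (SomeColumn) VALUES ('this string is too long');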


Connect Items Galore

Aaron Bertrand has a series of Connect items of interest:

There are so many active Connect items, it’s difficult to choose just one, but after spending some time looking at the generate_series function in PostgreSQL, the one at the top of my list right now is this suggestion from Erland Sommarskog (blog):

Connect #258733 : Add a built-in table of numbers

I think it would be cheap and easy for SQL Server to provide a default table of numbers, always in memory, just like any other catalog view or DMV. This thing can use compression now, regardless of edition, so even a table with 2 billion rows will only take 13 MB, and should be easy to populate either at start-up or on first use. I could list out all of the potential uses for a numbers table, but they’re fairly well documented in the following sources (and most use cases don’t need anywhere near 2 billion values, so maybe the range could be defined using a sys.configurations or database-level SCOPED CONFIGURATION setting).
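Until something like that ships, the usual workaround is to generate numbers on the fly; here's a minimal sketch of the well-known stacked-CTE pattern:

    -- Generate 1 through 1,000,000 with no physical table.
    WITH
        L0 AS (SELECT 1 AS c FROM (VALUES (1),(1),(1),(1),(1),
                                          (1),(1),(1),(1),(1)) AS v(c)), -- 10 rows
        L1 AS (SELECT 1 AS c FROM L0 AS a CROSS JOIN L0 AS b),  -- 100 rows
        L2 AS (SELECT 1 AS c FROM L1 AS a CROSS JOIN L1 AS b),  -- 10,000 rows
        L3 AS (SELECT 1 AS c FROM L2 AS a CROSS JOIN L2 AS b)   -- 100,000,000 rows
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM L3;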

Click through for additional Connect items, and please vote on any Connect items which catch your eye.


Backing Up Analysis Services

Jens Vestergaard shows how to take backups of Analysis Services cubes:

I have not yet met a setup where applying compression was not an option. Obviously this has a penalty cost on CPU while executing the backup, and will affect the rest of the tasks running on the server (even if you have your data and backup directories on different drives). But in my experience, the impact is negligible.

This may not be the case with the encryption option, as it has a much larger footprint on the server. You should use it with some caution in production; test on smaller subsets of the data if in doubt.
Another thing to keep in mind, as always when dealing with encryption: remember the password. There is no way of retrieving the data other than with the proper password.

My goal is to be able to rebuild any cube from the relational database, but even with that goal in mind, it is smart to have backups.


Transaction Log Operations And Backups

John Deardurff explains what happens in the transaction log when you restore a backup:

In the example, the database performed a checkpoint at noon and a backup had been taken at that time. The restore process will capture all the transactions up until the point the database had been backed up. After the database has been restored, the recovery process will roll forward transactions 2 and 4 because they had been committed to the transaction log before the point of failure. Since transactions 3 and 5 did not commit before the time of system failure, the undo process will roll back the transactions to keep the data in a consistent state.

Read the whole thing.
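The roll-forward and roll-back John describes happen during the recovery phase of a restore. A sketch of the sequence, with a hypothetical database name and backup paths:

    -- Restore the full backup, leaving the database unrecovered
    -- so that log backups can still be applied.
    RESTORE DATABASE Sales
    FROM DISK = N'C:\Backups\Sales_full.bak'
    WITH NORECOVERY;

    -- Applying the last log WITH RECOVERY kicks off the recovery phase:
    -- committed transactions are rolled forward, uncommitted ones rolled back.
    RESTORE LOG Sales
    FROM DISK = N'C:\Backups\Sales_log1.trn'
    WITH RECOVERY;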


Backup Basics

Aaron Bertrand covers reasons for backups, backup models, and also a vital part of the backup process:

Now, all of the above may be review for you, but a much more important part of this story is that you need to be TESTING your backups. I’ve seen many customers who have been happily taking backups and storing them on some drive somewhere, and then when disaster strikes and they actually need to restore them, they can’t – maybe they had been backing up corruption all along, or the backups were failing but they were ignoring alerts, or they weren’t taking log backups frequently enough to meet their RPO, or they were only taking full backups.

Testing backups is vital; just because the backup process reported success doesn’t mean that you’ll be able to restore that backup when the time comes. It’s also good to drill people on restoration skills, as things get a bit more stressful when three levels of management are standing behind your chair asking what’s taking so long.
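A cheap first check is RESTORE VERIFYONLY, but the only real test is an actual restore followed by a consistency check. A sketch, with hypothetical paths and logical file names:

    -- Cheap check: the file is readable and the backup header is intact.
    -- This does NOT prove the data inside is corruption-free.
    RESTORE VERIFYONLY FROM DISK = N'C:\Backups\Sales_full.bak';

    -- The real test: restore under a scratch name, then run DBCC CHECKDB.
    RESTORE DATABASE Sales_Test
    FROM DISK = N'C:\Backups\Sales_full.bak'
    WITH MOVE N'Sales'     TO N'C:\Scratch\Sales_Test.mdf',
         MOVE N'Sales_log' TO N'C:\Scratch\Sales_Test.ldf',
         RECOVERY;

    DBCC CHECKDB (Sales_Test) WITH NO_INFOMSGS;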
