I dunno about you, but I got a big stocking full of coal. Next year, I’m gonna be better, and I plan on asking Santa for a whole bunch of Connect requests. For T-SQL Tuesday, I asked you to name your favorite SQL Server bugs & enhancement requests, and here’s what you want in your stocking next year.
If you agree with a feature, click on it, and upvote its Connect request. These bloggers took the time to make their case – now it’s time for you to vote.
29 separate links, so there’s a lot of reading here.
Here’s the Connect Item involved – written by Michael Swart.
Trace flag 2390 can cause large compile time when dealing with filtered indexes. (Active)
Huh. “What does that have to do with clustered columnstore indexes?” you might ask. No, really, it’d be a personal favor if you *did* ask…
Ask and then read on. This is the kind of post I’d send to someone wanting to learn how to troubleshoot issues.
Two of these requests have Connect items, which I’m listing below. The first allows you to change the location of Query Store data to reside somewhere in the user database besides the PRIMARY filegroup:
The other request is related to exporting that data, which is technically possible now, but it’s not a supported method, so it’s not something I really want to implement in a client environment. I’ve had many people describe a testing process that includes restoring databases nightly. If they’re using Query Store as part of that testing, the restore wipes out that data every night.
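To make the “technically possible but unsupported” part concrete, here’s a rough sketch of what the export amounts to today: copying the Query Store catalog views out into a user table before a restore wipes them. The catalog views are real; the archive table name is a hypothetical placeholder, and this is exactly the kind of roll-your-own approach the Connect item would make unnecessary:

```sql
-- Unsupported-workaround sketch: snapshot Query Store contents
-- into an archive table (dbo.QueryStoreSnapshot is a made-up name).
SELECT  qsq.query_id,
        qsqt.query_sql_text,
        qsp.plan_id,
        qsrs.avg_duration,
        qsrs.count_executions
INTO    dbo.QueryStoreSnapshot
FROM    sys.query_store_query AS qsq
JOIN    sys.query_store_query_text AS qsqt
        ON qsqt.query_text_id = qsq.query_text_id
JOIN    sys.query_store_plan AS qsp
        ON qsp.query_id = qsq.query_id
JOIN    sys.query_store_runtime_stats AS qsrs
        ON qsrs.plan_id = qsp.plan_id;
```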
Click through for more and vote up those items relevant to you.
The Connect Item I have chosen to write about is an old one: getting Intellisense for MDX in SQL Server Management Studio (SSMS). Despite being created back in 2009 by Jamie Thomson (b|l|t), it is still active, and the Analysis Services team publicly acknowledged at the time that they would consider the request for an upcoming release. 2009, still active… True story.
Read on for more details and be sure to join Jens’s quixotic quest if you’d like to see MDX Intellisense.
One Event Behind
In another post, I wrote that the XEvents event_stream target is regularly “one event behind”. There is an existing Connect item seeking a fix to this problem: QueryableXEventData and “Watch Live Data” one event behind. If you use the “Watch Live Data” grid for XEvents in SSMS, this is an important issue and worthy of your upvote. It’s also important if you ever want to access XEvent data programmatically with C# or PowerShell, because the QueryableXEventData class uses the event_stream target and is also subject to the issue.
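For context, here’s a minimal sketch of the kind of session where this bites; the session name and event choice are illustrative, not from the original post. Both the SSMS “Watch Live Data” grid and programmatic QueryableXEventData consumers read the same live event stream, which is where the lag shows up:

```sql
-- Illustrative Extended Events session (name and event are assumptions).
CREATE EVENT SESSION WatchLogins ON SERVER
ADD EVENT sqlserver.login;
GO
ALTER EVENT SESSION WatchLogins ON SERVER STATE = START;
-- In SSMS: right-click the session and choose "Watch Live Data" --
-- the grid (and any QueryableXEventData consumer) can trail by one event.
```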
Read on for more details.
And his proposed solution:

Add a new column to sys.dm_db_log_space_usage or sys.database_recovery_status called LastLogBackupTime.
I LOVE this idea…back up the T-log more frequently during busy times, less often during off hours. At my current client, there is almost nothing happening outside of a 12 hour workday window, so this would be perfect here.
Now, I am possibly misunderstanding Ola’s request or the intent…and that’s ok. A query against the msdb..backupset table already surfaces this info with a relatively short amount of code:
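The original query isn’t reproduced here, but a sketch of the idea looks something like this (the alias and output column names are mine, not from the post):

```sql
-- Last transaction log backup time per database, from backup history.
-- Type 'L' in msdb.dbo.backupset means a transaction log backup.
SELECT  d.name AS database_name,
        MAX(b.backup_finish_date) AS last_log_backup_time
FROM    sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
        ON b.database_name = d.name
       AND b.type = 'L'
WHERE   d.recovery_model_desc <> 'SIMPLE'
GROUP BY d.name;
```

The downside, and one argument for Ola’s DMV column, is that this depends on msdb backup history being intact and local to the instance.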
Click through for more details as well as Ola’s Connect item.
Now try this:

DECLARE @OrderID int = NULL, @OrderType int = 1, @Qty int = 2, @ServiceSpeed int = 3;
SET @OrderID = dbo.GetOrderID (@OrderType, @Qty, @ServiceSpeed);
SELECT @OrderID 'Using SET Syntax';
Now you get a NULL back from the final SELECT. What happened? If you are a careful code reviewer, you might have spotted that the function definition has the @Qty and @ServiceSpeed parameters flipped as compared to the table definition and how we’re calling the function.
But this isn’t an error. There’s no obvious indication that anything is wrong. Imagine if instead of NULL, which would probably break something, you got a different order ID back. Your program would silently continue, oblivious to what is essentially data corruption.
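The full function isn’t reproduced here, so this is a hypothetical reconstruction of the setup. The table and column names are assumptions; the key detail is that the function declares @ServiceSpeed before @Qty, opposite to the order in which callers pass them:

```sql
-- Hypothetical reconstruction: parameter order differs from the
-- table definition and from how every caller passes arguments.
CREATE FUNCTION dbo.GetOrderID
(
    @OrderType    int,
    @ServiceSpeed int,   -- flipped...
    @Qty          int    -- ...relative to the calling convention
)
RETURNS int
AS
BEGIN
    RETURN
    (
        SELECT OrderID
        FROM dbo.Orders   -- hypothetical table
        WHERE OrderType = @OrderType
          AND Qty = @Qty
          AND ServiceSpeed = @ServiceSpeed
    );
END;
```

A caller passing (@OrderType, @Qty, @ServiceSpeed) positionally silently feeds the quantity into @ServiceSpeed and vice versa, which is exactly what named-parameter support for scalar function calls would prevent.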
And if you build a function with a large number of parameters, it gets that much easier to accidentally swap two of them. Click through for the rest of the story, and check out Riley’s Connect item.
Let’s take a look at what is being asked here. Using the 32-bit integer as an example, we currently have a data type that can accept a range between negative two billion and two billion. But if negative numbers aren’t required, we can use those same 32 bits to store numbers between zero and four billion. Why, goes the question, throw away that perfectly useful upper range by always reserving space for a negative range we may not need?
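To put numbers on it, here is how today’s signed int behaves; the unsigned variant described in the comments is hypothetical, since SQL Server has no UNSIGNED modifier:

```sql
DECLARE @maxint int = 2147483647;   -- 2^31 - 1, the signed INT ceiling
SELECT @maxint;                     -- fine
-- SELECT @maxint + 1;              -- arithmetic overflow error
-- A hypothetical unsigned 32-bit type would cover 0 through
-- 4,294,967,295 in the same 4 bytes, reclaiming the negative half
-- of the range for columns that never hold negative values.
```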
I appreciate Ewald’s thoughtfulness here in working out the value of the request as well as some of the difficulties in building something which fulfills his desire. Great read.
First contender: Inserting to an indexed view can fail
What would happen if I told you that, when a table has an indexed view over it, inserting into the table can sometimes fail? Well, that’s what this Connect item from Dave_Ballantyne found, along with the reason.
Click through for more bugs.
The concept is very similar to a DEFAULT constraint, with two differences:
1. Will work on an UPDATE operation, without specifying DEFAULT
2. Could be configured to disallow the user from entering a value.

My proposed syntax was pretty simple:
AUTO [WITH OVERRIDE] (scalar expression)
Now I realize that 10 years ago, I didn’t take terribly long to consider that WITH was a terrible thing to add to the syntax, and AUTO is a keyword already, so I am going to rename it:

AUTO_DEFAULT (scalar expression, [option])

Since I have thought a bit more about this in the years since writing it, I realized there were a few more options that would be nice. I was terrible at syntax parsing in college, but the syntax itself is not important. Temporal tables in SQL Server 2016 have similar syntax for the new period columns, which I got really excited about the first time I saw it:

SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL

Maybe in vNext?
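For reference, the SQL Server 2016 temporal syntax mentioned above looks like this in full; the table name and non-period columns are illustrative, but the GENERATED ALWAYS / PERIOD FOR SYSTEM_TIME clauses are the real documented syntax:

```sql
CREATE TABLE dbo.Widget
(
    WidgetID     int           NOT NULL PRIMARY KEY,
    Name         nvarchar(100) NOT NULL,
    -- The engine maintains these two columns automatically:
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON);
```

An AUTO_DEFAULT along these lines would generalize that engine-maintained-column idea to arbitrary scalar expressions.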
Read the whole thing. Then check out the related Connect item Adam Machanic submitted. I’d love to see that functionality, given how frequently I create these metadata columns.