The concept is very similar to a DEFAULT constraint, with two differences:
1. Will work on an UPDATE operation, without specifying DEFAULT
2. Could be configured to disallow the user from entering a value.

My proposed syntax was pretty simple:
AUTO [WITH OVERRIDE] (scalar expression)
Now I realize that, 10 years ago, I didn’t take terribly long to consider that WITH was a terrible thing to add to the syntax, and AUTO is already a keyword, so I am going to rename it: AUTO_DEFAULT (scalar expression, [option]). Having thought a bit more about this in the years since writing it, I realized there were a few more options that would be nice. I was terrible at syntax parsing in college, but the syntax itself is not important. Temporal tables in SQL Server 2016 have similar syntax for their new period columns, which I got really excited about the first time I saw it: SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL. Maybe in vNext?
Read the whole thing. Then check out the related Connect item Adam Machanic submitted. I’d love to see that functionality, given how frequently I create these metadata columns.
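For reference, the SQL Server 2016 temporal syntax mentioned above appears in a full table definition like this (the table, column, and history-table names here are illustrative):

```sql
CREATE TABLE dbo.Employee
(
    EmployeeID   int PRIMARY KEY CLUSTERED,
    Name         nvarchar(100) NOT NULL,
    -- The period columns use the GENERATED ALWAYS syntax quoted above
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
```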
Why is my select code in there? That was designed to be a piece of test code. Shouldn’t the BEGIN..END after the AS define my procedure?
Actually, it doesn’t. The procedure doesn’t end until the CREATE PROCEDURE statement is terminated, and that termination comes from ending the batch. The CREATE PROCEDURE documentation has this limitation:
The CREATE PROCEDURE statement cannot be combined with other Transact-SQL statements in a single batch.
This means that anything else you have in that batch will be considered as part of the procedure, regardless of BEGIN..END.
Judicious usage of the GO statement can help keep you out of trouble.
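A quick sketch of the batch boundary (procedure and column names are illustrative):

```sql
CREATE PROCEDURE dbo.TestProc
AS
BEGIN
    SELECT 1 AS ProcResult;
END
GO  -- GO ends the batch, so everything below is a separate batch

-- Without the GO above, this SELECT would silently become
-- part of dbo.TestProc, despite sitting outside BEGIN..END.
SELECT 2 AS TestResult;
GO
```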
We can see that our FORCESCAN hint really did make things worse – while we shifted the cost away from the clustered index seek, the sort was actually much worse, even though the estimated costs deemed them relatively equivalent. More importantly, we can see that STRING_AGG() does offer a performance benefit, whether or not the concatenated strings need to be ordered in a specific way. As with STRING_SPLIT(), which I looked at back in March, I am quite impressed that this function scales well prior to “v1.”
Given that the early releases tend to be “get the thing working” and later CTPs are around “make the thing faster,” it’s nice to see that STRING_AGG() is already ready for prime-time, and makes me wonder if they’ll make it even faster by RTM.
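For anyone who hasn’t tried it yet, the two shapes being compared look roughly like this (the dbo.Products table is an assumption for illustration):

```sql
-- Unordered concatenation
SELECT STRING_AGG(Name, ', ') AS AllNames
FROM dbo.Products;

-- Ordered concatenation via the WITHIN GROUP clause
SELECT CategoryID,
       STRING_AGG(Name, ', ') WITHIN GROUP (ORDER BY Name) AS NamesInOrder
FROM dbo.Products
GROUP BY CategoryID;
```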
The results are so much easier to look at and comprehend, aren’t they? All object types for a schema are on a single line and it is easy for us to do impact analysis visually.
Sometimes doing it in T-SQL is the best approach, but pivoting is generally something which is cheaper in the application tier, whether you’re building a report, dashboard, or web app.
If you’re not a frequent SQL writer, the syntax can indeed be confusing. Especially GROUP BY and aggregations, which “infect” the rest of the entire SELECT clause, and things get really weird. When confronted with this weirdness, we have two options:
- Get mad and scream at the SQL language designers
- Accept our fate, close our eyes, forget about the syntax, and remember the logical order of operations

I generally recommend the latter, because then things start making a lot more sense, including the beautiful cumulative daily revenue calculation below, which nests the daily revenue (the SUM(amount) aggregate function) inside of the cumulative revenue (the SUM(...) OVER (...) window function):
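A sketch of that nesting in T-SQL (the payments table and its columns are assumptions):

```sql
-- The inner SUM(amount) is the per-day aggregate; the outer
-- SUM(...) OVER (...) runs a window over those daily totals.
SELECT
    payment_date,
    SUM(amount)                                   AS daily_revenue,
    SUM(SUM(amount)) OVER (ORDER BY payment_date) AS cumulative_revenue
FROM payments
GROUP BY payment_date
ORDER BY payment_date;
```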
Lukas explains things from an Oracle perspective, so not all of this matches T-SQL, but it’s close enough for comparison.
So what’s the problem? Well, to start with, by using the SELECT TOP 0 INTO Archive method of creating the Archive table, you’ve created the archive table with an identity column. Get rid of that identity property (you still want the column) and you don’t have a problem. But the really cool part of this is that it reminded me that the OUTPUT clause has a field list.
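A sketch of that OUTPUT field list (the table and column names here are assumptions): you name the archive table’s columns explicitly, so only the columns you care about flow across.

```sql
-- Delete old rows and copy selected columns into the archive in one statement
DELETE FROM dbo.Orders
OUTPUT deleted.OrderID, deleted.OrderDate, deleted.Amount
    INTO dbo.OrdersArchive (OrderID, OrderDate, Amount)
WHERE OrderDate < '20150101';
```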
Also check out the comments for another solution.
Anyway, this obsession had me thinking – does wrapping identifiers in square brackets save SQL Server any time? Does it say to the optimizer, “Hey, I PROMISE this whole thing inside these square brackets is an identifier. Cross my heart.” And the optimizer takes your code at its word and doesn’t look through its list of reserved keywords for one that matches AccountCreateDate or address_line_2?
The answer is… no. Throwing every identifier into square brackets doesn’t speed it up at all. Here’s the test:
Read on for the test.
This function removes all need to convert values just to concatenate text and string values together, but for some reason, whenever I see code posted on various websites, even from SQL Server MVPs, they never seem to utilise this function.
One excellent use for this is when you are attempting to create the SQL statement for a dynamic SQL execution. Instead of the usual conversions and + signs joining the text and parameters together, it’s far simpler.
CONCAT is one of my favorite T-SQL enhancements from 2012 (after window functions).
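A quick side-by-side sketch of the dynamic SQL case (the table name and parameter are illustrative):

```sql
DECLARE @TableName sysname = N'dbo.Orders';
DECLARE @MinAmount int = 100;

-- Without CONCAT: explicit conversion plus + operators
DECLARE @sql1 nvarchar(max) =
    N'SELECT COUNT(*) FROM ' + @TableName
    + N' WHERE Amount > ' + CONVERT(nvarchar(11), @MinAmount);

-- With CONCAT: non-string arguments are converted implicitly
DECLARE @sql2 nvarchar(max) =
    CONCAT(N'SELECT COUNT(*) FROM ', @TableName, N' WHERE Amount > ', @MinAmount);
```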
Neither is really any more typing than the other, and they both work the exact same way under the covers. So it would seem that the choice between CAST and CONVERT is merely a subjective style preference. If that were always true, however, you probably wouldn’t be reading this post.
There are other cases where you need to use CONVERT in order to output correct data or perform the right comparison. Some examples:
Read on for examples. My preference is CAST, mostly because it’s fewer characters to type. But there are certainly advantages to using CONVERT.
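The classic advantage is CONVERT’s style argument, which CAST simply doesn’t have:

```sql
-- CAST has no style parameter; the textual output of a date
-- depends on session/language settings
SELECT CAST(GETDATE() AS varchar(30));

-- CONVERT's third argument pins the format deterministically
SELECT CONVERT(varchar(8),  GETDATE(), 112);  -- yyyymmdd
SELECT CONVERT(varchar(30), GETDATE(), 126);  -- ISO 8601
```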
Now, let’s assume I’m interested in these things:
- How many payments were there in the same hour as any given payment?
- How many payments were there in the same hour before any given payment?
- How many payments were there within one hour before any given payment?
Those are three entirely different questions.
Lukas’s solution uses Oracle syntax, but most of it also applies to SQL Server 2012 and higher. The part that doesn’t apply, unfortunately, is the RANGE BETWEEN INTERVAL, which allows you to find values clustered in the same time period (one hour in his example).
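For the first question, one common T-SQL workaround is to partition on the truncated hour rather than a sliding interval (a sketch only; the Payments table and columns are assumptions, and this counts per clock hour, not a rolling 60-minute window):

```sql
-- Payments in the same clock hour as each payment
SELECT
    payment_id,
    payment_time,
    COUNT(*) OVER (
        -- DATEADD/DATEDIFF trick truncates payment_time to the hour
        PARTITION BY DATEADD(hour, DATEDIFF(hour, 0, payment_time), 0)
    ) AS payments_same_hour
FROM dbo.Payments;
```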