The results are so much easier to look at and comprehend, aren’t they? All object types for a schema are on a single line and it is easy for us to do impact analysis visually.
Sometimes doing it in T-SQL is the best approach, but pivoting is generally something which is cheaper in the application tier, whether you’re building a report, dashboard, or web app.
If you’re not a frequent SQL writer, the syntax can indeed be confusing. Especially GROUP BY and aggregations “infect” the rest of the entire SELECT clause, and things get really weird. When confronted with this weirdness, we have two options:
- Get mad and scream at the SQL language designers
- Accept our fate, close our eyes, forget about the syntax, and remember the logical operations order
I generally recommend the latter, because then things start making a lot more sense, including the beautiful cumulative daily revenue calculation below, which nests the daily revenue (SUM(amount) aggregate function) inside of the cumulative revenue (SUM(...) OVER (...) window function):
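In T-SQL terms, that nesting pattern might look like this sketch (table and column names are hypothetical):

```sql
-- Daily revenue comes from the GROUP BY aggregate; cumulative revenue
-- comes from the window function wrapped around that same aggregate.
SELECT
    CAST(payment_date AS date) AS payment_day,
    SUM(amount) AS daily_revenue,
    SUM(SUM(amount)) OVER (
        ORDER BY CAST(payment_date AS date)
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS cumulative_revenue
FROM dbo.payment
GROUP BY CAST(payment_date AS date)
ORDER BY payment_day;
```

The inner SUM(amount) is evaluated first (the GROUP BY step), and only then does the window function run over the grouped rows, which is exactly the logical-order point being made.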
Lukas explains things from an Oracle perspective, so not all of this matches T-SQL, but it’s close enough for comparison.
So what’s the problem? Well, to start with, by using the SELECT TOP 0 INTO Archive method of creating the Archive table, you’ve created the archive table with an identity column. Get rid of that identity property (you still want the column) and you don’t have a problem. But the really cool part of this is that it reminded me that the OUTPUT clause has a field list.
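A sketch of that pattern with hypothetical table and column names; the OUTPUT column list lets you name exactly the columns you want, skipping the archive table’s identity column entirely:

```sql
-- Archive rows as they are deleted; the explicit column lists on OUTPUT
-- and INTO leave the Archive table's identity column free to self-populate.
DELETE s
OUTPUT deleted.OrderID, deleted.CustomerID, deleted.OrderDate
INTO dbo.Archive (OrderID, CustomerID, OrderDate)
FROM dbo.SourceOrders AS s
WHERE s.OrderDate < '20150101';
```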
Also check out the comments for another solution.
Anyway, this obsession had me thinking – does wrapping identifiers in square brackets save SQL Server any time? Does it say to the optimizer, “Hey, I PROMISE this whole thing inside these square brackets is an identifier. Cross my heart.” And the optimizer takes your code at its word and doesn’t look through its list of reserved keywords for one that matches AccountCreateDate or address_line_2?
The answer is… no. Throwing every identifier into square brackets doesn’t speed it up at all. Here’s the test:
Read on for the test.
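As a side note, quoting is only actually required when an identifier collides with a reserved keyword or contains characters like spaces, for example (hypothetical table):

```sql
-- Brackets are mandatory here because "Order" is a reserved keyword
-- and "Line 2" contains a space; everywhere else they are optional.
CREATE TABLE dbo.Demo ([Order] int, [Line 2] varchar(50));
SELECT [Order], [Line 2] FROM dbo.Demo;
```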
This function removes all need to convert values just to concatenate text and string values together, but for some reason, whenever I see code posted on various websites, even from SQL Server MVPs, they never seem to utilise this function.
One excellent use for this is when you are attempting to create the SQL statement for a dynamic SQL execution. Instead of the usual conversions and + signs joining the text and parameters together, it’s far simpler.
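A quick sketch of the dynamic SQL case (variable names are illustrative): CONCAT converts the int parameter to a string implicitly, where the + operator would demand an explicit CAST or CONVERT.

```sql
-- CONCAT handles the int-to-string conversion for @TopN automatically;
-- with + you would need CAST(@TopN AS nvarchar(10)) or similar.
DECLARE @TopN int = 10;
DECLARE @sql nvarchar(max) = CONCAT(
    N'SELECT TOP (', @TopN, N') name FROM sys.objects ORDER BY name;'
);
EXEC sys.sp_executesql @sql;
```

As a bonus, CONCAT treats NULL arguments as empty strings, so a NULL parameter won’t silently wipe out the whole statement the way + does.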
CONCAT is one of my favorite T-SQL enhancements from 2012 (after window functions).
Neither is really any more typing than the other, and they both work the exact same way under the covers. So it would seem that the choice between CAST and CONVERT is merely a subjective style preference. If that were always true, however, you probably wouldn’t be reading this post.
There are other cases where you need to use CONVERT in order to output correct data or perform the right comparison. Some examples:
Read on for examples. My preference is CAST, mostly because it’s fewer characters to type. But there are certainly advantages to using CONVERT.
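For instance, CONVERT’s style argument has no CAST equivalent, which matters for things like date formatting (a small illustration, not taken from the linked post):

```sql
-- CONVERT's third argument selects a display style for date/time values:
SELECT CONVERT(char(8), GETDATE(), 112)    AS iso_basic;  -- yyyymmdd
SELECT CONVERT(varchar(10), GETDATE(), 120) AS iso_date;  -- yyyy-mm-dd
-- CAST has no style argument, so the string form falls back to defaults:
SELECT CAST(GETDATE() AS varchar(20))      AS default_style;
```

CAST is the ANSI-standard spelling, which is another point in its favor if portability matters.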
Now, let’s assume I’m interested in these things:
- How many payments were there in the same hour as any given payment?
- How many payments were there in the same hour before any given payment?
- How many payments were there within one hour before any given payment?
Those are three entirely different questions.
Lukas’s solution uses Oracle syntax, but most of it also applies to SQL Server 2012 and higher. The part that doesn’t apply, unfortunately, is the RANGE BETWEEN INTERVAL, which allows you to find values clustered in the same time period (one hour in his example).
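To illustrate the part that does translate, here is a T-SQL sketch (hypothetical payment table) answering the first two questions; the third has no direct ROWS/RANGE equivalent in SQL Server:

```sql
-- Hypothetical table: dbo.payment (payment_id, payment_ts, amount).
SELECT
    payment_id,
    payment_ts,
    -- 1) payments in the same clock hour as this payment
    COUNT(*) OVER (
        PARTITION BY DATEADD(hour, DATEDIFF(hour, 0, payment_ts), 0)
    ) AS same_hour_count,
    -- 2) payments in the same clock hour at or before this payment
    COUNT(*) OVER (
        PARTITION BY DATEADD(hour, DATEDIFF(hour, 0, payment_ts), 0)
        ORDER BY payment_ts
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
    ) AS same_hour_before_count
FROM dbo.payment;
-- 3) "within one hour before" needs RANGE over an interval, which T-SQL
-- lacks; a correlated subquery or OUTER APPLY is the usual workaround.
```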
Even though we would expect to see both records returned, we only get one record. Huh? This is exactly what puzzled a coworker; of course, the query was not as simple as this one, but the same issue caused him to hit a roadblock.
In the case of COALESCE and OR methods, results are identical.
The underlying issue here is that the variable data type differs from the column’s data type, and exposes a difference in how COALESCE and ISNULL work.
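A minimal illustration of that difference (not the coworker’s actual query): ISNULL takes the data type of its first argument, while COALESCE follows the data type precedence rules across all its arguments.

```sql
-- With a short variable, ISNULL silently truncates the replacement value:
DECLARE @v char(2) = NULL;
SELECT ISNULL(@v, 'ABCDE')   AS isnull_result;    -- char(2): 'AB'
SELECT COALESCE(@v, 'ABCDE') AS coalesce_result;  -- 'ABCDE'
```

In a WHERE clause, that kind of silent truncation is exactly how a comparison quietly stops matching rows you expected to see.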
There are several options for fixing this. For instance, you can use a CASE expression, or COALESCE, but I find they get a tad muddled to write after a while, especially if we’re safeguarding multiple columns from our mathematical disaster. Plus, under the covers, the functions I like to use are just case expressions anyway. Isn’t it nice that SQL Server will save you a touch of typing? I think so. What a considerate piece of software!
This is a bit of a beginner tip, but it came up while we were at DBA Days, so I figured I’d write about it in case anyone runs across it.
Read on for Erik’s favorite solution to the problem.
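For illustration, NULLIF is one function that fits this description; it is shorthand for a CASE expression and turns division by zero into division by NULL:

```sql
-- NULLIF(a, b) returns NULL when a = b, so dividing by NULLIF(x, 0)
-- yields NULL instead of Msg 8134 (divide by zero):
DECLARE @numerator int = 10, @denominator int = 0;
SELECT @numerator / NULLIF(@denominator, 0) AS safe_result;  -- NULL
```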
I am finding more and more cases where SQL code is being created using the double dot (two consecutive periods) for the two-part naming convention.
For example, instead of using dbo.table1 I am seeing ..table1.
I don’t know who suggested this in the first place, but it is not a good idea. Sure it works and does what you expect, but there is a HUGE risk with doing this. When you use the .. syntax, you are telling the code to use whatever the default schema is for the user that is running the query. By default that is the dbo schema, but there is no guarantee that all systems are going to be that way.
Read on to understand why this is a big deal.
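A hypothetical demonstration of the risk: two users with different default schemas resolve ..table1 to two different tables.

```sql
-- Two tables with the same name in different schemas:
CREATE SCHEMA app;
GO
CREATE TABLE dbo.table1 (val varchar(10));
CREATE TABLE app.table1 (val varchar(10));
CREATE USER AppUser WITHOUT LOGIN WITH DEFAULT_SCHEMA = app;
GRANT SELECT ON SCHEMA::app TO AppUser;
GO
EXECUTE AS USER = 'AppUser';
SELECT * FROM ..table1;   -- resolves to app.table1, not dbo.table1
REVERT;
SELECT * FROM ..table1;   -- back to your own default schema (likely dbo)
```

Spelling out the schema (dbo.table1) removes that ambiguity and also helps plan caching, since the same query text always resolves to the same object.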