What I’m doing is simply converting the table into its JSON form, and then using that to create a table via the multi-row VALUES syntax, which paradoxically allows expressions. The expression I’m using is JSON_VALUE, which allows me to effectively dictate the source within the table, via that JSON path expression, and the destination. As it is an expression, I can do all sorts of manipulation as well as a transpose. I could, if I wanted (in SQL Server 2017), provide that path parameter as a variable. This sort of technique can be used for several other reporting purposes, and it is well worth experimenting with because it is so versatile.
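As a rough sketch of what that looks like (table and column names here are hypothetical, not from the original post), the JSON form of a table can be fed through JSON_VALUE inside a multi-row VALUES constructor to transpose rows into name/value pairs:

```sql
-- Hypothetical source table with a couple of metric columns.
DECLARE @json NVARCHAR(MAX) =
    (SELECT Region, Sales, Returns
     FROM dbo.RegionTotals
     FOR JSON AUTO);

-- VALUES accepts expressions, so JSON_VALUE can pull any source cell
-- via a JSON path and place it on any destination row.
SELECT v.Metric, v.Value
FROM (VALUES
    (N'Region',  JSON_VALUE(@json, '$[0].Region')),
    (N'Sales',   JSON_VALUE(@json, '$[0].Sales')),
    (N'Returns', JSON_VALUE(@json, '$[0].Returns'))
) AS v(Metric, Value);
```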
That is not at all what I would have thought up; very interesting approach. I’d probably just be lazy and shell out to R Services.
Asker: that’d be awesome if i can inner join two other tables instead of the table mentioned after FROM keyword
Me: …wait, what?
A: He’s asking
t1 left join t12
t1 left join t13
t12 inner join t13
M: em…it’s possible but it’s…iffy
A: i wanna learn it.
do your magic
I’ve seen this in action before, but rewrote the queries not to do this. The problem is that as the query gets more complicated, it becomes much harder to diagram things mentally. I don’t think I’ve seen a use yet that I couldn’t rewrite to be simpler.
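For reference, the pattern under discussion is T-SQL's nested (right-deep) join syntax, where an INNER JOIN is written inside an outer LEFT JOIN and resolved first. A minimal sketch with the hypothetical tables from the chat:

```sql
SELECT *
FROM t1
    LEFT JOIN t12
        INNER JOIN t13
            ON t13.t12_id = t12.id  -- the t12/t13 inner join is evaluated first
        ON t12.t1_id = t1.id;       -- then t1 left-joins that result
```

It parses fine, but as the chat suggests, the deeper the nesting gets, the harder the query is to reason about.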
Surprised? I’d have to say that I was. Now, as my buddy Adam Machanic pointed out, it’s not the fault of TRY_CAST and TRY_CONVERT, because they just TRY to do a CAST and a CONVERT. It’s the original functions that have the bizarre behavior.
Can’t say that I love this, because it means that I can’t use these functions for their intended purpose, except for decimal. So that left me wondering which types have this behavior.
Check it out. One way to get around this default behavior could be to use NULLIF, so TRY_CAST(NULLIF(@InputVar, '') AS INT).
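The quirk being worked around, as I understand it, is that casting an empty string to most numeric types yields zero rather than NULL, and TRY_CAST inherits that behavior:

```sql
DECLARE @InputVar VARCHAR(10) = '';

SELECT
    TRY_CAST(@InputVar AS INT)             AS WithoutNullIf, -- 0, not NULL
    TRY_CAST(NULLIF(@InputVar, '') AS INT) AS WithNullIf;    -- NULL
```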
The solution in many cases is simply to update statistics on the underlying system table indexes used by the problem DMV queries. This can be done selectively by identifying the system table indexes referenced in the seek and scan operators of the problem query’s execution plan and then executing UPDATE STATISTICS on each index. However, the task is somewhat tedious.
Alternatively, one can simply update stats on all the system tables. Below is a script that generates and executes DDL to update stats on all non-empty system tables, making quick work of this.
Click through for the script, as well as an interesting note if you try to use constructs like @sql = @sql + N'some string' in your code.
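As a rough sketch of the idea (not the post's actual script), the DDL generation might look something like the following; note that the variable-concatenation pattern used here is exactly the construct the linked note discusses:

```sql
-- Build UPDATE STATISTICS statements for non-empty system base tables
-- (type 'S') in the current database, then execute them.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'UPDATE STATISTICS '
            + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
            + N';' + NCHAR(13)
FROM sys.objects AS o
WHERE o.type = 'S'  -- system base tables
  AND EXISTS (SELECT 1 FROM sys.partitions AS p
              WHERE p.object_id = o.object_id AND p.rows > 0);

EXEC sys.sp_executesql @sql;
```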
OK. Now Aaron Bertrand has a post from February 2016…
Performance Surprises and Assumptions : SET NOCOUNT ON
In that blog post, the potential performance benefit of NOCOUNT ON was elusive. And, to be honest, this round of my tests was NOT about NOCOUNT but other stuff. I just happened to stumble on NOCOUNT when I was sifting through stuff after some early testing. But with these results in hand, maybe NOCOUNT has a few more performance surprises for all of us 🙂
For a single run, printing out counts isn’t that expensive, but when doing a lot of work, it can add up.
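The setting itself is a one-liner; the savings show up when many small statements would each emit a row-count message:

```sql
SET NOCOUNT ON;  -- suppress the "(N rows affected)" message per statement

DECLARE @i INT = 1;
WHILE @i <= 10000
BEGIN
    -- hypothetical table; without NOCOUNT ON, every one of these UPDATEs
    -- would send a separate row-count message back to the client
    UPDATE dbo.Worklist SET Processed = 1 WHERE Id = @i;
    SET @i += 1;
END;
```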
I found this technique on Rich Benner’s SQL Server Blog: Visualising the Marvel Cinematic Universe in T-SQL, and decided to play around with it after someone asked me to give him the sizes of all databases on a development instance of SQL Server.
The way it works is that you take the size of each database and divide it by the total size of all databases. You then use the REPLICATE function with the | (pipe) character to generate the ‘graph’, so 8% will look like this: ||||||||
You can use this for the tables with the most rows, a count per state, etc. The graph column adds a nice visual effect to the output, IMHO.
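A hedged sketch of the technique (the linked post's script is more polished):

```sql
-- Percentage of total size per database, visualized with REPLICATE.
SELECT
    d.name AS DatabaseName,
    SUM(f.size) * 8 / 1024 AS SizeMB,   -- size is in 8 KB pages
    REPLICATE('|', CAST(100.0 * SUM(f.size)
                        / SUM(SUM(f.size)) OVER () AS INT)) AS Graph
FROM sys.master_files AS f
    JOIN sys.databases AS d
        ON d.database_id = f.database_id
GROUP BY d.name
ORDER BY SizeMB DESC;
```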
It does the job and doesn’t require you to go out to a different product, so it works pretty well for occasional administrative queries.
This system uses TOP every now and then to try to limit the number of records it gets back (and the developers always seem to use the arbitrary 301 – I’m guessing there’s some degree of cargo cult programming going on). I’d prefer a well-constructed WHERE clause to limit the result set, but beggars can’t be choosers when working with legacy vendor code.
What I found odd was that the Ledger1 table didn’t get a lot of traffic – with the WHERE clause in use (omitted here for brevity), you’d only get a handful of records, maybe a dozen at most.
Click through to see the rest of the problem, as well as Andy’s solution.
In both examples above, the SQL executed apparently should have been safe from SQL injection, but it isn’t. Neither REPLACE nor QUOTENAME was able to properly protect, and the injected division by zero was executed. The problem is the Unicode MODIFIER LETTER APOSTROPHE (NCHAR(0x02bc)) character that I used in constructing the NVARCHAR value, which is then implicitly cast to VARCHAR. This cast converts the special ‘modifier letter apostrophe’ character into a plain single quote, introducing new single quotes into the value after the supposedly safe escaping occurred. The result is plain old SQL injection.
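A minimal sketch of the failure mode (my reconstruction, not Dan's exact repro): the character passes through QUOTENAME untouched, then the cast to VARCHAR best-fit maps it to a plain quote:

```sql
-- U+02BC looks harmless in NVARCHAR and is not doubled by QUOTENAME...
DECLARE @name NVARCHAR(128) = N'x' + NCHAR(0x02BC) + N' SELECT 1/0 --';
DECLARE @quoted NVARCHAR(258) = QUOTENAME(@name, N'''');

-- ...but the cast to VARCHAR converts it into a plain single quote,
-- terminating the string literal early and exposing the payload.
DECLARE @sql VARCHAR(4000) = 'SELECT ' + CAST(@quoted AS VARCHAR(300));
PRINT @sql;  -- the injected statement now sits outside the literal
```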
Click through for the script. The upside of this is that it’s entirely under your control and you should be able to get it right by using NVARCHAR consistently.
In T-SQL, you summarize data by using the GROUP BY clause within an aggregate query. This clause creates groupings which are defined by a set of expressions. One row per unique combination of the expressions in the GROUP BY clause is returned, and aggregate functions such as COUNT or SUM may be used on any columns in the query. However, if you want to group the data by multiple combinations of group by expressions, you may take one of two approaches. The first approach is to create one grouped query per combination of expressions and merge the results using the UNION ALL operator. The other approach is to use the GROUPING SETS operator along with the GROUP BY clause and define each grouping set within a single query.
In this article I’ll demonstrate how to achieve the same results using each method.
Mastering GROUPING SETS makes reporting queries in T-SQL so much more effective.
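The two approaches can be sketched side by side (using a hypothetical dbo.Sales table):

```sql
-- Approach 1: one grouped query per combination, merged with UNION ALL.
SELECT Region, NULL AS Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY Region
UNION ALL
SELECT NULL, Product, SUM(Amount)
FROM dbo.Sales
GROUP BY Product;

-- Approach 2: the same result sets in a single pass with GROUPING SETS.
SELECT Region, Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY GROUPING SETS ((Region), (Product));
```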
Have you ever wanted to find something that was referenced in the body of a SQL query?
Maybe you need to know what queries you will have to modify for an upcoming table rename. Or maybe you want to see how many queries on your server are running SELECT *
Below are two templates you can use to search across the text of SQL queries on your server.
Click through for the scripts. Finding references in T-SQL objects (views, procedures, functions, triggers, etc.) is a fairly straightforward process. Finding references in ad hoc statements is much more hit-or-miss.
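Searches along these lines commonly look something like the following (my sketch, not Michael's templates):

```sql
-- Object definitions: views, procedures, functions, triggers.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id)        AS ObjectName
FROM sys.sql_modules AS m
WHERE m.definition LIKE N'%SELECT *%';

-- Ad hoc statements: limited to whatever is still in the plan cache,
-- which is why this side is hit-or-miss.
SELECT TOP (50) st.text
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%SELECT *%';
```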