I got asked a question about the OUTPUT clause recently and realized I didn’t remember the syntax. I’ve rarely used this, so I had to look it up and thought this would be a good basic post.
The idea behind OUTPUT is that the data in the inserted and deleted virtual tables (the same data your triggers see) can be returned directly from the INSERT statement itself, without needing a trigger at all.
The format is
INSERT xxx OUTPUT yyyy INTO @zzz VALUES (or SELECT) mmmm
If I had one thing I could change about OUTPUT, I’d like to be able to output directly into variables for those cases in which I know I’m only going to get one result (or maybe I only care about one arbitrary result in a set).
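As a sketch of that format (table and column names here are hypothetical), the typical pattern captures the OUTPUT rows in a table variable, which is also the usual workaround for the "I only want one value" case:

```sql
-- Hypothetical table dbo.Orders with an IDENTITY column OrderID
DECLARE @NewRows TABLE (OrderID int, OrderDate datetime);

INSERT dbo.Orders (CustomerID, OrderDate)
OUTPUT inserted.OrderID, inserted.OrderDate INTO @NewRows
VALUES (42, GETDATE());

-- Since OUTPUT can't target scalar variables directly,
-- pull the single value out of the table variable afterward
DECLARE @NewOrderID int;
SELECT @NewOrderID = OrderID FROM @NewRows;
```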
So with those limitations out in the open, we can move on to some performance testing. Given Microsoft's track record with built-in functions that leverage CLR under the covers (cough, FORMAT(), cough), I was skeptical about whether this new function could come close to the fastest methods I'd tested to date.
Let's use string splitters to separate comma-separated strings of numbers; this way our new friend JSON can come along and play too. And we'll say that no list can exceed 8,000 characters, so no MAX types are required, and since they're numbers, we don't have to deal with anything exotic like Unicode.
The results are surprising: I expected it to land somewhere around CLR level, not well ahead of it.
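For reference, the JSON-based splitter in tests like these is typically along the following lines (requires SQL Server 2016+ / compatibility level 130 for OPENJSON; the list value is just an example):

```sql
DECLARE @list varchar(8000) = '1,5,7,42';

-- Wrap the comma-separated list in brackets so OPENJSON
-- parses it as a JSON array, yielding one row per element
SELECT [value]
FROM OPENJSON('[' + @list + ']');
```

Note this simple form assumes the list contains only numbers, matching the constraint above; arbitrary strings would need quoting to be valid JSON.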
And here is a very lengthy (~900 lines) T-SQL script, generated from SSMS and SQL Profiler, that checks the dependencies of a table in SQL Server 2014. You can also wrap it in a stored procedure and pass the table and schema as parameters.
You can just replace the table and schema names in the first two lines and execute the code to check the table's dependencies.
You might be able to optimize this script, but it’s nice to have a starting point.
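If a lighter-weight starting point is enough, the built-in dependency DMVs (available since SQL Server 2008) cover a lot of the same ground; the schema and table names below are placeholders:

```sql
DECLARE @schema sysname = N'dbo', @table sysname = N'MyTable';

-- Objects (procedures, views, functions, ...) that reference the table
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(@schema + N'.' + @table, 'OBJECT');
```

The catch, and presumably why the long generated script exists, is that these DMVs only see dependencies SQL Server could resolve at parse time; dynamic SQL and cross-database references slip through.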
A quick one today, just looking for strings. I wrote an article on this, so there’s more detail there, but here’s a bit of code you can look through and see what it does.
He didn’t tag his post T-SQL Tuesday, but it certainly is apropos.
What makes this interesting is when I am using UNION to join the results. How do you place a final resultset from a UNION, EXCEPT, or INTERSECT into a temporary table using SELECT INTO? Where does the INTO portion of the query go?
This is actually a pretty simple thing to do: the INTO for the SELECT INTO goes in the first query of the set. Listing 1 provides an example that UNIONs two result sets from sys.dm_exec_query_stats into a temporary table.
No subqueries are necessary here.
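A minimal sketch of the pattern, using sys.dm_exec_query_stats as in the example above (the column choice is illustrative):

```sql
-- The INTO clause attaches to the first SELECT in the set;
-- the UNIONed branches just contribute rows
SELECT sql_handle, execution_count
INTO #QueryStats
FROM sys.dm_exec_query_stats
UNION
SELECT sql_handle, execution_count
FROM sys.dm_exec_query_stats;

SELECT COUNT(*) AS RowsCaptured FROM #QueryStats;
```

The same placement applies to EXCEPT and INTERSECT: only the first query in the set carries the INTO.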
As it turns out, when a character string in SQL Server contains the NUL character (CHAR(0), or NCHAR(0) in Unicode), the engine really doesn't know what to do with it most of the time, especially when you're dealing with Unicode strings.
I did track down http://sqlsolace.blogspot.com/2014/07/function-dbostripunwantedcharacters.html, but I generally try to avoid calling UDFs in my queries.
Jay’s got an answer which works, so check it out. Also, I second the use of the #sqlhelp hashtag. There’s a great community watching that hashtag.
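One common UDF-free workaround, sketched here: REPLACE often fails to match NCHAR(0) under default collations, so force a binary collation for the comparison (the sample string is illustrative):

```sql
DECLARE @s nvarchar(50) = N'abc' + NCHAR(0) + N'def';

-- Under many default collations this REPLACE would be a no-op;
-- a binary collation makes NCHAR(0) comparable
SELECT REPLACE(@s COLLATE Latin1_General_BIN2, NCHAR(0), N'') AS Cleaned;
```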
Let’s say you have a table called
dbo.BugReports, and you need to change it to
dbo.SupportIncidents. This can be quite disruptive if you have references to the original name scattered throughout stored procedures, views, functions, and application code. Modern tools like SSDT can make a refactor relatively straightforward within the database (as long as queries aren’t constructed from user input and/or dynamic SQL), but for distributed applications, it can be a lot more complex.
A synonym can allow you to change the database now and worry about the application later, even in phases. You just rename the table from the old name to the new name (or use ALTER TABLE ... SWITCH and then drop the original), and then create a synonym with the old name that "points to" the new name.
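Using the table names from the example above, the rename-plus-synonym approach looks like this:

```sql
-- Rename the table, then point a synonym with the old name at it
EXEC sp_rename 'dbo.BugReports', 'SupportIncidents';
CREATE SYNONYM dbo.BugReports FOR dbo.SupportIncidents;

-- Existing code that still references dbo.BugReports keeps working
SELECT TOP (1) * FROM dbo.BugReports;
```

Once every caller has been updated to use dbo.SupportIncidents, the synonym can simply be dropped.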
I've used synonyms once or twice, but they're pretty low on my list, partly because of network effects: if I create this great set of synonyms but the next person doesn't know about them, maintenance gets that much harder.
The SWITCH statement can instantly 'move' data from one table to another. It does this by updating some metadata to say that the new table is now the owner of the data instead of the old table. This is very useful, as there is no physical data movement to cause the stresses mentioned earlier. SQL Server enforces a number of rules before it will allow this to work: essentially, each table must have the same columns with the same data types and NULL settings, both tables must be in the same filegroup, and the new table must be empty. See here for a more detailed look at these rules.
If you can take a downtime, this is pretty easy. Otherwise, making sure that the two tables are in sync until the switchover occurs is a key problem to keep in mind.
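A minimal sketch of a whole-table switch, assuming the rules above are met (table names are illustrative):

```sql
-- Both tables: identical columns, types, and NULL settings,
-- same filegroup; the target must be empty
CREATE TABLE dbo.Sales_Old (ID int NOT NULL, Amount decimal(10,2) NOT NULL);
CREATE TABLE dbo.Sales_New (ID int NOT NULL, Amount decimal(10,2) NOT NULL);

-- Metadata-only "move" of every row from Sales_Old into Sales_New
ALTER TABLE dbo.Sales_Old SWITCH TO dbo.Sales_New;
```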
Medians as a concept are simple enough: if you have a large number of values, like a range of statistical values, you pick the middle one. The median, as opposed to the average, is useful for a number of reasons, one of them being that it reduces the effect of so-called outlier values.
The fact that SQL Server doesn’t have a fast, built-in median function surprises me, to be honest. The best alternative I’ve found was a CLR function in SQL#.
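To be precise, SQL Server 2012+ does ship a built-in option, PERCENTILE_CONT, but it is notoriously slow on large sets, which is presumably why faster alternatives like the SQL# CLR function win out. A sketch (table and column names are placeholders):

```sql
-- Median over an entire column; PERCENTILE_CONT is a window
-- function in SQL Server, hence the OVER () and DISTINCT
SELECT DISTINCT
       PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Value) OVER () AS Median
FROM dbo.Measurements;
```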
So here is the thing: when you alter a column, you change all of its attributes at once. That means if you don't specify a precision when you could, you get the default. That's not exactly a common problem, though; usually what you're changing is the precision (or possibly the data type). What is a common mistake is not specifying the nullability.
When modifying DDL, make sure that you keep it consistent and complete.
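A sketch of the nullability trap (names are illustrative; the default behavior assumes the usual ANSI_NULL_DFLT settings):

```sql
-- Column starts out NOT NULL
CREATE TABLE dbo.Invoices (Amount decimal(10,2) NOT NULL);

-- Widening the precision but omitting NOT NULL silently
-- makes the column nullable
ALTER TABLE dbo.Invoices ALTER COLUMN Amount decimal(12,2);

-- Restate the full definition to keep the constraint
ALTER TABLE dbo.Invoices ALTER COLUMN Amount decimal(12,2) NOT NULL;
```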