Recently I had a conversation with a colleague who wasn’t aware of the REVERSE function in SQL Server. I had sent them a code snippet and they asked what the purpose of the function was. Essentially, this function reverses any string value.
For example, the phrase:
The brown dog jumped over the lazy fox
reversed looks like this
xof yzal eht revo depmuj god nworb ehT
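In T-SQL, that looks like this:

```sql
SELECT REVERSE('The brown dog jumped over the lazy fox') AS Reversed;
-- Returns: xof yzal eht revo depmuj god nworb ehT
```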
Read on to see a good example of when you might use REVERSE.
We’ve seen that we can easily solve the original problem with SQL only: Select some data from a table using predicate A, and if we don’t find any data for predicate A, then try finding data using predicate B from the same table.
Oracle and PostgreSQL can both optimise away the unnecessary query 2 by inserting a “probe” in their execution plans that knows whether the query 2 needs to be executed or not. In Oracle, we’ve even seen a situation where the combined query outperforms two individual queries. SQL Server 2014 surprisingly does not have such an optimisation.
Interesting totally-not-a-comparison between the three database products. There are some things I’d ideally like the SQL Server optimizer to do with common table expressions, but as Lukas notes, it doesn’t, so user beware.
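As a rough sketch of the pattern under discussion (table and column names are invented for illustration): fetch rows matching predicate A, and fall back to predicate B only when A found nothing.

```sql
SELECT OrderID, Amount
FROM dbo.Orders
WHERE CustomerID = 42                       -- predicate A
UNION ALL
SELECT OrderID, Amount
FROM dbo.Orders
WHERE Region = 'EU'                         -- predicate B, the fallback
  AND NOT EXISTS                            -- only if predicate A matched nothing
      (SELECT 1 FROM dbo.Orders WHERE CustomerID = 42);
```

The point of the linked post is that Oracle and PostgreSQL can probe the first branch and skip the second at execution time, while SQL Server evaluates the NOT EXISTS branch on its own.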
GO is a special little guy. It’s not exactly T-SQL. It’s a way of telling SQL Server Management Studio (SSMS) to send everything before it, from the beginning of the script or the preceding GO, to the SQL Server instance.
If you read the documents, the main point to take away is…
A Transact-SQL statement cannot occupy the same line as a GO command. However, the line can contain comments.
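A minimal illustration (the view name is made up): CREATE VIEW must be the only statement in its batch, so GO is needed to split the script into two batches.

```sql
CREATE VIEW dbo.ActiveUsers
AS
SELECT UserID FROM dbo.Users WHERE IsActive = 1;
GO -- a comment on the same line as GO is allowed
SELECT UserID FROM dbo.ActiveUsers;
```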
The problem turns out to be a little trickier than you’d first imagine. Also, after reading this post, I think it’s lunchtime…
SQL Server, starting with 2012, ships with a robust set of functions that can be used to make code perform faster, with fewer lines of code. The functions can be used in an ETL process to provide better error handling. An example of this would be the TRY_PARSE function, which allows you to check whether a value can be converted.
Another example would be using the FIRST_VALUE() and LAST_VALUE() functions, which work against a set of data. These functions are very useful when looking for things like month-over-month averages and when doing calculations. The table below contains a list of functions that are supported starting with 2012, along with some examples of how to use them.
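For a flavor of the functions mentioned, here are a couple of sketches (the MonthlySales table and its columns are invented):

```sql
-- TRY_PARSE returns NULL instead of raising an error on a failed conversion:
SELECT TRY_PARSE('2012-06-01' AS date) AS GoodDate,   -- 2012-06-01
       TRY_PARSE('not a date' AS date) AS BadDate;    -- NULL

-- FIRST_VALUE / LAST_VALUE need an explicit frame to see the whole window;
-- the default frame stops at the current row, which silently breaks LAST_VALUE:
SELECT SaleMonth,
       Amount,
       FIRST_VALUE(Amount) OVER (ORDER BY SaleMonth
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS FirstAmount,
       LAST_VALUE(Amount) OVER (ORDER BY SaleMonth
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LastAmount
FROM dbo.MonthlySales;
```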
He breaks them down into four categories and provides examples. Functions can bring their own set of problems with query performance, but most of them can be very useful.
Commas are all the rage nowadays:
There are a fair number of questions nowadays about returning data from a database in a comma-separated string. Sure, the application should probably do that, but hey, database servers are expensive, so why not get some bang for your bucks!
Protip: I use the colon as a separator because sometimes commas will get stuck in the Ethernet cable.
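For reference, one common way to build such a delimited string on older versions is the FOR XML PATH trick (table and column names invented; on SQL Server 2017 and later, STRING_AGG does the same thing more cleanly):

```sql
SELECT STUFF((
        SELECT ',' + p.ProductName
        FROM dbo.Products AS p
        ORDER BY p.ProductName
        FOR XML PATH(''), TYPE
    ).value('.', 'nvarchar(max)'),   -- unescape XML entities like &amp;
    1, 1, '') AS ProductList;        -- STUFF strips the leading comma
```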
Finally, this week I had a breakthrough. I was working on updating a bunch of data, but it was breaking on a small subset of that data. In this case, I was attempting to JOIN two tables on fields that should have been INTs, but in a very small number of cases one side was using a comma-delimited string. The user told me that someone else had done these updates in the past and didn’t encounter the problem I was having (so I knew that it was something I was doing “wrong”), but given that it was only a handful of broken updates, she was OK with doing them manually (we were scripting it because we were updating potentially tens of thousands of records).
I am not OK with manually fixing this in the future. I wanted to know how the other DBA had done it before. I dug into some history and found CROSS APPLY. My nemesis. I was determined to figure out how to use it this time.
The APPLY operator is extremely powerful in the right set of circumstances. Andy shows the “classic” use case, but there are a number of other uses for the operator.
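A sketch of the kind of CROSS APPLY fix described above, with invented table and column names (STRING_SPLIT requires SQL Server 2016+; on earlier versions a user-defined splitter function plays the same role):

```sql
SELECT c.CustomerID, o.OrderID
FROM dbo.Customers AS c
CROSS APPLY STRING_SPLIT(c.OrderIDList, ',') AS s   -- one row per delimited value
JOIN dbo.Orders AS o
    ON o.OrderID = TRY_CAST(s.value AS int);        -- NULL (no match) if not a valid int
```

CROSS APPLY runs the splitter once per row of the outer table, which is exactly what a plain JOIN cannot express here.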
What does all that mean? No idea. Clearly there is JSON that’s returned here and can be deserialized to gather meanings. Is this useful? I think graphs solve a certain set of problems very well, and more efficiently than relational systems. Certainly I could implement a graph structure relationally, but at scale I’m not sure the queries would be as easy to write or run as quickly.
I don’t know if I’d use a graph structure in any of the problems we try to solve in the SQLServerCentral app, but who knows. Maybe we would if we could.
Steve leaves this with more questions than answers, but he does give a very simple repro script if you want to futz about with graphs.
Basically, it escapes any occurrence of the second parameter within the first parameter. So when would we be using it in dynamic SQL? Well, probably the most common way I’ve used it is when I’m building a list of commands I want to run.
Click through for more details, including valid quote characters.
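As a quick sketch of QUOTENAME’s behavior and the command-list use case (the sys.tables pattern is a common illustration, not taken from the post):

```sql
-- Default quote character is square brackets; embedded delimiters are doubled:
SELECT QUOTENAME('My]Table') AS Bracketed,      -- [My]]Table]
       QUOTENAME('O''Brien', '''') AS Quoted;   -- 'O''Brien'

-- Building a list of commands to run, one per table:
SELECT 'ALTER INDEX ALL ON '
       + QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
       + ' REBUILD;'
FROM sys.tables;
```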
The concatenation of two or more data sets is most commonly expressed in T-SQL using the UNION ALL clause. Given that the SQL Server optimizer can often reorder things like joins and aggregates to improve performance, it is quite reasonable to expect that SQL Server would also consider reordering concatenation inputs, where this would provide an advantage. For example, the optimizer could consider the benefits of rewriting A UNION ALL B as B UNION ALL A.
In fact, the SQL Server optimizer does not do this. More precisely, there was some limited support for concatenation input reordering in SQL Server releases up to 2008 R2, but this was removed in SQL Server 2012, and has not resurfaced since.
It’s an interesting article about an edge case.
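To make the edge concrete, here is a sketch (tables invented) of a row-goal scenario where input order could matter:

```sql
-- TOP introduces a row goal; the concatenation inputs are read in the order
-- written, and the optimizer will not swap them even if dbo.BigTable would
-- satisfy the goal sooner:
SELECT TOP (10) ID
FROM (
    SELECT ID FROM dbo.SmallTable
    UNION ALL
    SELECT ID FROM dbo.BigTable
) AS u;
```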
We expect this much from ISNULL. However, COALESCE is a little different: it determines the resulting data type from data type precedence across all of its arguments, not just the first one, and uses that for the table definition. This might not always be what you want, because if you pass bits alongside integers you will get integers, and if you mix integers and decimals you will get a numeric. Be aware if this isn’t what you wanted.
Read the whole thing.
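A minimal sketch of the type difference using SELECT … INTO (temp table name invented):

```sql
DECLARE @b bit = 1;
SELECT ISNULL(@b, 0)   AS FromIsNull,    -- bit: ISNULL takes the first argument's type
       COALESCE(@b, 0) AS FromCoalesce   -- int: COALESCE uses data type precedence
INTO #TypeDemo;
```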