Press "Enter" to skip to content

Category: T-SQL

Order of Execution on SELECT Expressions

Bert Wagner digs into an interesting topic:

Success! But as I was celebrating my dynamic SQL victory, I realized I was making an assumption about SQL Server that I had never thought about before:

The above query only works because SQL Server is executing the variables in the SELECT list sequentially. I’m incrementing @CurrentRow only after processing my @RowQuery variable, and this logic only works correctly if SQL Server executes the variable expressions in the order they appear in the SELECT list. If SQL Server was executing items in the SELECT list in reverse or random order, @CurrentRow could potentially get set BEFORE @RowQuery was evaluated, causing the logic of adding “UNION ALL SELECT” in the right location to fail.

I have a vague recollection that you couldn’t always count on this, though I admit to never having seen a counter-example. I don’t believe expression execution order is defined in the ANSI SQL standard—the idea is that everything is typically resolved “at once.”
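
A minimal sketch of the pattern in question might look like the following (the variable names come from Bert's post; the VALUES source and the generated query are my own illustration). The whole thing only hangs together if SQL Server evaluates the two assignments in each row from left to right:

DECLARE @RowQuery NVARCHAR(MAX) = N'';
DECLARE @CurrentRow INT = 0;

SELECT
    @RowQuery = @RowQuery
        + CASE WHEN @CurrentRow > 0 THEN N' UNION ALL ' ELSE N'' END
        + N'SELECT ' + CAST(v.SomeValue AS NVARCHAR(10)),
    -- Incremented after @RowQuery (we hope), so only the first row skips the UNION ALL prefix.
    @CurrentRow = @CurrentRow + 1
FROM (VALUES (1), (2), (3)) AS v (SomeValue);

PRINT @RowQuery;
-- Expected: SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3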


Joining Lists of Values in T-SQL

Jason Brimhall shows how you can build a list of values using the table value constructor and join to it:

The table value constructor is basically like a virtual table, not too different from a CTE or a subquery (in that they are all virtual tables of sorts). The table value constructor, however, can be combined with either of those other types and is a set of row expressions that get put into this virtual table in a single DML statement.

It’s one of the nicer things SQL Server 2008 gave us.
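
As a rough sketch (the dbo.Orders table and its columns are made up for illustration), joining a VALUES list to a real table looks something like this:

SELECT o.StatusCode, v.StatusDescription, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
    INNER JOIN (VALUES
        ('N', 'New'),
        ('P', 'Processing'),
        ('C', 'Complete')
    ) AS v (StatusCode, StatusDescription)
        ON v.StatusCode = o.StatusCode
GROUP BY o.StatusCode, v.StatusDescription;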


Identity Inserts: One Table at a Time

Bert Wagner shows that you can only insert with IDENTITY_INSERT = ON for one table at a time:

Ok, simple enough to fix: we just need to do what the error message says and SET IDENTITY_INSERT ON for both tables:

SET IDENTITY_INSERT dbo.User_DEV ON;
SET IDENTITY_INSERT dbo.StupidQuestions_DEV ON;

And… it still didn’t work:

IDENTITY_INSERT is already ON for table 'IdentityTest.dbo.User_DEV'. Cannot perform SET operation for table 'dbo.StupidQuestions_DEV'.

Click through for the ramifications and your alternative.
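
One workaround the error message points toward is keeping only one table's IDENTITY_INSERT on at a time. A hedged sketch (the _DEV table names come from Bert's example; the source tables and column lists are assumptions):

SET IDENTITY_INSERT dbo.User_DEV ON;
INSERT INTO dbo.User_DEV (Id, DisplayName)
SELECT Id, DisplayName
FROM dbo.[User];
SET IDENTITY_INSERT dbo.User_DEV OFF;

SET IDENTITY_INSERT dbo.StupidQuestions_DEV ON;
INSERT INTO dbo.StupidQuestions_DEV (Id, UserId, Title)
SELECT Id, UserId, Title
FROM dbo.StupidQuestions;
SET IDENTITY_INSERT dbo.StupidQuestions_DEV OFF;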


Trailing Spaces and String Comparisons

Bert Wagner shows how SQL Server handles trailing spaces when comparing two strings:

The LEN() function shows the number of characters in our string, while the DATALENGTH() function shows us the number of bytes used by that string.

In this case, DATALENGTH is equal to 10. This result is due to the padded spaces occurring after the character “a” in order to fill the defined CHAR length of 10. We can confirm this by converting the value to hexadecimal. We see the value 61 (“a” in hex) followed by nine “20” values (spaces).

Click through to see what happens and why it works the way it does.
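
A quick sketch that reproduces the behavior Bert describes (the padding, the LEN/DATALENGTH difference, and the fact that the = comparison ignores trailing spaces):

DECLARE @value CHAR(10) = 'a';

SELECT
    LEN(@value)                    AS CharacterCount, -- 1: LEN ignores trailing spaces
    DATALENGTH(@value)             AS ByteCount,      -- 10: CHAR(10) pads with spaces
    CONVERT(VARBINARY(10), @value) AS HexValue;       -- 0x61 followed by nine 0x20 bytes

-- Trailing spaces are also ignored when comparing with =, so this prints 'Equal':
SELECT CASE WHEN @value = 'a' THEN 'Equal' ELSE 'Not equal' END AS ComparisonResult;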


T-SQL Bugs with Joins

Itzik Ben-Gan takes us through four bugs or oddities around joins:

The order counts are now correct, but the total freight values are not. Can you spot the new bug?

The new bug is more elusive because it manifests itself only when the same customer has at least one case where multiple orders happen to have the exact same freight values. In such a case, you are now taking the freight into account only once per customer, and not once per order as you should.

Click through to avoid accidentally introducing bugs in your T-SQL code.
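
The exact queries are in the article, but the failure mode is roughly the following (Sales.Orders and Sales.OrderDetails stand in for whatever sample schema the article uses). Using DISTINCT inside the aggregates to undo the join fan-out fixes the order count but silently collapses orders that happen to share a freight value:

SELECT
    O.custid,
    COUNT(DISTINCT O.orderid) AS NumOrders,    -- correct: de-duplicates the fan-out
    SUM(DISTINCT O.freight)   AS TotalFreight  -- bug: two orders with identical freight
                                               -- values get summed only once
FROM Sales.Orders AS O
    INNER JOIN Sales.OrderDetails AS OD
        ON OD.orderid = O.orderid
GROUP BY O.custid;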


Default Schemas in SQL Server

Max Vernon takes us through the order in which SQL Server searches for tables given a single-part name:

Default schemas in SQL Server can be a blessing, since they reduce the need to specify the schema when creating DDL statements in T-SQL. However, relying on the default schema when creating DML statements can be problematic. A recent question on dba.stackexchange.com asked “Does T-SQL have a Schema search path?”, similar to how PostgreSQL implements the search_path parameter. This post shows how schemas are implemented in SQL Server. We’ll also see why it’s important to always specify the schema when using SQL Server.

A lot of this behavior goes back to the pre-2005 era. SQL Server 2005 introduced schemas as logical separators, whereas before that they were more of a security measure (and dbo was the database owner’s schema). I completely agree that you should specify two-part names in-database. It’s a tiny bit faster (which adds up when you’re doing thousands of transactions per second) and reduces ambiguity.
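
As a small illustration of why this matters (all object names here are hypothetical): an unqualified name resolves against the caller's default schema first and then falls back to dbo, so two users can get different tables from the same statement.

CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE TABLE dbo.Widgets   (WidgetId INT);
CREATE TABLE Sales.Widgets (WidgetId INT);
GO
-- For a user whose default schema is Sales, this reads Sales.Widgets;
-- for a user defaulted to dbo, it reads dbo.Widgets.
SELECT WidgetId FROM Widgets;

-- A two-part name removes the ambiguity (and skips the extra name resolution step).
SELECT WidgetId FROM dbo.Widgets;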


Making Dynamic SQL Safe

Erik Darling explains patiently that if you use sp_executesql wrong, you don’t get the benefits of using it right:

The gripes I hear about fully fixing dynamic SQL are:

– The syntax is hard to remember (setting up and calling parameters)
– It might lead to parameter sniffing issues

I can sympathize with both. Trading one problem for another problem generally isn’t something people get excited about.

But there are good reasons to fix it fully, so read on.
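
For reference, fully fixing it means passing values as parameters to sp_executesql rather than concatenating them into the string. A minimal sketch (dbo.Users and its columns are invented for the example):

DECLARE @sql NVARCHAR(MAX) = N'
SELECT UserId, DisplayName
FROM dbo.Users
WHERE DisplayName = @DisplayName;';

EXEC sys.sp_executesql
    @sql,
    N'@DisplayName NVARCHAR(40)',  -- parameter definition list
    @DisplayName = N'Bert';        -- value passed as a parameter, never concatenated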


Tracking Database Changes with DDL Triggers

Lori Brown shows how you can use DDL triggers to track database or instance-level changes:

I have been working on some improvements to some of the regular ways we monitor for important changes. We always have to be on the lookout for unexpected changes being made in the SQL instances that we monitor, since oftentimes we are not the only team who has sysadmin access to the instance. We are always the best trained to take care of and configure things, but we sometimes find that someone makes a change either to the SQL or database configuration without telling us. We want to know when things like this happen!

I’m a big fan of these. Of course you need to get the code right, as a bad trigger can be devastating, but you can get a lot of useful information out of it and figure out whose hand was in the cookie jar.
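
The specifics are in Lori's post; as a bare-bones sketch of the general shape (the logging table and trigger names here are my own), a database-scoped DDL trigger can capture every DDL event into an audit table via EVENTDATA():

CREATE TABLE dbo.DDLChangeLog
(
    LogId     INT IDENTITY(1, 1) PRIMARY KEY,
    EventTime DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    LoginName SYSNAME      NOT NULL,
    EventData XML          NOT NULL
);
GO
CREATE TRIGGER trg_TrackDDLChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    SET NOCOUNT ON;
    -- EVENTDATA() returns the XML describing the DDL statement that fired the trigger.
    INSERT INTO dbo.DDLChangeLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END;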


Contrasting Common Table Expressions and Temp Tables

Brent Ozar has some advice on when to use common table expressions versus temporary tables:

I’d suggest starting with CTEs because they’re easy to write and to read. If you hit a performance wall, try ripping out a CTE and writing it to a temp table, then joining to the temp table.

This is my advice, too. Start with the thing which is easiest for you to develop and maintain. If it suffices for performance, stick with it; otherwise, move to the next-lowest level of complication. Stop when you have good enough performance. This optimizes for one of the most precious resources people rarely think about: developer maintenance time. Developers are pretty expensive, so the more time they spend trying to understand complex code, the less time they’re doing stuff which pushes the business forward.
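
To make the trade-off concrete, here is a hedged sketch of the same query written both ways (the Orders and Customers tables are placeholders). The CTE version keeps everything in one readable statement; the temp table version materializes the intermediate result, which gives the optimizer statistics to work with:

-- CTE version: easy to read, inlined into the outer query.
WITH RecentOrders AS
(
    SELECT CustomerId, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(YEAR, -1, GETDATE())
)
SELECT c.CustomerName, SUM(ro.TotalDue) AS YearlySpend
FROM dbo.Customers AS c
    INNER JOIN RecentOrders AS ro
        ON ro.CustomerId = c.CustomerId
GROUP BY c.CustomerName;

-- Temp table version: materialize first, then join.
SELECT CustomerId, TotalDue
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(YEAR, -1, GETDATE());

SELECT c.CustomerName, SUM(ro.TotalDue) AS YearlySpend
FROM dbo.Customers AS c
    INNER JOIN #RecentOrders AS ro
        ON ro.CustomerId = c.CustomerId
GROUP BY c.CustomerName;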
