Category: T-SQL

Tracking Database Changes with DDL Triggers

Lori Brown shows how you can use DDL triggers to track database or instance-level changes:

I have been working on some improvements to some of the regular ways we monitor for important changes. We always have to be on the lookout for unexpected changes being made in the SQL instances that we monitor, since oftentimes we are not the only team who has sysadmin access to the instance. We are always the best trained to take care of and configure things, but we sometimes find that someone makes a change either to the SQL or database configuration without telling us. We want to know when things like this happen!

I’m a big fan of these. Of course you need to get the code right, as a bad trigger can be devastating, but you can get a lot of useful information out of it and figure out whose hand was in the cookie jar.
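
To give a sense of the shape of such a trigger, here is a minimal database-scoped sketch (the logging table and trigger names are my own inventions, not Lori's):

CREATE TABLE dbo.DDLChangeLog
(
    EventTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName nvarchar(128) NOT NULL,
    EventType nvarchar(128) NOT NULL,
    CommandText nvarchar(max) NULL
);
GO

CREATE TRIGGER trg_TrackDDLChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- EVENTDATA() returns an XML description of the DDL event that fired the trigger
    DECLARE @event xml = EVENTDATA();

    INSERT INTO dbo.DDLChangeLog (LoginName, EventType, CommandText)
    SELECT @event.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
           @event.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
           @event.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');
END;

Keep the body as lightweight as possible; the trigger runs synchronously inside every DDL statement it covers, which is exactly why a bad one can be devastating.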

Contrasting Common Table Expressions and Temp Tables

Brent Ozar has some advice on when to use common table expressions versus temporary tables:

I’d suggest starting with CTEs because they’re easy to write and to read. If you hit a performance wall, try ripping out a CTE and writing it to a temp table, then joining to the temp table.

This is my advice, too. Start with the thing which is easiest for you to develop and maintain. If it suffices for performance, stick with it; otherwise, move to the next-lowest level of complication. Stop when you have good enough performance. This optimizes for one of the most precious resources people rarely think about: developer maintenance time. Developers are pretty expensive, so the more time they spend trying to understand complex code, the less time they’re doing stuff which pushes the business forward.
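
As a rough sketch of what that rewrite looks like in practice (the table and column names here are invented for illustration):

-- CTE version: easy to read, but the optimizer treats it as an inline view
WITH RecentOrders AS
(
    SELECT CustomerId, COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE OrderDate >= '20190101'
    GROUP BY CustomerId
)
SELECT c.CustomerName, r.OrderCount
FROM dbo.Customers AS c
JOIN RecentOrders AS r
    ON r.CustomerId = c.CustomerId;

-- Temp table version: materializes the intermediate result once,
-- giving the optimizer its own statistics to work with
SELECT CustomerId, COUNT(*) AS OrderCount
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= '20190101'
GROUP BY CustomerId;

SELECT c.CustomerName, r.OrderCount
FROM dbo.Customers AS c
JOIN #RecentOrders AS r
    ON r.CustomerId = c.CustomerId;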

CAST and CONVERT Make Expressions Nullable

Daniel Hutmacher points out a side effect of using CAST() and CONVERT():

Suppose we want to set up a view in the new solution that mirrors the names and definitions of the old table, so the legacy integration can use that view going forward:

CREATE OR ALTER VIEW new.the_table_like_before
AS
SELECT CAST(id AS varchar(32)) AS id,
       CAST([row] AS int) AS [row],
       CAST(date_loaded AS datetime) AS dt
FROM new.the_table;

Now, if you check out the resulting datatypes of the view, you’ll notice that all the columns are marked nullable, even though they’re all based on non-nullable columns, so the values in the view could never be null.

Read on for a couple of possible solutions.
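
One common workaround, not necessarily one of Daniel's, is to wrap each conversion in ISNULL(): with a non-nullable replacement value, the resulting expression is non-nullable (the fallback values below are arbitrary placeholders):

CREATE OR ALTER VIEW new.the_table_like_before
AS
SELECT ISNULL(CAST(id AS varchar(32)), '') AS id,
       ISNULL(CAST([row] AS int), 0) AS [row],
       ISNULL(CAST(date_loaded AS datetime), '19000101') AS dt
FROM new.the_table;

Note that ISNULL() does the trick here where COALESCE() would not, as COALESCE() leaves the column nullable in the view's metadata.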

Using CHOOSE() in SQL Server

Bert Wagner explains the CHOOSE() function:

While I know I don’t utilize most of the features available in SQL Server, I like to think I’m at least aware that those features exist.

This week I found a blind spot in my assumption, however. Even though it shipped in SQL Server 2012, the SQL Server CHOOSE function is a feature that I think I’m seeing for the first time this past week.

CHOOSE() and IIF() were functions ported over to make it easier for Access and Excel users to write code. I tend to avoid them because there are typically better idiomatic constructs (like CASE) in SQL Server.
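
For reference, here is a toy example of my own showing CHOOSE() next to the equivalent simple CASE expression:

DECLARE @quarter int = 2;

-- CHOOSE() returns the Nth item in the list (1-based), or NULL if the index is out of range
SELECT CHOOSE(@quarter, 'Jan-Mar', 'Apr-Jun', 'Jul-Sep', 'Oct-Dec') AS QuarterName;

-- The same logic as a simple CASE expression
SELECT CASE @quarter
           WHEN 1 THEN 'Jan-Mar'
           WHEN 2 THEN 'Apr-Jun'
           WHEN 3 THEN 'Jul-Sep'
           WHEN 4 THEN 'Oct-Dec'
       END AS QuarterName;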

Minimal Logging with FastLoadContext

Paul White takes us through another way to perform minimally logged bulk loads with SQL Server:

This post provides new information about the preconditions for minimally logged bulk load when using INSERT...SELECT into indexed tables.

The internal facility that enables these cases is called FastLoadContext. It can be activated from SQL Server 2008 to 2014 inclusive using documented trace flag 610. From SQL Server 2016 onward, FastLoadContext is enabled by default; the trace flag is not required.

Without FastLoadContext, the only index inserts that can be minimally logged are those into an empty clustered index without secondary indexes, as covered in part two of this series. The minimal logging conditions for unindexed heap tables were covered in part one.

Click through for a highly informative article.
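
The basic shape of the load Paul describes looks something like this sketch (the table names are mine, and whether a given run is actually minimally logged depends on the preconditions his article enumerates):

-- SQL Server 2008-2014: FastLoadContext sits behind documented trace flag 610
DBCC TRACEON (610);

-- Bulk-style INSERT...SELECT into a table with a clustered index;
-- recovery model, TABLOCK, and the table's existing contents all factor
-- into whether the insert is minimally logged
INSERT INTO dbo.TargetTable WITH (TABLOCK)
    (Id, SomeValue)
SELECT Id, SomeValue
FROM dbo.SourceTable
ORDER BY Id;    -- presented in clustered key order

DBCC TRACEOFF (610);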

Self-Joins Versus Key Lookups

Erik Darling takes us through an interesting scenario:

Like most tricks, this has a specific use case, but can be quite effective when you spot it.

I’m going to assume you have a vague understanding of parameter sniffing with stored procedures going into this. If you don’t, the post may not make a lot of sense.

Or, heck, maybe it’ll give you a vague understanding of parameter sniffing in stored procedures.

This was new to me.
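
Without giving away Erik's demo, the general shape of the trick looks something like this (the table and column names here are invented for the sketch):

-- Plain version: a narrow index seek on Reputation plus a key lookup per row
SELECT u.Id, u.DisplayName, u.Reputation, u.AboutMe
FROM dbo.Users AS u
WHERE u.Reputation > 100000;

-- Self-join version: filter on the narrow nonclustered index first, then join
-- back to the clustered index on the key to pick up the wide columns
SELECT u2.Id, u2.DisplayName, u2.Reputation, u2.AboutMe
FROM dbo.Users AS u1
JOIN dbo.Users AS u2
    ON u2.Id = u1.Id
WHERE u1.Reputation > 100000;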

Constructing Virtual Tables with VALUES

Kenneth Fisher shows how to use the VALUES clause to construct a virtual table:

This has come up a few times recently. I find it rather fascinating and I can never seem to remember how to do it properly... so, in other words, it’s a perfect subject for a blog post.

Basically, you can use VALUES to create a table within a query. I’ve seen it done in a number of places. Mostly when demoing something and you want some data, but don’t want to actually create a table. I’ve also seen it used to create a numbers table, create test data, etc. Really, any case where you want a short list of values but don’t want to create an actual (or even temp) table to store them.

Click through for examples on how to construct and use this virtual table as a quick replacement for creating a temporary table or table variable.
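
For a quick taste (my own example rather than one of Kenneth's), the table value constructor looks like this:

-- An inline lookup table built with VALUES; the alias defines the column names
SELECT v.StatusCode, v.StatusName
FROM (VALUES
         (1, 'Open'),
         (2, 'In Progress'),
         (3, 'Closed')
     ) AS v (StatusCode, StatusName);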

Avoiding DONE Tokens in Loops

Emanuele Meazzo shows one reason why loops can be so much slower in T-SQL:

Not everybody knows that SQL Server sends a DONE token to the client each time a SQL statement completes (so, everything except variable declarations); for the query above you can basically track it with Extended Events by watching the “SQL Statement Completed” event.
What happens in a loop? For each statement that completes, a token is sent, which means that for this loop that contains 3 statements (the WHILE loop itself is a statement), 15 million tokens are sent to the client.

There can be a drastic difference, as Emanuele shows.
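
To make the arithmetic concrete, here is a loop of the general shape Emanuele describes (the statements themselves are mine; the iteration count is chosen so that 3 statements times 5 million iterations matches the 15 million tokens he mentions):

DECLARE @i int = 0, @x int;

WHILE @i < 5000000          -- the WHILE check itself counts as a statement
BEGIN
    SET @i += 1;            -- each completed statement sends a DONE token to the client
    SET @x = @i % 10;
END;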
