Press "Enter" to skip to content

Category: T-SQL

CAST and CONVERT Make Expressions Nullable

Daniel Hutmacher points out a side effect of using CAST() and CONVERT():

Suppose we want to set up a view in the new solution that mirrors the names and definitions of the old table, so the legacy integration can use that view going forward:

CREATE OR ALTER VIEW new.the_table_like_before
AS
SELECT CAST(id AS varchar(32)) AS id,
       CAST([row] AS int) AS [row],
       CAST(date_loaded AS datetime) AS dt
FROM new.the_table;

Now, if you check out the resulting datatypes of the view, you’ll notice that all the columns are marked nullable, even though they’re all based on non-nullable columns, so the values in the view could never be null.

Read on for a couple of possible solutions.
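
One common workaround, and I'm not claiming it's the one Daniel settles on, is to wrap each cast in ISNULL() with a constant fallback: ISNULL() takes its nullability from the non-null constant, so the view's metadata reports the columns as non-nullable, while the fallback values never actually surface because the base columns can't be null. A rough sketch against the same view:

-- ISNULL() with a literal second argument makes each output column NOT NULL;
-- the fallbacks never appear since the underlying columns are non-nullable.
CREATE OR ALTER VIEW new.the_table_like_before
AS
SELECT ISNULL(CAST(id AS varchar(32)), '') AS id,
       ISNULL(CAST([row] AS int), 0) AS [row],
       ISNULL(CAST(date_loaded AS datetime), '19000101') AS dt
FROM new.the_table;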


Using CHOOSE() in SQL Server

Bert Wagner explains the CHOOSE() function:

While I know I don’t utilize most of the features available in SQL Server, I like to think I’m at least aware that those features exist.

This week, however, I found a blind spot in that assumption: even though it shipped in SQL Server 2012, the SQL Server CHOOSE function is a feature I think I'm seeing for the first time this past week.

CHOOSE() and IIF() were functions ported over to make it easier for Access and Excel users to write code. I tend to avoid them because there are typically better idiomatic constructs (like CASE) in SQL Server.
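
For a quick feel of what it does, here's a minimal sketch (the values are made up for illustration) alongside the CASE expression I'd usually reach for instead:

-- CHOOSE() returns the Nth item from the list (1-based), or NULL if the index is out of range.
SELECT CHOOSE(3, 'Bronze', 'Silver', 'Gold', 'Platinum') AS tier;  -- 'Gold'

-- The equivalent CASE expression, which reads as more idiomatic T-SQL:
SELECT CASE 3
           WHEN 1 THEN 'Bronze'
           WHEN 2 THEN 'Silver'
           WHEN 3 THEN 'Gold'
           WHEN 4 THEN 'Platinum'
       END AS tier;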


Minimal Logging with FastLoadContext

Paul White takes us through another way to perform minimally logged bulk loads with SQL Server:

This post provides new information about the preconditions for minimally logged bulk load when using INSERT...SELECT into indexed tables.

The internal facility that enables these cases is called FastLoadContext. It can be activated from SQL Server 2008 to 2014 inclusive using documented trace flag 610. From SQL Server 2016 onward, FastLoadContext is enabled by default; the trace flag is not required.

Without FastLoadContext, the only index inserts that can be minimally logged are those into an empty clustered index without secondary indexes, as covered in part two of this series. The minimal logging conditions for unindexed heap tables were covered in part one.

Click through for a highly informative article.
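
As a very rough sketch of the kind of load under discussion (the table names are hypothetical, and the actual preconditions, recovery model, target index state, hints, and so on, are exactly what Paul's article covers):

-- SQL Server 2008-2014: enable FastLoadContext via documented trace flag 610.
-- On SQL Server 2016 and later it is on by default, so this step isn't needed.
DBCC TRACEON (610);

-- A bulk-style INSERT...SELECT into an indexed table. Whether it ends up
-- minimally logged depends on the preconditions Paul walks through.
INSERT INTO dbo.TargetIndexed WITH (TABLOCK)
    (id, payload)
SELECT id, payload
FROM dbo.SourceStaging;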


Self-Joins Versus Key Lookups

Erik Darling takes us through an interesting scenario:

Like most tricks, this has a specific use case, but can be quite effective when you spot it.

I’m going to assume you have a vague understanding of parameter sniffing with stored procedures going into this. If you don’t, the post may not make a lot of sense.

Or, heck, maybe it’ll give you a vague understanding of parameter sniffing in stored procedures.

This was new to me.
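
The title names the general shape of the trick, so here is my sketch of that shape, with hypothetical table and column names and no claim that it matches Erik's exact demo: resolve the qualifying keys through the narrow index first, then join back to the same table for the wide columns instead of paying for a key lookup per row.

DECLARE @CustomerID int = 42;  -- stand-in parameter

-- Key-lookup shape: seek the narrow index on CustomerID, then look up the
-- wide columns for every matching row.
SELECT o.OrderID, o.OrderDate, o.Notes
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID;

-- Self-join shape: find the keys via the narrow index, then join back to the
-- same table to fetch the wide columns.
SELECT o.OrderID, o.OrderDate, o.Notes
FROM (
    SELECT OrderID
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
) AS k
JOIN dbo.Orders AS o
    ON o.OrderID = k.OrderID;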


Constructing Virtual Tables with VALUES

Kenneth Fisher shows how to use the VALUES clause to construct a virtual table:

This has come up a few times recently, I find it rather fascinating and I can never seem to remember how to do it properly .. so in other words, it’s a perfect subject for a blog post.

Basically, you can use VALUES to create a table within a query. I’ve seen it done in a number of places. Mostly when demoing something and you want some data, but don’t want to actually create a table. I’ve also seen it used to create a numbers table, create test data, etc. Really, any case where you want a short list of values but don’t want to create an actual (or even temp) table to store them.

Click through for examples on how to construct and use this virtual table as a quick replacement for creating a temporary table or table variable.
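
A minimal sketch of the idea (names and values invented for illustration):

-- VALUES as a table value constructor in the FROM clause: a three-row,
-- two-column virtual table with no real or temp table involved.
SELECT v.id, v.colour
FROM (VALUES
        (1, 'red'),
        (2, 'green'),
        (3, 'blue')
     ) AS v (id, colour);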


Avoiding DONE Tokens in Loops

Emanuele Meazzo shows one reason why loops can be so much slower in T-SQL:

Not everybody knows that SQL Server sends a DONE token to the client each time a SQL statement completes (so, everything except variable declarations); for the query above you can basically track it with Extended Events via the "SQL Statement Completed" event.
What happens in a loop? For each statement that completes, a token is sent, which means that for this loop, which contains 3 statements (the WHILE loop itself is a statement), 15 million tokens are sent to the client.

There can be a drastic difference, as Emanuele shows.
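
To make the shape concrete, here's a loop in that general spirit, plus one documented mitigation: SET NOCOUNT ON inside a stored procedure stops the per-statement DONE_IN_PROC messages going back to the client. The iteration count and procedure name below are my own invention.

-- A chatty loop: every completed statement in the batch sends a token
-- back to the client.
DECLARE @i int = 0;
WHILE @i < 5000000
BEGIN
    SET @i += 1;
END;
GO

-- Wrapping the loop in a stored procedure with SET NOCOUNT ON suppresses the
-- per-statement DONE_IN_PROC messages to the client.
CREATE OR ALTER PROCEDURE dbo.CountQuietly
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @i int = 0;
    WHILE @i < 5000000
    BEGIN
        SET @i += 1;
    END;
END;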


Modifying XML in T-SQL

Max Vernon takes us through the .modify function:

Determining the property syntax when modifying XML values in SQL Server can be time-consuming if you don’t work with XML regularly. SQL Server includes a very flexible XML subsystem, called XML_DML, or XML Data Manipulation Language. XML_DML can be used to easily and effectively update XML values in an xml-typed column or variable. This question on dba.stackexchange.com asked about using the .modify function to change the value of an element, which in turn prompted this post.

Read on for a number of examples.
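
As a minimal sketch of the syntax in question (the XML and values here are made up): "replace value of" requires a singleton path, so the [1] ordinal is needed even when only one node matches.

DECLARE @x xml = N'<root><item id="1">old</item></root>';

-- Change the element's text; the path must resolve to exactly one node.
SET @x.modify('replace value of (/root/item/text())[1] with "new"');

SELECT @x;  -- <root><item id="1">new</item></root>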


Considerations when Deleting Lots of Data

Ed Elliott takes us through things to think about before deleting a few million rows from a table:

Fragmentation
Fragmentation occurs when we delete from pages and there is still data surrounding our data. If we have 100 rows and delete every odd row, we would have perfect fragmentation, in that we have doubled the size of the data that we need. If we delete rows 1-49, even though we remove the same number of rows, we don’t have any fragmentation, as the data is in a continuous block. Knowing how the data is stored on disk and how the data will be deleted (is it the first x records or every x record?) is vital, so that we know whether, after the delete, we should also reorganise the indexes to remove the deleted records.

Ed has quality insights here, so check it out.
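
For what it's worth, a common pattern for this kind of purge, and I'm not suggesting it's the exact approach Ed lands on, is to delete in batches and then reorganise the affected indexes if the deletes left pages half empty. Table and column names below are hypothetical.

-- Delete in batches so each transaction, and its log usage, stays small.
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.BigTable
    WHERE EventDate < '20150101';

    SET @rows = @@ROWCOUNT;
END;

-- If the delete pattern scattered gaps across pages, a reorganise compacts them.
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;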
