
Category: T-SQL

Changing Constraints in Near-Zero Downtime Situations

I have part six of my interminable series on near-zero downtime deployments:

The locking story is not the same as with the primary and unique key constraints. First, there’s one extra piece: the transition will block access to dbo.LookupTable as well as the table we create the constraint on. That’s to keep us from deleting rows in our lookup table before the key is in place.

Second, the locks begin as soon as we hit F5. Even SELECT statements get blocked, requesting an LCK_M_SCH_S lock. Bad news, people.

So what can we do to get around this problem? Two routes: the ineffectual way and the ugly way.

Despite my being a ray of sunshine here, you should still check this out. It’s shorter than the average Russian novel, at least.
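For reference, here’s a hedged sketch of the general shape of the change (table and column names other than dbo.LookupTable are placeholders, and this is not necessarily either of the post’s two routes): adding the key WITH NOCHECK skips scanning existing rows, at the cost of an untrusted constraint until you validate it.

```sql
-- Add the foreign key without scanning existing rows; names other than
-- dbo.LookupTable are hypothetical.
ALTER TABLE dbo.PredictionTest WITH NOCHECK
    ADD CONSTRAINT FK_PredictionTest_LookupTable
    FOREIGN KEY (LookupTableID)
    REFERENCES dbo.LookupTable (LookupTableID);

-- Later, validate existing rows so the optimizer can trust the constraint.
-- This validation scan is where the locking story comes back into play.
ALTER TABLE dbo.PredictionTest
    WITH CHECK CHECK CONSTRAINT FK_PredictionTest_LookupTable;
```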


Gaps and Islands with Dates

Bert Wagner hits one of my favorite topics:

In a traditional gaps and islands problem, the goal is to identify groups of continuous data sequences (islands) and groups of data where the sequence is missing (gaps).

Many people encounter gaps and islands problems when dealing with ranges of dates; recently I did too, but with an interesting twist:

How do you determine gaps and islands of data that has overlapping date ranges?

Check out Bert’s explanation of the solution; it’s a good one.
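For a flavor of what a solution can look like, here is a minimal sketch of one common approach to overlapping ranges (dbo.DateRanges is a hypothetical table, and this is not necessarily Bert’s exact technique): a new island begins whenever a row’s start date falls after the running maximum end date of all earlier rows.

```sql
WITH Marked AS
(
    SELECT
        StartDate,
        EndDate,
        -- 1 marks the start of a new island: this row's range begins after
        -- every previously-seen range has ended.
        CASE
            WHEN StartDate <= MAX(EndDate) OVER
                 (ORDER BY StartDate, EndDate
                  ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
            THEN 0 ELSE 1
        END AS IsIslandStart
    FROM dbo.DateRanges
),
Grouped AS
(
    SELECT
        StartDate,
        EndDate,
        -- A running total of island starts gives each island its own ID.
        SUM(IsIslandStart) OVER
            (ORDER BY StartDate, EndDate ROWS UNBOUNDED PRECEDING) AS IslandId
    FROM Marked
)
SELECT
    IslandId,
    MIN(StartDate) AS IslandStart,
    MAX(EndDate) AS IslandEnd
FROM Grouped
GROUP BY IslandId;
```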


Performance Testing Simple Scalar UDF Functions

Wayne Sheffield tries out a simple scalar UDF in SQL Server 2019 to see how it performs:

I recently published a post detailing the new Scalar UDF Inlining feature in SQL 2019 here. That post introduced the new feature and compared its performance to the other function types, continuing the performance evaluation of functions that I had previously posted here and here. In the Scalar UDF Inlining post, I used a function to strip all non-numeral values from a string and return the result. This used the FOR XML output option.

In thinking about how scalar functions are commonly used, I’ve decided to revisit this feature with a simpler function. I will still compare it to all the other types of functions to see how Scalar UDF Inlining compares to the others.

Wayne’s results are music to the product team’s ears, I’m sure.
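As a quick illustration of the sort of function under test (the function itself is a made-up placeholder), SQL Server 2019 also exposes an is_inlineable flag in sys.sql_modules, so you can check whether a given scalar UDF qualifies for inlining:

```sql
-- A trivial scalar UDF; the name and logic are hypothetical.
CREATE OR ALTER FUNCTION dbo.AddSalesTax (@Price DECIMAL(10, 2))
RETURNS DECIMAL(10, 2)
AS
BEGIN
    RETURN @Price * 1.08;
END;
GO

-- SQL Server 2019: does the engine consider this function inlineable?
SELECT
    OBJECT_NAME(object_id) AS FunctionName,
    is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.AddSalesTax');
```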


Comparing STRING_AGG Performance Against FOR XML PATH

Jeffry Schwartz compares the performance of STRING_AGG in SQL Server 2017 versus the tried and true method of FOR XML PATH for string concatenation:

Two different T-SQL methods were used for testing. The FOR XML PATH solution incorporates the REVERSE and STUFF functions along with FOR XML PATH. The STRING_AGG solution is much simpler, but is only available on SQL Server 2017 currently.

The FOR XML PATH method has been documented thoroughly over the years in many articles, so these explanations will not be repeated here. However, two points are worth mentioning. The first one is that the @strSeparator (in this case ‘ or ’) string will reside at the end of the constructed string (in this case @strFilter). Since the last occurrence of @strSeparator must be removed to prevent a syntax error, characters must be dropped from the end of the string. Although it may seem obvious that four characters (the length of @strSeparator) should be removed, actually only three need be removed because the end of the constructed string is “r”, not “ ”. Therefore, the length argument of the LEFT function specification is the length of @strFilter minus three.

I like STRING_AGG a lot, primarily because I can remember it and I can explain it to people relatively new to SQL Server. Read on to see how they both compare in terms of performance.
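Here is a minimal sketch of the two methods side by side, against a hypothetical dbo.FilterValues table and using the ‘ or ’ separator from the quoted example:

```sql
DECLARE @strFilter VARCHAR(8000);

-- FOR XML PATH: build the list, then trim the trailing separator. LEN()
-- ignores the trailing space, so only three characters (' or') come off.
SELECT @strFilter = LEFT(x.Concatenated, LEN(x.Concatenated) - 3)
FROM
(
    SELECT (SELECT FilterValue + ' or '
            FROM dbo.FilterValues
            FOR XML PATH('')) AS Concatenated
) AS x;

-- STRING_AGG (SQL Server 2017+): the separator appears only between values,
-- so there is nothing to trim.
SELECT @strFilter = STRING_AGG(FilterValue, ' or ')
FROM dbo.FilterValues;
```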


Querying Cosmos DB From SQL Server

Jovan Popovic shows how you can use the Cosmos DB ODBC driver to perform OPENROWSET queries against Cosmos DB:

Now you need to install ODBC Driver for CosmosDB on the computer where you have SQL Server installed. I’m using Microsoft Azure Cosmos DB ODBC 64-bit.msi for 64-bit Windows – 64-bit versions of Windows 8.1 or later, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.

Once you install this driver, you should set up an ODBC data source as a system DSN and test the connection:

If you’re running SQL Server 2019, you can follow Jovan’s first two steps and then create an external data source and table with PolyBase to get to the same results.
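Once the DSN tests clean, the query from SQL Server looks roughly like this sketch, in which ‘CosmosDSN’ and the inner Cosmos DB query are placeholder assumptions:

```sql
-- Ad hoc distributed queries must be enabled for this form of OPENROWSET.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

-- Query Cosmos DB through the ODBC DSN via the MSDASQL provider.
SELECT a.*
FROM OPENROWSET(
         'MSDASQL',
         'DSN=CosmosDSN',
         'SELECT c.id, c.name FROM c') AS a;
```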


Finding Three-Part or Four-Part Names in SQL Server

Louis Davidson shows how we can find three-part or four-part naming in T-SQL code:

In order to make this work, one of the considerations is to eliminate cross-database dependencies, as you can’t reference objects that don’t exist in views; even in stored procedures, which offer delayed resolution of objects, you can’t test the code without the database it references.

In addition, and somewhat more important to the process, is dealing with three-part names that reference the name of the database your object is in. During the comparison process, the database can be created with a name that is different from your target database to compare to (referred to as a shadow database). So if you are in database X and have references to X.schema.table, but the database is generated as X_Shadow, the X. is now a cross-database reference rather than the local reference you want.

Four-part names to linked servers are a different sort of nightmare, but one that is (hopefully) exceedingly rare. The queries presented will help with this as well.

Louis has a few scripts to help you find these. If your code is in source control already, you could also build a regular expression to search through it.
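Beyond searching the code itself, one hedged starting point (these are not Louis’s scripts) is to ask SQL Server which references explicitly name a database or a linked server:

```sql
-- Any row with a non-NULL database or server name is a three-part or
-- four-part reference worth reviewing.
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS ReferencingSchema,
    OBJECT_NAME(d.referencing_id) AS ReferencingObject,
    d.referenced_server_name,
    d.referenced_database_name,
    d.referenced_schema_name,
    d.referenced_entity_name
FROM sys.sql_expression_dependencies d
WHERE d.referenced_database_name IS NOT NULL
   OR d.referenced_server_name IS NOT NULL;
```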


Enabling and Disabling Triggers

Kenneth Fisher shows how you can enable and disable triggers in SQL Server:

I tend to feel that a lot of people who use triggers don’t really understand them. That said, every now and again you have to deal with them. And in particular (for this post) you might need to disable and then re-enable them. Enabling and disabling are identical commands in this case, so I’m just going to use the DISABLE version and you can replace it with ENABLE as needed.

To be fair, the world might be a better place if we disabled a majority of triggers…
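The basic forms look like this, with dbo.SomeTable and TR_SomeTable standing in as placeholder names:

```sql
-- Disable one trigger, do the work that should bypass it, then re-enable it.
DISABLE TRIGGER dbo.TR_SomeTable ON dbo.SomeTable;
-- ...work happens here...
ENABLE TRIGGER dbo.TR_SomeTable ON dbo.SomeTable;

-- Or hit everything at once: all triggers on a table, or all DDL triggers
-- at database scope.
DISABLE TRIGGER ALL ON dbo.SomeTable;
DISABLE TRIGGER ALL ON DATABASE;
```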


Changing Tables with Limited Downtime

I continue my series on near-zero downtime deployments and this time look at table changes:

The realm of More Significant Changes is not where you often want to be. There’s a lot of scaffolding code you need to write. Basically, suppose you want to make a repair on the 5th story exterior of an 8-story building. You have a couple of options: the YOLO option is to kick everybody out of the building and have people rappel from the top of the building down to the 5th story to make their changes. They need all of the people out of the building because shut up it’s a strained analogy. This approach is kind of inconvenient: people have to stay out of your building until your people are done repairing the exterior. That’s blocking in the database world.

On the other side, you can build a bunch of scaffolding and attach it to the exterior of the building, perform your repairs, and tear down that scaffolding. While the scaffolding is up, people come and go like normal and don’t even think about it. As you tear the scaffolding down, you temporarily block the door for a moment as you’re finishing your work. This is much more convenient for end users and fits the “near-zero downtime” approach we’re taking.

Strained analogies aside, this is a long post on making a series of table-related changes without your end users noticing.
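To make the analogy concrete, the scaffolding approach often ends with a quick swap along these lines (a hedged sketch with placeholder names, not the post’s exact steps): the new table is built and backfilled alongside the old one, so the only blocking happens during a brief rename.

```sql
-- Swap the backfilled dbo.MyTable_New into place; the transaction keeps the
-- door blocked only for the instant the renames take.
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
    EXEC sp_rename 'dbo.MyTable_New', 'MyTable';
COMMIT TRANSACTION;
```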


Making Stored Procedure Changes With Limited Downtime

I continue my series on database development in a (near) zero downtime environment:

Versioning a procedure is pretty simple: you create a new procedure with the alterations you want. Corporate naming standards where I’m at have you add a number to the end of versioned procedures, so if you have dbo.SomeProcedure, the new version would be dbo.SomeProcedure01. Then, the next time you version, you’ll have dbo.SomeProcedure02 and so on. For frequently-changing procedures, you might get up to version 05 or 06, but in practice, you’re probably not making that many changes to a procedure’s signature.

For example, looking at a directory with exactly 100 procedures in it, I see 7 with a number at the end. Two of those seven procedures are old versions of procedures I can’t drop quite yet, so that means that there are only five “unique” procedures that we’ve versioned in a code base which is two years old. Looking at a different part of the code with 879 stored procedures, 95 have been versioned at least once in the 15 or so years of that code base’s existence. The real number is a bit higher than that because we’ve renamed procedures over time, and renamings tend to start the process over, as we might go from dbo.SomeProcedure04 to dbo.SomeNewProcedure when we redesign underlying tables or make other drastic architectural changes.

The secret is, I’m always versioning.
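Here is a sketch of the convention with hypothetical names: the new version changes the signature, while dbo.SomeProcedure stays in place until every caller has moved over.

```sql
-- dbo.SomeProcedure01: version 01 adds a parameter the original lacked.
-- The procedure body and dbo.Customer are placeholders.
CREATE OR ALTER PROCEDURE dbo.SomeProcedure01
    @CustomerID INT,
    @IncludeInactive BIT = 0   -- the new piece of the signature
AS
BEGIN
    SET NOCOUNT ON;

    SELECT c.CustomerID, c.CustomerName
    FROM dbo.Customer c
    WHERE c.CustomerID = @CustomerID
        AND (c.IsActive = 1 OR @IncludeInactive = 1);
END;
GO
```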


Benefits of Inline Indexes

Phil Factor covers a topic fairly close to my heart:

Usually, the added features of the CREATE TABLE syntax in new releases of SQL Server are esoteric, and unless you are dealing with memory-optimized tables or other esoteric stuff, they aren’t of great interest. However, the Inline INDEX for both a table and column index has just crept in quietly with SQL Server 2014 (12.x). This was interesting because the SQL Server team back-fitted it to all tables rather than just in-memory OLTP tables for which it was, at the time, found necessary. The new syntax was introduced which allows you to create certain index types inline with the table definition. These could be at column level, concerning just that column, or at the table level, with indexes containing several columns.

Why interesting? This affects multi-statement table functions, user-defined table types, table-valued parameters as well as table variables. It was considered a game-changer for table variables because, for a start, it allowed non-unique indexes or explicit clustered indexes to be declared on columns for the first time, because you can create indexes on table variables as part of the table definition. Of more significance were the table-level indexes that allowed you to specify multi-column indexes. Previous releases had allowed multi-column primary or unique constraints, but not explicitly named indexes. You still cannot declare an index after the table is created, which is a shame, as there are good reasons for being able to do so after a table is stocked with data. Any sort of large import of data into a table that is over-indexed or prematurely-indexed is doomed to crawl rather than to run. I’ll show this later on in this article.

Click through for an analysis of inline indexes themselves as well as how they fit on table variables—something I tend not to do much.
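For reference, the inline syntax looks like this on a table variable, with placeholder names, covering both the column-level and the multi-column table-level forms:

```sql
DECLARE @Orders TABLE
(
    OrderID INT NOT NULL PRIMARY KEY,
    CustomerID INT NOT NULL INDEX IX_CustomerID,      -- column-level inline index
    OrderDate DATE NOT NULL,
    INDEX IX_Customer_Date (CustomerID, OrderDate)    -- table-level, multi-column
);
```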
