Press "Enter" to skip to content

Category: T-SQL

Making Stored Procedure Changes With Limited Downtime

I continue my series on database development in a (near) zero downtime environment:

Versioning a procedure is pretty simple: you create a new procedure with the alterations you want. Corporate naming standards where I’m at have you add a number to the end of versioned procedures, so if you have dbo.SomeProcedure, the new version would be dbo.SomeProcedure01. Then, the next time you version, you’ll have dbo.SomeProcedure02 and so on. For frequently-changing procedures, you might get up to version 05 or 06, but in practice, you’re probably not making that many changes to a procedure’s signature. For example, looking at a directory with exactly 100 procedures in it, I see 7 with a number at the end. Two of those seven procedures are old versions of procedures I can’t drop quite yet, so there are only five “unique” procedures that we’ve versioned in a code base which is two years old. Looking at a different part of the code with 879 stored procedures, 95 have been versioned at least once in the 15 or so years of that code base’s existence. The real number is a bit higher than that because we’ve renamed procedures over time, and renamings tend to start the process over, as we might go from dbo.SomeProcedure04 to dbo.SomeNewProcedure when we redesign underlying tables or make other drastic architectural changes.

The secret is, I’m always versioning.
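
To make the pattern concrete, here’s a minimal sketch (procedure name, parameters, and body are all hypothetical, and CREATE OR ALTER needs SQL Server 2016 SP1 or later). The old version stays in place while callers migrate:

CREATE OR ALTER PROCEDURE dbo.SomeProcedure01
    @CustomerID int,
    @AsOfDate date    -- new parameter that motivated the version bump
AS
BEGIN
    SET NOCOUNT ON;

    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders o
    WHERE o.CustomerID = @CustomerID
        AND o.OrderDate <= @AsOfDate;
END;
GO

-- dbo.SomeProcedure stays untouched until every caller has moved to 01;
-- only then does a later deployment drop it.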


Benefits of Inline Indexes

Phil Factor covers a topic fairly close to my heart:

Usually, the added features of the CREATE TABLE syntax in new releases of SQL Server are esoteric, and unless you are dealing with memory-optimized tables or other esoteric stuff, they aren’t of great interest. However, the inline INDEX for both a table and column index crept in quietly with SQL Server 2014 (12.x). This was interesting because the SQL Server team back-fitted it to all tables rather than just the in-memory OLTP tables for which it was, at the time, found necessary. The new syntax allows you to create certain index types inline with the table definition, either at column level, concerning just that column, or at table level, with indexes containing several columns.

Why interesting? This affects multi-statement table functions, user-defined table types, table-valued parameters, and table variables. It was considered a game-changer for table variables because, for a start, it allowed non-unique indexes or explicit clustered indexes to be declared on columns for the first time, since you can create indexes on table variables as part of the table definition. Of more significance were the table-level indexes that allowed you to specify multi-column indexes. Previous releases had allowed multi-column primary or unique constraints, but not explicitly named indexes. You still cannot declare an index after the table is created, which is a shame, as there are good reasons for being able to do so once a table is stocked with data. Any sort of large import of data into a table that is over-indexed or prematurely indexed is doomed to crawl rather than to run. I’ll show this later on in this article.

Click through for an analysis of inline indexes themselves as well as how they fit on table variables—something I tend not to do much.
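
For a quick feel for the syntax (all names here are made up), here is a column-level inline index, a table-level multi-column inline index, and a non-unique index on a table variable:

CREATE TABLE dbo.OrderLines
(
    OrderLineID int NOT NULL INDEX ix_OrderLineID CLUSTERED,    -- column-level inline index
    OrderID int NOT NULL,
    ProductID int NOT NULL,
    INDEX ix_Order_Product NONCLUSTERED (OrderID, ProductID)    -- table-level, multi-column
);
GO

DECLARE @RecentLines TABLE
(
    OrderLineID int NOT NULL,
    OrderID int NOT NULL,
    INDEX ix_Order NONCLUSTERED (OrderID)    -- non-unique index on a table variable, 2014+
);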


Making Near-Zero Downtime Deployments Easier

I continue my series on developing for near-zero downtime deployments:

By default, SQL Server uses pessimistic locking, meaning that readers can block writers, writers can block readers, and writers can block writers. In most circumstances, you can switch from Read Committed to Read Committed Snapshot Isolation and gain several benefits. RCSI has certainly been in the product long enough to vet the code and Oracle has defaulted to an optimistic concurrency level for as long as I can remember.

The downtime-reducing benefit to using RCSI is that if you have big operations which write to tables, your inserts, updates, and deletes won’t affect end users. End users will see the old data until your transactions commit, so your updates will not block readers. You can still block writers, so you will want to batch your operations—that is, open a transaction, perform a relatively small operation, and commit that transaction. I will go into batching in some detail in a later post in the series, so my intent here is just to prime you for it and emphasize that Read Committed Snapshot Isolation is great.

Now that I have the core concepts taken care of, the next posts in the series move into practical implementation examples with a lot of code.
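
For reference, turning on RCSI is a single database-level switch, and batching follows the open-a-small-transaction-and-commit pattern described above. A sketch, with hypothetical database, table, and column names:

-- WITH ROLLBACK IMMEDIATE kicks out open transactions so the switch can complete
ALTER DATABASE SomeDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Batched delete: each small transaction commits quickly, keeping writer blocking brief
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (5000)
    FROM dbo.StagingRows
    WHERE LoadDate < '20180101';

    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END;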


UNCOMPRESS Isn’t DECOMPRESS

Solomon Rutzky strives to answer the question, “What is the UNCOMPRESS function anyhow?”:

With no clear indications of what the UNCOMPRESS function does, we can at least pass in some simple values to see what comes back, and see if we can make sense of the output. For the following tests, please keep in mind that “8-bit” refers to the VARCHAR, CHAR, and TEXT (deprecated) datatypes. And, “16-bit” refers to the NVARCHAR, NCHAR, NTEXT (deprecated), and XML datatypes.

Read on as Solomon figures out what it does and how useless it is for anybody nowadays.
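
For contrast, the documented pair is COMPRESS and DECOMPRESS (SQL Server 2016 and later), which round-trip data through GZIP and have nothing to do with UNCOMPRESS:

SELECT CAST(DECOMPRESS(COMPRESS(N'Hello, world')) AS nvarchar(max)) AS RoundTrip;
-- Returns: Hello, world (DECOMPRESS returns varbinary(max), hence the CAST)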


Generating Reference Numbers With Sequences

Matthew McGiffen shares one technique to generate reference numbers using a sequence and the FORMAT function:

One thing to note is that, while the sequence will generally produce unique numbers, it is still worth enforcing that in your table definition with a unique constraint, i.e.

ALTER TABLE dbo.Orders ADD CONSTRAINT UQ_Orders_OrderReference UNIQUE(OrderReference);

This prevents someone from issuing an UPDATE command that might create a duplicate reference. 

As long as you can live with the occasional gap in your reference number, sequences are a good solution to the problem.
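
A minimal sketch of the technique (the sequence name and reference format are illustrative, not necessarily Matthew’s exact code):

CREATE SEQUENCE dbo.OrderRefSequence AS int
    START WITH 1 INCREMENT BY 1;
GO

-- FORMAT zero-pads the sequence value: ORD00000001, ORD00000002, ...
SELECT 'ORD' + FORMAT(NEXT VALUE FOR dbo.OrderRefSequence, '00000000') AS OrderReference;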


ISNUMERIC And Unexpected Results

Jen Stirrup explains why ISNUMERIC isn’t all that great:

I noted that one of the columns failed to convert VARCHAR to DECIMAL.

The error message is below, and it’s usually fairly easy to sort:
Error converting data type varchar to numeric

Normally, I’d use ISNUMERIC to identify the rows that fail to have a value in that column that could be converted to a number. Then, I could identify the value, and then I could replace or exclude it, as required.

However, on this occasion, using ISNUMERIC failed to identify any rows as being non-numeric. 

Click through to see why Jen got this result.
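
Some classic examples of the gotcha (not necessarily Jen’s exact culprit): ISNUMERIC accepts strings that CONVERT to decimal rejects, which is why TRY_CONVERT (SQL Server 2012 and later) is the more reliable filter:

SELECT ISNUMERIC('1e4') AS SciNotation,     -- 1, yet CONVERT to decimal throws the error above
       ISNUMERIC('$')   AS CurrencySymbol,  -- 1
       ISNUMERIC(',')   AS LoneComma;       -- 1

SELECT TRY_CONVERT(decimal(18,2), '1e4') AS SafeConvert;  -- NULL instead of an error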


sp_executesql WITH RECOMPILE

Erik Darling points out that running sp_executesql with the WITH RECOMPILE setting doesn’t really change anything:

This’ll give us the key lookup plan you see above. If I re-run the query and use the 2010-12-30 date, we’ll re-use the key lookup plan.

That’s an example of how parameters are sniffed.

Sometimes, that’s not a good thing. Like, if I passed in 2008-12-30, we probably wouldn’t like a lookup too much.

One common “solution” to parameter sniffing is to tack a recompile hint somewhere.

Click through for Erik’s demonstration.
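
The pattern in question looks something like this (query and table names are illustrative). Per Erik’s point, WITH RECOMPILE on the outer call doesn’t prevent the inner parameterized statement’s plan from being cached and reused; an OPTION (RECOMPILE) hint inside the statement text is what forces a fresh plan:

EXEC sys.sp_executesql
    N'SELECT COUNT(*) FROM dbo.Posts WHERE CreationDate >= @d;',
    N'@d datetime',
    @d = '20101230'
WITH RECOMPILE;  -- applies to the outer EXEC, not to the inner statement's cached plan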


Using Calendar Tables

I have a post up on using calendar tables:

There’s one problem with picking a SQL Saturday in April: Easter and Passover tend to run right around that time, and nobody wants a SQL Saturday on Passover or the day before Easter. Unfortunately, our calendar table doesn’t include holiday information. So let’s add it!

Working with holidays and working with fiscal years versus calendar years are just two of the uses of calendar tables. But they’re the only two that I show.
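
To give a flavor of the technique (the column names on the calendar table are hypothetical): find the April Saturdays that don’t collide with a holiday or the day before one.

SELECT c.[Date]
FROM dbo.Calendar c
WHERE c.CalendarYear = 2019
    AND c.CalendarMonth = 4
    AND c.[DayName] = 'Saturday'
    AND c.IsHoliday = 0             -- holiday flag added to the calendar table
    AND c.IsDayBeforeHoliday = 0;   -- e.g., the Saturday before Easter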
