A Near-Zero Downtime Case Study

I have a post covering an example of making significant changes with near-zero downtime:

This is where we start the decline phase in our story. Our temporary procedures existed as a bridge between the old procedures, which took ClientID, and the new procedures, which take ProfileID. With the final versions of these procedures, we replace @ClientID with @ProfileID in the input parameters and update any conditional logic within filters to use ProfileID instead of ClientID.

The only remaining use of ClientID in these procedures is as an output in select statements, as we still need that for the old code; by this time, all of those references are ProfileID AS ClientID. Otherwise, these new procedures are built around ProfileID.
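A minimal sketch of what one of these final procedures might look like, assuming a hypothetical dbo.Orders table and procedure name:

CREATE OR ALTER PROCEDURE dbo.GetProfileOrders
    @ProfileID INT
AS
BEGIN
    SET NOCOUNT ON;

    -- ProfileID drives the filtering; the alias exists only so older
    -- calling code which expects a ClientID column keeps working.
    SELECT
        o.OrderID,
        o.ProfileID,
        o.ProfileID AS ClientID
    FROM dbo.Orders o
    WHERE o.ProfileID = @ProfileID;
END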

We still have the original procedures that the application code uses and they reference our now-finalized procedures. These procedures are now transition procedures—we need them right now but as we move application code over to the new procedures, we can deprecate these. But I’m getting ahead of myself a little bit.
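In this sketch (again with made-up names), a transition procedure is little more than a shim which preserves the old signature:

CREATE OR ALTER PROCEDURE dbo.GetClientOrders
    @ClientID INT
AS
BEGIN
    SET NOCOUNT ON;

    -- Keep the old signature alive while application code migrates
    -- to calling dbo.GetProfileOrders directly.
    EXEC dbo.GetProfileOrders @ProfileID = @ClientID;
END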

Click through for the full story as well as a bunch of pictures which completely understate the vastness of the work done.

Near-Zero Downtime Identity Column Changes

I’m getting close to the end of my series on near-zero downtime deployments. This latest post involves identity column changes:

There are some tables where you create an identity value and expect to cycle through data. An example for this might be a queue table, where the data isn’t expected to live permanently but it is helpful to have a monotonically increasing value to determine order (just watch out for those wrap-arounds and you’re fine). An example of reseeding is below:

-- Reset the identity counter for dbo.MyTable back to a seed of 1.
DBCC CHECKIDENT('dbo.MyTable', RESEED, 1);

This operation needs to take an LCK_M_SCH_M lock, otherwise known as a schema modification lock. Any transactions which are writing to the table will block your transaction, but so will any readers, unless you have Read Committed Snapshot Isolation turned on or the reader is in the READ UNCOMMITTED or SNAPSHOT transaction isolation level.

If you are using RCSI and don’t have extremely long-running transactions, this is an in-and-out operation, so even though there’s a little bit of blocking, it’s minimal.
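If you want to confirm that RCSI is on before running a reseed, a quick check against sys.databases does the trick:

SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = DB_NAME();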

Not all changes are this easy, though.

Changing Constraints in Near-Zero Downtime Situations

I have part six of my interminable series on near-zero downtime deployments:

The locking story is not the same as with the primary and unique key constraints. First, there’s one extra piece: the transition will block access to dbo.LookupTable as well as the table we create the constraint on. That’s to keep us from deleting rows in our lookup table before the key is in place.

Second, the locks begin as soon as we hit F5. Even SELECT statements get blocked, waiting on an LCK_M_SCH_S lock. Bad news, people.
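For reference, the kind of statement which triggers this locking is an ALTER TABLE adding a foreign key; in this sketch, the table and column names are purely illustrative:

ALTER TABLE dbo.SomeTable WITH CHECK
    ADD CONSTRAINT FK_SomeTable_LookupTable
    FOREIGN KEY (LookupTableID)
    REFERENCES dbo.LookupTable (LookupTableID);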

So what can we do to get around this problem? Two routes: the ineffectual way and the ugly way.

Despite my being a ray of sunshine here, you should still check this out. It’s shorter than the average Russian novel, at least.

Changing Tables with Limited Downtime

I continue my series on near-zero downtime deployments and this time look at table changes:

The realm of More Significant Changes is not where you often want to be. There’s a lot of scaffolding code you need to write. Basically, suppose you want to make a repair on the 5th story exterior of an 8-story building. You have a couple of options: the YOLO option is to kick everybody out of the building and have people rappel from the top of the building down to the 5th story to make their changes. They need all of the people out of the building because shut up it’s a strained analogy. This approach is kind of inconvenient: people have to stay out of your building until your people are done repairing the exterior. That’s blocking in the database world.

On the other side, you can build a bunch of scaffolding and attach it to the exterior of the building, perform your repairs, and tear down that scaffolding. While the scaffolding is up, people come and go like normal and don’t even think about it. As you tear the scaffolding down, you temporarily block the door for a moment as you’re finishing your work. This is much more convenient for end users and fits the “near-zero downtime” approach we’re taking.

Strained analogies aside, this is a long post on making a series of table-related changes without your end users noticing.

Making Stored Procedure Changes With Limited Downtime

I continue my series on database development in a (near) zero downtime environment:

Versioning a procedure is pretty simple: you create a new procedure with the alterations you want. Corporate naming standards where I’m at have you add a number to the end of versioned procedures, so if you have dbo.SomeProcedure, the new version would be dbo.SomeProcedure01. Then, the next time you version, you’ll have dbo.SomeProcedure02 and so on.

For frequently-changing procedures, you might get up to version 05 or 06, but in practice, you’re probably not making that many changes to a procedure’s signature. For example, looking at a directory with exactly 100 procedures in it, I see 7 with a number at the end. Two of those seven are old versions of procedures I can’t drop quite yet, so that means we’ve versioned only five “unique” procedures in a code base which is two years old. Looking at a different part of the code with 879 stored procedures, 95 have been versioned at least once in the 15 or so years of that code base’s existence. The real number is a bit higher than that because we’ve renamed procedures over time, and renaming tends to start the process over: we might go from dbo.SomeProcedure04 to dbo.SomeNewProcedure when we redesign underlying tables or make other drastic architectural changes.

The secret is, I’m always versioning.
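As a hypothetical illustration of that naming convention (the table and columns here are made up), suppose dbo.SomeProcedure needs a new optional parameter:

-- The existing procedure stays in place for current callers.
CREATE OR ALTER PROCEDURE dbo.SomeProcedure
    @CustomerID INT
AS
BEGIN
    SELECT c.CustomerID, c.CustomerName
    FROM dbo.Customer c
    WHERE c.CustomerID = @CustomerID;
END
GO

-- The new version takes a numeric suffix along with the changed signature.
CREATE OR ALTER PROCEDURE dbo.SomeProcedure01
    @CustomerID INT,
    @IncludeInactive BIT = 0
AS
BEGIN
    SELECT c.CustomerID, c.CustomerName
    FROM dbo.Customer c
    WHERE c.CustomerID = @CustomerID
        AND (c.IsActive = 1 OR @IncludeInactive = 1);
END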

Thoughts on Merging Dacpacs

Ed Elliott shares some thoughts on merging dacpacs, including the all-important tl;dr:

You probably don’t want to actually merge dacpacs; you probably want to deploy multiple dacpacs and use /p:IncludeCompositeObjects=true.

If you do really want to merge dacpacs you can start with this sample: https://github.com/GoEddie/DacpacMerge

That said, it’s not too long, so do read.

Making Near-Zero Downtime Deployments Easier

I continue my series on developing for near-zero downtime deployments:

By default, SQL Server uses pessimistic locking, meaning that readers can block writers, writers can block readers, and writers can block writers. In most circumstances, you can switch from Read Committed to Read Committed Snapshot Isolation and gain several benefits. RCSI has certainly been in the product long enough to vet the code, and Oracle has defaulted to an optimistic concurrency model for as long as I can remember.

The downtime-reducing benefit to using RCSI is that if you have big operations which write to tables, your inserts, updates, and deletes won’t affect end users. End users will see the old data until your transactions commit, so your updates will not block readers. You can still block writers, so you will want to batch your operations—that is, open a transaction, perform a relatively small operation, and commit that transaction. I will go into batching in some detail in a later post in the series, so my intent here is just to prime you for it and emphasize that Read Committed Snapshot Isolation is great.
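To make that concrete, here is a minimal sketch of a batched delete; the table name, filter, and batch size are placeholders. Each small transaction commits quickly, so any writers you block are only blocked briefly:

DECLARE @RowsAffected INT = 1;

WHILE @RowsAffected > 0
BEGIN
    BEGIN TRANSACTION;

    -- Delete a small slice per transaction rather than all rows at once.
    DELETE TOP (1000)
    FROM dbo.StagingData
    WHERE LoadDate < '20190101';

    SET @RowsAffected = @@ROWCOUNT;

    COMMIT TRANSACTION;
END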

Now that I have the core concepts taken care of, the next posts in the series move into practical implementation examples with a lot of code.

The Steps Of A Database Deployment

I have a series on near-zero downtime. In this post, I cover some of the key assumptions in the series as well as the steps in a database deployment:

Database Release
The database release phase is the first “primary” phase. It usually starts on a schedule, maybe 2 PM on a Wednesday or maybe “every day at 9 AM, 1 PM, 6 PM, and 10 PM” for more mature shops. Depending upon how much of an effect our release process normally has on end users, we might alert them that we expect to see a degradation in services starting at this point.

This phase of the release has us push out our database changes. This can involve creating or altering database objects but will not involve dropping existing objects.

Our database changes should support the blue-green deployment model. At this point in the process, all of the application code is “blue”—that is, the current production code. Our procedure changes need to be able to support that code without breaking. If we need to drop a column from a stored procedure, for example, we would not want to do it here. If we need to add a column to a stored procedure, we might do it here as long as it doesn’t break the calling code.
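For instance, a backward-compatible procedure change during this phase might add a parameter with a default value, so existing “blue” callers never notice. The names below are hypothetical:

CREATE OR ALTER PROCEDURE dbo.GetOrders
    @CustomerID INT,
    @OrderStatus TINYINT = NULL  -- New optional parameter; blue code can ignore it.
AS
BEGIN
    SELECT o.OrderID, o.OrderDate, o.OrderStatus
    FROM dbo.Orders o
    WHERE o.CustomerID = @CustomerID
        AND (@OrderStatus IS NULL OR o.OrderStatus = @OrderStatus);
END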

This is two topics smashed together into one post, but it gives you an idea of a mental model around database deployments.

Creating Custom Helm Charts For SQL Server

Andrew Pruski shows us how we can build our own Helm chart to deploy SQL Server to Kubernetes:

Navigate to the new directory:

cd C:\Helm

And now create the new chart!

helm create testsqlchart

OK, what that has done is create an empty chart, so we need to drop in our YAML configuration files.

Read on to see how to generate the chart and use it to deploy SQL Server.

Platform Compatibility and SSDT

Ed Elliott walks us through platform compatibility in SQL Server Data Tools:

Sometimes you don’t have the perfect development environment for SQL Server: sometimes you deploy to things like SQL Azure and want to test locally, and for various reasons you may want to deploy to one version of SQL Server but set the project properties to a different version. If you find yourself in this situation, you might need to use the parameter AllowIncompatiblePlatform, which essentially says “I know I said the project is for SQL 2017 but I am deploying to SQL 2014.” Any time you say this, you also sign the contract that says “If I am deploying to a lower version, then I have made sure I haven’t used any keywords or object types that didn’t exist in that version, and also if everything fails then so be it; I live with my choices every day.”

The story is a little complicated, but Ed straightens it out for us.
