Press "Enter" to skip to content

Category: Deployment

OLTP-Friendly Database Deployments

Michael Swart looks at one of the biggest problems when trying to do a zero-downtime deployment to an OLTP system:

There are two main kinds of SQL queries. SELECT/INSERT/UPDATE/DELETE statements are examples of Data Manipulation Language (DML). CREATE/ALTER/DROP statements are examples of Data Definition Language (DDL).

With schema changes – DDL – we have the added complexity of the SCH-M lock. It’s a kind of lock you don’t see with DML statements. DML statements take and hold schema stability locks (SCH-S) on the tables they need. This can cause interesting blocking chains between the two types, where new queries can’t start until the schema change succeeds.

Click through for suggestions with regard to schema locks, as well as a few tips for modifying large tables.
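
As a minimal sketch of two such mitigations (the table and index names here are hypothetical): a session-level lock timeout keeps DDL from parking at the head of a blocking chain, and on SQL Server 2014 and later an online index rebuild can wait for its SCH-M lock at low priority:

    -- Fail fast: give up on the SCH-M lock after 3 seconds rather than
    -- blocking every new query against the table
    SET LOCK_TIMEOUT 3000;
    ALTER TABLE dbo.Users ADD LastLoginAt datetime2 NULL;

    -- SQL Server 2014+: wait for the SCH-M lock at low priority and bail
    -- out (SELF) if it can't be acquired within a minute
    ALTER INDEX IX_Users_Email ON dbo.Users
    REBUILD WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY
        (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)));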


Online Database Object Changes

Michael J Swart continues his online deployment series:

PROCEDURES:
Procedures are very easy to Blue-Green. Brand new procedures are added during the pre-migration phase. Obsolete procedures are dropped during the post-migration phase.

If the procedure is changing but is logically the same, then it can be altered during the pre-migration phase. This is common when the only change to a procedure is a performance improvement.

But if the procedure is changing in other ways – for instance, when a new parameter is added or dropped, or the resultset is changing – then use the Blue-Green method to replace it: during the pre-migration phase, create a new version of the procedure. It must be named differently, and the green version of the application has to be updated to call the new procedure. The original blue version of the procedure is deleted during the post-migration phase. It’s not always elegant calling a procedure something like s_USERS_Create_v2, but it works.
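
A minimal T-SQL sketch of that pattern – the procedure names follow the quote, while the table and parameters are hypothetical:

    -- Pre-migration: create the green version alongside the blue one
    CREATE PROCEDURE dbo.s_USERS_Create_v2
        @UserName nvarchar(100),
        @Email    nvarchar(255)  -- the new parameter that forced a new version
    AS
    BEGIN
        INSERT INTO dbo.Users (UserName, Email)
        VALUES (@UserName, @Email);
    END
    GO

    -- Post-migration: once the green application is the only caller,
    -- drop the blue version
    DROP PROCEDURE dbo.s_USERS_Create;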

This has been a great series so far, and the way he does deployments matches very closely to the way we do them.


Blue-Green Deployments

Michael J Swart has started a new series on online deployments and covers the blue-green deployment architecture:

When using the Blue-Green method, basically nothing gets changed. Instead, everything gets replaced. We start by setting up a new environment – the green environment – and then cut over to it when we’re ready. Once we cut over to the new environment successfully, we’re free to remove the original blue environment. The technique is all about replacing components rather than altering components.

Check it out for a great explanation, not only of how true blue-green doesn’t jibe well with databases, but also of how to get it to work well enough.


Using The Command Line To Migrate To Azure SQL Database

Arun Sirpal shows how to use SqlPackage.exe to migrate a database to Azure SQL Database:

I have moved many databases to Azure via different methods, but I recently came across a new way. Well, technically it’s not new – I should say, newly found. The migration was done via the command line, which is not exactly groundbreaking, but it’s nice to have another option.

The idea behind this is simple: create the bacpac via the command line using sqlpackage.exe with the action as export, then do an import action into Azure.
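
In outline, the two steps might look something like this – the server, database, credential, and file names are all placeholders:

    :: Export the source database to a bacpac file
    SqlPackage.exe /Action:Export /SourceServerName:"MyServer" ^
        /SourceDatabaseName:"SalesDb" /TargetFile:"C:\temp\SalesDb.bacpac"

    :: Import the bacpac into Azure SQL Database
    SqlPackage.exe /Action:Import /SourceFile:"C:\temp\SalesDb.bacpac" ^
        /TargetServerName:"myserver.database.windows.net" ^
        /TargetDatabaseName:"SalesDb" /TargetUser:"sqladmin" ^
        /TargetPassword:"<password>"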

Read on for the demo.


Spinning Up SQL Server Containers In Jenkins

Chris Adkin has a few tips for loading SQL Server in Jenkins as part of testing or deployment:

Problem 1: Image Tag

There is no image tag specified for the microsoft/mssql-server-linux image; therefore, if Microsoft pushes a newer version of the image to Docker Hub, it will be pulled down when the build pipeline runs. This is easily fixed by tagging the image with an explicit version, e.g. microsoft/mssql-server-linux:2017-GA.
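
As a quick sketch of the fix – the container name and password below are placeholders:

    # Pinning the tag makes builds repeatable even if Microsoft pushes
    # a newer image to Docker Hub
    docker run -d --name sql-test \
        -e ACCEPT_EULA=Y \
        -e SA_PASSWORD='<strongPassword>' \
        -p 1433:1433 \
        microsoft/mssql-server-linux:2017-GA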

Click through for the starting code, two additional issues, and the corrected code.


Policing Database Deployments

Drew Furgiuele has a rather interesting library that he’s released to the general public:

The base return is the TSQLFragment object, which in turn has a Batches object, which in turn holds… well it can hold a lot of different things. When the text is parsed, it will determine what type of object to return based on the statement it determines it is. For example, if it’s an insert statement it will be a certain type of object with a given set of properties and methods, and if it’s, say, a create index statement you’ll get different properties, such as which table or view is getting the index along with the indexed columns and included columns. It really is interesting.

But interesting can be a double-edged sword: since the statement object that gets returned can be different for each parsed piece of code, that means to set up any type of intelligence around the stuff we’re dealing with, we need to check for very specific objects.
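
The TSQLFragment return type suggests the library builds on Microsoft’s ScriptDom parser. As a rough PowerShell sketch of that underlying idea – this is not the library’s own API, and the DLL path and rule logic here are hypothetical:

    # Load Microsoft's T-SQL parser (ships with the SQL Server client tools)
    Add-Type -Path 'Microsoft.SqlServer.TransactSql.ScriptDom.dll'

    $parser = New-Object Microsoft.SqlServer.TransactSql.ScriptDom.TSql140Parser($true)
    $parseErrors = $null
    $reader = New-Object System.IO.StringReader('CREATE INDEX IX_Demo ON dbo.Users (Email)')
    $fragment = $parser.Parse($reader, [ref]$parseErrors)

    # Each batch holds strongly typed statement objects; checking the type
    # is how you build rules around specific kinds of statements
    foreach ($batch in $fragment.Batches) {
        foreach ($statement in $batch.Statements) {
            if ($statement -is [Microsoft.SqlServer.TransactSql.ScriptDom.CreateIndexStatement]) {
                "CREATE INDEX against $($statement.OnName.BaseIdentifier.Value)"
            }
        }
    }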

Unfortunately, I never got past the first animated GIF, whose subtitle was wrong.  You, however, should read the whole thing.


Improving Code Quality With SonarQube

Samir Behara has a quick look at SonarQube, an open source static analysis engine:

In my project, we have also integrated SonarQube with our TFS CI/CD build and have configured the Quality Gates.

For example – if I try to inject a security threat or a known coding issue – the TFS build will fail, the check-in will get rejected, the quality gate fails, and SonarQube points me to the exact issue, which I can rectify and do another check-in. So it will basically stop you from checking in code with potential issues.

Currently, the only way to catch such issues is during manual code reviews. SonarQube helps automate that process. You can write your own rules to look for known issues in the code and stop them before the code gets checked in to source control.
So overall, you can ensure good-quality code going to production and fewer regression defects coming up at a later point in time.

Read on for an example where a SonarQube rule can find a SQL injection vulnerability and thereby fail the build.


What To Do With A Database In Source Control

Ed Elliott with Database Source Control 102:

This post is for a specific type of person if you are:

  • New to source control
  • Getting started on your path to the continuous delivery nirvana
  • Able to get your database into some sort of source control system
  • Working with more than one person checking in code to the database source
  • Unsure what to do next

Then this post is for you!

This is a nice post with some next-steps for when you have a database in source control but aren’t quite sure what to do next.


Reverting Database Changes

Alex Yates talks about database rollback in the event of a code release failure:

A thorny topic. Rolling back code is easy. Normally you can just redeploy the old binaries and you’re done. Databases are more difficult – because data. The existence of persistent data has two significant consequences:

  1. You can’t just redeploy the entire database – you need to create a script to migrate your existing database from the existing (undesirable) state back to the old (desired) state. This probably isn’t an automated process.

  2. You need to think about new data that has entered the database since the deployment. Do you want to keep it? Do you need to process it to get it into an older format? (E.g. if you need to undo a column split.)
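
To make the second point concrete, a hedged T-SQL sketch of undoing a column split – the table and column names are hypothetical:

    -- The deployment split FullName into FirstName/LastName; to revert,
    -- re-add the old column, fold data entered since the deployment back
    -- into the old format, then drop the new columns
    ALTER TABLE dbo.Customers ADD FullName nvarchar(200) NULL;
    GO
    UPDATE dbo.Customers
    SET FullName = FirstName + N' ' + LastName;
    GO
    ALTER TABLE dbo.Customers DROP COLUMN FirstName, LastName;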

Alex has some helpful tips for structuring database changes.  Me, I just never look back…which is actually one of the strategies Alex talks about.
