Press "Enter" to skip to content

Category: Deployment

Baby Steps With Deployments

Riley Major looks at small things you can do to help smooth out deployment processes:

It’s not just about exercising restraint, though. It’s also about taking small additional steps to minimize the impact of new development. If you are making a new stored procedure parameter, can you provide a default so that its omission will make the procedure act as before? If so, you can now deploy that change without affecting any consuming code. If you are changing the name of a column, can you create a computed column with the old name which simply mirrors the original column’s data? All insert- and update-related code would still need to change, but you could save the SELECT statements for the next round.

Ideally, these steps are transitory. Optional parameters can be made mandatory once all of the consuming procedures have been adapted. The legacy-named computed columns should be deleted once all old queries are updated. But recognize that this might not happen. Other priorities come up, and these supposedly temporary constructs can linger.
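
To make those first two ideas concrete, here is a minimal T-SQL sketch; dbo.GetOrders, dbo.Orders, and the OrderDt column are invented for illustration, and CREATE OR ALTER needs SQL Server 2016 SP1 or later:

    -- New parameter with a default: existing callers are unaffected.
    CREATE OR ALTER PROCEDURE dbo.GetOrders
        @CustomerID int,
        @IncludeCancelled bit = 0  -- new; omitting it preserves the old behavior
    AS
    BEGIN
        SELECT OrderID, CustomerID, Status
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID
          AND (@IncludeCancelled = 1 OR Status <> N'Cancelled');
    END;
    GO

    -- Column rename: rename the column, then add a computed column with the
    -- old name that mirrors it, so existing SELECT statements keep working.
    EXEC sp_rename 'dbo.Orders.OrderDt', 'OrderDate', 'COLUMN';
    ALTER TABLE dbo.Orders ADD OrderDt AS (OrderDate);  -- legacy alias; drop once consumers move

As the quote notes, the computed column only covers reads; INSERT and UPDATE code must target the new name immediately, since computed columns cannot be written to.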

Click through for additional hints.

Managing Federated Systems

Aaron Bertrand tells how he would work with a federated system back in the SQL Server 2005 days, and ties it back to modern database deployment practices:

The biggest concern I have about database deployment is not about how you deploy, or even whether or not you use source control. It’s that your database changes are backward compatible – meaning they won’t break the current application, in the event the application can’t be changed at the same time (and with distributed software, that’s impossible). The largest number of bugs I had a hand in creating were caused because I assumed the application or middle tiers would be deployed at the same time.

I quickly became very motivated to make sure my changes would work before and after the application tier changes were deployed. Add a parameter to a stored procedure? Make sure it’s at the end, and has a default. Add a column to a table? Make sure views and stored procedures don’t expose it (yet) or require it. And a hundred other examples.
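
Both of Aaron's examples translate directly into T-SQL. A hedged sketch with invented names (dbo.Customer, dbo.SaveCustomer, and the Region column are hypothetical):

    -- New column is nullable, so INSERT statements that don't mention it
    -- keep working; views and procedures don't expose it yet.
    ALTER TABLE dbo.Customer ADD Region nvarchar(50) NULL;
    GO

    -- New parameter goes at the end, with a default, so both positional
    -- and named callers that predate it keep working.
    CREATE OR ALTER PROCEDURE dbo.SaveCustomer
        @CustomerID int,
        @Name nvarchar(100),
        @Region nvarchar(50) = NULL  -- added last, defaulted
    AS
    BEGIN
        UPDATE dbo.Customer
        SET Name = @Name,
            Region = COALESCE(@Region, Region)  -- NULL means "leave as-is"
        WHERE CustomerID = @CustomerID;
    END;
    GO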

It’s a good read.

Continuous Integration Is A Process

Derik Hammer makes the vital point that continuous integration isn’t a tool; it’s a process:

SQL Server Data Tools (SSDT) is a tool that I am particularly familiar with and will become the subject of my examples. SSDT database projects shift the source of truth from your database to your source control. The intent is that the project and its build artifact, the dacpac, is the desired state of your database. SSDT will then generate the code necessary for you to migrate from your current state to the desired state.

The problem with my description is that it is similar to saying, “hammers drive nails into wood,” and then expecting that you won’t have to learn how to swing the hammer, aim at the head of the nail, or regulate how hard you hit it. Tools like SSDT are not magic and they can have problems. A solid understanding of how they work can mitigate or completely avoid these issues, however.
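
To illustrate the state-based model Derik describes, with a hypothetical table: the SSDT project holds only the desired end state, and the tool computes the migration at deploy time.

    -- In the SSDT project (the desired state of dbo.Widget):
    CREATE TABLE dbo.Widget
    (
        WidgetID   int           NOT NULL PRIMARY KEY,
        WidgetName nvarchar(50)  NOT NULL,
        Price      decimal(10,2) NOT NULL  -- new in this release
    );

    -- Roughly the shape of what gets generated when the target lacks Price:
    ALTER TABLE dbo.Widget
        ADD Price decimal(10,2) NOT NULL;

That generated ALTER fails on any target where dbo.Widget already has rows, since the new NOT NULL column has no default; this is exactly the kind of problem Derik means when he says the tool isn't magic.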

Click through for Derik’s rant.

Dealing With Database Changes

Vladimir Oselsky walks through his database deployment workflow:

When it comes to actual deployment to test and production servers, it is handled by an application update program that runs scripts on the target server one by one in alphabetical order. Since we have clients running different versions, scripts always have to be applied in order; for example, if a customer is on version 1.5, they need 2.0 before they can get 2.5. This ensures that database changes are applied in the correct order, and I don't have to worry about something breaking.

One last problem that I have to deal with on a regular basis is version drift. This is caused when I manually patch a client for a fix without going through the proper build process. In those cases, I just have to manually merge changes into development to guarantee that the fix will make it out to other clients. Once in a while, it becomes quite complicated to keep track of which clients are running which versions, and to determine whether a fix they need can be delivered through an update or requires manual code changes.
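
The ordered-script scheme lends itself to a simple self-check. A hedged sketch, assuming an invented dbo.SchemaVersion bookkeeping table (the real updater may track versions differently):

    -- At the top of the 2.0 script: refuse to run out of order.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = '1.5')
    BEGIN
        RAISERROR('Version 1.5 must be applied before 2.0.', 16, 1);
        RETURN;
    END;

    -- ... schema changes for 2.0 go here ...

    INSERT dbo.SchemaVersion (VersionNumber, AppliedOn)
    VALUES ('2.0', SYSDATETIME());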

Version drift can be a big pain, but check out Vlad’s workflow.

47 Incorrect Deployment Assumptions

Brent Ozar has a list of 47 assumptions regarding database deployments that turn out not always to be true:

30. The deployment person wouldn’t dream of only highlighting some of it and running it.

31. The staff who were supposed to work with you during the deployment will be available.

32. The staff, if available at the start of the call, will be available during the entire call.

33. The staff won’t come down with food poisoning halfway through the deployment call, forget to mute their home office phone, step into the bathroom, and leave the bathroom door open.

I’ve never had item #33 happen to me, but that’s a pretty solid list of stuff that can go wrong.
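
Item #30 is at least partly defensible in code. A minimal sketch of a deployment script written so that a runtime error, including one caused by a partial run, rolls everything back:

    SET XACT_ABORT ON;  -- any runtime error aborts and rolls back the transaction
    BEGIN TRANSACTION;

    -- step 1: schema changes
    -- step 2: data migration
    -- step 3: permissions

    COMMIT TRANSACTION;

It isn't bulletproof; someone who highlights only the middle steps skips the transaction entirely. But pairing SET XACT_ABORT ON with a single transaction at least keeps one failed statement from leaving a half-applied deployment behind.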

Database Deployment: Growing Up

Ryan Booz uses schooling as an extended metaphor for database deployment:

In general, the biggest issues we hit continue to be client customizations to the database (even ones we sanction) and an ever-growing set of core-pop data that we manage and have to proactively defend against client changes. This is an area we just recently admitted we need to take a long, hard look at to figure out a new paradigm.

I should mention that it was also about this time that we were finally able to proactively get our incremental changes into source control. All of our final scripts were in source somewhere, but the ability to use SQL Compare and SQL Source Control allowed our developers to finally be a second set of eyes on the upgrade process. No longer were we weeding through 50K lines of SQL upgrade script just to try and find what changed. Diffing whole scripts doesn't really provide any good context… especially when we couldn't guarantee that the actions in the script were in the same order from release to release. This has been another huge win for us.
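
For the core-pop data problem Ryan mentions, one common defense is an idempotent script that re-asserts the canonical rows on every release. A hedged sketch with an invented dbo.OrderStatus reference table:

    -- Re-assert the managed reference rows each upgrade.
    MERGE dbo.OrderStatus AS target
    USING (VALUES
        (1, N'Open'),
        (2, N'Shipped'),
        (3, N'Cancelled')
    ) AS source (StatusID, StatusName)
    ON target.StatusID = source.StatusID
    WHEN MATCHED AND target.StatusName <> source.StatusName
        THEN UPDATE SET StatusName = source.StatusName   -- quietly undo client edits
    WHEN NOT MATCHED BY TARGET
        THEN INSERT (StatusID, StatusName)
             VALUES (source.StatusID, source.StatusName);

Run at every upgrade, a script like this reverts client changes to the rows the vendor owns and fills in any that are missing.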

This is a view from someone in the middle of the process.  Ryan’s group isn’t pushing everything automatically, but they’re building out to that.

CI With SQL Server And Jenkins

Chris Adkin shows how to auto-deploy SQL Server Data Tools projects to a SQL Server instance using Jenkins:

The aim of this blog post is twofold; it is to explain how:

  • A “self-building pipeline” for the deployment of a SQL Server Data Tools project can be implemented using open source tools
  • A build pipeline can be augmented using PowerShell

What You Will Need

  • Jenkins automation server
  • cURL
  • SQL Server 2016 (any edition will suffice)
  • Visual Studio 2015 Community edition
  • A Windows server, physical or virtual, to install all of the above on; I will be using Windows Server 2012 R2 as the operating system

Automated integration via CI is extremely helpful, and Chris makes it look easy in this post.

Team Maturity Levels

Ed Elliott has the best lists:

Maturity Levels

OK so this is pretty simple, we have these levels:

  • Low
  • Medium
  • High

Wow. Just WOW

That is an amazing list; how did you come up with it? Did it come from some PhD study on the effectiveness of lists in the internet age? No.

So a little more detail…

Read on for Ed’s take on database development maturity levels.  I might quibble with some of the specifics, but I agree with the principle.

Continuous Deployment In A Box

Ed Elliott has been working on a very interesting project:

What does this do?

Unblock-File *.ps1 – removes a flag that Windows puts on files to stop them from being run if they have been downloaded over the internet.
.\ContinuousDeploymentFTW.ps1 – runs the install script which actually:

  • Downloads chocolatey
  • Installs git
  • Installs Jenkins 2
  • Guides you how to configure Jenkins
  • Creates a local git repo
  • Creates an SSDT project configured with a test project and all the references that normally cause people problems
  • Creates a local Jenkins build which monitors your local git repo for changes
  • When code is checked into the repo, the Jenkins job jumps into action and…

If you check into the default branch “master” then Jenkins:

  • Builds the SSDT project
  • Deploys the project to the unit test database
  • Runs the tSQLt unit tests
  • Generates a deployment script for the “production” database

and what you have there is continuous delivery in a box
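
The tSQLt step is the part fewest people have seen, so here is a minimal test of the sort such a pipeline would run; dbo.Widget and dbo.CancelWidget are hypothetical stand-ins for the objects under test:

    EXEC tSQLt.NewTestClass 'WidgetTests';
    GO
    CREATE PROCEDURE WidgetTests.[test CancelWidget sets the status to Cancelled]
    AS
    BEGIN
        EXEC tSQLt.FakeTable 'dbo.Widget';  -- swap in an empty, constraint-free copy

        INSERT dbo.Widget (WidgetID, Status) VALUES (1, N'Open');

        EXEC dbo.CancelWidget @WidgetID = 1;  -- hypothetical procedure under test

        DECLARE @status nvarchar(20) = (SELECT Status FROM dbo.Widget WHERE WidgetID = 1);
        EXEC tSQLt.AssertEqualsString @Expected = N'Cancelled', @Actual = @status;
    END;
    GO
    EXEC tSQLt.Run 'WidgetTests';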

Click through for a video where Ed shows how it all works.

How SSDT Publication Works

Ed Elliott has a great post deconstructing the SQL Server Data Tools deployment process:

The first thing to note is the garbage collection: there are something like 8 changes to the line, so 8 rounds of garbage collection, which is expensive for a .NET app. We get much more processor usage for a more prolonged time, and we really start to see some disk activity; write activity has just one peak while the files are written, but reading stays quite high throughout the whole process. It could be that it is SQL Server reading from disk (my demo was from a single SSD with a local SQL instance), but we can see that after the SQL CPU settles down, the green disk read line stays high, so it is unlikely to be pure sqlserver.exe.

What does this tell us?

Well, if you have large objects, multi-thousand-line stored procs, then you had better get yourself some serious SSDs to deploy from. If you have smaller objects, then you'll need CPU and memory – don't scrimp here!

Check it out.
