Press "Enter" to skip to content

Category: Deployment

Reverting Database Changes

Alex Yates talks about database rollback in the event of a code release failure:

A thorny topic. Rolling back code is easy. Normally you can just redeploy the old binaries and you’re done. Databases are more difficult – because data. The existence of persistent data has two significant consequences:

  1. You can’t just redeploy the entire database – you need to create a script to migrate your existing database from the existing (undesirable) state back to the old (desired) state. This probably isn’t an automated process.

  2. You need to think about new data that has entered the database since the deployment. Do you want to keep it? Do you need to process it to get it into an older format? (E.g. if you need to undo a column split.)
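To put that second point into code, here is a rough sketch of what undoing a column split might look like. The table and column names (dbo.Customers, FirstName, LastName, FullName) are made up for illustration, and a real rollback script would also have to worry about constraints, indexes, and dependent objects:

-- Hypothetical rollback: the release split FullName into FirstName/LastName,
-- so reverting means re-creating the old column and folding back any rows
-- written since the deployment.
ALTER TABLE dbo.Customers ADD FullName nvarchar(200) NULL;
GO

-- Migrate data captured in the new shape back into the old column.
UPDATE dbo.Customers
SET    FullName = LTRIM(RTRIM(CONCAT(FirstName, N' ', LastName)))
WHERE  FullName IS NULL;
GO

-- Only after dependent code has been repointed do the new columns go away.
ALTER TABLE dbo.Customers DROP COLUMN FirstName, LastName;
GO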

Alex has some helpful tips for structuring database changes.  Me, I just never look back…which is actually one of the strategies Alex talks about.


Building Azure Resource Manager Templates

Ed Elliott gives a brief overview of Azure Resource Manager templates and puts together a sample template:

The ARM API deploys resources to Azure, but doesn’t deploy code onto those resources. For example you can use ARM to deploy a virtual machine with SQL Server already installed but you can’t use ARM to deploy a database from an SSDT DacPac.

To save time when designing solutions, it is important to understand that ARM API is used simply for resources and we need to use some other technology such as DSC or PowerShell to manage the deployments onto the infrastructure once it is deployed.

This is a nice overview of the topic, and because it’s Ed (who is much better about this than most), he goes into how to test before even getting into how to create.


Azure SQL Database Deployment Account Errors

Steve Jones troubleshoots an issue with Azure SQL Database:

 I’ve had most builds work really well. I tried a number of things, but kept getting a few items in the build. There were login errors or network errors, both of which bothered me since I could manually log in with SSMS from the same machine as my build agent.

I suspected a few things here, one of which was the use of named pipes for the Shadow database and TCP for Azure SQL Database.

Eventually, I decided to fall back with msbuild, ignoring VSTS, and make sure all my parameters were correct.

Read on for the rest of the story.


Automating tSQLt Tests

James Anderson shows how to integrate tSQLt tests into his ReadyRoll pipeline:

By default, ReadyRoll will ignore tSQLt objects, including our tests. We don’t want ReadyRoll to script out the tSQLt objects, but we do want it to script our tests. To set our filter we need to unload the project in VS and edit the project file. Add the following to the section named ReadyRoll Script Generation Section:

James’s series is really coming together at this point, so if you haven’t been reading, check out the links in his post.
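If you haven't worked with tSQLt itself, the sort of test that ends up running in a pipeline like this looks roughly as follows; the test class, table, and procedure names here are made up, and the final tSQLt.Run call is what an automated build step would invoke:

-- Hypothetical tSQLt test: FakeTable isolates the table under test,
-- then we assert on the result of the procedure under test.
EXEC tSQLt.NewTestClass 'CustomerTests';
GO

CREATE PROCEDURE CustomerTests.[test GetCustomerCount counts rows in dbo.Customers]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo.Customers';
    INSERT INTO dbo.Customers (CustomerName) VALUES (N'A'), (N'B');

    DECLARE @actual int;
    EXEC dbo.GetCustomerCount @Count = @actual OUTPUT;

    EXEC tSQLt.AssertEquals @Expected = 2, @Actual = @actual;
END;
GO

-- Run every test in the class (this is the step the build automates).
EXEC tSQLt.Run 'CustomerTests';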


Database Code Analysis

William Brewer has an interesting article on performing code analysis on database objects:

In general, code analysis is not just a help to the individual developer but can be useful to the entire team. This is because it makes the state and purpose of the code more visible, so that it allows everyone who is responsible for delivery to get a better idea of progress and can alert them much earlier to potential tasks and issues further down the line. It also makes everyone more aware of whatever coding standards are agreed, and what operational, security and compliance constraints there are.

Database Code analysis is a slightly more complicated topic than static code analysis as used in Agile application development. It is more complicated because you have the extra choice of dynamic code analysis to supplement static code analysis, but also because databases have several different types of code that have different conventions and considerations. There is DML (Data Manipulation Language), DDL (Data Definition Language), DCL (Data Control Language) and TCL (Transaction Control Language).  They each require rather different analysis.
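For anyone who doesn't juggle those acronyms daily, here are quick one-line examples of each category (the object names are hypothetical):

SELECT CustomerID FROM dbo.Customers;            -- DML: reads and changes data
CREATE TABLE dbo.Orders (OrderID int NOT NULL);  -- DDL: defines structure
GRANT SELECT ON dbo.Orders TO ReportingUser;     -- DCL: controls permissions
BEGIN TRANSACTION;                               -- TCL: controls transactions
COMMIT;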

William goes on to include a set of good resources, though I think database code analysis, like database testing, is a difficult job in an under-served area.


Ship It

James Anderson compiled the latest T-SQL Tuesday results and shipped it:

I’d like to say a huge thank you to everyone who read or published a post for T-SQL Tuesday #90. I had a great time reading through all the posts and I learnt a lot!

I feel that the real takeaway here is that Continuous Integration and DevOps are not just about putting the right tools in place, it’s all about putting the right working practices in place.

Read on for the wrap-up.


Smoother Deployments

Steve Jones talks about how, at a prior company, he and his coworkers simplified their deployment processes and their lives:

The lead developer and I both had little children at the time. Spending 12+ hours at work on Wednesday wasn’t an ideal situation for us, and we decided to get better.

The first thing we did was ensure that all code was tracked in VSS. We had most web code here, but there were always a few files that weren’t captured, so we cleaned that up. I also added database code to VSS with the well known, time tested and proven File | Save, File | Open method of capturing SQL code. This took a few months, and some deployment issues, to get everyone in the habit of modifying code in this manner. I refused to deploy code that wasn’t in VSS, and since our CTO was a former developer, I had support.

The other change was the lead developer and I started building a release branch of code each week. We’d move over the changes that were going to be released to this branch, which simplified our process. We could now see exactly which code was being deployed. This was before git and more modern branching strategies, but we were able to easily copy code from the mainline of development to the release branch as we made changes for this week.

It’s a good read.


Continuous Disintegration

Alex Yates gets on his soapbox about continuous integration:

Everyone does CI wrong!

(OK, perhaps not everyone, but a lot of people.)

Whenever I deliver a conference session about database continuous integration (CI), I like to start by asking a question to the audience. “Who can tell me what continuous integration means?”

I almost always get responses like:

“Automated deployments!”

“Automated builds upon commit!”

Very occasionally someone will impress me with something like:

“Unit tests!” or “Automatically running my unit tests!”

Not bad answers. Have a biscuit. But you are still missing the fundamental point.

Alex makes a number of great points, so check it out.


The DBA Role Isn’t Dead

Robert Davis discusses shipping database changes and the role of the DBA:

I don’t know what percentage of people out there are doing DevOps, but I’m going to go out on a limb and say that it is most likely some number LESS than MOST of them. I don’t think more than half the companies or people out there that do Ops are doing DevOps. I also believe that DBAs that make really good money aren’t making it because DBAs are rare. They are making it because DBA is a tough job to be really good at and the ones who are really good at it are rare. All DBAs are molded by the environments they work in, but really good DBAs are ones that eventually learn to mold their environments to them.

A normal DBA may say something like, “I do it this way because that is the way it is done here.”

An exceptional DBA says, “We do it that way here because that is the way I have it done.”

Robert makes great points.  It’s on my agenda to write a post entitled something like “The Cloud’s Not Going To Steal Our Jobs,” somewhat in the same spirit as this post.  Definitely a must-read.


Baby Steps With Deployments

Riley Major looks at small things you can do to help smooth out deployment processes:

It’s not just about exercising restraint, though. It’s also about taking small additional steps to minimize the impact of new development. If you are making a new stored procedure parameter, can you provide a default so that its omission will make the procedure act as before? If so, you can now deploy that change without affecting any consuming code. If you are changing the name of a column, can you create a computed column with the old name which simply mirrors the original column’s data? All insert- and update-related code would still need to change, but you could save the SELECT statements for the next round.

Ideally, these steps are transitory. Optional parameters can be made mandatory once all of the consuming procedures have been adapted. The legacy-named computed columns should be deleted once all old queries are updated. But recognize that this might not happen. Other priorities come up, and these supposedly temporary constructs can linger.
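Both tricks are easy to sketch in T-SQL; the procedure, table, and column names below are invented for illustration:

-- New parameter with a default: existing callers that omit @IncludeInactive
-- keep the old behaviour, so the procedure can ship ahead of its callers.
ALTER PROCEDURE dbo.GetCustomers
    @Region          nvarchar(50),
    @IncludeInactive bit = 0   -- new, optional
AS
BEGIN
    SELECT CustomerID, CustomerName
    FROM   dbo.Customers
    WHERE  Region = @Region
      AND (@IncludeInactive = 1 OR IsActive = 1);
END;
GO

-- Column rename bridge: [Name] was renamed to [CustomerName], so a computed
-- column with the old name keeps legacy SELECTs working until they catch up.
-- Inserts and updates still have to target CustomerName.
ALTER TABLE dbo.Customers ADD [Name] AS (CustomerName);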

Click through for additional hints.
