The ARM API deploys resources to Azure, but doesn’t deploy code onto those resources. For example, you can use ARM to deploy a virtual machine with SQL Server already installed, but you can’t use ARM to deploy a database from an SSDT DacPac.
To save time when designing solutions, it is important to understand that the ARM API is used simply for resources; we need some other technology, such as DSC or PowerShell, to manage deployments onto that infrastructure once it has been provisioned.
This is a nice overview of the topic, and because it’s Ed (who is much better about this than most), he goes into how to test before even getting into how to create.
I’ve had most builds work really well, but no matter what I tried, I kept getting a few failures in the build: login errors and network errors, both of which bothered me since I could manually log in with SSMS from the same machine as my build agent.
I suspected a few things here, one of which was the use of named pipes for the Shadow database and TCP for Azure SQL Database.
Eventually, I decided to fall back to msbuild, bypassing VSTS, to make sure all my parameters were correct.
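If you suspect that kind of protocol mismatch, one quick check (my own sketch; it isn’t in the linked post) is to ask SQL Server which transport each current session is actually using:

-- Show the network transport and auth scheme for each current connection.
-- Run this from the build agent's connection to see whether it arrived
-- over TCP, named pipes, or shared memory.
SELECT c.session_id,
       c.net_transport,   -- e.g. TCP, Named pipe, Shared memory
       c.auth_scheme,     -- e.g. SQL, NTLM, KERBEROS
       s.login_name,
       s.host_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = c.session_id;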
Read on for the rest of the story.
By default, ReadyRoll will ignore tSQLt objects, including our tests. We don’t want ReadyRoll to script out the tSQLt framework objects, but we do want it to script our tests. To set our filter, we need to unload the project in VS and edit the project file, adding our filter settings to the ReadyRoll Script Generation section.
James’s series is really coming together at this point, so if you haven’t been reading, check out the links in his post.
In general, code analysis is not just a help to the individual developer but can be useful to the entire team. It makes the state and purpose of the code more visible, allowing everyone who is responsible for delivery to get a better idea of progress, and it can alert them much earlier to potential tasks and issues further down the line. It also makes everyone more aware of whatever coding standards have been agreed and what operational, security, and compliance constraints apply.
Database code analysis is a slightly more complicated topic than the static code analysis used in Agile application development. It is more complicated partly because you have the extra option of dynamic code analysis to supplement static analysis, and partly because databases have several different types of code with different conventions and considerations: DML (Data Manipulation Language), DDL (Data Definition Language), DCL (Data Control Language), and TCL (Transaction Control Language). Each requires rather different analysis.
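To make that distinction concrete, here is a minimal T-SQL illustration of the four kinds of statements (the table and role names are made up):

-- DDL defines objects:
CREATE TABLE dbo.Demo (Id int NOT NULL);

-- DML manipulates data:
INSERT INTO dbo.Demo (Id) VALUES (1);

-- DCL controls permissions:
GRANT SELECT ON dbo.Demo TO SomeRole;

-- TCL controls transaction boundaries:
BEGIN TRANSACTION;
UPDATE dbo.Demo SET Id = 2;
COMMIT TRANSACTION;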
William goes on to include a set of good resources, though I think database code analysis, like database testing, is a difficult job in an under-served area.
I’d like to say a huge thank you to everyone who read or published a post for T-SQL Tuesday #90. I had a great time reading through all the posts and I learnt a lot!
I feel that the real takeaway here is that Continuous Integration and DevOps are not just about putting the right tools in place; they’re about putting the right working practices in place.
Read on for the wrap-up.
The lead developer and I both had little children at the time. Spending 12+ hours at work on Wednesdays wasn’t an ideal situation for us, so we decided to get better.
The first thing we did was ensure that all code was tracked in VSS. We had most of the web code there already, but there were always a few files that weren’t captured, so we cleaned that up. I also added database code to VSS with the well-known, time-tested, and proven File | Save, File | Open method of capturing SQL code. It took a few months, and some deployment issues, to get everyone in the habit of modifying code in this manner. I refused to deploy code that wasn’t in VSS, and since our CTO was a former developer, I had support.
The other change was that the lead developer and I started building a release branch of code each week. We’d move the changes that were going to be released over to this branch, which simplified our process: we could now see exactly which code was being deployed. This was before git and more modern branching strategies, but we were able to easily copy code from the mainline of development to the release branch as we made changes for that week’s release.
It’s a good read.
Everyone does CI wrong!
(OK, perhaps not everyone, but a lot of people.)
Whenever I deliver a conference session about database continuous integration (CI), I like to start by asking the audience a question: “Who can tell me what continuous integration means?”
I almost always get responses like:
“Automated builds upon commit!”
Very occasionally someone will impress me with something like:
“Unit tests!” or “Automatically running my unit tests!”
Not bad answers. Have a biscuit. But you are still missing the fundamental point.
Alex makes a number of great points, so check it out.
I don’t know what percentage of people out there are doing DevOps, but I’m going to go out on a limb and say that it is most likely some number LESS than MOST of them. I don’t think more than half the companies or people out there that do Ops are doing DevOps. I also believe that DBAs that make really good money aren’t making it because DBAs are rare. They are making it because DBA is a tough job to be really good at and the ones who are really good at it are rare. All DBAs are molded by the environments they work in, but really good DBAs are ones that eventually learn to mold their environments to them.
A normal DBA may say something like, “I do it this way because that is the way it is done here.”
An exceptional DBA says, “We do it that way here because that is the way I have it done.”
Robert makes great points. It’s on my agenda to write a post entitled something like “The Cloud’s Not Going To Steal Our Jobs,” somewhat in the same spirit as this one. Definitely a must-read.
It’s not just about exercising restraint, though. It’s also about taking small additional steps to minimize the impact of new development. If you are adding a new stored procedure parameter, can you provide a default so that omitting it makes the procedure act as before? If so, you can deploy that change without affecting any consuming code. If you are changing the name of a column, can you create a computed column with the old name which simply mirrors the original column’s data? All insert- and update-related code would still need to change, but you could save the SELECT statements for the next round.
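As a sketch of that column-rename technique (the table and column names here are hypothetical), the transition might look like this:

-- Rename the column, then add a computed column with the legacy name
-- so existing SELECT statements keep working. INSERT and UPDATE code
-- must still change, since computed columns are not writable.
EXEC sp_rename 'dbo.Customer.CustName', 'CustomerName', 'COLUMN';
GO
ALTER TABLE dbo.Customer
    ADD CustName AS (CustomerName);  -- drop once all old queries are updated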
Ideally, these steps are transitory. Optional parameters can be made mandatory once all of the consuming procedures have been adapted. The legacy-named computed columns should be deleted once all old queries are updated. But recognize that this might not happen. Other priorities come up, and these supposedly temporary constructs can linger.
Click through for additional hints.
The biggest concern I have about database deployment is not how you deploy, or even whether you use source control. It’s that your database changes be backward compatible, meaning they won’t break the current application in the event that the application can’t be changed at the same time (and with distributed software, changing everything at the same time is impossible). The largest number of bugs I had a hand in creating were caused because I assumed the application or middle tiers would be deployed at the same time.
I quickly became very motivated to make sure my changes would work before and after the application tier changes were deployed. Add a parameter to a stored procedure? Make sure it’s at the end, and has a default. Add a column to a table? Make sure views and stored procedures don’t expose it (yet) or require it. And a hundred other examples.
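As a concrete sketch of that first case (the procedure, parameter, and table names are hypothetical), the new parameter goes at the end with a default, so existing callers continue to work unchanged:

-- Existing callers pass two arguments; the new parameter sits at the
-- end with a default that preserves the prior behavior.
ALTER PROCEDURE dbo.GetOrders
    @CustomerID int,
    @StartDate  date,
    @IncludeCancelled bit = 0   -- new and optional; 0 keeps the old behavior
AS
BEGIN
    SELECT o.OrderID, o.OrderDate, o.Status
    FROM dbo.Orders AS o
    WHERE o.CustomerID = @CustomerID
      AND o.OrderDate >= @StartDate
      AND (@IncludeCancelled = 1 OR o.Status <> 'Cancelled');
END;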
It’s a good read.