Press "Enter" to skip to content

Author: Kevin Feasel

Smoother Deployments

Steve Jones talks about how, at a prior company, he and his coworkers simplified their deployment processes and their lives:

The lead developer and I both had little children at the time. Spending 12+ hours at work on Wednesday wasn’t an ideal situation for us, and we decided to get better.

The first thing we did was ensure that all code was tracked in VSS. We had most web code here, but there were always a few files that weren’t captured, so we cleaned that up. I also added database code to VSS with the well known, time tested and proven File | Save, File | Open method of capturing SQL code. This took a few months, and some deployment issues, to get everyone in the habit of modifying code in this manner. I refused to deploy code that wasn’t in VSS, and since our CTO was a former developer, I had support.

The other change was the lead developer and I started building a release branch of code each week. We’d move over the changes that were going to be released to this branch, which simplified our process. We could now see exactly which code was being deployed. This was before git and more modern branching strategies, but we were able to easily copy code from the mainline of development to the release branch as we made changes for this week.

It’s a good read.


Azure Cosmos DB

Victoria Holt discusses a new Azure product announcement:

Another database announcement today from Microsoft. The first globally distributed, multi-model database service. Microsoft describes Azure Cosmos DB as containing a write-optimized, resource-governed, schema-agnostic database engine that natively supports multiple data models: key-value, documents, graphs, and columnar.

The troll in me really wants this to be SQL Server, but it is instead a very much beefed-up DocumentDB.


Continuous Disintegration

Alex Yates gets on his soapbox about continuous integration:

Everyone does CI wrong!

(OK, perhaps not everyone, but a lot of people.)

Whenever I deliver a conference session about database continuous integration (CI), I like to start by asking a question to the audience. “Who can tell me what continuous integration means?”

I almost always get responses like:

“Automated deployments!”

“Automated builds upon commit!”

Very occasionally someone will impress me with something like:

“Unit tests!” or “Automatically running my unit tests!”

Not bad answers. Have a biscuit. But you are still missing the fundamental point.

Alex makes a number of great points, so check it out.


Narrative Custom Visual

Devin Knight continues his Power BI custom visuals series:

In this module you will learn how to use the Narrative Power BI Custom Visual.  The Narrative visual is developed by Narrative Science and it gives you the ability to automatically deliver analysis of your data.  The results look similar to a final report that an analyst might provide after spending weeks with your data.

It’s not going to replace a seasoned analyst (or any analyst at all, frankly), but if you need a couple of paragraphs of text summing up a trend, it’s a good start.


The DBA Role Isn’t Dead

Robert Davis discusses shipping database changes and the role of the DBA:

I don’t know what percentage of people out there are doing DevOps, but I’m going to go out on a limb and say that it is most likely some number LESS than MOST of them. I don’t think more than half the companies or people out there that do Ops are doing DevOps. I also believe that DBAs that make really good money aren’t making it because DBAs are rare. They are making it because DBA is a tough job to be really good at and the ones who are really good at it are rare. All DBAs are molded by the environments they work in, but really good DBAs are ones that eventually learn to mold their environments to them.

A normal DBA may say something like, “I do it this way because that is the way it is done here.”

An exceptional DBA says, “We do it that way here because that is the way I have it done.”

Robert makes great points.  I have it on my agenda to write a post entitled something like “The Cloud’s Not Going To Steal Our Jobs,” somewhat in the same spirit as this post.  Definitely a must-read.


Baby Steps With Deployments

Riley Major looks at small things you can do to help smooth out deployment processes:

It’s not just about exercising restraint, though. It’s also about taking small additional steps to minimize the impact of new development. If you are making a new stored procedure parameter, can you provide a default so that its omission will make the procedure act as before? If so, you can now deploy that change without affecting any consuming code. If you are changing the name of a column, can you create a computed column with the old name which simply mirrors the original column’s data? All insert- and update-related code would still need to change, but you could save the SELECT statements for the next round.

Ideally, these steps are transitory. Optional parameters can be made mandatory once all of the consuming procedures have been adapted. The legacy-named computed columns should be deleted once all old queries are updated. But recognize that this might not happen. Other priorities come up, and these supposedly temporary constructs can linger.
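To make the column-rename shim concrete (the table and column names below are hypothetical, not from Riley’s post), the old name can survive as a computed column that simply mirrors the new one, so existing SELECT statements keep working while inserts and updates move to the new column:

    -- Hypothetical rename: CustomerName becomes DisplayName.
    EXEC sp_rename 'dbo.Customer.CustomerName', 'DisplayName', 'COLUMN';

    -- Add a computed column under the old name so legacy SELECTs still work.
    ALTER TABLE dbo.Customer ADD CustomerName AS (DisplayName);

    -- Later, once all old queries have been updated, drop the shim:
    -- ALTER TABLE dbo.Customer DROP COLUMN CustomerName;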

Click through for additional hints.


Managing Federated Systems

Aaron Bertrand tells how he would work with a federated system back in the SQL Server 2005 days, and ties it back to modern database deployment practices:

The biggest concern I have about database deployment is not about how you deploy, or even whether or not you use source control. It’s that your database changes are backward compatible – meaning they won’t break the current application, in the event the application can’t be changed at the same time (and with distributed software, that’s impossible). The largest number of bugs I had a hand in creating were caused because I assumed the application or middle tiers would be deployed at the same time.

I quickly became very motivated to make sure my changes would work before and after the application tier changes were deployed. Add a parameter to a stored procedure? Make sure it’s at the end, and has a default. Add a column to a table? Make sure views and stored procedures don’t expose it (yet) or require it. And a hundred other examples.
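As a minimal sketch of the stored procedure rule (procedure and parameter names are invented for illustration), the new parameter goes at the end and gets a default, so callers that haven’t been updated yet continue to work:

    -- Hypothetical procedure; @IncludeInactive is the newly added parameter.
    ALTER PROCEDURE dbo.GetCustomers
        @RegionID int,
        @IncludeInactive bit = 0  -- added last, with a default
    AS
    BEGIN
        SELECT CustomerID, DisplayName
        FROM dbo.Customer
        WHERE RegionID = @RegionID
          AND (IsActive = 1 OR @IncludeInactive = 1);
    END;
    GO

    -- Existing callers still work without passing the new parameter:
    EXEC dbo.GetCustomers @RegionID = 42;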

It’s a good read.


Availability Group Agent Alerts

Tracy Boggiano builds a set of SQL Server alerts related to Availability Group happenings and issues:

For Availability Groups we have a few extra error numbers we care about. Error number 1480 tells you when a server changes roles, so we can know when a server flips from a secondary to a primary, or from a primary to a secondary. Error number 35264 tells you when data movement has been suspended on any database. This can occur for many reasons; one I have seen is when you have expanded a mount point on the primary but forgot to expand the secondary, so the data or log file runs out of space and cannot grow on the secondary. Error number 35265 tells you when data movement has resumed on any database. Error number 41404 lets you know if your AG is offline, which can be bad if you expected an automatic failover. Error number 41405 lets you know if an Availability Group can’t automatically fail over for any reason. In the latter two cases you will want to look at your SQL Server error logs and the AlwaysOn Extended Events health session.
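To give a flavor of what one of these looks like (the alert and operator names here are placeholders, not Tracy’s script), a SQL Agent alert on error 1480 with an e-mail notification might be set up like so:

    -- Alert on error 1480 (Availability Group role change); names are placeholders.
    EXEC msdb.dbo.sp_add_alert
        @name = N'AG Role Change (Error 1480)',
        @message_id = 1480,
        @severity = 0,
        @enabled = 1,
        @delay_between_responses = 60,
        @include_event_description_in = 1;

    -- Route the alert to an operator (assumes a 'DBA Team' operator already exists).
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'AG Role Change (Error 1480)',
        @operator_name = N'DBA Team',
        @notification_method = 1;  -- 1 = e-mail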

Click through for the alert scripts.


Continuous Integration Is A Process

Derik Hammer makes the vital point that continuous integration isn’t a tool; it’s a process:

SQL Server Data Tools (SSDT) is a tool that I am particularly familiar with and will become the subject of my examples. SSDT database projects shift the source of truth from your database to your source control. The intent is that the project and its build artifact, the dacpac, are the desired state of your database. SSDT will then generate the code necessary for you to migrate from your current state to the desired state.

The problem with my description is that it is similar to saying, “hammers drive nails into wood,” and then expecting that you won’t have to learn how to swing the hammer, aim at the head of the nail, or regulate how hard you hit it. Tools like SSDT are not magic and they can have problems. A solid understanding of how they work can mitigate or completely avoid these issues, however.
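As a simplified illustration of that state-based approach (a made-up example, not from Derik’s post): the SSDT project holds only the declarative CREATE statement, and the generated deployment script is whatever it takes to move the target to that state, such as an ALTER TABLE for a newly added column:

    -- In the SSDT project (desired state): the table defined declaratively.
    CREATE TABLE dbo.Customer
    (
        CustomerID  int           NOT NULL PRIMARY KEY,
        DisplayName nvarchar(100) NOT NULL,
        CreatedOn   datetime2     NOT NULL  -- new in this release
    );

    -- Roughly what the generated deployment script does when the target table
    -- already exists but lacks the new column (a default lets the NOT NULL
    -- column be added to a table that already has rows):
    ALTER TABLE dbo.Customer
        ADD CreatedOn datetime2 NOT NULL
            CONSTRAINT DF_Customer_CreatedOn DEFAULT (SYSUTCDATETIME());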

Click through for Derik’s rant.


PowerShell Gallery And The Linux Model

Chrissy LeMaire explains the Linux packaging model and the long-term vision for PowerShell:

So Joey comes up and says “Chrissy, Aaron Nelson has pretty much required me to talk to you. The SQL Community has the #1 PowerShell UserVoice request. We see that – we’ve heard you, The People want Out-DataTable and we agree. Would you be happy if we added it to the PowerShell Gallery first?”

“Uh, no! I want Out-DataTable to be a first class citizen like Out-GridView.”

“But where we’re going with PowerShell — we’re going smaller – to just core files, then you add on from the Gallery as desired.”

“Oh dang, like Linux! I’m liking it, keep talking.”

“To be clear, this is post 6.0. In the 6.0 timeframe, but we want to decouple as many release trains as possible, like PowerShellGet and PSReadline. But we’ll still very well package the ‘uber-complete, awesome devops tool edition’ of PowerShell. In the meantime, you could setup a metapackage for just your database stuff.”

“So it is like Linux patterns! PowerShell Gallery does that? I’m sold.”

Chrissy goes on to explain what a PowerShell Gallery metapackage module is, how to create one, and even how to publish one yourself.
