Press "Enter" to skip to content

Category: Deployment

Deployment Contributors

Richie Lee discusses an alternative to pre-model scripts:

According to the blurb, deployment contributors can perform custom actions when deploying a SQL script. One such use of deployment contributors would be to alter index builds to be an online operation. Microsoft also has a GitHub DACExtensions repo, and this is very useful because, in the interests of full disclosure, I have never written a deployment contributor myself. This is partly because the repo has some very good examples, including the online index issue (this post nicely covers how to make use of deployment contributors). I know people who have written them, and they have explained how they work very well. But I think there are a few challenges with regard to deployment contributors:

  • No one has ever heard of them

  • You have to use C#

  • They’re not entirely straightforward

This is a good discussion of deployment contributors, including why we don’t see them more frequently.
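
If you’ve never seen one, here is a minimal C# sketch of the online-index scenario. The class and contributor names are mine, and the string handling is deliberately naive; the DACExtensions repo has a more robust version of the same idea.

    using Microsoft.SqlServer.Dac.Deployment;
    using Microsoft.SqlServer.Dac.Model;

    // Hypothetical deployment plan modifier: rewrites CREATE INDEX steps in
    // the deployment plan so the index is built online.  Sketch only.
    [ExportDeploymentPlanModifier("Example.OnlineIndexModifier", "1.0.0.0")]
    public class OnlineIndexModifier : DeploymentPlanModifier
    {
        protected override void OnExecute(DeploymentPlanContributorContext context)
        {
            // The deployment plan is a linked list of steps.
            DeploymentStep current = context.PlanHandle.Head;
            while (current != null)
            {
                DeploymentStep next = current.Next;
                var create = current as CreateElementStep;
                if (create != null && create.SourceElement.ObjectType == Index.TypeClass)
                {
                    // Re-emit the CREATE INDEX batch with ONLINE = ON appended.
                    // Naive: assumes the generated statement has no WITH clause.
                    string sql = string.Join("\n", create.GenerateTSQL());
                    var replacement = new DeploymentScriptStep(sql + "\nWITH (ONLINE = ON)");
                    AddBefore(context.PlanHandle, current, replacement);
                    Remove(context.PlanHandle, current);
                }
                current = next;
            }
        }
    }

You would compile this into a DLL, drop it where the DacFx can find it (e.g., SqlPackage’s Extensions folder), and opt in at deploy time with the AdditionalDeploymentContributors property.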

Continuous Delivery With SSAS

Jens Vestergaard shows how to implement continuous delivery with Analysis Services cubes:

None of the above-mentioned scenarios appeals to Team Foundation Server (TFS), and in order to get into the no-sweat zone during release time, we need to build our deployments around TFS, the obvious choice when working with Microsoft.

Natively, Visual Studio, or more precisely MSBuild, does not support .dwproj files, which are used for Analysis Services (SSAS) projects. So obviously this has to involve some kind of magic. But as it turns out, it’s not all that magical. There is not much documentation on this particular scenario out there, but I managed to find one good resource, which is this. It gave me just enough assistance to complete the task.

This is a long post, but well worth reading.
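
For a flavor of the moving parts, here is a minimal C# sketch of the final deployment step, assuming you have already produced an XMLA script (for instance with the Analysis Services Deployment Wizard) and referenced the AMO library; the server name and file path are illustrative:

    using System;
    using System.IO;
    using Microsoft.AnalysisServices;  // AMO

    class DeployCube
    {
        static void Main()
        {
            var server = new Server();
            server.Connect("Data Source=ssas-server");  // illustrative server

            // Execute the XMLA produced earlier in the pipeline.
            string xmla = File.ReadAllText(@"C:\drop\MyCube.xmla");
            XmlaResultCollection results = server.Execute(xmla);

            // Surface any warnings or errors in the build log.
            foreach (XmlaResult result in results)
                foreach (XmlaMessage message in result.Messages)
                    Console.WriteLine(message.Description);

            server.Disconnect();
        }
    }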

Database Deployments With ReadyRoll

James Anderson uses ReadyRoll to perform database deployments:

Migration Scripts

ReadyRoll automatically generates and adds migration scripts to our project in Visual Studio. This means almost all the manual work of writing (or generating with compare tools) migration scripts is done by ReadyRoll. ReadyRoll can also organise these scripts, using semantic versioning, into a logical folder structure within our project.

This post continues his database deployment automation series; it talks mostly about what ReadyRoll is and how to install it.
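
As a rough illustration of what that folder structure can look like (the names here are hypothetical, and the exact convention is configurable):

    Migrations/
        1.0.0/
            001_create-customer-table.sql
            002_add-email-column.sql
        1.1.0/
            001_backfill-customer-email.sql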

Uncontrolled Environments

Ed Elliott discusses database deployments in uncontrolled environments:

There have been a few discussions on Stack Overflow recently about how to manage deployments in uncontrolled environments, specifically data migrations. The questions were from an SSDT perspective. I don’t think that SSDT is a great choice for these uncontrolled environments, which carry some additional requirements that need extra thought and care when creating release scripts (whether manually or using a tool).

Ed has some interesting thoughts here, and I agree with the idea that SQL Server Data Tools deployment scripts are not the best choice when you have people changing schema all around you in unexpected ways.

Deploying SSDT Scripts

Richie Lee has concerns with database deployments:

At any rate, the script is generated and maybe reviewed… so then what? In SSDT there is no way to create and deploy a script in one step; they are two distinct steps. And even if they were one step, this would still not resolve the issue that troubles me. So what is this issue?

The issue is that by creating a script and then running the deploy, you cannot be sure that the database is in the exact same state it was in when the initial script was generated. If you don’t already know, SSDT runs a deploy entirely in memory, so as mentioned there is no script created. You have to explicitly create the script as part of the process. Or, if you have already created one, you have to re-create the script.

I’m on the fence here.  In simpler environments, I think Richie has a good point.  But in a complex environment, I wouldn’t even use auto-generated deployment scripts; when you’re changing hundreds of database objects (including adding and modifying columns, backfilling data, modifying indexes, etc.), that automated deployment script is almost guaranteed to fail.  And if it does fail, it could leave you in a state of irreparable harm.
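
To make the two-step distinction concrete, here is a minimal DacFx sketch (connection string and paths are illustrative). The comparison runs afresh on each call, which is exactly Richie’s point: the target can drift between step 1 and step 2.

    using System.IO;
    using Microsoft.SqlServer.Dac;

    class TwoStepDeploy
    {
        static void Main()
        {
            var dacServices = new DacServices("Server=.;Integrated Security=true");
            using (DacPackage package = DacPackage.Load(@"C:\build\MyDb.dacpac"))
            {
                var options = new DacDeployOptions { BlockOnPossibleDataLoss = true };

                // Step 1: generate a script for review.
                string script = dacServices.GenerateDeployScript(package, "MyDb", options);
                File.WriteAllText(@"C:\build\MyDb_deploy.sql", script);

                // Step 2 (later, after review): deploy.  The state comparison
                // happens again, in memory, against the target as it is *now*.
                dacServices.Deploy(package, "MyDb", upgradeExisting: true, options: options);
            }
        }
    }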

Deploying SSIS Packages In VS 2015

Neil Gelder notes that you can deploy different versions of SSIS packages using Visual Studio 2015:

For years I’ve dreamt of having one set of tools for developing SSIS packages! Not a lot to ask really, and a great step towards this from Microsoft was decoupling the development IDE from the main SQL Server install to produce the standalone SSDT (SQL Server Data Tools).

But like most people, I work in an environment which has legacy versions of SQL Server in production; equally, like most tech folk (giddy kids wanting new toys), I always try to use the most current and exciting version of VS. This however proves a problem when developing for SSIS: for example, if you developed an SSIS package in VS 2013, you’d not be able to deploy it correctly to a SQL Server 2012 Integration Services catalog. In the past this resulted in having two IDEs installed: SSDT 2012 (VS shell) for any 2012 catalog development, and VS 2013 for other work.

I had one person mention during a talk I gave that this isn’t foolproof, but my experience (limited to SQL Server 2012 and 2014) was that deployment worked fine.  As always, test before making changes.
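
As an aside, if you want the deployment step itself outside of Visual Studio, deploying a built .ispac to the catalog can be scripted. Here is a minimal C# sketch against the SSIS management assembly; the server, folder, and project names are illustrative:

    using System.Data.SqlClient;
    using System.IO;
    using Microsoft.SqlServer.Management.IntegrationServices;

    class DeployIspac
    {
        static void Main()
        {
            // Illustrative connection; the folder must already exist in SSISDB.
            var connection = new SqlConnection("Server=.;Integrated Security=true");
            var ssis = new IntegrationServices(connection);

            Catalog catalog = ssis.Catalogs["SSISDB"];
            CatalogFolder folder = catalog.Folders["ETL"];

            // DeployProject takes the project name and the raw .ispac bytes.
            byte[] projectBytes = File.ReadAllBytes(@"C:\build\MyEtlProject.ispac");
            folder.DeployProject("MyEtlProject", projectBytes);
        }
    }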

Dacpac Deployment Models

Ed Elliott discusses publish profiles as part of dacpac deployment scenarios:

For a while I meandered between the two approaches, until the SSDT team announced that they had released a NuGet package with the DacFx in it, and I decided that I would move over to that, as it meant that I no longer had to check the DLLs into source control, which in itself is a big win. I also decided to fix the extensions thing, and so figured out a (slightly hacky) way to get the DacFx DLLs in the NuGet package to behave like sqlpackage and allow a sub-directory to be used to load DLLs; I fixed that using this PowerShell module that wraps a .NET DLL (https://the.agilesql.club/blogs/Ed-Elliott/DacFxed-Nugetized-DacFx-Power…). Now I have the problem of not having to check in DLLs, while still being able to load contributors without having to install into Program Files, sorted. BUT I still had the problem of lots of command-line arguments, which I was sharing in PowerShell scripts and passing in some custom bits like server/db names, etc.

I’m not very familiar with dacpacs, so this was an interesting read for me.
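
The publish profile mechanism Ed builds on bundles those command-line arguments into an XML file which the DacFx can load directly; here is a minimal sketch (paths are illustrative):

    using Microsoft.SqlServer.Dac;

    class PublishWithProfile
    {
        static void Main()
        {
            // A .publish.xml file carries the target connection string, database
            // name, and deploy options that would otherwise be command-line args.
            DacProfile profile = DacProfile.Load(@"C:\build\Production.publish.xml");

            var dacServices = new DacServices(profile.TargetConnectionString);
            using (DacPackage package = DacPackage.Load(@"C:\build\MyDb.dacpac"))
            {
                dacServices.Deploy(
                    package,
                    profile.TargetDatabaseName,
                    upgradeExisting: true,
                    options: profile.DeployOptions);
            }
        }
    }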

Deployment As Process

Ginger Grant argues in favor of deployment processes:

All sorts of things can happen when one person writes and deploys. I know someone who worked in the IT department for a large cell phone company. At the time, working there meant free phone service. One of the devs was a heavy user of the free phone service, and so was his large extended family. His job was to maintain the billing code. After several questionable incidents at work, HR got involved and he was perp-walked out of the building. Due to the circumstances surrounding his departure, his cell phone accounts were checked to ensure that, from this point on, he would get a bill. Although his account showed a number of active phones, his balance was always zero. The code in source control was checked, and there was nothing in it which provided a reason why his bill was zero. Upon further investigation, my friend noticed that the version number in production did not match the version number in source control. The code in source control was compiled, and a huge balance appeared for the former employee. If someone else had deployed the code in source control, this chicanery would not have been possible.

This is an interesting read, so go check it out.

Continuous Integration

James Anderson is starting a series on continuous integration:

I had been using source control for years, but it always felt like a tick-box exercise that I was doing because I had to. I had never used it to review old versions to see where code went wrong, or to quickly roll back changes if I decided I no longer wanted to go in a certain direction with the code. I never felt like I was getting anything back from using source control. Sometimes it takes a problem to arise for you to see the value of a solution.

In 2015 I started to inherit the code base for our internal maintenance database, the UtilityDB. This database is used to store performance metrics and to manage tasks such as index maintenance and backups. This database is installed on all of our instances.

This first post is an introduction to the series, and it looks like he’ll cover some heady topics.
