Category: Deployment

Deferred Name Resolution and Its Discontents

Kendra Little gives us a detailed rundown of deferred name resolution in SQL Server:

Have you ever tried to create an object in SQL Server, but it failed due to a missing table, column, or other dependency? If so, you’ve hit a case where SQL Server doesn’t offer ‘deferred name resolution’.

While these errors can be helpful when you’ve made a typo or accidentally used the wrong database, this can sometimes be a big hassle when you are…

– Deploying database code to set up a partial environment

– Deploying database code from version control to an empty database to ensure the code is valid

In this post, I walk through common scenarios and test whether deferred name resolution works or not.

Check it out for what is probably the most detailed look at the topic I’ve ever seen.
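
To see the inconsistency in action, here is a minimal T-SQL sketch (object names are made up for illustration). Stored procedures get deferred name resolution, so the first statement succeeds even though the referenced table does not exist yet; views do not, so the second fails immediately:

```sql
-- Procedures defer name resolution: this CREATE succeeds even
-- though dbo.NotYetCreated does not exist at creation time.
CREATE PROCEDURE dbo.LoadStaging
AS
BEGIN
    SELECT Id FROM dbo.NotYetCreated;
END;
GO

-- Views get no such deferral: this CREATE fails right away with
-- Msg 208, "Invalid object name 'dbo.NotYetCreated'".
CREATE VIEW dbo.StagingView
AS
SELECT Id FROM dbo.NotYetCreated;
GO
```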

Architecting a Jenkins Replacement

Li Haoyi takes us through an internal Databricks tool for continuous integration:

Runbot is a bespoke continuous integration (CI) solution developed specifically for Databricks’ needs. Originally developed in 2019, Runbot incrementally replaces our aging Jenkins infrastructure with something more performant, scalable, and user-friendly for both users and maintainers of the service. This blog post will explore the motivations behind developing Runbot, the core design decisions that went into it, and how we used it to greatly improve the experience of all the developers within the Databricks engineering organization.

It doesn’t look like the tool is available externally, but it’s an interesting read and helps you understand some of the “why” behind the solution.

Index Maintenance and Pipeline Operation Scripts

Kevin Chant has a two-fer for us:

My first personal go-to script is one that has helped me out a lot over the years, because I have used it a lot to identify missing indexes. I know there are a few different versions available online that you can use. However, I tend to use the one that comes with Glenn Berry’s Diagnostic Queries.

It is so easy to use. I’m not sharing the snippet of code here because I want to encourage people to download the entire diagnostic script instead. Just download the script that is relevant for your version of SQL Server and search for ‘Missing indexes’.

Read the whole thing.
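
For context on what such scripts are built on, SQL Server’s missing-index DMVs look like the bare-bones sketch below. This is only the raw plumbing, not Glenn Berry’s query, which layers on far more useful detail and filtering:

```sql
-- Minimal look at the missing-index DMVs; diagnostic scripts like
-- Glenn Berry's build on these with much more context.
SELECT TOP (20)
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```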

Adding Columns to a Published Data Model with TMSL

Kristyna Hughes wants to update an existing schema:

Goal of this demo: Update a Power BI model schema by adding a column to the data model without opening a PBIX file and ensure the scheduled refresh still works.

Why would this be useful? Updating the schema in the desktop tool requires an entire refresh of the data model, which can take a while if your model is large. Also, app developers could systematically add new elements to existing data models using a formulaic XMLA script through SSMS, saving your report designers time when new fields need to be added.

Read on for limitations, as well as the process.
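
For a sense of the shape of such a script, here is a hypothetical TMSL createOrReplace for a single table (the SalesModel database, DimCustomer table, and all column and server names are made up). One caveat worth noting: createOrReplace swaps in the entire table definition, so the script has to carry every existing column and partition along with the new Region column:

```json
{
  "createOrReplace": {
    "object": {
      "database": "SalesModel",
      "table": "DimCustomer"
    },
    "table": {
      "name": "DimCustomer",
      "columns": [
        { "name": "CustomerKey", "dataType": "int64", "sourceColumn": "CustomerKey" },
        { "name": "CustomerName", "dataType": "string", "sourceColumn": "CustomerName" },
        { "name": "Region", "dataType": "string", "sourceColumn": "Region" }
      ],
      "partitions": [
        {
          "name": "DimCustomer",
          "source": {
            "type": "m",
            "expression": "let Source = Sql.Database(\"sqlserver01\", \"SalesDb\"), dbo_DimCustomer = Source{[Schema=\"dbo\",Item=\"DimCustomer\"]}[Data] in dbo_DimCustomer"
          }
        }
      ]
    }
  }
}
```

A script like this runs from an XMLA query window in SSMS connected to the workspace endpoint.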

Working with DACPACs

Chad Callihan has a two-parter. First up is the process of creating a DACPAC:

DAC is the abbreviation for data-tier application and is an item containing the objects of a database. When put together into a versioned file to be used for deploying in SQL Server, that package is called a DACPAC. You can point a DACPAC at an existing database to deploy changes or use a DACPAC to create a brand new database with tables, procedures, and the rest of the database objects built in.

And once you have a DACPAC, you of course need something to do with it:

What if we want to use our DACPAC to create a new database using SSMS? Our first step will be to right-click on Databases and select Deploy Data-tier Application.

As a concept, DACPACs are really great: I can package together tables, database code, and reference data in one bundle and import it into a variety of products (e.g., database projects, other SQL Server instances, Azure SQL Database). The problem is that once they get beyond a certain size, DACPACs are rather unwieldy and failure-prone to create and to extract.
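
For those who prefer a command line to the SSMS wizard, the same extract-and-deploy round trip can be sketched with the SqlPackage utility (server, database, and file names here are placeholders):

```
SqlPackage /Action:Extract /SourceServerName:ProdServer /SourceDatabaseName:SalesDb /TargetFile:SalesDb.dacpac

SqlPackage /Action:Publish /SourceFile:SalesDb.dacpac /TargetServerName:DevServer /TargetDatabaseName:SalesDbCopy
```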

Automating Semantic Versioning with Azure DevOps

Dave Ruijter shows how you can use Azure DevOps to perform automatic semantic versioning:

I am a fan of using semantic versioning (a.k.a. SemVer) for data solutions, following the v1.0.0 pattern. It helps in the communication between team members and stakeholders by limiting ambiguity and misunderstandings related to the version of your solution’s releases. With semantic versioning, the trick is to increment the version according to the changes you have made since the latest release. Manually keeping track of that is not an easy task, especially for small teams without the capacity to have somebody dedicated to this administration task. I found a way to make this a lot easier, leaning on the Pull Request description! And as a bonus, we will create some nice release notes automatically.

Click through to see what you need to have set up on your Azure DevOps subscription and a detailed walkthrough of how to set it up.

Database Deployment with External References

Sebastian Meine and Liz Baron try to untangle the Gordian knot:

Most database developers are dealing with databases that contain external references. Even if the database code is in source control, these external references can make it very difficult to deploy to new environments. In these multi-database environments, tools like SQLCompare and SQL Change Automation do not automatically resolve object-order across databases, resulting in errors during deployment.

One way to tackle this, which works especially well for CI pipelines, is to create facades for all externally referenced databases. A facade in this context is a database with the expected name, with the expected objects, but those objects are hollowed out and do not contain any dependencies. You can compare this concept to an interface in an object-oriented language. Once you have these facades, they can be used in a pre-deployment step, simplifying the rest of the deployment by effectively removing object-order dependencies with these external databases.

This is one of the most painful parts of converting existing databases into model-driven database development. Especially once you start having to deal with cross-dependencies and rapidly-changing databases.
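
As a concrete (and entirely hypothetical) illustration of the facade idea: if the database under deployment references ExternalDb.dbo.Customer, the CI environment only needs a stub with the referenced shape, not the real table’s constraints, indexes, or data:

```sql
-- Hypothetical facade: an empty shell of the external database,
-- containing only the shape of the objects our code references.
CREATE DATABASE ExternalDb;
GO
USE ExternalDb;
GO
-- The real dbo.Customer likely has many more columns, constraints,
-- and indexes; the facade needs only what cross-database code touches.
CREATE TABLE dbo.Customer
(
    CustomerId   int           NOT NULL,
    CustomerName nvarchar(200) NULL
);
GO
```

Because the stub carries no data or dependencies of its own, deployment order against it no longer matters.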

Deploying Synapse Artifacts to a Managed vNet Workspace

Rui Cunha takes us through an Azure Synapse Analytics deployment scenario:

In my previous article, I demonstrated how we could easily use the Synapse Workspace Deployment extension to accomplish this second stage of the process. I’m now coming back to this topic as I realized that many of our customers were reporting difficulties in completing this second stage of their Synapse CICD process because they were failing to deploy Synapse artifacts to a Managed VNET Synapse Workspace.

In this particular scenario, the deployment was failing because their target workspace was not allowing access from public networks.

Fortunately, the answer isn’t “Allow access from public networks.” Click through to see what you can do instead.

Creating a Distributed Availability Group in Azure via Terraform

Sandeep Arora has some scripts for us:

To create a distributed availability group, you need two availability groups (AG), each with its own listener, which you then combine. In this case, one availability group is on-premises and the other needs to be created in Microsoft Azure. This example doesn’t cover all of the details, like creating an extended network setup between the on-premises network and Azure or joining Azure Active Directory Domain Services to an on-premises forest; instead, it highlights the key requirements for setting up the availability group in Azure and then configuring the distributed AG between the on-premises availability group (represented as AOAG-1) and the Azure availability group (represented as AOAG-2).

Click through for the preparations you need in place and a set of scripts to do the work.
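
However you provision the infrastructure, the distributed AG itself comes down to plain T-SQL in the end. A hedged sketch of that final step on the on-premises primary, reusing the post’s AOAG-1 and AOAG-2 names (the listener URLs are placeholders for your environment):

```sql
-- Run on the primary replica of the on-premises AG (AOAG-1).
-- Listener URLs below are placeholders, not real endpoints.
CREATE AVAILABILITY GROUP [DistributedAG]
WITH (DISTRIBUTED)
AVAILABILITY GROUP ON
    'AOAG-1' WITH
    (
        LISTENER_URL = 'tcp://aoag1-listener.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC
    ),
    'AOAG-2' WITH
    (
        LISTENER_URL = 'tcp://aoag2-listener.contoso.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC
    );
GO
```

The Azure side then joins from its own primary with ALTER AVAILABILITY GROUP [DistributedAG] JOIN AVAILABILITY GROUP ON, repeating the same two specifications.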
