Press "Enter" to skip to content

Category: Deployment

Adding Columns to a Published Data Model with TMSL

Kristyna Hughes wants to update an existing schema:

Goal of this demo: Update a Power BI model schema by adding a column to the data model without opening a PBIX file and ensure the scheduled refresh still works.

Why would this be useful? Updating the schema in the desktop tool requires an entire refresh of the data model, which can take a while if your model is large. Also, app developers could systematically add new elements to existing data models using a formulaic XMLA script through SSMS, saving your report designers time when new fields need to be added.

Read on for limitations, as well as the process.
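
As a rough sketch of the idea (not Kristyna's exact script), you could have SSMS script the existing table as TMSL, splice the new column in with a small script, and run the result against the workspace's XMLA endpoint. The file, table, and column names below are hypothetical:

```python
import json

# Load a table definition previously scripted from SSMS (right-click
# the table, Script > CREATE OR REPLACE To > New Query Editor Window,
# then save it). File and object names here are hypothetical.
with open("sales_table.tmsl.json", encoding="utf-8") as f:
    script = json.load(f)

# Append the new column to the table's column collection.
script["createOrReplace"]["table"]["columns"].append({
    "name": "DiscountAmount",
    "dataType": "decimal",
    "sourceColumn": "DiscountAmount",
    "summarizeBy": "sum",
})

# Write the updated script back out; paste it into an XMLA query
# window in SSMS connected to the workspace, then execute it.
with open("sales_table_updated.tmsl.json", "w", encoding="utf-8") as f:
    json.dump(script, f, indent=2)
```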

Working with DACPACs

Chad Callihan has a two-parter. First up is the process of creating a DACPAC:

DAC is the abbreviation for data-tier application and is an item containing the objects of a database. When put together into a versioned file to be used for deploying in SQL Server, that package is called a DACPAC. You can point a DACPAC at an existing database to deploy changes or use a DACPAC to create a brand new database with tables, procedures, and the rest of the database objects built in.

And once you have a DACPAC, you of course need something to do with it:

What if we want to use our DACPAC to create a new database using SSMS? Our first step will be to right click on Databases and select Deploy Data-tier Application.

As a concept, DACPACs are really great: I can package together tables, database code, and reference data in one bundle and import it into a variety of products (e.g., database projects, other SQL Server instances, Azure SQL Database). The problem is that once they get beyond a certain size, DACPACs are rather unwieldy and failure-prone to create and to extract.
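
For reference, both halves of that workflow can also be driven from the SqlPackage command line instead of the SSMS dialogs. A minimal sketch, assuming SqlPackage is installed and on the PATH (server and database names are placeholders):

```python
import subprocess

# Extract a DACPAC from an existing database, then publish it as a
# new database. Server and database names are placeholders.
extract = [
    "SqlPackage", "/Action:Extract",
    "/SourceServerName:localhost",
    "/SourceDatabaseName:WideWorldImporters",
    "/TargetFile:WideWorldImporters.dacpac",
]
publish = [
    "SqlPackage", "/Action:Publish",
    "/SourceFile:WideWorldImporters.dacpac",
    "/TargetServerName:localhost",
    "/TargetDatabaseName:WideWorldImporters_Copy",
]

subprocess.run(extract, check=True)  # package the source database
subprocess.run(publish, check=True)  # deploy it to the target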

Automating Semantic Versioning with Azure DevOps

Dave Ruijter shows how you can use Azure DevOps to perform automatic semantic versioning:

I am a fan of using semantic versioning (a.k.a. SemVer) for data solutions, following the v1.0.0 pattern. It helps in the communication between team members and stakeholders, by limiting ambiguity and misunderstandings related to the version of your solution’s releases. With semantic versioning, the trick is to increment the version according to the changes you have made since the latest release. Manually keeping track of that is not an easy task, especially for small teams without the capacity to have somebody dedicated to this administrative task. I found a way to make this a lot easier, leaning on the pull request description! And as a bonus, we will create some nice release notes automatically.

Click through to see what you need to have set up on your Azure DevOps subscription and a detailed walkthrough of how to set it up.
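
To make the core idea concrete (incrementing the version based on what the pull request says), here is a hedged sketch. The [major]/[minor]/[patch] markers are an invented convention standing in for whatever Dave's PR-description approach actually uses:

```python
import re

def bump_version(current: str, pr_description: str) -> str:
    """Increment a v<major>.<minor>.<patch> version based on a marker
    in the pull request description. The marker convention here is a
    hypothetical simplification of the approach in the post."""
    major, minor, patch = map(
        int, re.match(r"v?(\d+)\.(\d+)\.(\d+)", current).groups()
    )
    text = pr_description.lower()
    if "[major]" in text:
        return f"v{major + 1}.0.0"
    if "[minor]" in text:
        return f"v{major}.{minor + 1}.0"
    return f"v{major}.{minor}.{patch + 1}"

print(bump_version("v1.4.2", "Adds a new dimension table [minor]"))  # v1.5.0
```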

Database Deployment with External References

Sebastian Meine and Liz Baron try to untangle the Gordian knot:

Most database developers are dealing with databases that contain external references. Even if the database code is in source control, these external references can make it very difficult to deploy to new environments. In these multi-database environments, tools like SQLCompare and SQL Change Automation do not automatically resolve object-order across databases, resulting in errors during deployment.

One way to tackle this, which works especially well for CI pipelines, is to create facades for all externally referenced databases. A facade in this context is a database with the expected name, with the expected objects, but those objects are hollowed out and do not contain any dependencies. You can compare this concept to an interface in an object-oriented language. Once you have these facades, they can be used in a pre-deployment step, simplifying the rest of the deployment by effectively removing object-order dependencies with these external databases.

This is one of the most painful parts of converting existing databases to model-driven database development, especially once you start having to deal with cross-dependencies and rapidly changing databases.
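
To make the facade idea concrete, here is a sketch that generates hollowed-out stubs for externally referenced objects. The database and object names are invented for illustration, and real stubs would need to match the parameter lists and column shapes that callers expect:

```python
# Generate "facade" stubs: objects with the expected names but no
# real bodies, so cross-database object order no longer matters at
# deployment time. All names here are invented for illustration.
external_objects = [
    ("ReferenceDb", "dbo.GetExchangeRate", "PROCEDURE"),
    ("ReferenceDb", "dbo.CurrentRates", "VIEW"),
]

for database, name, object_type in external_objects:
    if object_type == "PROCEDURE":
        body = f"CREATE PROCEDURE {name} AS RETURN 0;"
    else:
        # A view stub still needs columns matching what callers
        # expect; a single placeholder column is shown here.
        body = f"CREATE VIEW {name} AS SELECT 1 AS placeholder;"
    print(f"-- run against facade database [{database}]")
    print(body)
```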

Deploying Synapse Artifacts to a Managed vNet Workspace

Rui Cunha takes us through an Azure Synapse Analytics deployment scenario:

In my previous article, I demonstrated how we could easily use the Synapse Workspace Deployment extension to accomplish this second stage of the process. I’m now coming back to this topic as I realized that many of our customers were reporting difficulties in completing this second stage of their Synapse CI/CD process because they were failing to deploy Synapse artifacts to a Managed VNET Synapse Workspace.

In this particular scenario, the deployment was failing because their target workspace was not allowing access from public networks.

Fortunately, the answer isn’t “Allow access from public networks.” Click through to see what you can do instead.

Creating a Distributed Availability Group in Azure via Terraform

Sandeep Arora has some scripts for us:

To create a distributed availability group, you need two availability groups (AG), each with its own listener, which you then combine. In this case, one availability group is on-premises and the other needs to be created in Microsoft Azure. This example doesn’t cover all of the details, like creating an extended network setup between the on-premises network and Azure or joining Azure Active Directory Domain Services to an on-premises forest; instead, it highlights the key requirements for setting up the availability group in Azure and then configuring the distributed AG between the on-premises availability group (represented as AOAG-1) and the Azure availability group (represented as AOAG-2).

Click through for the preparations you need in place and a set of scripts to do the work.
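
For reference, the final step (combining the two availability groups) is plain T-SQL run on the primary replica of AOAG-1. A sketch, with hypothetical listener URLs and connection details, and port 5022 assuming the default HADR endpoint:

```python
import pyodbc

# CREATE AVAILABILITY GROUP ... WITH (DISTRIBUTED) is documented
# T-SQL; the listener URLs, server name, and driver below are
# hypothetical placeholders for this environment.
sql = """
CREATE AVAILABILITY GROUP [DistributedAG]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AOAG-1' WITH (
         LISTENER_URL = 'tcp://aoag1-listener.contoso.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      ),
      'AOAG-2' WITH (
         LISTENER_URL = 'tcp://aoag2-listener.azure.contoso.local:5022',
         AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
         FAILOVER_MODE = MANUAL,
         SEEDING_MODE = AUTOMATIC
      );
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=aoag1-primary;"
    "Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=yes",
    autocommit=True,  # DDL like this cannot run inside a transaction
)
conn.execute(sql)
```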

Azure DevOps Templates for Data Platform Deployments

Kevin Chant has some toys for us:

For my T-SQL Tuesday contribution this month, I want to introduce my Azure DevOps templates for Data Platform deployments.

This month’s T-SQL Tuesday is hosted by Frank Geisler. Frank has invited us to write about deploying SQL components through descriptive methods and to build some new cool templates for them.

Which is good timing for me, because I co-presented a session on the day this post is published, showing how to use YAML in Azure DevOps for Data Platform deployments at Data Platform Virtual Summit.

Click through to learn more and see Kevin’s repos, as well as more information on the topic.

Building a Deployment Pipeline with sqlpackage

Daniel Hutmacher builds a cloudless deployment pipeline:

I did some googling to see just how simple I could make a database deployment pipeline. I keep the DDL scripts in a git repository on the local network, but I can’t use Azure DevOps or any other cloud service, and I don’t have Visual Studio installed, so the traditional database project in SSDT that I know and love is unfortunately not an option for me.

So I googled a little, and here’s what I ended up doing.

Click through to see how Daniel was able to do it.
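
One shape such a pipeline can take (not necessarily Daniel's) is to replay the repository's DDL scripts into a scratch build database, extract a DACPAC from it, and publish that to the target. A sketch assuming sqlcmd and SqlPackage are on the PATH, with hypothetical paths and server names:

```python
import subprocess
from pathlib import Path

# 1) Replay the DDL scripts from the local git repo into a scratch
#    build database (-b aborts on the first error).
for script in sorted(Path(r"C:\repos\mydb\ddl").glob("*.sql")):
    subprocess.run(
        ["sqlcmd", "-S", "localhost", "-d", "BuildDb", "-b",
         "-i", str(script)],
        check=True,
    )

# 2) Extract a DACPAC from the build database.
subprocess.run(
    ["SqlPackage", "/Action:Extract",
     "/SourceServerName:localhost", "/SourceDatabaseName:BuildDb",
     "/TargetFile:BuildDb.dacpac"],
    check=True,
)

# 3) Publish the DACPAC to the target environment.
subprocess.run(
    ["SqlPackage", "/Action:Publish",
     "/SourceFile:BuildDb.dacpac",
     "/TargetServerName:prod-sql", "/TargetDatabaseName:MyDb"],
    check=True,
)
```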

Deploying SQL Server via Ansible

Amit Khandelwal gives us another way of deploying SQL Server on Linux:

Today, we’ll look at how to automate SQL Server deployment and configuration on Linux. To automate our deployment, we will use the Ansible system role, which is available here.

Note: The Ansible system role that I use in this blog is a sample system role that is provided as is and for reference only. Microsoft and Red Hat do not support this. However, I invite you to provide feedback and suggestions for improving the system role here: Issues linux-system-roles/mssql (github.com).

Read on for the instructions and a demonstration.
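
For a sense of the workflow, a minimal run might look like the sketch below. The role name as installed from Ansible Galaxy and the role variable names are assumptions to verify against the repository's README:

```python
import subprocess
from pathlib import Path

# Write a minimal playbook applying the sample system role, then run
# it. The role name and the variable names (edition, SA password) are
# assumptions; check them against the linux-system-roles/mssql README.
playbook = """\
- hosts: sqlservers
  become: true
  vars:
    mssql_edition: Evaluation
    mssql_password: "ChangeMe1!"
  roles:
    - linux-system-roles.mssql
"""

Path("deploy-mssql.yml").write_text(playbook, encoding="utf-8")
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "deploy-mssql.yml"],
    check=True,
)
```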
