Press "Enter" to skip to content

Category: Deployment

Microsoft Fabric Data Warehouse in a Database Project

Kevin Chant creates a database project:

In this post I want to cover how you can share a Microsoft Fabric Data Warehouse Database Project with the new target platform.

This is now possible thanks to the latest Azure Data Studio Insiders update. You can view the ‘Add projects support for Fabric DW’ pull request in the public azuredatastudio GitHub repository.

Kevin takes us through creating the database project in Azure Data Studio and then using Azure DevOps or Azure Data Studio to deploy it back out.
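
Kevin demonstrates the flow with Azure Data Studio and Azure DevOps. Purely as a rough sketch of the same round trip at the command line (not taken from the post), SqlPackage can extract the warehouse schema into a dacpac and publish it back out; every path and connection value below is a placeholder.

```python
import subprocess

# Placeholder connection values for Fabric warehouse SQL endpoints.
SOURCE_CONNECTION = (
    "Server=<source-fabric-sql-endpoint>;"
    "Database=<source-warehouse>;"
    "Authentication=Active Directory Interactive;"
)
TARGET_CONNECTION = (
    "Server=<target-fabric-sql-endpoint>;"
    "Database=<target-warehouse>;"
    "Authentication=Active Directory Interactive;"
)
DACPAC = "MyFabricWarehouse.dacpac"

def sqlpackage(*args: str) -> None:
    subprocess.run(["sqlpackage", *args], check=True)

# Pull the warehouse schema down into a dacpac (the same format a
# database project builds to)...
sqlpackage("/Action:Extract",
           f"/SourceConnectionString:{SOURCE_CONNECTION}",
           f"/TargetFile:{DACPAC}")

# ...and deploy it back out to a target warehouse.
sqlpackage("/Action:Publish",
           f"/SourceFile:{DACPAC}",
           f"/TargetConnectionString:{TARGET_CONNECTION}")
```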

CI/CD for Synapse Serverless SQL Pool with SqlPackage and Azure DevOps

Rui Cunha has a tutorial for us:

Azure Synapse Analytics Serverless SQL is a query service mostly used over the data in your data lake, for data discovery, transformation, and exploration purposes. It is, therefore, normal to find in a Synapse Serverless SQL pool many objects referencing external locations, using disparate external data sources, authentication mechanisms, file formats, etc. In the context of CI/CD, where automated processes are responsible for propagating the database code across environments, one can take advantage of database-oriented tools like SSDT and the SqlPackage CLI, ensuring that this code conforms to the targeted resources.

In this article I will demonstrate how you can take advantage of these tools when implementing CI/CD for the Azure Synapse Serverless SQL engine. We will leverage SQL projects in SSDT to define our objects and implement deploy-time variables (SQLCMD variables). Through CI/CD pipelines, we will build the SQL project to a dacpac artifact, which enables us to deploy the database objects one or many times with automation.

Click through for the demonstration.
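
The SQLCMD-variable approach Rui describes keeps environment-specific values (such as the data lake path behind an external data source) out of the project itself. As a rough sketch rather than anything from the article, SqlPackage's /v: switch is one way those values get injected at deploy time; the variable name, URLs, and connection string below are made up.

```python
import subprocess

# Made-up per-environment values for a deploy-time (SQLCMD) variable that
# an external data source definition in the project might reference.
ENVIRONMENTS = {
    "dev":  "https://devdatalake.dfs.core.windows.net/raw",
    "prod": "https://proddatalake.dfs.core.windows.net/raw",
}

def publish(environment: str, dacpac: str, connection_string: str) -> None:
    """Publish the dacpac, overriding the DataLakeLocation SQLCMD variable."""
    subprocess.run(
        [
            "sqlpackage",
            "/Action:Publish",
            f"/SourceFile:{dacpac}",
            f"/TargetConnectionString:{connection_string}",
            # Deploy-time substitution: the project declares the variable,
            # the pipeline supplies the per-environment value.
            f"/v:DataLakeLocation={ENVIRONMENTS[environment]}",
        ],
        check=True,
    )

if __name__ == "__main__":
    publish(
        "dev",
        "bin/Release/ServerlessPool.dacpac",
        "Server=<serverless-endpoint>;Database=<db>;Authentication=Active Directory Interactive;",
    )
```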

Deploying Azure Resources with Terraform and GitHub Actions

Reitse Eskens sets up some new resources:

When you start out with Terraform, you’ll most likely run the code locally with Terraform on your own machine. Terraform works with a so-called state file: it saves the state of the Azure deployment it left behind and compares the (new) code with the state it encounters when it runs again. Changes are resolved by changing, deleting, or adding resources that don’t match the state file.

This works fine when you’re flying solo and don’t have co-workers who can change resources as well. Whenever you need to share code, the industry standard is to use a git solution, whether GitHub, GitLab, Azure DevOps, or some other solution; as long as it has version control you should be fine (provided people adhere to the correct usage of branches).

Click through for a step-by-step walkthrough, as well as explanation of the major actors in that play.
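
The shared state-file problem Reitse mentions is usually solved with a remote backend (on Azure, a storage account) so every run, local or in a pipeline, sees the same state. As an illustrative sketch only, the snippet below just wraps the standard Terraform CLI commands; the backend settings are placeholders, and in the post itself the equivalent steps run inside GitHub Actions rather than a script.

```python
import subprocess

# Placeholder backend settings -- a remote backend lets the whole team and
# its pipelines share one state file instead of each machine keeping its own.
BACKEND_CONFIG = {
    "resource_group_name":  "rg-terraform-state",
    "storage_account_name": "sttfstate",
    "container_name":       "tfstate",
    "key":                  "infrastructure.tfstate",
}

def run(args: list[str]) -> None:
    subprocess.run(["terraform", *args], check=True)

# init wires the working directory to the remote state...
run(["init", *[f"-backend-config={k}={v}" for k, v in BACKEND_CONFIG.items()]])

# ...plan compares the code against that state and records the diff...
run(["plan", "-out=tfplan"])

# ...and apply resolves the diff by creating, changing, or deleting resources.
run(["apply", "tfplan"])
```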

Power BI: Git Integration vs Deployment Pipelines

Marc Lelijveld notes a chemical reaction:

Together with the introduction of Microsoft Fabric, a long-awaited feature for Power BI also became available. Git integration now allows you to connect your workspace to a git repository to sync the metadata of your Power BI dataset and report between the workspace and the repository.

At the same time, Power BI deployment pipelines are still around. The pipelines help you move content between workspaces, from Development to Test and finally to Production. The question that arises is: when to use what? And how do they complement each other, or maybe even clash with each other?

Read on for Marc’s thoughts across these two topics.
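
To give a feel for the deployment-pipeline half of the comparison, here is a rough sketch (not from Marc's post) that triggers a stage-to-stage deployment through the Power BI REST API's deployAll operation; the pipeline ID and access token are placeholders.

```python
import requests

# Placeholders: supply a real Azure AD access token (e.g. via MSAL or a
# service principal) and the GUID of an existing deployment pipeline.
ACCESS_TOKEN = "<azure-ad-access-token>"
PIPELINE_ID = "<deployment-pipeline-id>"

# Deploy everything from the Development stage (order 0) to the next stage.
response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"sourceStageOrder": 0},
)
response.raise_for_status()
# The deployment runs as a long-running operation in the service.
print(response.status_code)
```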

Creating a Power BI Deployment Pipeline

Richard Swinbank does some deployment:

In part 4 of this series, I introduced a standard pattern for organising report files and pipelines, with a standard process for creating new reports. Repeatable patterns and processes are great candidates for automation, and in this post I’ll build a report creation process that automatically configures a new report and creates its deployment pipeline.

This video (5m31s) shows you the process in action, from creating a new report to seeing it deployed automatically to Power BI:

Click through for the video, as well as plenty of written instruction.
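
Richard builds his automation with the tooling covered in the series; purely to illustrate the kind of calls involved (and not as his code), this sketch uses the Power BI REST API to create a deployment pipeline and attach a workspace to its Development stage. The names, IDs, and token are placeholders.

```python
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"   # placeholder
DEV_WORKSPACE_ID = "<dev-workspace-guid>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
BASE = "https://api.powerbi.com/v1.0/myorg"

# Create the deployment pipeline itself.
pipeline = requests.post(
    f"{BASE}/pipelines",
    headers=HEADERS,
    json={"displayName": "Sales Reports", "description": "Created by automation"},
)
pipeline.raise_for_status()
pipeline_id = pipeline.json()["id"]

# Assign an existing workspace to the Development stage (stage order 0).
assign = requests.post(
    f"{BASE}/pipelines/{pipeline_id}/stages/0/assignWorkspace",
    headers=HEADERS,
    json={"workspaceId": DEV_WORKSPACE_ID},
)
assign.raise_for_status()
print(f"Created pipeline {pipeline_id} with a Development workspace assigned")
```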

Power BI Report Deployment with Connections to Shared Datasets

Rayis Imayev does some large-scale deployment:

Let’s say you have a collection of Power BI .pbix files stored in a git-based source control system (GitHub, Azure DevOps, or any other system). Among these files, one is your data model, while the others are Power BI visual reports and dashboards connected to the published dataset from your data model. Your published dataset is located in a separate workspace dedicated to shared content, and the visualization Power BI reports are placed in another workspace with appropriate permissions to access them.

Now, let’s consider an additional complexity: you have this collection of files not only in one development environment but also in two others. These environments support your Power BI reporting testing (UAT) and the release of your Power BI reports to end-users (Production).

The questions that arise are: How do you deploy your solution, and most importantly, how do you automate it?

Click through for an architectural diagram, as well as the answer to this question.
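
One fiddly piece in this shared-dataset setup is repointing each deployed report at the published dataset for the right environment. As a hedged sketch (not Rayis's implementation), the Power BI REST API's report rebind operation does that repointing; all IDs and the token below are placeholders.

```python
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"          # placeholder
REPORT_WORKSPACE_ID = "<reports-workspace-guid>"  # placeholder: UAT or Prod reports workspace
REPORT_ID = "<report-guid>"                       # placeholder: a deployed report
SHARED_DATASET_ID = "<dataset-guid>"              # placeholder: dataset in the shared workspace

# Rebind the report so its live connection points at the shared dataset
# published for this environment, rather than whatever it was built against.
response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{REPORT_WORKSPACE_ID}"
    f"/reports/{REPORT_ID}/Rebind",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"datasetId": SHARED_DATASET_ID},
)
response.raise_for_status()
print("Report rebound to the shared dataset")
```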

Creating an Azure DevOps YAML Pipeline for SQL Server Deploys

Olivier Van Steenlandt updates to the new Azure DevOps model:

In one of my previous blog posts, I used the SQL Server database deploy task to deploy my DACPAC to SQL Server. Unfortunately, this task became deprecated in Release Pipelines. In this blog post, I would like to share the alternative.

Additionally, we will be moving from a Classic Release pipeline to a YAML pipeline. The YAML pipeline will be responsible for building and deploying our Database Projects.

Click through for the walkthrough.
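
Olivier's post shows the YAML pipeline itself; as a rough stand-in for what those build and deploy steps do (not the post's actual tasks), this sketch builds an SDK-style database project to a dacpac with dotnet and publishes it with SqlPackage. Project path, server, and database are placeholders.

```python
import subprocess

# Placeholders -- an SDK-style SQL database project and a target SQL Server.
PROJECT = "src/MyDatabase/MyDatabase.sqlproj"
DACPAC = "src/MyDatabase/bin/Release/MyDatabase.dacpac"
CONNECTION = "Server=<server>;Database=<database>;Authentication=Active Directory Default;"

# Build step: compiles the project and produces the dacpac artifact.
subprocess.run(["dotnet", "build", PROJECT, "--configuration", "Release"], check=True)

# Deploy step: SqlPackage publishes the dacpac to the target database --
# one way to replace the deprecated SQL Server database deploy task.
subprocess.run(
    [
        "sqlpackage",
        "/Action:Publish",
        f"/SourceFile:{DACPAC}",
        f"/TargetConnectionString:{CONNECTION}",
    ],
    check=True,
)
```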

Deploying to Multiple Power BI Dataset Environments

Richard Swinbank configures some deployments:

In earlier posts in this series, I talked about developing and deploying standalone Power BI datasets and automating report deployment into different environments. I’ll bring together those approaches in this post, to enable deployment of shared datasets into multiple environments. This has consequences for automated report deployment, and I’ll take a look at that too.

Read the whole thing.
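
A recurring step when the same dataset lands in dev, test, and production workspaces is pointing each copy at its own source. As a rough illustration (not Richard's code), the Power BI REST API can update a dataset's parameters per environment; the workspace IDs, dataset IDs, parameter name, and server names here are all placeholders.

```python
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"  # placeholder

# Placeholder mapping of environment -> (workspace, dataset, source server).
ENVIRONMENTS = {
    "test": ("<test-workspace-guid>", "<test-dataset-guid>", "sql-test.contoso.com"),
    "prod": ("<prod-workspace-guid>", "<prod-dataset-guid>", "sql-prod.contoso.com"),
}

def point_dataset_at_environment(env: str) -> None:
    """Update the deployed dataset's ServerName parameter for one environment."""
    workspace_id, dataset_id, server = ENVIRONMENTS[env]
    response = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
        f"/datasets/{dataset_id}/Default.UpdateParameters",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"updateDetails": [{"name": "ServerName", "newValue": server}]},
    )
    response.raise_for_status()

for environment in ENVIRONMENTS:
    point_dataset_at_environment(environment)
```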

Postgres Change Management Rollbacks

Grant Fritchey explains why stateful systems are difficult to roll back:

The invitation this month for #PGSqlPhriday comes from Dian Fay. The topic is pretty simple: database change management. Now, I may have, once or twice, spoken about database change management, database DevOps, automating deployments, and all that sort of thing. Maybe. Once or twice.

OK. This is my topic.

I’ve got some great examples on taking changes from the schema on your PostgreSQL databases and then deploying them. All the technical stuff you could want. However, I don’t want to talk about that today. Instead, I want to talk about something really important, the concept of rollbacks when it comes to database deployments.

I completely agree with Grant’s description of the pain and his recommendation. With stateful systems, roll forward, not backward.

Thoughts on Postgres File Layout and Migration

Dian Fay shares some advice:

I’ve used several migration frameworks in my time. Most have been variations on a common theme dating back lo these past fifteen-twenty years: an ordered directory of SQL scripts with an in-database registry table recording those which have been executed. The good ones checksum each script and validate them every run to make sure nobody’s trying to change the old files out from under you. But I’ve run into three so far, and used two in production, that do something different. Each revolves around a central idea that sets it apart and makes developing and deploying changes easier, faster, or better-organized than its competition — provided you’re able to work within the assumptions and constraints that idea implies.

Read on for thoughts about three tools: sqitch, graphile-migrate, and migra.
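
To make the "common theme" Dian describes concrete, here is a minimal sketch of that classic pattern: apply SQL scripts in order, record each one in a registry table, and checksum the files so an already-applied script can't be silently rewritten. It assumes psycopg2 and a local Postgres connection, and it is illustrative only; it isn't how sqitch, graphile-migrate, or migra work.

```python
import hashlib
import pathlib

import psycopg2

REGISTRY_DDL = """
CREATE TABLE IF NOT EXISTS schema_migrations (
    filename   text PRIMARY KEY,
    checksum   text NOT NULL,
    applied_at timestamptz NOT NULL DEFAULT now()
);
"""

def migrate(dsn: str, migrations_dir: str = "migrations") -> None:
    conn = psycopg2.connect(dsn)
    with conn, conn.cursor() as cur:
        cur.execute(REGISTRY_DDL)
        cur.execute("SELECT filename, checksum FROM schema_migrations;")
        applied = dict(cur.fetchall())

        # Ordered directory of SQL scripts: 001_..., 002_..., and so on.
        for script in sorted(pathlib.Path(migrations_dir).glob("*.sql")):
            sql = script.read_text()
            checksum = hashlib.sha256(sql.encode()).hexdigest()

            if script.name in applied:
                # Validate that nobody changed a script that already ran.
                if applied[script.name] != checksum:
                    raise RuntimeError(f"checksum mismatch for {script.name}")
                continue

            # New script: run it and record it in the registry table.
            cur.execute(sql)
            cur.execute(
                "INSERT INTO schema_migrations (filename, checksum) VALUES (%s, %s);",
                (script.name, checksum),
            )
    conn.close()

if __name__ == "__main__":
    migrate("dbname=app user=postgres host=localhost")  # placeholder DSN
```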
