Press "Enter" to skip to content

Category: Deployment

CI/CD with Azure Synapse Link for SQL Server 2022

Kevin Chant gives us the whole story:

In this post I want to show you a complete CI/CD experience for Azure Synapse Link for SQL Server 2022 tables, using a YAML pipeline in Azure DevOps, including how to automatically stop and start the link in the pipeline.

In a previous post I showed an easier way to perform CI/CD for Azure Synapse Link for SQL Server 2022, where you only need to stop the link, update the SQL Server database, and afterwards start the link again.

However, the best CI/CD solutions are the ones where you do not do any manual work at all. This includes stopping and starting the link.

And that’s just what Kevin gives us.
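For flavor, the stop/deploy/start flow at the heart of the pipeline looks roughly like the PowerShell sketch below. It assumes the Az.Synapse module's Stop-AzSynapseLinkConnection and Start-AzSynapseLinkConnection cmdlets and uses placeholder workspace and link connection names; Kevin's post has the real YAML pipeline and tasks.

    # Minimal sketch, not Kevin's pipeline: pause the link, deploy, resume the link.
    # Assumes the Az.Synapse module is installed and you are signed in with Connect-AzAccount.
    Import-Module Az.Synapse

    $workspaceName = 'myworkspace'        # placeholder Synapse workspace name
    $linkName      = 'mylinkconnection'   # placeholder link connection name

    # Stop the link connection before changing the source database
    Stop-AzSynapseLinkConnection -WorkspaceName $workspaceName -Name $linkName

    # ...deploy the SQL Server 2022 database changes here (e.g. publish a dacpac)...

    # Start the link connection again once the deployment has finished
    Start-AzSynapseLinkConnection -WorkspaceName $workspaceName -Name $linkName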


Dataset Changes while Deploying in Power BI

Marc Lelijveld investigates a what-if scenario:

One of the topics discussed during the session is the effect of deployments on datasets when using native deployment pipelines in the Power BI service. Deployment Pipelines only deploy metadata from the data model; however, specific changes might have an unwanted effect on the data in the dataset in the target stage.

In this blog post, I will further elaborate on several specific use cases and the effect on your dataset in the target stage.

Read on for the results of three separate tests.


CI/CD for Synapse Link for SQL Server 2022

Kevin Chant makes some changes:

In another post I showed how you can use CI/CD to update both ends of Azure Synapse Link for SQL Server 2022 using Azure DevOps, allowing you to update both a SQL Server 2022 database and an Azure Synapse Analytics dedicated SQL pool in the same deployment pipeline.

By my own admission, that method can become complex. Plus, I showed some more advanced concepts in that post. With this in mind, I have decided to cover an easier way in this post.

Read on for the simpler technique.


Rebuilding a Dedicated SQL Pool via Azure DevOps

Sarath Sasidharan clones an Azure Synapse Analytics dedicated SQL pool:

There are many scenarios where you want to create a new Synapse dedicated SQL pool environment based on an existing Synapse dedicated SQL pool environment. This may be required when you need to create a development or test environment based on your production environment by copying complete schemas without copying the data.

Note that this process won’t move the data itself—given that you’re starting with terabytes for an effective dedicated SQL pool, trying to create a bacpac would be an exercise in misery.
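As a rough illustration of the schema-only approach (not necessarily Sarath's exact steps), SqlPackage can extract a dacpac without table data and publish it to the new pool; the server and database names below are placeholders.

    # Minimal sketch: copy only the schema of a dedicated SQL pool to a new, empty pool.
    # SqlPackage's Extract action captures schema only unless you explicitly request table data.

    # 1. Extract a schema-only dacpac from the existing dedicated SQL pool
    & SqlPackage /Action:Extract `
        "/SourceConnectionString:Server=source-ws.sql.azuresynapse.net;Database=SourcePool;Authentication=Active Directory Default;" `
        "/TargetFile:SourcePool.dacpac"

    # 2. Publish that dacpac to the new dedicated SQL pool
    & SqlPackage /Action:Publish `
        "/SourceFile:SourcePool.dacpac" `
        "/TargetConnectionString:Server=target-ws.sql.azuresynapse.net;Database=TargetPool;Authentication=Active Directory Default;"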


Releasing a Tabular Model without Users or Roles

Olivier Van Steenlandt hit a deployment problem:

A couple of weeks ago my team and I ran into an issue with SQL Server Analysis Services (SSAS). Due to a network split between companies, we were no longer able to manage access to our SSAS Tabular Model. Since deploying a Tabular Model using Visual Studio also overwrites members and roles, we needed to find a valid alternative for executing our deployments, manually at first and automated in the end.

Read on to see how they used Azure DevOps pipelines to solve the issue.


dbops PowerShell Module

Kevin Chant looks at a useful PowerShell module:

Before covering the dbops PowerShell module I want to quickly cover DbUp.

DbUp is a .NET library that you can use to do migration-based deployments. It is open-source and is licensed under the MIT license, which you can read about in the DbUp license file.

According to the official list of supported databases, it allows you to do migration-based deployments to various databases, such as SQL Server and MySQL. As you will discover later in this post, it also works with a newer Azure service.

DbUp has been on my to-learn list for a little while, though I haven’t had a chance to dig into it yet.
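If you want to kick the tires before reading Kevin's post, a dbops deployment comes down to a couple of cmdlets. The cmdlet and parameter names below are as I recall them from the dbops documentation, and the instance, database, and script paths are placeholders.

    # Minimal sketch of a migration-based deployment with dbops (which wraps DbUp).
    Install-Module dbops -Scope CurrentUser

    # Bundle the numbered migration scripts into a deployment package
    New-DBOPackage -Path .\MyDatabase.zip -ScriptPath .\migrations\*.sql -Force

    # Deploy the package; scripts already recorded in DbUp's journal table are skipped
    Install-DBOPackage -Path .\MyDatabase.zip -SqlInstance 'localhost' -Database 'MyDatabase'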


Deploying an Arc-Enabled SQL Managed Instance

Warwick Rudd continues a series on Azure Arc-enabled data services:

Now that we have our Azure Arc-enabled Data Controller configured and available, we can deploy our first Arc-enabled SQL Managed Instance into our environment. As previously mentioned, the type of configuration you chose for your Arc-enabled Data Controller (directly connected or indirectly connected mode) dictates the approach available to set up and configure your Arc-enabled SQL Managed Instance.

Click through for a step-by-step guide.
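For a sense of what the CLI route looks like (the post walks through the options in much more detail), creating an instance against the Kubernetes cluster in indirectly connected mode is a single az command. The names, namespace, and sizing below are placeholders.

    # Minimal sketch: create an Arc-enabled SQL Managed Instance with the az CLI
    # in indirectly connected mode (talking directly to the Kubernetes cluster).
    az sql mi-arc create `
        --name arc-sqlmi-01 `
        --k8s-namespace arc-data `
        --use-k8s `
        --storage-class-data managed-premium `
        --cores-limit 4 `
        --memory-limit 8Gi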


A Primer on Azure Arc-Enabled Data Services

Warwick Rudd has a four-parter on Azure Arc-Enabled Data Services. Part 1 sets the stage:

Utilising Azure Arc-enabled data services provides you with the ability to take advantage of Azure data services (SQL Server, Azure SQL Managed Instance, PostgreSQL) in a hybrid environment. This offering reduces your administrative effort in managing and maintaining your data services while giving you the same look and feel as if you were running in the Azure Cloud.

Part 2 looks at the Data Controller:

The Azure Arc Data Controller is a Kubernetes operator that performs all of the orchestration to ensure you achieve your desired state. It is the main component in the Azure Arc infrastructure, linking the data services running on your Arc-enabled hardware, whether located on-premises, in Azure, or in any other public cloud data center, with your Azure subscription.

The Arc data controller allows you to deploy, manage, secure, and monitor your deployed data services estate using Azure Data Studio or the Azure Portal (the latter only for directly connected mode deployments), while giving you the same experience as if you were managing your data services from inside the Azure Portal.

Part 3 deploys a Data Controller:

As previously mentioned, there are two types of deployment available for your Arc Data Controller. In this post, we are going to have a look at deploying the Arc Data Controller using the directly connected mode.

For a directly connected Arc Data Controller, we have direct connectivity to our Azure subscription. With this in mind, there are several options, as we previously discussed, for how to deploy the data controller. For this post, we are using the portal deployment method.

Finally, Part 4 covers management options:

With ADS open and running, you can create connections to Arc Data Controllers the same way you can with instances of SQL Server. In ADS, the Connections area has a section specific to Arc Data Controllers.

Check out all four posts.
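Warwick deploys the directly connected Data Controller through the portal; for comparison, the equivalent az CLI call looks roughly like the sketch below, with placeholder names throughout, and the parameter set is worth double-checking against the current az arcdata documentation.

    # Minimal sketch: create a directly connected Arc Data Controller from the CLI
    # instead of the portal. All names, the subscription, and the profile are placeholders.
    az arcdata dc create `
        --name arc-dc-01 `
        --connectivity-mode direct `
        --subscription 00000000-0000-0000-0000-000000000000 `
        --resource-group rg-arc-data `
        --location australiaeast `
        --custom-location arc-custom-location `
        --profile-name azure-arc-aks-premium-storage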


Hosting an App on RStudio Connect

Liam Kalita wraps up a series:

So far, we have seen how to create an app using ReactJS and a Plumber API. In part 3, we will show you how to host the application on RStudio Connect (RSC)!

When it comes to hosting the application on RSC, we will set the content URL for both the app and the API so that they are in the same domain and won't have this CORS issue.

Read the whole thing.


Updating Synapse Linked SQL Servers with Azure DevOps

Kevin Chant makes a change:

This post covers how to update both ends of Azure Synapse Link for SQL Server 2022 using Azure DevOps, as shown at the Data Toboggan conference.

By the end of this post you will know how to deploy database updates to both the SQL Server database and the Azure Synapse dedicated SQL Pool that are used as part of Azure Synapse Link for SQL Server 2022, using a pipeline in Azure DevOps to keep them consistent.

Click through for the process.
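Conceptually, the "both ends" part boils down to publishing two dacpacs in the same pipeline job, one to the SQL Server 2022 database and one to the dedicated SQL pool; the sketch below uses placeholder file names and connection strings rather than Kevin's actual pipeline tasks.

    # Minimal sketch: update both ends of Azure Synapse Link for SQL Server 2022
    # by publishing a dacpac to each side from the same pipeline job.

    # Update the SQL Server 2022 database on the source side of the link
    & SqlPackage /Action:Publish `
        "/SourceFile:SqlServerDatabase.dacpac" `
        "/TargetConnectionString:Server=mysqlserver2022;Database=SourceDb;Integrated Security=True;"

    # Update the Azure Synapse dedicated SQL pool on the target side of the link
    & SqlPackage /Action:Publish `
        "/SourceFile:DedicatedSqlPool.dacpac" `
        "/TargetConnectionString:Server=myworkspace.sql.azuresynapse.net;Database=TargetPool;Authentication=Active Directory Default;"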
