
Category: Cloud

Failover Groups in Azure SQL Database

Mika Sutinen looks at some interesting functionality:

One of the interesting features in Azure SQL Database is Failover Groups. This feature allows you to manage replication of an Azure SQL database, or a group of databases, to another logical server. The reason I’ve bolded manage replication is that the replication itself is handled by active geo-replication, which is also a feature of Azure SQL Database.

Read on to see how these are different and why you might want to use failover groups.
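
Since a failover group sits on top of active geo-replication, you can also see the underlying replication link from inside the database itself. As a quick sketch (not from the post, and the column list is from memory, so check the DMV documentation), something like this on the primary shows the partner and how far behind it is:

-- Run against the primary database in the failover group.
-- The failover group manages this link; geo-replication does the actual work.
SELECT partner_server,
       partner_database,
       role_desc,                 -- PRIMARY or SECONDARY for this side of the link
       replication_state_desc,    -- e.g. CATCH_UP once the secondary is in sync
       replication_lag_sec,       -- how far the secondary trails the primary
       last_replication           -- timestamp of the last acknowledged transaction
FROM sys.dm_geo_replication_link_status;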


Migrating Azure PostgreSQL Single Server to Flex via pg_dump

Josephine Bush changes server type:

This is more complicated than using the Azure Migration method, but because it’s been maxed out on resources for the last week in the east regions (and possibly central), and who knows when they will fix it, I had to resort to other methods. I’m getting on Flex sooner rather than later. I want to get this over with and get to those performance improvements and better features. I will preface this all by saying that if you have big databases, this may not be the right path for you. Look into streaming replication, or wait for Microsoft to fix their migration tool and do an online migration with it. Also, if you don’t have strong Postgres skills, this is far more complicated than the migration tool in Azure. Far more complicated.

Click through for the step-by-step instructions.


Dealing with Schema Drift in Azure Data Factory

Rayis Imayev deals with change:

I will jump straight to the problem statement without a “boring” introduction, which, in a sense, already feels like an opening statement.

Moving data between two or more endpoints is a common task. Sometimes it’s as simple as migrating data from one place to another. Other times, it’s a request to copy specific documents from source environments. In more complex cases, you might need to consolidate multiple data files into the same destination, such as loading several separate files into a single database table.

This was the bête noire of SSIS’s existence. Minor metadata changes would cause the entire system to break down.


Backup to URL via Managed Identity in SQL Server 2022

Joey D’Antoni doesn’t trust user logins:

Backing up databases to the cloud is not a new thing. Microsoft introduced the BACKUP TO URL functionality in SQL Server 2012 SP1 CU2. I’m not going to tell you how long ago that was, but it wasn’t last month, and Microsoft recently celebrated the 15th anniversary of Azure, so you can get an idea. When the feature started, it was minimal: you could only back up a database of up to a single terabyte and couldn’t stripe over multiple files. Additionally, you had to use the access key to the storage account, which gave complete control over the storage account, and that wasn’t a good thing.

Read on for a quick overview of the feature and guidance on how it all works.
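
To give a feel for how the SQL Server 2022 approach looks, here is a minimal sketch (the storage account, container, and database names are placeholders, and the instance’s managed identity needs the Storage Blob Data Contributor role on the storage account). The credential’s identity is simply 'Managed Identity' instead of an access key or SAS token:

-- The credential name must match the container URL.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
    WITH IDENTITY = 'Managed Identity';

-- Striping across multiple URLs is supported these days.
BACKUP DATABASE [SalesDb]
TO  URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDb_1.bak',
    URL = 'https://mystorageacct.blob.core.windows.net/sqlbackups/SalesDb_2.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;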


Receiving Notification when a Microsoft Fabric Notebook Fails

Gilbert Quevauvilliers gets an e-mail:

What I have found is that when I create a pipeline in Microsoft Fabric that uses a notebook and there is an error with the notebook, I do not get an alert that the notebook has failed.

This has happened to me in the past, and I have found that the pattern below works consistently to notify me of errors.

In this blog post I will show you how I get notified when a notebook fails in a pipeline.

Read on to learn how.


Azure Elastic Jobs to Run PowerShell and T-SQL

Josephine Bush kicks off a job:

I’ve covered how to create Elastic Jobs in the portal (this one is important to read if you aren’t familiar with elastic jobs already), with Terraform, and with Bicep. Now, I’ll cover how to create them and their associated objects with PowerShell. Don’t do this in prod to start. Always test in a lower environment first.

Click through for the process, as well as the script.
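
Josephine builds everything in PowerShell; the same objects can also be created with the jobs stored procedures inside the job database. A rough T-SQL sketch for comparison, with the server, database, and job names all invented for illustration (verify the procedure parameters against the documentation before leaning on this):

-- Run in the elastic job database attached to the job agent.
EXEC jobs.sp_add_target_group @target_group_name = N'DemoTargets';

EXEC jobs.sp_add_target_group_member
     @target_group_name = N'DemoTargets',
     @target_type       = N'SqlDatabase',
     @server_name       = N'demo-sqlserver.database.windows.net',
     @database_name     = N'DemoDb';

EXEC jobs.sp_add_job
     @job_name    = N'NightlyMaintenance',
     @description = N'Demo job created via the jobs stored procedures';

EXEC jobs.sp_add_jobstep
     @job_name = N'NightlyMaintenance',
     @command  = N'EXEC dbo.IndexOptimize;';   -- whatever T-SQL the step should run

-- Kick it off once, outside of any schedule.
EXEC jobs.sp_start_job @job_name = N'NightlyMaintenance';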


Using the Microsoft Fabric Capacity Metrics App

Reitse Eskens uses a tool:

In a number of previous blogs and in my session on load testing Microsoft Fabric, I’ve always questioned the metrics app, and one specific point is the timepoint detail. When you click on a graph, you get the option to go to the timepoint detail and read more.

This is all fun and games, but looking at the list of active processes at that specific point in time, you’ll quickly see processes that fall way outside the selected point in time. For me, it rendered this feature useless because it messed up the things I wanted to see.

Read on to see the right way to handle this app.


Controlling Execution Flow in Fabric Data Pipelines

Reza Rad has everything under control:

In Microsoft Fabric, the Data Factory is the workload for ETL and data integration, and the Data Pipeline is a component in that workload for orchestrating the execution flow. There are activities in the pipeline, and you can define in which order you want the activities to run. In this article and video, you will learn about the execution order and output states in Data Pipeline and how they can be used in real-world scenarios of data integration.

The mechanisms here are fundamentally similar to what we’ve had in Azure Data Factory (obviously) and SQL Server Integration Services.


Billing for SQL Database in Microsoft Fabric

Amar Digamber Patil makes an announcement:

Since SQL database is a native item in Fabric, it utilizes Fabric capacity units like other Fabric workloads. Compute charges apply only when the database is actively used, so you only consume what you need. Storage is billed separately on a monthly basis, as are automatic backups, which are retained for seven days.

Billing for compute usage and data storage for SQL databases in Fabric will commence after February 1st.

Click through for more details, including links to further information regarding billing and monitoring.


Object Ownership in Databricks

Chen Hirsh shares a tale of woe:

Have you ever made a change in your system and immediately regretted it? A few weeks ago, I did just that while working with a customer on their Databricks platform. Their IT team made some changes, moving a user to another domain. In Databricks, this is considered a new user, so I added the new user and gave him the same permissions as the old user.

And then, without thinking twice, I deleted the old user from Databricks.

Things did not go well from there. Read on to learn what happened, why, and how to avoid this problem in the future.
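
One mitigation worth keeping in your back pocket (a sketch based on Unity Catalog SQL, not from the post; the catalog, schema, and group names are made up): before dropping a user, find what they own and reassign ownership, ideally to a group so a single-user change can’t strand objects.

-- List tables owned by the departing user.
SELECT table_catalog, table_schema, table_name
FROM   system.information_schema.tables
WHERE  table_owner = 'old.user@olddomain.com';

-- Reassign ownership to a group rather than another individual.
ALTER SCHEMA main.sales OWNER TO `data-engineering`;
ALTER TABLE  main.sales.orders OWNER TO `data-engineering`;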
