Press "Enter" to skip to content

Category: Cloud

Configuring Alerts in Azure SQL Managed Instance

Aleksey Vitsko wants an alert:

You have an Azure SQL Managed Instance and you want to set up SQL Server alerts for errors with severity 17-25, similar to what you would do for an on-prem SQL Server. You go to the SQL Server Agent folder in Object Explorer, expand it, and whoops – there is no Alerts folder.

As of the time of writing this article (June 2025), Azure SQL Managed Instance doesn’t have this functionality, and we don’t have any ETA on when it will be implemented. So, how can we set up alerts in Azure SQL MI to notify us when there are issues?

Read on for a workaround and a warning.
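
As a point of comparison, here is roughly the msdb-based setup you would run on an on-prem SQL Server (the thing that has no Alerts folder equivalent in Managed Instance). It's a minimal sketch – the operator name and e-mail address are placeholders, and it is not the workaround from the post itself:

-- Minimal on-prem sketch: severity 17-25 alerts via SQL Server Agent.
-- The operator name and e-mail address are placeholders; Database Mail
-- must already be configured for notifications to be delivered.
USE msdb;
GO

-- An operator to receive the notifications.
EXEC dbo.sp_add_operator
    @name = N'DBA Team',
    @email_address = N'dba-team@example.com';

-- One alert per severity level from 17 through 25.
DECLARE @severity int = 17;
DECLARE @alert_name sysname;

WHILE @severity <= 25
BEGIN
    SET @alert_name = N'Severity ' + CAST(@severity AS nvarchar(2)) + N' Error';

    EXEC dbo.sp_add_alert
        @name = @alert_name,
        @severity = @severity,
        @delay_between_responses = 60,         -- seconds between repeat notifications
        @include_event_description_in = 1;     -- include the error text in the e-mail

    EXEC dbo.sp_add_notification
        @alert_name = @alert_name,
        @operator_name = N'DBA Team',
        @notification_method = 1;              -- 1 = e-mail

    SET @severity += 1;
END;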


Power BI Dataflow Gen1 and Connecting to SQL DB

Koen Verbeeck lays out a warning:

I’m in the process of migrating some legacy stuff at a client, and in their Power BI environment there are still quite a few Power BI Gen1 dataflows. I had migrated an Azure Synapse Dedicated SQL Pool to an Azure SQL DB (much cheaper for their volume of data), and in the dev/test environment all dataflows were switched correctly to the new database.

However, in production, the dataflows only wanted to connect to the Azure SQL DB production database through a gateway. Weird, right? 

Click through for a rundown of the issue, as well as another one Koen ran into regarding Azure Data Lake Storage.


Loading Data from Network-Protected Storage Accounts into OneLake

Matt Basile grabs some data:

AzCopy is a powerful and performant tool for copying data between Azure Storage and Microsoft OneLake, and is the preferred tool for large-scale data movement due to its ease of use and built-in performance optimizations. AzCopy now supports copying data from firewall-enabled Azure Storage accounts into OneLake using trusted workspace access. Now you can use AzCopy to load data from even network-protected storage accounts, letting you effortlessly load data into OneLake without compromising on security or performance.

Click through for an explanation of trusted workspace access, followed by the steps to try it out for yourself.


Azure API Management in front of Databricks and OpenAI

Drew Furgiuele has a follow-up:

A few months ago, I wrote a blog post about using Azure API Management with Databricks Model Serving endpoints. It struck a chord with a lot of people using Databricks on Azure specifically, because more and more people and organizations are trying their damnedest to wrangle all the APIs they use and/or deploy themselves. Recently, I got an email from someone who read it and asked a really good question:

Click through for that question, as well as Drew’s answer.


Using a Child Pipeline Variable in a Parent Pipeline in Fabric Data Factory

Justin Bird passes back some information:

I answered a question on the Fabric community on return variables recently and thought I would expand upon it in a blog post. The question was how to use a variable derived in a child pipeline downstream in the parent pipeline. The person was specifically deriving a JSON object and wanted to iterate on the values in the parent pipeline.

Click through for the solution.


Optimizing Multi-Notebook Jobs in Microsoft Fabric and AWS Glue

Daniel Janik flips a switch:

Are your Azure Fabric pipelines with multiple notebooks running slower than you’d like? Are you paying for more Spark compute time than you should be? The culprit might be a simple setting that’s easy to miss. In this blog post, we’ll dive into the “For pipeline running multiple notebooks” setting in Azure Fabric and explain why enabling it can significantly improve your pipeline’s performance and reduce your costs.

Click through for this, as well as a comparison with AWS Glue and ways to perform something similar there.


What’s Old is New Again: Lakebases

Daniel Janik notes the cyclical nature of things:

For years, the narrative pushed was that traditional relational databases were ill-suited for the scale and complexity of modern BI solutions. The marketing was something like: “Databases don’t belong in BI; use Spark!” We embraced distributed computing frameworks, data lakes, and complex ETL pipelines to move data from operational databases into analytical engines. The idea was to separate transactional workloads from analytical ones to ensure performance and scalability. Spark, with its ability to handle massive datasets and flexible processing, became the darling of the data world.

“Remember, Sully, when I said you don’t need databases anymore?”

“Yeah, Matrix, I remember!”

“I lied.”


Zone Redundancy in Azure SQL Managed Instance

Arun Sirpal explains what zone redundancy is in Azure:

Do you know what happens when you enable zonal redundancy for your SQL managed instance?

Let’s define it first (in the context of the Business Critical tier) – zonal redundancy is achieved by placing compute and storage replicas in three different availability zones and then using the underlying Always On availability group to replicate data changes from the primary instance to standby replicas in the other availability zones.

Availability zones are in the same Azure region, so it works well for high availability but isn’t as good for disaster recovery: if an entire region goes down, zone redundancy won’t help you very much. Also, be aware that you’re paying for what’s running in those three zones because TANSTAAFL.


Copying Azure SQL Databases between Subscriptions

Kenneth Fisher is back in the fight:

I recently had to copy an Azure SQL database (SQL db) from one subscription to an Azure SQL Server instance in another subscription. All of the help I found suggested going to the database and hitting the COPY option. Unfortunately, when I did, I ran into a problem.

Read on for the issue, as well as one way to fix it. The route Kenneth landed on was the same one I ended up going with when I had a similar problem and very limited access to SQL DB on both subscriptions.
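
For broader context, one common way to copy an Azure SQL Database to a different logical server is the T-SQL database copy syntax, run from the master database of the destination server. The sketch below uses placeholder server and database names; cross-subscription copies come with extra restrictions (same tenant, sufficient permissions on both sides), and this is not necessarily the route described in the post:

-- Run against the master database of the DESTINATION logical server.
-- Server and database names are placeholders; the login needs dbmanager
-- (or higher) on the destination and access to the source database.
CREATE DATABASE SalesDb_Copy
    AS COPY OF [source-server].SalesDb;

-- The copy runs asynchronously; check its state from master.
SELECT name, state_desc
FROM sys.databases
WHERE name = N'SalesDb_Copy';

-- Percent complete and errors are exposed in sys.dm_database_copies
-- on the destination server while the copy is in flight.
SELECT *
FROM sys.dm_database_copies;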
