Press "Enter" to skip to content


Updates to Fabric Data Factory

Abhishek Narain has a list of updates:

Workspace Private Link Support for Data Factory (Preview): Microsoft Fabric enables secure data integration through Private Link support in Dataflows Gen2, Pipelines, and Copy jobs. This ensures that inbound data access remains isolated and compliant within protected workspaces. By leveraging VNet data gateways, organizations can securely connect to data sources across Private Link-enabled environments—eliminating exposure to public networks and reinforcing enterprise-grade security for sensitive data operations.

Most of these are security-related updates, with a mixture of things now GA, things currently in preview, and a pair of items coming soon.


Set MAXDOP in Azure SQL DB

Brent Ozar has a public service announcement:

In Azure SQL DB, you set max degrees of parallelism at the database level. You right-click on the database, go into properties, and set the MAXDOP number.

I say “you” because it really is “you” – this is on you, bucko. Microsoft’s magical self-tuning database doesn’t do this for you.

And where this backfires, badly, is that Azure SQL DB has much, much lower caps on the maximum number of worker threads your database can consume before it gets cut off. 

Click through to see what kind of error message you get and just how low these limits are.
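
For reference, the database-level setting Brent describes can also be changed with T-SQL rather than through the SSMS properties dialog. A minimal sketch, with the value 2 purely as an illustration rather than a recommendation:

-- Check the current database-scoped setting
SELECT [name], [value]
FROM sys.database_scoped_configurations
WHERE [name] = N'MAXDOP';

-- Change it for this database (2 is only an example value)
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;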


A Primer on GitHub Actions

Temidayo Omoniyi provides an introduction to GitHub Actions workflows:

In today’s fast-paced development cycles, the pressure to ship high-quality code quickly is greater than ever. However, several tedious, labor-intensive, and error-prone procedures stand between producing code and releasing it to consumers, and they frequently slow teams down.

Every developer faces these common issues:

  • Repetitive Checks: Before each push, unit tests, linters, and build scripts are manually executed.
  • Inconsistent Environments: Code that passes locally in one environment but fails in another is known as the “it works on my machine” dilemma.
  • High-Stakes Deployments: Deploying code by following a meticulous, manual checklist in which even one mistake could result in downtime.
  • Slow Feedback Loops: The review process is prolonged when you wait for a coworker to pull your branch, run tests, and provide comments on a pull request.

I like GitHub Actions workflows a lot. Once you’ve put together a workflow or two, it’s pretty easy to see what’s going on. On top of that, there is a huge amount of functionality and an enormous number of third-party templates to extend it even further.
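
If you have not seen one before, here is a minimal sketch of a workflow file, which lives under .github/workflows/ in the repository. The lint and test commands are placeholders for whatever your project actually runs:

name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository so later steps can see the code
      - uses: actions/checkout@v4
      # The same checks you would otherwise run by hand before every push
      - name: Lint
        run: make lint   # placeholder command
      - name: Unit tests
        run: make test   # placeholder command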


Contrasting Microsoft Fabric, Databricks, and Snowflake

Ron L’Esteve builds a comparison chart:

Databricks and Microsoft Fabric are two of the most innovative unified data and analytics intelligence platforms available on the market today. While similar, each brings its own advantages and limitations. Snowflake joins these two powerhouses when data warehousing decisions come into play. Sometimes it is challenging to decide which one to pick for your organization’s needs. This tip will help uncover when to choose Databricks vs. Fabric vs. Snowflake.

When it comes to Spark performance, Databricks is always going to win—they keep most of their optimizations to themselves, so anyone starting from open-source Spark is at a disadvantage. Otherwise, it’s a bit of a slugfest between Fabric and Databricks. At the end, Ron also brings in Snowflake, focusing on the data warehousing side of things for that three-way comparison. I don’t think there’s a clear winner among the three, and on net, that’s probably a good thing, as it forces the groups to continue competing.


Installing SQL Server 2025 RC0 on an Azure VM

Koen Verbeeck performs an installation:

I already had a virtual machine in Azure, running SQL Server 2025 CTP 2.0 (which uses a pre-made image). I explain how to set that one up in the article Install SQL Server 2025 Demo Environment in Azure. But I wanted to use the latest preview, which is Release Candidate 0 at the time of writing. Unfortunately, there’s no image available (yet?), so I had to do it the old-school way: installing SQL Server manually.

Read on to see how to do it, as well as a few extra things necessary to make everything work well in Azure.
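
As an aside, if you would rather script that kind of manual installation than click through the setup wizard, SQL Server's setup.exe also supports an unattended mode. This is a rough sketch only: the instance name and sysadmin account are placeholders, and the feature list should match what you actually need.

setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE ^
    /INSTANCENAME=MSSQLSERVER ^
    /SQLSYSADMINACCOUNTS="<vm-name>\<admin-account>" ^
    /IACCEPTSQLSERVERLICENSETERMS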


Architectural Guidance for IoT Deployments in Azure

Bhimraj Ghadge shares some tips:

Edge computing, a strategy for computing on location where data is collected or used, allows IoT data to be gathered and processed at the edge, rather than sending the data back to a data center or cloud. Together, IoT and edge computing are a powerful way to rapidly analyze data in real time.

In this tutorial, I lay out the components and considerations for designing IoT solutions based on Azure IoT and related services.

Read on for an overview of IoT components in Azure, as well as several things to keep in mind during systems design and implementation.


Configuring Alerts in Azure SQL Managed Instance

Aleksey Vitsko wants an alert:

You have an Azure SQL Managed Instance and you want to set up SQL Server alerts for errors with severity 17-25, similar to what you would do for an on-prem SQL Server. You go to the SQL Server Agent folder in Object Explorer, expand it, and whoops – there is no Alerts folder.

As of the time of writing this article (June 2025), Azure SQL Managed Instance doesn't have this functionality, and we don't have any ETA on when it will be implemented. So, how can we set up alerts in Azure SQL MI to notify us when there are issues?

Read on for a workaround and a warning.
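
For context, the Object Explorer steps Aleksey describes map to a couple of msdb procedures on a regular SQL Server (on-premises or on a VM). This is the piece that is missing in Managed Instance, shown here only as a sketch of what any workaround has to replace; it assumes an operator named "DBA Team" already exists:

-- Works on-prem / on a SQL Server VM, but not in Azure SQL Managed Instance
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 17 errors',
    @severity = 17,
    @delay_between_responses = 60,
    @include_event_description_in = 1;   -- 1 = include the error text in the email

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 17 errors',
    @operator_name = N'DBA Team',
    @notification_method = 1;            -- 1 = email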


Power BI Dataflow Gen1 and Connecting to SQL DB

Koen Verbeeck lays out a warning:

I’m in the process of migrating some legacy stuff at a client, and in their Power BI environment there are still quite a few Power BI dataflows Gen1. I had migrated an Azure Synapse Dedicated SQL Pool to an Azure SQL DB (much cheaper for their volume of data), and in the dev/test environment all dataflows were switched correctly to the new database.

However, in production, the dataflows only wanted to connect to the Azure SQL DB production database through a gateway. Weird, right? 

Click through for a rundown of the issue, as well as another one Koen ran into regarding Azure Data Lake Storage.


Loading Data from Network-Protected Storage Accounts into OneLake

Matt Basile grabs some data:

AzCopy is a powerful and performant tool for copying data between Azure Storage and Microsoft OneLake, and is the preferred tool for large-scale data movement due to its ease of use and built-in performance optimizations. AzCopy now supports copying data from firewall-enabled Azure Storage accounts into OneLake using trusted workspace access. Now you can use AzCopy to load data from even network-protected storage accounts, letting you effortlessly load data into OneLake without compromising on security or performance.

Click through for an explanation of trusted workspace access, followed by the steps to try it out for yourself.
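
If you want to try it yourself, the copy boils down to a single azcopy invocation. Here is a rough sketch with placeholder names (storage account, container, workspace, and lakehouse all need substituting), assuming you have already signed in with azcopy login and enabled trusted workspace access on the Fabric workspace:

azcopy copy ^
    "https://<storageaccount>.blob.core.windows.net/<container>/<path>" ^
    "https://onelake.blob.fabric.microsoft.com/<workspace>/<lakehouse>.Lakehouse/Files/<path>" ^
    --recursive ^
    --trusted-microsoft-suffixes "fabric.microsoft.com"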


Azure API Management in front of Databricks and OpenAI

Drew Furgiuele has a follow-up:

A few months ago, I wrote a blog post about using Azure API Management with Databricks Model Serving endpoints. It struck a chord with a lot of people using Databricks on Azure specifically, because more and more people and organizations are trying their damndest to wrangle all the APIs they use and/or deploy themselves. Recently, I got an email from someone who read it and asked a really good question:

Click through for that question, as well as Drew’s answer.
