

Updates to Fabric Eventstream

Alicia Li and Arindam Chatterjee share some updates:

Over the first quarter of 2026, Fabric Eventstreams shipped meaningful improvements across three themes that have repeatedly come up in feedback from our broad community of customers and partners: broader connectivity, richer real-time processing, and secure enterprise‑ready networking and operations.

This post highlights some of the most impactful new Eventstreams-related features and capabilities delivered between January and March 2026.

Click through to see what’s new. Some of this is GA, though a good amount is in preview.


Mirroring SQL Server 2025 to Microsoft Fabric

Reitse Eskens digs in:

Maybe you’ve read my blog post in the DP-700 certification series about mirroring data. You can find that one here. This blog will be longer and more technical. And involve SQL Server. To make reading a little easier, I’ve listed the Microsoft Learn pages at the end of this blog post.

While writing the DP-700 post, I realised I wanted to dig a little deeper. Not only because I’m presenting a session on this subject, but also to learn more about the processes behind it. And, there’s SQL Server involved, something I still have a soft spot for in my heart. Or maybe even more than that.

The fact that your SQL Server instance has to be Arc-enabled is a bit annoying.


Shortcut Transformations now GA in Microsoft Fabric

Pernal Shah transforms some data:

Organizations today manage data across multiple storage systems, often in formats like CSV, Parquet, and JSON. While this data is readily available, turning it into analytics-ready tables typically requires building and maintaining complex ETL pipelines.

Shortcut transformations remove that complexity.

With Shortcut transformations, you can convert structured files referenced through OneLake shortcuts into Delta tables without building pipelines or writing code.

This currently works for CSV, Parquet, and JSON data and does cut out a very common step for raw-layer transformation.


ANY_VALUE() in Fabric Data Warehouse

Jovan Popovic notes a feature going GA:

Fabric Data Warehouse now supports the ANY_VALUE() aggregate, making it easier to write readable, efficient T-SQL when you want to group by a key but still return descriptive columns that are functionally the same for every row in the group.

Right now, this is only available in the Fabric Data Warehouse, so no Azure SQL DB, Managed Instance, or box product support at this time.
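A quick sketch of the pattern Jovan describes, with hypothetical table and column names (`dbo.Orders`, `CustomerID`, `CustomerName` are illustrative, not from the announcement):

```sql
-- Without ANY_VALUE(), CustomerName would have to appear in the GROUP BY
-- or be wrapped in MAX()/MIN(), obscuring the intent. Since CustomerName
-- is the same for every row sharing a CustomerID, ANY_VALUE() says so directly.
SELECT
    o.CustomerID,
    ANY_VALUE(o.CustomerName) AS CustomerName,
    COUNT(*)                  AS OrderCount,
    SUM(o.OrderTotal)         AS TotalSpent
FROM dbo.Orders AS o
GROUP BY o.CustomerID;
```

The readability win is that the query states the assumption (this column is functionally dependent on the grouping key) instead of hiding it behind an arbitrary aggregate.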


Role-Playing Dimensions and Direct Lake Semantic Models

Chris Webb finds a workaround to something that used to work:

Back in September 2024 I wrote a blog post on how to create multiple copies of the same dimension in a Direct Lake semantic model without creating copies of the underlying Delta table. Not long after that I started getting comments that people who tried following my instructions were getting errors, and while some bugs were fixed others remained. After asking around I have a workaround (thank you Kevin Moore) that will avoid all those errors, so while we’re waiting for the remaining fixes here are the details of the workaround.

I look at the set of steps needed to do this and say there has to be a better way.


Microsoft Fabric ETL and the Air Traffic Controller

Jens Vestergaard rethinks a metaphor:

In February 2025 I wrote about building an event-driven ETL system in Microsoft Fabric. The metaphor was air traffic control: notebooks as flights, Azure Service Bus as the control tower, the Bronze/Silver/Gold medallion layers as the runway sequence. The whole system existed because Fabric has core-based execution limits that throttle how many Spark jobs run simultaneously on a given capacity SKU.

The post was about working around a constraint. You could not just fire all your notebooks at once. You needed something to manage the queue.

More than a year on, it is worth being honest about what held up and what has changed.

Read on to see what has changed in this past year and how Jens thinks of it today.


Apache Airflow Jobs in Fabric Data Factory

Mark Kromer makes an announcement:

The world of data integration is rapidly evolving, and staying up to date with the latest technologies is crucial for organizations seeking to make the most of their data assets. Available now are the newest innovations in Fabric Data Factory pipelines and Apache Airflow job orchestration, designed to empower data engineers, architects, and analytics professionals with greater efficiency, flexibility, and scalability.

Read on to see what’s newly available, including some preview functionality.


Capacity Overage in Microsoft Fabric

Pankaj Arora has a new ‘give us money’ lever:

Capacity overage is a new opt‑in capability in Microsoft Fabric designed to help organizations keep their workloads running—even during unexpected compute spikes. Now available in preview, this feature allows for automatic billing for excess capacity usage, based on limits you set, instead of throttling operations, ensuring smoother experiences when workloads exceed the limits of your purchased capacity.

I will say that I think it’s reasonable to have the two options of throttling (you went over by 30%, so for a stretch of time you’ll be capped until you get back under the limit) or simply paying. The controversy around this was mostly in the fact that, if you shut off and restart your Fabric capacity, you’d automatically be charged for the overages you created. To that end, providing more options on how to work off that overage debt is useful.
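The throttle-versus-pay tradeoff can be sketched as a toy model. All names and numbers below are hypothetical illustrations of the decision, not Fabric's actual smoothing or billing logic:

```python
# Toy model: when usage exceeds the purchased limit, either bill the
# excess (if the customer opted in and set a cap that covers it) or
# throttle until usage falls back under the limit.
def handle_usage(used_cu: float, limit_cu: float,
                 overage_opt_in: bool, overage_cap_cu: float) -> dict:
    """Return the action taken and any overage billed, in capacity units."""
    excess = max(0.0, used_cu - limit_cu)
    if excess == 0.0:
        return {"action": "run", "billed_overage_cu": 0.0}
    if overage_opt_in and excess <= overage_cap_cu:
        # Pay for the spike instead of slowing workloads down.
        return {"action": "run", "billed_overage_cu": excess}
    # No opt-in, or the spike exceeds the cap you set: throttle instead.
    return {"action": "throttle", "billed_overage_cu": 0.0}

# A 30% spike over a 100-CU limit:
print(handle_usage(130.0, 100.0, overage_opt_in=True, overage_cap_cu=50.0))
# -> {'action': 'run', 'billed_overage_cu': 30.0}
print(handle_usage(130.0, 100.0, overage_opt_in=False, overage_cap_cu=0.0))
# -> {'action': 'throttle', 'billed_overage_cu': 0.0}
```

The cap parameter is the interesting design point: opt-in overage only makes sense if you can bound your exposure, which is why the feature bills "based on limits you set" rather than open-endedly.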


Generating Excel Reports via Fabric Dataflows Gen2

Chris Webb builds a report:

So many cool Fabric features get announced at Fabcon that it’s easy to miss some of them. The fact that you can now not only generate Excel files from Fabric Dataflows Gen2, but that you have so much control over the format that you can use this feature to build simple reports rather than plain old data dumps, is a great example: it was only mentioned halfway through this blog post on new stuff in Dataflows Gen2. Nonetheless, it was the Fabcon feature announcement that got me most excited. This is because it shows how Fabric Dataflows Gen2 have gone beyond being just a way to bring data into Fabric and are now a proper self-service ETL tool where you can extract data from a lot of different sources, transform it using Power Query, and load it to a variety of destinations both inside Fabric and outside it (such as CSV files, Snowflake and yes, Excel).

Click through for an example.
