Press "Enter" to skip to content

Category: Microsoft Fabric

Microsoft Fabric October 2025 Feature Summary

Adam Saxton has a list:

This month’s update delivers key advancements across Microsoft Fabric, including enhanced security with Outbound Access Protection and Workspace-Level Private Link, smarter data engineering features like Adaptive Target File Size, and new integrations such as Data Agent in Lakehouse. Together, these improvements streamline workflows and strengthen data governance for users.

The list doesn’t feel quite as long as the prior couple of months, but there’s still a lot of content on here.

OneLake Security and the Fabric SQL Analytics Endpoint

Freddy Santos takes us through the latest with respect to security in OneLake:

OneLake Security centralizes fine-grained data access for Microsoft Fabric data items and enforces it consistently across engines.
Currently in Preview and opt-in per item, it lets you define roles over tables or folders and optionally add Row-Level Security (RLS) and Column-Level Security (CLS) policies. These definitions govern what users can see across Fabric experiences.

Read on to see what you can do.
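
If you want to poke at the role definitions programmatically rather than through the portal, here is a rough Python sketch against the preview OneLake data access roles REST endpoint. The URL shape, the response envelope, and the workspace/item IDs below are assumptions on my part, so treat this as a starting point and check the official docs before relying on it:

```python
# Rough sketch: list the OneLake data access roles defined on a Fabric item.
# Assumes the preview endpoint workspaces/{workspaceId}/items/{itemId}/dataAccessRoles
# and a "value" array in the response; both are assumptions, not confirmed here.
import requests
from azure.identity import InteractiveBrowserCredential

WORKSPACE_ID = "<workspace-guid>"   # placeholder
ITEM_ID = "<lakehouse-item-guid>"   # placeholder

# Acquire a token for the Fabric REST API.
credential = InteractiveBrowserCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

url = (
    "https://api.fabric.microsoft.com/v1/"
    f"workspaces/{WORKSPACE_ID}/items/{ITEM_ID}/dataAccessRoles"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

# Each role should carry its members plus the table/folder scopes it grants.
for role in resp.json().get("value", []):
    print(role)
```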

Microsoft Fabric Direct Lake Join Index Creation

Phil Seamark explains a recent change:

If you’ve been working with Direct Lake in Microsoft Fabric, you’ll know its magic resides in its ability to load data quickly: it loads data into semantic models from OneLake only when needed, eliminating the overhead of importing. But until recently, the first query on a cold cache might feel sluggish. Why? One reason is that Direct Lake must build a join index, which is added to the model during the first query. This index is a critical structure that maps relationships between tables for efficient lookups.

Earlier, this process was single-threaded and slow, especially on large tables with high cardinality. The good news? That’s changed.

Read on to see how, what a join index is, and what the impact looks like in practice.
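
One practical aside: if cold-cache latency still bites, you can warm a model yourself from a Fabric notebook. Here is a minimal sketch using the semantic-link (sempy) library; the dataset, table, and column names are placeholders, and the idea is simply to run one DAX query that crosses the relationship you care about before real users arrive:

```python
# Minimal cache warm-up sketch from a Fabric notebook: run one DAX query
# that touches the relationship, so the first real user query does not pay
# the column-load / join-index cost on a cold cache. Names are placeholders.
import sempy.fabric as fabric

DATASET = "Sales Model"  # placeholder semantic model name

warmup_dax = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[Year],
    "Total Sales", SUM ( 'Sales'[Amount] )
)
"""

# evaluate_dax sends the query to the model and returns a pandas DataFrame.
df = fabric.evaluate_dax(dataset=DATASET, dax_string=warmup_dax)
print(df.head())
```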

Learning Microsoft Fabric Real-Time Intelligence

Valerie Junk picks up a new skill:

If you are reading this article on my website, chances are you know me from my Power BI content, the videos, articles, tutorials, or downloads, or you came across it on LinkedIn. I want to be upfront: I am a front-end/business person. I create reports that lead to action and help businesses make smarter decisions while building a data-driven strategy.

When I started talking about Fabric Real-Time Intelligence, people were surprised. Some were curious. Others probably wondered what had happened. For me, real-time reports push you to approach design in a completely different way because users need to take action immediately. Decisions happen in the moment, and that changes everything about how you visualize and structure information, so that got me interested!

Read on to see how Valerie picked up KQL as a language, as well as some of the challenges involved. I will say, the Eventhouse is also the fastest mechanism Microsoft has to query large amounts of data in Microsoft Fabric—it beats out the lakehouse and warehouse pretty handily.
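
If you want to try KQL against an Eventhouse from Python rather than the portal, here is a small sketch using the azure-kusto-data package. The cluster URI, database name, and table name are placeholders you would swap for the query URI shown on your own Eventhouse:

```python
# Sketch of querying a Fabric Eventhouse (KQL database) from Python.
# Cluster URI, database, and table are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

CLUSTER = "https://<your-eventhouse>.kusto.fabric.microsoft.com"  # placeholder
DATABASE = "TelemetryDB"                                          # placeholder

# Authenticate with whatever you have handy; Azure CLI login works for testing.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)

# A simple KQL query: events per hour over the last day.
query = """
Events
| where Timestamp > ago(1d)
| summarize EventCount = count() by bin(Timestamp, 1h)
| order by Timestamp asc
"""

response = client.execute(DATABASE, query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["EventCount"])
```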

Automating Power BI Load Testing via Fabric Notebook

Gilbert Quevauvilliers grabs a query:

Load testing is essential when working with Microsoft Fabric capacity. With limited resources, deploying a Power BI report without testing can lead to performance issues, downtime, and frustrated users. In this series, I’ll show you how to automate load testing using Fabric Notebooks, making the process faster, easier, and repeatable.

Inspired by Phil Seamark’s approach, this method eliminates manual complexity and allows you to capture real user queries for accurate testing.

Read on for the first part, in which Gilbert uses the Performance Analyzer to capture query details.
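
To give a flavor of where the series is heading, here is a simplified stand-in (not Gilbert's actual notebook) that replays a captured DAX query with a few parallel simulated users via semantic-link. The dataset name and query are placeholders; you would paste in whatever Performance Analyzer captured:

```python
# Sketch of the core load-test loop: replay one captured DAX query with a
# handful of parallel "users", timing each run. Names and query are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import sempy.fabric as fabric

DATASET = "Sales Model"                         # placeholder
CAPTURED_DAX = "EVALUATE TOPN(1000, 'Sales')"   # paste the captured query here
USERS = 5                                       # simulated concurrent users
RUNS_PER_USER = 10

def run_once(_):
    start = time.perf_counter()
    fabric.evaluate_dax(dataset=DATASET, dax_string=CAPTURED_DAX)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=USERS) as pool:
    durations = list(pool.map(run_once, range(USERS * RUNS_PER_USER)))

print(f"runs: {len(durations)}")
print(f"avg:  {sum(durations) / len(durations):.2f}s")
print(f"max:  {max(durations):.2f}s")
```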

Copying Data across Tenants with Fabric Data Factory

Ye Xu makes use of the Copy job:

Copy job is the go-to solution in Microsoft Fabric Data Factory for simplified data movement, whether you’re moving data across clouds, from on-premises systems, or between services. With native support for multiple delivery styles, including bulk copy, incremental copy, and change data capture (CDC) replication, Copy job offers the flexibility to handle a wide range of data movement scenarios—all through an intuitive, easy-to-use experience. Learn more in What is Copy job in Data Factory – Microsoft Fabric | Microsoft Learn.

With Copy job, you can also perform cross-tenant data movement between Fabric and other clouds, such as Azure. It also enables cross-tenant data sharing within OneLake, allowing you to copy data across Fabric Lakehouse, Warehouse, and SQL DB in Fabric between tenants with SPN support. This blog provides step-by-step guidance on using Copy job to copy data across different tenants.

Click through for a demonstration, as well as the security permissions that are necessary for this to work.
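
The service principal piece is where most people trip up, so here is a minimal sketch of the authentication side only: an SPN in the source tenant acquiring tokens for the Fabric API and for OneLake. The IDs are placeholders and the OneLake (storage) scope is my assumption, so follow the post for the exact permissions and setup:

```python
# Sketch of cross-tenant SPN authentication. IDs and secrets are placeholders;
# the OneLake scope (storage.azure.com) is an assumption, not taken from the post.
from azure.identity import ClientSecretCredential

SOURCE_TENANT_ID = "<source-tenant-guid>"   # placeholder
CLIENT_ID = "<spn-app-id>"                  # placeholder
CLIENT_SECRET = "<spn-secret>"              # placeholder; use a secret store

credential = ClientSecretCredential(
    tenant_id=SOURCE_TENANT_ID,
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
)

# Fabric's REST surface and OneLake use different resource scopes.
fabric_token = credential.get_token("https://api.fabric.microsoft.com/.default")
onelake_token = credential.get_token("https://storage.azure.com/.default")

print("Fabric token expires:", fabric_token.expires_on)
print("OneLake token expires:", onelake_token.expires_on)
```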

Checking Direct Lake Model Sources

Nikola Ilic wants to know if Direct Lake is using OneLake or SQL:

In my recent Microsoft Fabric training, I’ve been explaining the difference between the Direct Lake on OneLake and Direct Lake on SQL, as two flavors of Direct Lake semantic models. If you are not sure what I’m talking about, please start by reading this article. The purpose of this post is not to examine the differences between these two versions, but rather to clarify some nuances that might occur. One of the questions I got from participants in the training was:

“How do we KNOW if the Direct Lake semantic model is created as a Direct Lake on OneLake or Direct Lake on SQL model?”

Read on for that answer.
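
If you would rather check from a notebook (and this is not necessarily the method Nikola uses in the post), one option is to pull the model's partition metadata with semantic-link and look at how the Direct Lake partitions are defined. The dataset name is a placeholder, and INFO.PARTITIONS() assumes a reasonably recent semantic model engine:

```python
# Pull partition metadata for a semantic model and eyeball the mode and
# source/query definition columns, which is where the Direct Lake flavor shows up.
# Dataset name is a placeholder; run this in a Fabric notebook.
import sempy.fabric as fabric

DATASET = "My Direct Lake Model"  # placeholder

partitions = fabric.evaluate_dax(
    dataset=DATASET,
    dax_string="EVALUATE INFO.PARTITIONS()",
)

print(partitions.columns.tolist())  # find the mode / query definition columns
print(partitions.head())
```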

Job-Level Bursting in Microsoft Fabric Spark Jobs

Santhosh Kumar Ravindran announces a new feature:

  • Enabled (Default): When enabled, a single Spark job can leverage the full burst limit, consuming up to 3× CUs. This is ideal for demanding ETL processes or large analytical tasks that benefit from maximum immediate compute power.
  • Disabled: If you disable this switch, individual Spark jobs will be capped at the base capacity allocation. This prevents a single job from monopolizing the burst capacity, thereby preserving concurrency and improving the experience for multi-user, interactive scenarios.

Read on for the list of caveats and the note that it will cost extra money to flip that switch.
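
As a back-of-the-envelope illustration of what the toggle means (assuming the commonly documented 1 CU = 2 Spark VCores ratio and the 3x burst factor, so treat the numbers as indicative rather than gospel):

```python
# Rough arithmetic only; the CU-to-VCore ratio and the F64 example are assumptions.
CAPACITY_CUS = 64      # e.g. an F64 capacity (illustrative assumption)
VCORES_PER_CU = 2      # commonly documented Spark VCore ratio (assumption)
BURST_FACTOR = 3       # the 3x figure quoted above

base_vcores = CAPACITY_CUS * VCORES_PER_CU
burst_vcores = base_vcores * BURST_FACTOR

print(f"Base Spark VCores:            {base_vcores}")   # 128
print(f"Single-job max with bursting: {burst_vcores}")  # 384 when enabled
print(f"Single-job max if disabled:   {base_vcores}")   # capped at base allocation
```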
