
Category: Microsoft Fabric

Automating Power BI Load Testing via Fabric Notebook

Gilbert Quevauvilliers grabs a query:

Load testing is essential when working with Microsoft Fabric capacity. With limited resources, deploying a Power BI report without testing can lead to performance issues, downtime, and frustrated users. In this series, I’ll show you how to automate load testing using Fabric Notebooks, making the process faster, easier, and repeatable.

Inspired by Phil Seamark’s approach, this method eliminates manual complexity and allows you to capture real user queries for accurate testing.

Read on for the first part, in which Gilbert uses the Performance Analyzer to capture query details.
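
To make the idea concrete, here is a minimal sketch of what a notebook-based replay loop can look like, assuming the semantic-link (sempy) library that ships with Fabric notebooks and a placeholder model name; Gilbert's series has the real implementation.

```python
# Minimal load-test sketch for a Fabric notebook (not Gilbert's exact script).
# Assumes the semantic-link (sempy) library available in Fabric notebooks and
# that you've pasted in DAX queries captured with Performance Analyzer.
import time
import sempy.fabric as fabric

DATASET = "Sales Model"  # placeholder semantic model name

# Real user queries exported from Performance Analyzer would go here.
CAPTURED_QUERIES = [
    """EVALUATE SUMMARIZECOLUMNS('Date'[Year], "Total Sales", [Total Sales])""",
]

ITERATIONS = 10  # number of passes to simulate sustained load

for i in range(ITERATIONS):
    for dax in CAPTURED_QUERIES:
        start = time.time()
        fabric.evaluate_dax(dataset=DATASET, dax_string=dax)
        print(f"pass {i}: {time.time() - start:.2f}s")
```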


Copying Data across Tenants with Fabric Data Factory

Ye Xu makes use of the Copy job:

Copy job is the go-to solution in Microsoft Fabric Data Factory for simplified data movement, whether you’re moving data across clouds, from on-premises systems, or between services. With native support for multiple delivery styles, including bulk copy, incremental copy, and change data capture (CDC) replication, Copy job offers the flexibility to handle a wide range of data movement scenarios—all through an intuitive, easy-to-use experience. Learn more in What is Copy job in Data Factory – Microsoft Fabric | Microsoft Learn.

With Copy job, you can also perform cross-tenant data movement between Fabric and other clouds, such as Azure. It also enables cross-tenant data sharing within OneLake, allowing you to copy data across Fabric Lakehouse, Warehouse, and SQL DB in Fabric between tenants with SPN support. This blog provides step-by-step guidance on using Copy job to copy data across different tenants.

Click through for a demonstration, as well as the security permissions that are necessary for this to work.
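
On the security side, the moving part is a service principal that the remote tenant trusts. Here is a hedged sketch of acquiring a Fabric API token with azure-identity, with every ID a placeholder; Copy job does this under the covers once the connection is configured.

```python
# Hedged sketch: authenticating a service principal (SPN) from another tenant,
# the piece that makes cross-tenant Copy job connections work. All IDs below
# are placeholders; the SPN must be granted access in the remote tenant first.
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<remote-tenant-id>",     # the tenant that owns the data
    client_id="<app-registration-id>",  # SPN registered in that tenant
    client_secret="<client-secret>",
)

# Token scoped to the Fabric API audience.
token = credential.get_token("https://api.fabric.microsoft.com/.default")
print(token.expires_on)
```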


Checking Direct Lake Model Sources

Nikola Ilic wants to know if Direct Lake is using OneLake or SQL:

In my recent Microsoft Fabric training, I’ve been explaining the difference between the Direct Lake on OneLake and Direct Lake on SQL, as two flavors of Direct Lake semantic models. If you are not sure what I’m talking about, please start by reading this article. The purpose of this post is not to examine the differences between these two versions, but rather to clarify some nuances that might occur. One of the questions I got from participants in the training was:

“How do we KNOW if the Direct Lake semantic model is created as a Direct Lake on OneLake or Direct Lake on SQL model?”

Read on for that answer.
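
If you want to poke at a model yourself before reading, one heuristic (an assumption on my part, not necessarily Nikola's answer) is to check whether the model carries an M expression calling Sql.Database, which is characteristic of the SQL flavor:

```python
# Hedged heuristic, not necessarily Nikola's answer: list the model's M
# expressions via a DAX INFO function and look for a Sql.Database(...) call,
# which a Direct Lake on SQL model typically carries.
import sempy.fabric as fabric

DATASET = "My Direct Lake Model"  # placeholder name

expressions = fabric.evaluate_dax(
    dataset=DATASET,
    dax_string="EVALUATE INFO.EXPRESSIONS()",
)

is_sql = (
    not expressions.empty
    and expressions["[Expression]"]  # column name as surfaced by INFO.EXPRESSIONS
    .astype(str)
    .str.contains("Sql.Database", regex=False)
    .any()
)
print("Direct Lake on SQL" if is_sql else "Direct Lake on OneLake (no SQL source found)")
```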


Job-Level Bursting in Microsoft Fabric Spark Jobs

Santhosh Kumar Ravindran announces a new feature:

  • Enabled (Default): When enabled, a single Spark job can leverage the full burst limit, consuming up to 3× CUs. This is ideal for demanding ETL processes or large analytical tasks that benefit from maximum immediate compute power.
  • Disabled: If you disable this switch, individual Spark jobs will be capped at the base capacity allocation. This prevents a single job from monopolizing the burst capacity, thereby preserving concurrency and improving the experience for multi-user, interactive scenarios.

Read on for the list of caveats and the note that it will cost extra money to flip that switch.
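
As a quick illustration of what the 3× figure means in practice (the base CU counts are just the standard Fabric SKU sizes):

```python
# Quick arithmetic from the announcement: with job-level bursting enabled,
# a single Spark job can consume up to 3x the capacity's base CUs.
BURST_FACTOR = 3

for sku, base_cus in {"F64": 64, "F128": 128}.items():
    print(f"{sku}: one job may burst up to {base_cus * BURST_FACTOR} CUs")
```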


Monitoring Microsoft Fabric Costs

Chris Webb uses a report:

Following on from my blog post a few months ago about cool stuff in the Fabric Toolbox, there is now another really useful solution available there that anyone with Fabric capacities should check out: Fabric Cost Analysis (or FCA). If you have Fabric capacities, it’s important to be able to monitor your Azure costs relating to them, so why not monitor your Fabric costs with a solution built using Fabric itself? This is what the folks behind FCA (who include Romain Casteres, author of this very useful blog post on FinOps for Fabric, plus Cédric Dupui, Manel Omani and Antoine Richet) decided to build and share freely with the community.

Click through to see how it works, and check out the FCA link in the graf above to get the code.


Customer-Managed Keys in Microsoft Fabric

Sumiran Tandon makes an announcement:

Customer-managed keys were launched in preview, offering workspace administrators the ability to use keys in Azure Key Vault and Managed HSM to protect data in certain Fabric items. Now, we are extending the encryption support to more Fabric workloads. You can now create Fabric Warehouses and Notebooks, and utilize the SQL Analytics Endpoint, in workspaces enabled with encryption using your keys. The changes are rolling out and should be available in all regions over the next few days.

Freddie Santos digs into what this means for Fabric Warehouse and the SQL analytics endpoint:

Fabric already ensures that your data is encrypted at rest using Microsoft-managed keys. But for many organizations—especially in regulated industries—encryption alone isn’t enough. They need the ability to control and manage the keys that protect their data, aligning with internal compliance requirements, regulatory standards, and governance best practices.

I know that there are plenty of companies for which this is absolutely necessary for adoption of a product, but I should point out that even without bringing your own key, Microsoft does use its own generated keys to encrypt your data at rest.
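
For context, the key itself lives on the Azure side. Here is a minimal sketch of creating a candidate RSA key with the azure-keyvault-keys SDK, with placeholder names; the Fabric side of the wiring happens in workspace settings, not in code:

```python
# Hedged sketch of the Key Vault prerequisite: create an RSA key that a
# workspace admin could then reference for customer-managed keys. Vault and
# key names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

key = client.create_rsa_key("fabric-cmk", size=2048)
print(key.id)  # the key identifier the workspace encryption settings point at
```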


Updates to Microsoft Fabric Dataflows Gen2

Nikola Ilic digs into some announcements:

In the ocean of announcements from the recent FabCon Europe in Vienna, one that may have gone under the radar was about the enhancements in performance and cost optimization for Dataflows Gen2.

Before we delve into explaining how these enhancements impact your current Dataflows setup, let’s take a step back and provide a brief overview of Dataflows. For those of you who are new to Microsoft Fabric: a Dataflow Gen2 is the no-code/low-code Fabric item used to extract, transform, and load (ETL) data.

It sounds like these changes move Dataflows Gen2 from the “Never choose this” option to something that has become viable in at least some circumstances.


Automating Semantic Model Security via Semantic Link

Marc Lelijveld writes a script:

You may be using standardized solutions like Fabric Unified Admin Monitoring (FUAM) or any other templated solution that comes with a semantic model. As part of transparency within your organization, you decided to share the insights gathered with others in the organization by adjusting the solution to apply your own security setup on top.

However, after running an update of the template, you’ve overwritten your custom security configuration, and reapplying it costs a lot of time after each update. Why don’t we just script this security? In this blog I will share how you can deploy security configurations to semantic models and assign users to these roles.

Click through for an example script and details on how it works.
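
For a sense of what scripting a role can look like, here is a minimal sketch (my own assumption about approach, not Marc's actual script) that reapplies a role through semantic-link's TMSL support; every name, member, and filter is a placeholder:

```python
# Hedged sketch: reapply a security role to a semantic model with a TMSL
# createOrReplace, executed through semantic-link (sempy). Rerun after each
# template update to restore the custom setup. All names are placeholders.
import json
import sempy.fabric as fabric

role_script = {
    "createOrReplace": {
        "object": {"database": "FUAM Model", "role": "Report Readers"},
        "role": {
            "name": "Report Readers",
            "modelPermission": "read",
            "members": [
                {"memberName": "user@contoso.com", "identityProvider": "AzureAD"}
            ],
            "tablePermissions": [
                {"name": "Workspaces", "filterExpression": '[Domain] = "Finance"'}
            ],
        },
    }
}

fabric.execute_tmsl(script=json.dumps(role_script), workspace="My Workspace")
```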


OneLake Diagnostics Now GA

Tom Peplow makes an announcement:

Alongside Workspace monitoring and user activity tracking accessible through Microsoft Purview, these capabilities make federated data governance a reality at enterprise scale.

Enable diagnostics at the workspace level, and OneLake streams diagnostic events as JSON into a Lakehouse you choose—within the same capacity. You can use these events to unlock usage insights, provide operational visibility, and support compliance reporting.

It does seem a bit odd that this data goes into a Lakehouse rather than into an Eventhouse. But click through to see how things work, what sorts of events this captures, and what you can do with it.
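
As a tiny sketch of the consumption side, assuming a notebook attached to the target Lakehouse; the folder path and the eventType field are guesses on my part, so check where your events actually land:

```python
# Hedged sketch: read the JSON diagnostic events from the Lakehouse the
# diagnostics were pointed at. The path and field name are assumptions.
# ("spark" is the session Fabric notebooks provide out of the box.)
df = spark.read.json("Files/diagnostics/")

# Example: most common event types, assuming an eventType-like field exists.
df.groupBy("eventType").count().orderBy("count", ascending=False).show()
```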


API Interaction with OneLake Tables

Matthew Hicks makes an announcement:

Microsoft OneLake is the unified data lake for your entire organization, built into Microsoft Fabric. It provides a single, open, and secure foundation for all your analytics workloads – eliminating data silos and simplifying data management across domains.

Announcing the preview of Microsoft OneLake Table APIs, a new way to programmatically manage and interact with your data tables in OneLake! These APIs open the door for developers and data engineers to integrate OneLake seamlessly into their workflows, enabling powerful automation and interoperability with open table formats.

Read on to see what’s available in the initial preview. It’s interesting that they started with Iceberg rather than Delta Lake.
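
Since the surface area is brand new, the sketch below shows only the general shape of a token-authenticated REST call against OneLake; the route is a made-up placeholder, so consult the preview documentation for the real endpoints:

```python
# Heavily hedged sketch: the URL below is a hypothetical placeholder, shown
# only to illustrate a bearer-token call against the OneLake endpoint. The
# storage audience is an assumption based on how OneLake handles ADLS calls.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://storage.azure.com/.default")

resp = requests.get(
    "https://onelake.dfs.fabric.microsoft.com/<workspace>/<item>/Tables",  # hypothetical route
    headers={"Authorization": f"Bearer {token.token}"},
)
print(resp.status_code)
```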
