Category: Microsoft Fabric

Third-Party Support for OneLake Security

Aaron Merrill shares some guidance:

As outlined in our technical whitepaper, The Future of Data Security Is Interoperability, permissions that move with data are the future of data security. As modern data lakes are built on open-source technologies like Delta and Iceberg, customers expect to use the analytics engines and services that best fit their needs—without copying data or redefining security. This creates a clear requirement: security must be defined once and enforced consistently everywhere data is consumed.

OneLake security now provides API support for third-party enforcement through an authorized engine model. This release extends the same principles used across Microsoft Fabric to external engines and services. OneLake security is now closer to its vision of defined once, enforced everywhere, even beyond first-party workloads.

Click through for more information.

Maps in Microsoft Fabric now GA

Johannes Kebeck makes an announcement:

When we envisioned Maps in Microsoft Fabric, our goal was to empower any data citizen to analyze data in time and space without any specialized knowledge. Introduced in preview at FabCon Europe 2025, it has since been used by customers across industries creating and sharing map-centric applications. Additional features were added at Ignite 2025, and this week at FabCon Atlanta, Maps in Microsoft Fabric is generally available – along with new capabilities that expand how geospatial data can be modeled, visualized, and operationalized at any scale.

Read on to see what’s new in maps.

What’s New in SQL Database for Fabric

Idris Motiwala makes some announcements:

The new Migration Assistant for SQL databases simplifies moving SQL Server and Azure SQL workloads into Fabric. Designed for SQL developers, it imports schema via DACPACs, identifies compatibility issues, and provides clear, actionable guidance before migration. Built-in assessment and data copy workflows help teams move from evaluation to cutover with less manual effort, preserving existing SQL skills while accelerating time to value on Fabric’s unified analytics platform. Ready to simplify your SQL migration journey? We will begin rolling this out in the coming weeks, and it will soon be accessible through the Fabric portal.

Click through for more things that are currently in place, including several items that are now GA.

What’s New in OneLake

Josh Caplan provides an update:

With shortcuts and mirroring in OneLake, you get zero-copy, zero-ETL capabilities to connect your multi-cloud data estate. Whether your data sits in Azure, AWS, Google Cloud, or Oracle, on-premises, or across platforms like SAP, Dataverse, Snowflake, and Azure Databricks, you can connect it to OneLake without data movement or duplication. No more sprawling ETL pipelines. No more out-of-date copies. No more data silos.

Today, we’re expanding mirroring to include SharePoint lists (Preview) and adding mirroring via shortcuts for Azure Monitor and Dremio (Preview). We are also releasing mirroring for Oracle and SAP Datasphere into general availability. Beyond these core mirroring capabilities, we are introducing extended capabilities designed to help you operationalize mirrored sources at scale. These capabilities include Change Data Feed (CDF) and the ability to create views on top of mirrored data, starting with Snowflake; they will be offered as a paid option.
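The announcement doesn’t spell out how Fabric surfaces CDF, but the consumer-side pattern can be sketched using Delta-style `_change_type` values (`insert`, `delete`, `update_preimage`, `update_postimage`). This is a minimal illustration, not the Fabric API; the `orders` table and its columns are hypothetical.

```python
# Sketch: applying a Delta-style Change Data Feed to a table keyed by "id".
# Each change row is tagged with a _change_type describing the operation.

def apply_change_feed(table: dict, changes: list[dict]) -> dict:
    """Apply CDF rows to a table keyed by 'id'; returns the updated table."""
    result = dict(table)
    for row in changes:
        row = dict(row)                       # avoid mutating the feed
        change_type = row.pop("_change_type")
        if change_type in ("insert", "update_postimage"):
            result[row["id"]] = row           # add or overwrite the row
        elif change_type == "delete":
            result.pop(row["id"], None)       # remove the row if present
        # "update_preimage" rows carry the old values; nothing to apply
    return result

orders = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "open"}}
feed = [
    {"_change_type": "update_postimage", "id": 1, "status": "shipped"},
    {"_change_type": "delete", "id": 2},
    {"_change_type": "insert", "id": 3, "status": "open"},
]
print(apply_change_feed(orders, feed))
```

The value of CDF over full-table comparison is that downstream consumers only touch rows that changed, which is what makes operationalizing mirrored sources at scale practical.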

Click through for more of what came out of FabCon.

Creating a Power BI Semantic Model Online

Gilbert Quevauvilliers doesn’t need Power BI Desktop:

It has been in the service for quite a while, so I thought I would blog about how you can create a Power BI semantic model simply using the web interface. This means you no longer need Power BI Desktop, or a Windows PC, to get going.

This is quite a significant change, because at times you need a lot of resources on your Windows PC, or you’re working on a Mac, where this was not previously possible.

So, I will give an overview below on how you can create the semantic model just by using your browser.

Click through to see how.

Materialized Lake Views now GA in Microsoft Fabric

Balaji Sankaran makes an announcement:

Since introducing MLVs (Preview) at Build 2025, data engineers have used them to replace hand-built ETL pipelines with a few declarative Spark SQL statements, and their feedback directly shaped this release.

This update closes the most important gaps since reaching preview and makes MLVs production-ready at scale. With multi-schedule support, broader incremental refresh, PySpark authoring, in-place updates, and stronger data quality controls, teams can now build, run, and evolve medallion pipelines with far less operational overhead.

Click through to see what’s changed since the preview.

Managing Eventhouses and Environments with MicrosoftFabricMgmt

Rob Sewell continues a series on the MicrosoftFabricMgmt module. First up is a dive into the Kusto world:

Real-Time Intelligence (RTI) is Microsoft Fabric’s answer to streaming data workloads. If you are ingesting telemetry, IoT data, clickstreams, or any high-velocity data that needs querying with low latency, this is the part of Fabric you want. MicrosoftFabricMgmt supports the full set of RTI resources: Eventhouses, KQL Databases, KQL Dashboards, KQL Querysets, and Eventstreams.

Rob then pivots to creating an environment from scratch:

Over the past few posts we have worked through a number of the item choices that you can use in the MicrosoftFabricMgmt module. Today I want to bring it all together into a single, practical script that provisions a complete Fabric environment from scratch.

This is the kind of script I could use when setting up a new project. It is repeatable, idempotent (safe to run multiple times), fully logged, and handles errors gracefully.
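The idempotency Rob describes generally comes down to an “ensure” pattern: check whether an item already exists before creating it, so reruns are no-ops. Rob’s module is PowerShell; this is a generic Python sketch of the pattern, not the MicrosoftFabricMgmt API, and the workspace and item names are hypothetical.

```python
# Idempotent "ensure" pattern: creating an item only when it does not
# already exist makes a provisioning script safe to run repeatedly.

def ensure_item(existing: dict[str, dict], name: str, spec: dict) -> str:
    """Create the item if missing; report what happened."""
    if name in existing:
        return f"'{name}' already exists, skipping"  # rerun is a no-op
    existing[name] = spec
    return f"created '{name}'"

workspace: dict[str, dict] = {}
print(ensure_item(workspace, "Sales-Lakehouse", {"type": "Lakehouse"}))
print(ensure_item(workspace, "Sales-Lakehouse", {"type": "Lakehouse"}))  # no-op
```

Combined with logging each outcome, this is what lets a full-environment script fail partway through and simply be rerun.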

Partitioned Compute and Fabric Dataflow Performance

Chris Webb performs a test:

Partitioned Compute is a new feature in Fabric Dataflows that allows you to run certain operations inside a Dataflow query in parallel and therefore improve performance. While UI support is limited at the moment it can be used in any Dataflow by adding a single line of fairly simple M code and checking a box in the Options dialog. But as with a lot of performance optimisation features (and this is particularly true of Dataflows) it can sometimes result in worse performance rather than better performance – you need to know how and when to use it. And so, in order to understand when this feature should and shouldn’t be used, I decided to do some tests and share the results here.
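The underlying idea can be illustrated outside of Power Query M: split the input into partitions, run the same transformation on each in parallel, and combine the results. This Python sketch shows the shape of the technique, not how Dataflows implement it; as Chris notes, whether parallelism actually helps depends on partition count and per-partition cost.

```python
# Partitioned-compute sketch: process partitions of the input in parallel.
from concurrent.futures import ThreadPoolExecutor

def transform(partition: list[int]) -> int:
    # Stand-in for an expensive per-partition operation
    return sum(x * x for x in partition)

def run_partitioned(data: list[int], partitions: int) -> int:
    # Round-robin split into N partitions, process in parallel, combine
    chunks = [data[i::partitions] for i in range(partitions)]
    with ThreadPoolExecutor(max_workers=partitions) as pool:
        return sum(pool.map(transform, chunks))
```

The combine step only works because the operation here is associative; the scheduling and coordination overhead is also why too many small partitions can make things slower rather than faster.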

Click through for the test, the result, and an open door for subsequent analysis.

Dealing with Multiple Fabric Capacities

Jon Lunn provides some guidance:

You know you can have more than one capacity? Most of the clients I’ve interacted with, even since the Power BI capacity days, have just purchased one big old capacity and assigned it to every workspace they needed. A few clients have had multi-region capacities, spun up across the globe for things like billing to specific cost centres and regions, data ownership, and sovereignty issues, but those who don’t have those issues just get a big capacity.

Jon provides some guidance on environment-based capacity planning. Even within an environment, there may be cases for carving out explicit capacity, such as data science activities that are occasional but potentially disruptive.

XML Processing in Microsoft Fabric Realtime Intelligence

Reitse Eskens digs into some results:

I’ve been working for quite some time on a fun solution in Fabric Realtime Intelligence. We’re processing XML files into a structured table. As you’re probably aware, XML has its own… well, let’s be nice and call them challenges.

One thing I ran into was that an element contained several other elements. Usually, you’ll see them in an array, but in this case, it wasn’t. Since these elements within the main element contain the information we need for the table, I started thinking about how to extract this data.
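Reitse works in Fabric Real-Time Intelligence, so his solution is KQL, but the shape of the problem — one element whose named children should become columns, rather than an array to explode — can be sketched with Python’s standard library. The sample XML below is hypothetical, not Reitse’s data.

```python
# Sketch: flattening the named child elements of each <order> into one
# flat record per <order>, instead of treating them as an array.
import xml.etree.ElementTree as ET

SAMPLE = """
<orders>
  <order>
    <id>1</id>
    <customer>Contoso</customer>
    <status>shipped</status>
  </order>
</orders>
"""

def flatten(xml_text: str) -> list[dict]:
    root = ET.fromstring(xml_text)
    # Each child element's tag becomes a column of the record
    return [{child.tag: child.text for child in order}
            for order in root.iter("order")]

print(flatten(SAMPLE))
```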

Read on for an example of the type of data Reitse was looking to process, as well as how the problem ended up being a lot easier to solve than first appearances would indicate.
