Press "Enter" to skip to content

Category: Microsoft Fabric

Materialized Lake Views now GA in Microsoft Fabric

Balaji Sankaran makes an announcement:

Since introducing MLVs (Preview) at Build 2025, data engineers have used them to replace hand-built ETL pipelines with a few declarative Spark SQL statements, and their feedback directly shaped this release.

This update closes the most important gaps since reaching preview and makes MLVs production-ready at scale. With multi-schedule support, broader incremental refresh, PySpark authoring, in-place updates, and stronger data quality controls, teams can now build, run, and evolve medallion pipelines with far less operational overhead.

Click through to see what’s changed since the preview.
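If you haven't seen the declarative style before, here is a minimal sketch of what defining a materialized lake view looks like from a Fabric notebook. The schema, table, and column names are invented, and the data quality constraint clause follows the preview-era syntax, so double-check the GA documentation for the exact form.

```python
# Minimal sketch of creating a materialized lake view via Spark SQL.
# "spark" is the ambient SparkSession in a Fabric notebook. The schema,
# table, and column names are hypothetical, and the CONSTRAINT clause
# follows the preview-era syntax, so verify against the GA docs.
spark.sql("""
CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS silver.clean_orders
    (CONSTRAINT valid_amount CHECK (order_amount > 0) ON MISMATCH DROP)
AS
SELECT order_id,
       customer_id,
       CAST(order_ts AS DATE) AS order_date,
       order_amount
FROM   bronze.raw_orders
WHERE  order_id IS NOT NULL
""")
```

The appeal is that the engine tracks dependencies between views defined this way and refreshes them in order, which is the part that replaces hand-built orchestration.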

Creating a Power BI Semantic Model Online

Gilbert Quevauvilliers doesn’t need Power BI Desktop:

It has been in the service for quite a while, so I thought I would blog about how you can create a Power BI semantic model simply by using the web interface. This means you no longer need Power BI Desktop, or a Windows PC, to get going.

This is quite a significant change, because at times you need a lot of resources on your Windows PC, or you are working on a Mac and could not do this previously.

So, I will give an overview below on how you can create the semantic model just by using your browser.

Click through to see how.

Managing Eventhouses and Environments with MicrosoftFabricMgmt

Rob Sewell continues a series on the MicrosoftFabricMgmt module. First up is a dive into the Kusto world:

Real-Time Intelligence (RTI) is Microsoft Fabric’s answer to streaming data workloads. If you are ingesting telemetry, IoT data, clickstreams, or any high-velocity data that needs querying with low latency, this is the part of Fabric you want. MicrosoftFabricMgmt supports the full set of RTI resources: Eventhouses, KQL Databases, KQL Dashboards, KQL Querysets, and Eventstreams.

Rob then pivots to creating an environment from scratch:

Over the past few posts we have worked through a number of the item choices that you can use in the MicrosoftFabricMgmt module. Today I want to bring it all together into a single, practical script that provisions a complete Fabric environment from scratch.

This is the kind of script I could use when setting up a new project. It is repeatable, idempotent (safe to run multiple times), fully logged, and handles errors gracefully.

Partitioned Compute and Fabric Dataflow Performance

Chris Webb performs a test:

Partitioned Compute is a new feature in Fabric Dataflows that allows you to run certain operations inside a Dataflow query in parallel and therefore improve performance. While UI support is limited at the moment, it can be used in any Dataflow by adding a single line of fairly simple M code and checking a box in the Options dialog. But as with a lot of performance optimisation features (and this is particularly true of Dataflows), it can sometimes result in worse performance rather than better: you need to know how and when to use it. And so, in order to understand when this feature should and shouldn’t be used, I decided to do some tests and share the results here.

Click through for the test, the result, and an open door for subsequent analysis.

Dealing with Multiple Fabric Capacities

Jon Lunn provides some guidance:

You know you can have more than one capacity? Most of the clients I’ve interacted with, even since the Power BI capacity days, have just purchased one big old capacity and assigned it to every workspace they needed. There have been a few clients with multi-region capacities, spun up across the globe for things like billing to specific cost centres and regions, and data ownership and sovereignty issues, but those that don’t have those issues just get one big capacity.

Jon provides some guidance on environment-based capacity planning. Even within an environment, there may be cases for carving out explicit capacity, such as data science activities that are occasional but potentially disruptive.

XML Processing in Microsoft Fabric Real-Time Intelligence

Reitse Eskens digs into some results:

I’ve been working for quite some time on a fun solution in Fabric Realtime Intelligence. We’re processing XML files into a structured table. As you’re probably aware, XML has its own… well, let’s be nice and call them challenges.

One thing I ran into was that an element contained several other elements. Usually, you’ll see them in an array, but in this case, it wasn’t. Since these elements within the main element contain the information we need for the table, I started thinking about how to extract this data.

Read on for an example of the type of data Reitse was looking to process, as well as how the problem ended up being a lot easier to solve than first appearances would indicate.
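To make the shape of the problem concrete, here is a hypothetical fragment of that style of XML, plus a plain-Python sketch of flattening the nested elements. Reitse works in KQL, so treat this only as an illustration of the structure, not the actual solution.

```python
# Hypothetical XML where the main element holds several distinct named
# child elements rather than a repeating array; tag names are made up.
import xml.etree.ElementTree as ET

doc = """
<Order>
  <Details>
    <Customer>Contoso</Customer>
    <Amount>19.95</Amount>
    <Currency>EUR</Currency>
  </Details>
</Order>
"""

root = ET.fromstring(doc)
details = root.find("Details")

# Flatten the children of the main element into one row-like dict.
row = {child.tag: child.text for child in details}
print(row)  # {'Customer': 'Contoso', 'Amount': '19.95', 'Currency': 'EUR'}
```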

Ontology Rules in Fabric Activator

Ansley Yeo creates some rules:

Ontology Rules let you define conditions and actions on top of your business entities, rather than on raw tables or telemetry streams.

These rules are evaluated using Fabric Activator, which monitors and triggers actions when conditions are met. The unique value is that the rule logic is expressed in the language of your business, using ontology entities and properties.

Ontologies are a thing I’m not quite sold on yet, whether in Microsoft Fabric or elsewhere. I get what they do, and I get the idea that this is business logic that the business side could theoretically maintain. What I have trouble with is seeing the practical benefits. Any time I see “Your business users can…” I immediately add in my mind, “But they won’t.” It feels like the same giddiness people once had over object-oriented development, applied to the data.

That said, I am actively learning about the topic, so maybe I’ll change my mind as I learn more.

Microsoft Fabric Mirroring and SQL Server 2025

Meagan Longoria takes a peek at mirroring in Microsoft Fabric:

Mirroring of SQL Server databases in Microsoft Fabric was first released in public preview in March 2024. Mirrored databases promise near-real-time replication without the need to manage and orchestrate pipelines, copy jobs, or notebooks. John Sterrett blogged about them last year here. But since that initial release, the mechanism under the hood has evolved significantly.

Read on to see how this behaves for versions of SQL Server prior to 2025, and how it changes in 2025.

Data Extraction from Unstructured Data with Fabric AI Functions

Sandeep Pawar demonstrates functionality:

Most enterprise data lives in free text – tickets, contracts, feedback, clinical notes, and more. It holds critical information but doesn’t fit into the structured tables that pipelines expect. Traditionally, extracting structure meant rule-based parsers that break with every format change, or custom NLP models that take weeks to build. LLMs opened new possibilities, but on their own they bring inconsistent outputs, no type enforcement, and results that vary between runs. What production workflows need is LLM intelligence with structured-output guarantees, delivered inside the data platform teams already use.

Microsoft Fabric AI Functions deliver exactly that. Functions like ai.summarize, ai.classify, ai.translate, and ai.extract let you transform and enrich unstructured data at scale with a single line of code – no model deployment or ML infrastructure needed. For the full list, see Transform and enrich data with AI functions.

Click through for an example. The tricky part is that, because answers won’t be deterministic, you have to do a lot of testing and verification to ensure things are working reasonably well.
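For a rough sense of what the API looks like, here is a sketch modeled on the pandas examples in the Fabric documentation. The data is invented and the accessor signatures are from memory, so verify them against the current docs before relying on this.

```python
# Sketch of Fabric AI functions over a pandas DataFrame, based on the
# documented pattern of an .ai accessor on columns. Runs only in a
# Fabric notebook where AI functions are available; the data is made up
# and signatures should be checked against the current documentation.
import pandas as pd

df = pd.DataFrame(
    {
        "description": [
            "Maria Garcia, a cardiologist based in Madrid",
            "Ken Tanaka, a data engineer working out of Osaka",
        ]
    }
)

# ai.extract pulls named attributes out of free text into new columns.
entities = df["description"].ai.extract("name", "profession", "city")

# ai.translate renders a text column into another language.
df["description_fr"] = df["description"].ai.translate("french")
```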

Eventstream Not Sending Data to KQL Database after Resuming Fabric Capacity

Olivier Van Steenlandt troubleshoots an issue:

To continue the development of my mobile app, whose core ability is to scan barcodes of consumable articles and send them over for analytics, I’m resuming my capacity, starting to scan barcodes again, sending them to my Eventstream, and finally saving them in my KQL database.

After a couple of minutes, I wanted to validate all the scanned results in my KQL database and navigate to my scanned_barcode table.

Read on to see how Olivier diagnosed and corrected the problem.
