Press "Enter" to skip to content

Category: Microsoft Fabric

Receiving Notification when a Microsoft Fabric Notebook Fails

Gilbert Quevauvilliers gets an e-mail:

What I have found is that when I create a pipeline in Microsoft Fabric that uses a notebook and there is an error with the notebook, I do not get an alert that the notebook has failed.

This has happened to me in the past, and I have found that the pattern below works consistently to notify me of errors.

In this blog post I will show you how I get notified when a notebook fails in a pipeline.

Read on to learn how.
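
The exact pattern is in Gilbert's post; as a hedged illustration of the general idea (not necessarily Gilbert's approach), one common setup is to let notebook failures bubble up so the notebook activity itself reports a Failed state, then hang a Teams or Office 365 Outlook activity off that activity's "On failure" path. A minimal notebook-side sketch:

```python
# Minimal sketch, assuming a Fabric PySpark notebook that a pipeline runs.
# The goal is simply to let failures bubble up so the notebook activity
# reports a Failed state, which an "On failure" path can turn into an alert.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

try:
    # Placeholder workload: replace with the real transformation logic
    df = spark.read.table("fact_sales")
    df.write.mode("overwrite").saveAsTable("fact_sales_copy")
except Exception as exc:
    print(f"Notebook step failed: {exc}")  # optionally log details first
    raise  # re-raise so the pipeline marks the notebook activity as Failed
```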

Migrating or Copying a Semantic Model across Microsoft Fabric Workspaces

Sandeep Pawar makes a move:

Here is a quick script to copy a semantic model from one workspace to another in the same tenant, assuming you are contributor+ in both workspaces. I tested this for a Direct Lake model, but it should work for any other semantic model. This just copies the metadata (not the data in the model), so be sure to set up the other configurations (RLS members, refresh schedule, settings, etc.). That can also be changed programmatically, thanks to Semantic Link Labs, but I will cover that in a future post.

Read on for the script, as well as an update from Sandeep on how you can do this even more easily.
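
For reference, here is the general shape of a metadata-only copy using Semantic Link; this is a sketch based on my recollection of the sempy.fabric API rather than a quote of Sandeep's script, and the workspace and model names are placeholders.

```python
# Hedged sketch: copy a semantic model's metadata between workspaces with
# Semantic Link (sempy). Function names reflect my understanding of the API
# and may differ from the script in Sandeep's post.
import json
import sempy.fabric as fabric

source_workspace = "Sales Dev"    # placeholder workspace name
target_workspace = "Sales Test"   # placeholder workspace name
model_name = "Sales Model"        # placeholder semantic model name

# Pull the model definition (TMSL) from the source workspace
tmsl = fabric.get_tmsl(dataset=model_name, workspace=source_workspace)
if isinstance(tmsl, str):
    tmsl = json.loads(tmsl)

# Wrap it in createOrReplace and deploy the metadata to the target workspace.
# Data, refresh schedules, RLS membership, etc. still need to be set up there.
command = {"createOrReplace": {"object": {"database": model_name}, "database": tmsl}}
fabric.execute_tmsl(script=json.dumps(command), workspace=target_workspace)
```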

A Dive into Microsoft Fabric Real-Time Intelligence

Nikola Ilic builds us a guide:

Once upon a time, handling streaming data was considered an avant-garde approach. Since the introduction of relational database management systems in the 1970s and traditional data warehousing systems in the late 1980s, all data workloads began and ended with so-called batch processing. Batch processing relies on the concept of collecting numerous tasks in a group (or batch) and processing these tasks in a single operation.

On the flip side, there is a concept of streaming data. Although streaming data is still sometimes considered a cutting-edge technology, it already has a solid history. Everything started in 2002, when Stanford University researchers published the paper called “Models and Issues in Data Stream Systems”. However, it wasn’t until almost one decade later (2011) that streaming data systems started to reach a wider audience, when the Apache Kafka platform for storing and processing streaming data was open-sourced. The rest is history, as people say. Nowadays, processing streaming data is not considered a luxury, but a necessity. 

This is all part of a book that Nikola and Ben Weissman are writing, and Nikola has an extended excerpt from the book available for us to read.

Using the Microsoft Fabric Capacity Metrics App

Reitse Eskens uses a tool:

In a number of previous blogs and in my session on load testing Microsoft Fabric, I've always questioned the metrics app, and one specific point is the timepoint detail. When you click on a graph, you get the option to go to the timepoint detail and read more.

This is all fun and games but looking at the list of active processes at that specific point in time, you’ll quickly see processes that are way out of the selected point in time. For me, it rendered this thing useless because it messed up the things I wanted to see.

Read on to see the right way to handle this app.

Kusto Query Performance in Microsoft Fabric

Dennes Torres checks some stats:

We already discovered how to investigate Kusto query history. Let's discover how to analyse query performance using the information in this history.

The query history returns 3 fields we can use to make a more detailed analysis of the queries: CachedStatistics, ScannedExtentsStatistics, and ResultsetStatistics.

Disclaimer: There is little to no documentation about this content. As such, the content below may not be 100% precise, but it will give you good guidance.

Click through to learn more about these three.
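
As a hedged sketch of how you might pull those fields yourself with the azure-kusto-data package (the cluster URI and database are placeholders, and the column names below are taken from the excerpt, so check them against your own `.show queries` output):

```python
# Hedged sketch: read the statistics fields from Kusto query history.
# Cluster URI, database, and the exact column names are assumptions; verify
# them against the actual output of `.show queries` in your environment.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster_uri = "https://<your-eventhouse>.kusto.fabric.microsoft.com"  # placeholder
database = "MyKqlDatabase"                                            # placeholder

client = KustoClient(KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri))

history = client.execute_mgmt(
    database,
    ".show queries "
    "| project StartedOn, Duration, Text, "
    "CachedStatistics, ScannedExtentsStatistics, ResultsetStatistics "
    "| top 20 by StartedOn desc",
)

for row in history.primary_results[0]:
    # Each statistics field is a JSON payload describing one aspect of the run
    print(row["Text"], row["ResultsetStatistics"])
```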

Controlling Execution Flow in Fabric Data Pipelines

Reza Rad has everything under control:

In Microsoft Fabric, the Data Factory is the workload for ETL and data integration, and the Data Pipeline is a component in that workload for orchestrating the execution flow. There are activities in the pipeline, and you can define in which order you want the activities to run. In this article and video, you will learn about the execution order and output states in Data Pipeline and how they can be used in real-world scenarios of data integration.

The mechanisms here are fundamentally similar to what we’ve had in Azure Data Factory (obviously) and SQL Server Integration Services.
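
To make the output states concrete, here is an illustrative sketch (not from Reza's article) of the dependsOn structure a pipeline activity carries in its JSON definition, written out as a Python dict:

```python
# Illustrative only: the dependsOn block is where execution order and output
# states meet. Valid dependencyConditions values are Succeeded, Failed,
# Completed, and Skipped. Activity names here are placeholders.
pipeline_fragment = {
    "activities": [
        {"name": "Load Staging"},
        {
            "name": "Transform",
            # Runs only if Load Staging reports a Succeeded state
            "dependsOn": [
                {"activity": "Load Staging", "dependencyConditions": ["Succeeded"]}
            ],
        },
        {
            "name": "Notify On Failure",
            # Runs only if Load Staging reports a Failed state
            "dependsOn": [
                {"activity": "Load Staging", "dependencyConditions": ["Failed"]}
            ],
        },
    ]
}
```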

Reducing Query Timeout on DAX and MDX Queries

Chris Webb shuts it down:

The recent announcement of Surge Protection gives Fabric/Power BI capacity admins a way to restrict the impact of background operations on a capacity, preventing them from causing throttling. However, at the time of writing, Surge Protection does not prevent users that are running expensive DAX or MDX queries – which are interactive operations – from causing problems on your capacity. Indeed, right now, there is no direct way to stop runaway queries from consuming a lot of CUs, although there is something you can do which will help a lot: reducing the query timeout.

Read on for information about why Surge Protection doesn’t currently work with DAX and MDX queries, and how you can change the query timeout. This is kind of interesting considering that, outside of the Microsoft Fabric world, we typically move the query timeout higher rather than lower, to deal with long-running queries.

Billing for SQL Database in Microsoft Fabric

Amar Digamber Patil makes an announcement:

Since SQL database is a native item in Fabric, it utilizes Fabric capacity units like other Fabric workloads. Compute charges apply only when the database is actively used, so you only consume what you need. Storage is billed separately on a monthly basis, as are automatic backups, which are retained for seven days.

Billing for compute usage and data storage for SQL databases in Fabric will commence after February 1st.

Click through for more information, including links to additional detail on billing and monitoring.

Automating V-Order Optimization in Microsoft Fabric

Miles Cole writes a script:

I’ve previously blogged in detail about V-Order optimization. In this post, I want to revisit the topic and demonstrate how V-Order can be strategically enabled in a programmatic fashion.

Since V-Order provides the most benefit and consistent improvement for Direct Lake Semantic Models, why not leverage platform metadata to enable it automatically—but only for Delta tables used by these models?

This will be a short blog—let’s get straight to the concept, the source code, and then move on to more strategic use of this feature.

Click through for the process and an explanation of what’s happening in the accompanying Gist.
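
As a rough sketch of the "enable it programmatically" step, assuming a Fabric Spark notebook and leaving the Direct Lake discovery as a placeholder (Miles's Gist does the real work of reading platform metadata):

```python
# Minimal sketch (not Miles's script): set the V-Order table property on a
# list of Delta tables. In practice, the list would come from platform
# metadata about which tables back your Direct Lake semantic models.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder list of tables in the default lakehouse
direct_lake_tables = ["fact_sales", "dim_date"]

for table in direct_lake_tables:
    # Property name per the Fabric V-Order documentation, as I understand it
    spark.sql(
        f"ALTER TABLE {table} "
        f"SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')"
    )
    print(f"V-Order property set on {table}")
```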

Running a Microsoft Fabric Notebook from ADO via Service Principal

Kevin Chant needs a service principal to help:

In this post I want to share one way that you can authenticate as a service principal to run a Microsoft Fabric notebook from Azure DevOps.

Some of you may recall that I previously covered how to run a Microsoft Fabric notebook from Azure DevOps.

I decided to publish a newer version of the aforementioned post to amplify the fact that the REST API that runs a notebook on demand now supports service principals.

Service principals are the way to go for this, so long as you’re having one Azure-based service communicate with another Azure-based service. No passwords, no API keys, nothing you need to remember or change every 90 days.

The problem is, this works beautifully for assets inside of Azure, but not so much outside of Azure. But that’s a story for a different day.
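
For a sense of what the on-demand run looks like from an Azure DevOps step, here is a hedged sketch that authenticates as a service principal and calls the Fabric job scheduler API as I understand it from the public documentation; every ID and secret below is a placeholder.

```python
# Hedged sketch: trigger a Fabric notebook run as a service principal. In an
# Azure DevOps pipeline, pull the secret from a variable group or Key Vault
# rather than hard-coding it as shown here.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<service-principal-app-id>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

workspace_id = "<workspace-guid>"
notebook_id = "<notebook-item-guid>"
url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{notebook_id}/jobs/instances?jobType=RunNotebook"
)

response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, json={})
response.raise_for_status()

# A 202 response means the run was accepted; the Location header points at
# the job instance, which you can poll for its eventual status.
print(response.status_code, response.headers.get("Location"))
```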
