Press "Enter" to skip to content

Category: Microsoft Fabric

Missing Data in Microsoft Fabric Real-Time Intelligence Workloads

Greg Low covers a common scenario:

But another area that I see very few handling well is the data that is missing, rather than just the data that is present. There’s a huge difference between data that arrived and is odd, and data that just didn’t arrive at all.

One tool that’s great at working with streams of data is the Real Time Intelligence workload for Microsoft Fabric. And it’s also great at working with data that is missing from those streams.

Greg covers some of the scenarios around missing data, though not a lot on the process to fix them.
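
Greg’s examples live in KQL and Real-Time Intelligence; purely to illustrate the underlying idea (this is my sketch, not his code), here is a minimal pandas example that flags gaps in an event stream. The device data and the five-minute reporting expectation are both assumptions for the example.

```python
import pandas as pd

# Hypothetical event stream: each device is expected to report every 5 minutes.
events = pd.DataFrame({
    "device_id": ["d1", "d1", "d1", "d2", "d2"],
    "event_time": pd.to_datetime([
        "2025-01-01 00:00", "2025-01-01 00:05", "2025-01-01 00:20",
        "2025-01-01 00:00", "2025-01-01 00:05",
    ]),
})

expected_interval = pd.Timedelta(minutes=5)

# Sort per device and measure the gap to the previous event.
events = events.sort_values(["device_id", "event_time"])
events["gap"] = events.groupby("device_id")["event_time"].diff()

# Rows whose gap exceeds the expected interval indicate data that never arrived before them.
missing = events[events["gap"] > expected_interval]
print(missing)
```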

Comments closed

Tenant Switching in Microsoft Fabric

Koen Verbeeck has good news:

Praise whatever deity you believe in, because it’s finally here, a tenant switcher for Microsoft Fabric (which includes Power BI). A what? Let me explain. When you have an organization with multiple tenants in Azure (also called directories in some products like Azure DevOps), or you’re a consultant like me who works with multiple clients (each with their own tenant), it’s possible that you can log into multiple tenants using the same email address. This can happen if your user account was added as an external user to another tenant.

This has been a real pain, and unfortunately, that pain still exists for Power BI Desktop.

Comments closed

Cross-Workspace KQL Queries in Microsoft Fabric

Sandeep Pawar drinks your milkshake:

In Fabric, if you want to query a delta table from a lakehouse in another workspace, you create a shortcut to that table. Similarly, in Eventhouse, you can also create shortcuts to Eventhouses in other workspaces, but the option might not be immediately obvious in the GUI. If you click on New > OneLake shortcut, it creates a shortcut to a delta table, not an Eventhouse.

Click through to see how to do this via UI and programmatically.
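
Sandeep’s post covers creating the shortcut itself, both in the UI and programmatically. As a loose companion sketch (my assumption, not his code), data in an Eventhouse in another workspace can also be reached directly in a query through the cluster() and database() functions; here is a minimal Python example using the azure-kusto-data package, with placeholder URIs, database, and table names.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder query URI for the Eventhouse you are connected to.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

# Cross-Eventhouse query using the cluster() and database() functions,
# referencing a table that lives in an Eventhouse in another workspace.
query = """
cluster('https://<other-eventhouse>.kusto.fabric.microsoft.com')
    .database('OtherDatabase')
    .SomeTable
| take 10
"""

response = client.execute("MyDatabase", query)
for row in response.primary_results[0]:
    print(row)
```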

Comments closed

SCD Types in Microsoft Fabric

Kenneth Omorodion reminds us that the Kimball model is still quite valuable:

In modern data warehousing, how we handle updates to dimension tables is crucial. There are several approaches, but the decision often comes down to two primary strategies: Slowly Changing Dimensions (SCD) Type 2 and overwriting tables. Each has its own benefits, use cases, and trade-offs. This tip will explore the two methods and why SCD Type 2 is often a better option in many data warehouse scenarios.

Read on for this overview of the benefits of type-2 slowly changing dimensions, as well as a little bit of coverage of several other types of slowly changing dimensions.
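
Kenneth’s tip walks through the reasoning and the implementation details. Purely as an illustrative companion (not his code), here is roughly what the type-2 pattern looks like with PySpark and Delta Lake in a Fabric notebook; the table names, the customer_name tracked attribute, and the flag and date columns are all assumptions.

```python
from pyspark.sql import functions as F
from delta.tables import DeltaTable

# Assumes a Fabric Spark notebook session where `spark` is already available.
# Staged source rows; table and column names are placeholders for the example.
updates = spark.table("staging_customer")
dim = DeltaTable.forName(spark, "dim_customer")
current = dim.toDF().where("is_current = true")

# Rows that are new, or whose tracked attribute changed.
changed = (
    updates.alias("s")
    .join(current.alias("t"), F.col("s.customer_id") == F.col("t.customer_id"), "left")
    .where("t.customer_id IS NULL OR t.customer_name <> s.customer_name")
    .select("s.*")
)

# Step 1: expire the current version of any changed row.
dim.alias("t").merge(
    changed.alias("c"),
    "t.customer_id = c.customer_id AND t.is_current = true"
).whenMatchedUpdate(
    set={"is_current": "false", "valid_to": "current_timestamp()"}
).execute()

# Step 2: append the new versions as the current rows.
(changed
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_timestamp())
    .withColumn("valid_to", F.lit(None).cast("timestamp"))
    .write.format("delta").mode("append").saveAsTable("dim_customer"))
```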

Comments closed

Create a Case Insensitive Warehouse in Microsoft Fabric

Gilbert Quevauvilliers is speaking my language:

This is a quick blog post to show you how to use a Microsoft Fabric Notebook to quickly and easily create a Case Insensitive Warehouse.

Just a quick note: when I talk about a Case Insensitive Warehouse, what that means is that the upper casing and lower casing of column names and text are ignored. By default, Warehouses and Lakehouses are case sensitive in Microsoft Fabric.

Case sensitivity is a trap, so I applaud Gilbert’s commitment to excellence here.
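
Gilbert’s post has the full notebook; as a rough sketch of the approach I believe it relies on (the Fabric REST API’s create-warehouse call with a default collation), something like the following should be close, though the warehouse name is a placeholder and the exact payload is worth checking against his post and the API docs.

```python
import sempy.fabric as fabric

# Workspace of the current notebook session.
workspace_id = fabric.get_workspace_id()

# Request a warehouse with the case-insensitive collation as its default.
payload = {
    "displayName": "WH_CaseInsensitive",  # placeholder name
    "creationPayload": {
        "defaultCollation": "Latin1_General_100_CI_AS_KS_WS_SC_UTF8"
    },
}

client = fabric.FabricRestClient()
response = client.post(f"/v1/workspaces/{workspace_id}/warehouses", json=payload)
print(response.status_code)
```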

Comments closed

The Challenge of Importing Items into a Fabric Workspace

Marc Lelijveld performs an airing of grievances:

Obviously, you don’t want to start every solution from scratch. Therefore, it might be beneficial to kick-start your new solution by just importing components you already developed at earlier stages. Recently, I wanted to import a notebook to a Fabric workspace but was a bit confused. In this blog, I will further elaborate on the confusion and show how, in the end, I successfully imported the notebook to the workspace.

Read on for a story of pain.

Comments closed

Execute a Collection of Child Pipelines from Metadata in Data Factory

Andy Leonard continues a series on design patterns:

In this post, I clone and modify the dynamic parent pipeline from the previous post to retrieve metadata from an Azure SQL database table for several child pipelines, and then call each child pipeline from a parent pipeline.

When we’re done, this pipeline will:

  1. Read pipeline metadata from a table in an Azure SQL database
  2. Store some of the metadata (a collection of pipelineID values) in the (existing) pipelineIdArray variable
  3. Iterate the pipelineIdArray variable’s collection of pipelineID values
  4. Execute each child pipeline represented by each pipelineID value stored in the pipelineIdArray variable

Read on to learn how.
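
Andy builds this with Data Factory activities (a Lookup feeding a ForEach that calls Execute Pipeline). Purely to make the control flow concrete outside the designer (not his implementation), here is a hedged Python sketch; the connection string, metadata table, and the run_child_pipeline helper are all hypothetical stand-ins.

```python
import pyodbc

# Hypothetical connection string and metadata table, standing in for the
# Azure SQL lookup in Andy's pipeline.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net;Database=<your-db>;"
    "Authentication=ActiveDirectoryInteractive;"
)

# 1. Read pipeline metadata from the Azure SQL table.
rows = conn.execute(
    "SELECT pipelineID FROM dbo.PipelineMetadata ORDER BY pipelineID"
).fetchall()

# 2. Store the pipelineID values in a collection (the pipelineIdArray equivalent).
pipeline_id_array = [row.pipelineID for row in rows]

def run_child_pipeline(pipeline_id: str) -> None:
    """Hypothetical stand-in for the Execute Pipeline activity."""
    print(f"Executing child pipeline {pipeline_id}")

# 3 and 4. Iterate the collection and execute each child pipeline.
for pipeline_id in pipeline_id_array:
    run_child_pipeline(pipeline_id)
```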

Comments closed

Analyzing Delta Table Measures in Microsoft Fabric

Sandeep Pawar has a script for us:

I have been sitting on this code for a long time. I shared the first version in one of my blogs on Direct Lake last year. I have been making updates to it since then as needed. I waited for the lakehouse schema to become available and then forgot to blog about it. Yesterday, someone reached out asking if the above could be used for warehouse delta tables in Fabric, so here you go. It’s 250+ lines so let me just explain what’s going on here:

Read on for the explanation, the script itself, a demonstration, and several additional notes.
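
Sandeep’s script goes much further than this, but as a tiny taste of the kind of delta metadata it works with, DESCRIBE DETAIL in a Fabric Spark notebook already surfaces file counts and sizes per table; the table name below is a placeholder.

```python
# Assumes a Fabric Spark notebook session; the table name is a placeholder.
detail = spark.sql("DESCRIBE DETAIL lakehouse_demo.dim_customer")

# A few of the columns Delta Lake reports for the table.
(detail
 .select("name", "numFiles", "sizeInBytes", "lastModified")
 .show(truncate=False))
```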

Comments closed

Reviewing Kusto Query History in Microsoft Fabric

Dennes Torres looks over prior commands:

We can consume a Kusto database in Fabric from many different places: notebooks, semantic models, real-time dashboards, and more. Kusto registers all queries sent by the consumers in the query history.

Sometimes, either for logging purposes or to analyze and fix a bug, we need to identify the queries the database is receiving and executing.

Read on to see what you can do with query history in Kusto.
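
Dennes works through the Fabric UI; as an alternative path (my assumption, not from his post), the same history is reachable from Python with the azure-kusto-data package, since .show queries is a management command and goes through execute_mgmt. The URI and database name are placeholders.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholder Eventhouse query URI and database name.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

# .show queries is a management command, so it uses execute_mgmt rather than execute.
history = client.execute_mgmt("MyDatabase", ".show queries | top 20 by StartedOn desc")

for row in history.primary_results[0]:
    print(row["StartedOn"], row["User"], row["Text"])
```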

Comments closed