
Category: Microsoft Fabric

Loading Scanner API JSON Data from Fabric into Power BI Desktop

Gilbert Quevauvilliers hunts for some output files:

How to get data from a Fabric Lakehouse File into Power BI Desktop – Using Scanner API JSON

In this blog post I am going to show you how I connected to my Scanner API JSON file which is stored in the files section of my Microsoft Fabric Lakehouse.

Full credit on how to complete this comes from Marc’s blog post https://data-marc.com/2023/08/25/access-onelake-files-from-power-bi-desktop/

Click through for the instructions.
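
Marc’s approach uses Power BI Desktop’s ADLS Gen2 connector, which works because OneLake speaks the ADLS Gen2 API. That also means the same file is reachable from a script. Here is a minimal Python sketch of that idea (the workspace, lakehouse, and file names are placeholders):

```python
# Reading a Scanner API JSON file out of OneLake via its ADLS Gen2-compatible
# endpoint. Requires: pip install azure-identity azure-storage-file-datalake
import json

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the workspace plays the role of the file system (container).
fs = service.get_file_system_client("MyWorkspace")
file = fs.get_file_client("MyLakehouse.Lakehouse/Files/scanner_api_output.json")

scan = json.loads(file.download_file().readall())
print(list(scan.keys()))  # scanner output typically includes 'workspaces'
```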


mssparkutils now notebookutils and Validating DAGs in Fabric

Sandeep Pawar gives us two quick hits:

First, if you haven’t noticed, mssparkutils has been officially renamed to notebookutils. Check out the official documentation for details. Be sure to update your notebooks to use notebookutils.

Read on for a pair of notes around this name change, as well as some capabilities to validate DAGs when using runMultiple to orchestrate multiple notebook executions.
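
For context, runMultiple takes the DAG as a plain Python dictionary. A minimal sketch following the documented structure (notebook names, parameters, and settings here are placeholders):

```python
# Orchestrating two notebooks with notebookutils, where NotebookB depends on
# NotebookA. notebookutils is built into Fabric notebooks -- no import needed.
DAG = {
    "activities": [
        {
            "name": "NotebookA",
            "path": "NotebookA",
            "timeoutPerCellInSeconds": 90,
            "args": {"param1": "value1"},
        },
        {
            "name": "NotebookB",
            "path": "NotebookB",
            "dependencies": ["NotebookA"],  # runs only after NotebookA succeeds
        },
    ],
    "timeoutInSeconds": 43200,
    "concurrency": 2,
}

# Per Sandeep's post, the DAG structure can be validated before running it.
notebookutils.notebook.validateDAG(DAG)
notebookutils.notebook.runMultiple(DAG)
```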


Time Series Anomaly Detection in Microsoft Fabric

Adi Eldar talks anomaly detection:

Anomaly Detector, one of the Azure AI services, enables you to monitor and detect anomalies in your time series data. This service is based on advanced algorithms: SR-CNN for univariate analysis and MTAD-GAT for multivariate analysis. The service is being retired by October 2026, and as part of the migration process:

  • The algorithms were open-sourced and published in the new time-series-anomaly-detector package on PyPI.
  • We offer a time series anomaly detection workflow in the Microsoft Fabric data platform.

Read on to see what replacements exist and how you can use the time-series-anomaly-detector package in Microsoft Fabric.
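
If you just want a feel for what univariate anomaly flagging does while you evaluate the package, here is a deliberately simple stand-in using a rolling robust z-score in pandas. To be clear, this is not the SR-CNN algorithm the package implements, just the general shape of the workflow:

```python
# Flag points that deviate sharply from a rolling median -- a crude stand-in
# for proper time series anomaly detection, for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
ts = pd.Series(rng.normal(100, 5, 500))
ts.iloc[250] = 160  # inject an obvious anomaly

window = 48
center = ts.rolling(window, min_periods=window).median()
mad = (ts - center).abs().rolling(window, min_periods=window).median()
score = (ts - center).abs() / (1.4826 * mad)  # robust z-score

print(ts[score > 5])  # should surface the injected spike at index 250
```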


Building a Graph Visual with Fabric and KQL

Brian Sherwin builds a graph:

For quite a while, I have been extremely interested in data visualization. Over the last few years, I have been focused on ways to visualize graph databases (regardless of where the data comes from). Using force-directed graphs to highlight the similarities or “connected communities” in data is incredibly powerful. The purpose of this post is to highlight the recent work that the Kusto.Explorer team has done to visualize graphs in an Azure Data Explorer database with data coming from a Fabric KQL Database.

Note: The Kusto.Explorer application used to visualize the graph is currently only supported on Windows.

I’ll just sit here in the corner with my Linux laptop, not jealous or anything.
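
Kusto.Explorer’s rendering may be Windows-only, but the graph queries themselves run anywhere. Here is a sketch of querying a Fabric KQL database from Python (the cluster URI, database, table, and column names are placeholders):

```python
# Running a KQL graph query against a Fabric KQL database via the Kusto SDK.
# Requires: pip install azure-kusto-data  (and az login for CLI auth)
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder
from azure.kusto.data.helpers import dataframe_from_result_table

# Copy the query URI from the KQL database's details page in Fabric.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<your-eventhouse>.kusto.fabric.microsoft.com"
)
client = KustoClient(kcsb)

# In Kusto.Explorer, a query ending in make-graph renders as a force-directed
# visual; here we add graph-match so the result comes back as plain rows.
query = """
Edges
| make-graph source --> target with Nodes on nodeId
| graph-match (a)-[e]->(b)
  project source = a.nodeId, target = b.nodeId
"""
result = client.execute("MyGraphDb", query)
print(dataframe_from_result_table(result.primary_results[0]).head())
```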


Downloading Power Automate Scanner API Data into a Notebook

Gilbert Quevauvilliers creates a notebook:

I was recently working with a customer where they had more than 100 app workspaces, and I was running into some challenges when using the Scanner API in Power Automate.

I then discovered this blog post where they detailed how to download the Scanner API data (DataXbi – admin-scan.py). It was not quite in the format that I needed, so below is my modified code.

The reason that I am downloading the Scanner API into a JSON file is that I find it easier to extract the data that I need using Power BI Desktop.

Click through for the code and how it all works.
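
For reference, the flow a notebook like Gilbert’s automates is the standard scanner pattern: trigger a scan (at most 100 workspaces per call, which is exactly where large tenants get awkward), poll until it finishes, then save the JSON. A rough sketch, with token acquisition and error handling omitted:

```python
# Scanner (admin scan) API: getInfo -> scanStatus -> scanResult, saved to the
# lakehouse Files area. The token and workspace IDs below are placeholders.
import json
import time

import requests

BASE = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"
access_token = "<admin-api-access-token>"  # acquisition not shown
workspace_ids = ["<workspace-guid-1>", "<workspace-guid-2>"]  # max 100 per scan
headers = {"Authorization": f"Bearer {access_token}"}

# 1. Kick off a scan for a batch of workspaces.
scan = requests.post(
    f"{BASE}/getInfo?lineage=True&datasourceDetails=True",
    headers=headers,
    json={"workspaces": workspace_ids},
).json()

# 2. Poll until the scan completes.
while True:
    status = requests.get(f"{BASE}/scanStatus/{scan['id']}", headers=headers)
    if status.json()["status"] == "Succeeded":
        break
    time.sleep(5)

# 3. Fetch the result and write it to the default lakehouse's Files section.
result = requests.get(f"{BASE}/scanResult/{scan['id']}", headers=headers).json()
with open("/lakehouse/default/Files/scanner_api_output.json", "w") as f:
    json.dump(result, f)
```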


Troubleshooting Non-Editable Power Query Parameters in Microsoft Fabric

Soheil Bakhshi digs into a problem:

Power Query is a powerful tool within the Microsoft Fabric environment, enabling users to manage data sources and transform data efficiently. However, a common issue you may face is that after publishing the Semantic Model, the Power Query parameters either do not appear or are greyed out, making them non-editable. In this post and its accompanying YouTube video, I’ll walk you through the steps to diagnose and fix these problems, ensuring that your parameters work as expected in your published semantic models.

Click through for the video and a pair of common reasons.


Finding Columns in Memory in Power BI Direct Lake Mode

Chris Webb goes searching:

As you probably know, in Power BI Direct Lake mode column data is only loaded into memory when it is needed by a query. I gave a few examples of this – and how to monitor it using DMVs – in this blog post from last year. But which columns are loaded into memory in which circumstances? I was thinking about this recently and realised I didn’t know for sure, so I decided to do some tests. Some of the results were obvious, some were a surprise.

Read on for the answer.
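
If you want to poke at this yourself from a Fabric notebook, semantic-link (sempy) can run the relevant INFO function against a model. A small sketch, assuming a Direct Lake model name; exact output columns can vary by engine version, so check the returned columns for the residency fields:

```python
# Ask the engine about column segment residency in a Direct Lake model.
# Uses the semantic-link (sempy) library available in Fabric notebooks.
import sempy.fabric as fabric

df = fabric.evaluate_dax(
    dataset="MyDirectLakeModel",  # placeholder model name
    dax_string="EVALUATE INFO.STORAGETABLECOLUMNSEGMENTS()",
)

# Residency and "temperature" columns show what is paged into memory and how
# recently/heavily it has been touched.
print(df.columns)
print(df.head(20))
```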


Microsoft Fabric Capacity Limits

Teo Lachev builds a pair of tables:

Here is a table that is getting more and more difficult to find, as searching for Fabric capacity limits returns results about CU compute units (for the most part meaningless, in my opinion). I embed it in a searchable format below before it vanishes from the Internet. The most important column for semantic modeling is the max memory, which denotes the upper limit of memory Fabric will grant a semantic model.

Click through for that table, followed by a table comparing Fabric SKUs to Power BI SKUs.


A Reference Architecture for Microsoft Fabric

James Serra draws boxes:

Microsoft Fabric uses a data lakehouse architecture, which means it does not use a relational data warehouse (with its relational engine and relational storage) and instead uses only a data lake to store data. Data is stored in Delta Lake format so that the data lake acquires relational data warehouse-like features (check out my book, which goes into much detail on this, or my video). Here is what a typical architecture looks like when using Fabric (click here for the .vsd):

Click through for the image as well as James’s explanation of the components.


Building a Microsoft Fabric AI Skill to Generate Data

Sandeep Pawar tries out AI Skills in Microsoft Fabric:

In the last blog I wrote, I showed how to call the AI Skills endpoint in a Fabric notebook. Being able to call the endpoint programmatically creates many opportunities to integrate AI Skills into different applications. One that I thought of was using AI Skills as a function. Function Calling (or Tools) is a specific use case in generative AI for creating structured output based on a function or behavior instructed by the user. I am not referring to that, as AI Skills can only return a table. Instead, what if you created a number of these AI Skills, grounded in your data, with specific functions built to get the intended output? You could serve/share these with end users, who could call these functions to generate data/results. Think of these as macros in Excel.

Click through for an example of how it works.
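
As a thought experiment on the “AI Skill as a function” idea, the wrapper might look something like the sketch below. The endpoint URL and request payload shape are assumptions for illustration, not the documented contract; see Sandeep’s earlier post for the real call:

```python
# Hypothetical wrapper that turns a published AI Skill into a callable function.
# Runs in a Fabric notebook, where notebookutils is built in (no import needed).
import requests

def generate_data(question: str, endpoint: str) -> dict:
    """Send a natural-language question to an AI Skill and return its result."""
    token = notebookutils.credentials.getToken("pbi")  # Power BI/Fabric audience
    resp = requests.post(
        endpoint,
        headers={"Authorization": f"Bearer {token}"},
        json={"userQuestion": question},  # hypothetical payload shape
    )
    resp.raise_for_status()
    return resp.json()

# Each AI Skill becomes one "function" end users can call -- macros in Excel,
# as Sandeep puts it.
sales = generate_data("Total sales by region for 2024", "<ai-skill-endpoint-url>")
```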
