Category: Notebooks

Tracking Microsoft Fabric Notebook Progress

Gilbert Quevauvilliers asks, "Are we there yet? Are we there yet?"

How to view or track the progress of a Notebook while it is running in Microsoft Fabric

I was recently working with a Notebook in Microsoft Fabric that was started via a Data Pipeline.

The challenge I had was that I had no idea how far along the notebook was (as there were quite a lot of cells in this particular notebook).

In this blog post I am going to show you how I can use Microsoft Fabric to identify exactly which cell my notebook is currently on.

Click through for the answer. And so help me, if you ask that question one more time, I’m turning this thing around and we’re going back home.

Updating the Default Lakehouse of a Notebook

Sandeep Pawar makes a change:

I have written about the default lakehouse of a Fabric notebook before here and here. However, unless you used the notebook API, there was no easy/quick way of removing all/selective lakehouses or updating the default lakehouse of a notebook. But thanks to a tip from Yi Lin from the Notebooks product team, notebookutils.notebook.updateDefinition has two extra parameters, defaultLakehouse and defaultLakehouseWorkspace, which can be used to update the default lakehouse of a notebook. You can also use it to update the environment attached to a notebook. Below are some scenarios showing how it can be used.
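Here is a minimal sketch of what that call might look like (the notebook, lakehouse, and workspace names below are placeholders, and anything beyond the two parameters Sandeep names is an assumption on my part):

# Hedged sketch: repoint an existing notebook at a different default lakehouse.
# notebookutils is available by default in Fabric notebooks.
# "Sales Notebook", "Sales_LH", and "Sales_WS" are placeholder names.
notebookutils.notebook.updateDefinition(
    name="Sales Notebook",                 # notebook to update
    defaultLakehouse="Sales_LH",           # new default lakehouse
    defaultLakehouseWorkspace="Sales_WS"   # workspace containing that lakehouse
)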

Click through for those scenarios.

Querying a Fabric SQL Endpoint via Notebook and T-SQL

Sandeep Pawar talks about a Spark connector:

I am not sharing anything new. The Spark data warehouse connector has been available for a couple of months now. It had some bugs, but it seems to be stable now. This connector allows you to query the lakehouse or warehouse endpoint in a Fabric notebook using Spark. You can read the documentation for details, but below is a quick pattern that you may find handy.
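For reference, the basic read looks something like this (the warehouse and table names are placeholders; Sandeep's post has the full pattern):

# Hedged sketch: read a warehouse (or lakehouse SQL endpoint) table via the
# Fabric Spark connector; spark is the session a Fabric notebook provides.
df = spark.read.synapsesql("Sales_WH.dbo.FactSales")  # placeholder name
df.show(10)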

Despite it not being anything new, it is still interesting to see the use case of writing T-SQL instead of Spark SQL.

mssparkutils now notebookutils and Validating DAGs in Fabric

Sandeep Pawar gives us two quick hits:

First, if you haven't noticed, mssparkutils has been officially renamed to notebookutils. Check out the official documentation for details. Be sure to update your notebooks to use notebookutils.

Read on for a pair of notes around this name change, as well as some capabilities to validate DAGs when using runMultiple to orchestrate multiple notebook executions.
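As a hedged sketch of the orchestration side (the DAG shape follows the documented runMultiple format, the activity names and notebook paths are placeholders, and I'm assuming validateDAG is available on your runtime):

# Hedged sketch: define a DAG of notebook runs, validate it, then execute it.
# "NB_Prep" and "NB_Load" are placeholder notebook names.
dag = {
    "activities": [
        {"name": "prep", "path": "NB_Prep", "timeoutPerCellInSeconds": 600},
        {"name": "load", "path": "NB_Load", "dependencies": ["prep"]},
    ]
}
notebookutils.notebook.validateDAG(dag)  # check the structure before running
notebookutils.notebook.runMultiple(dag)  # orchestrate the notebook runs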

Databricks Notebook Package Installation and Variables

Chen Hirsh diagnoses a problem:

A friend called to ask for my help with a weird issue. In a Databricks notebook using Python, he declares and assigns a variable in the first cell. Something like this:

my_var = 1

He then runs the rest of the notebook and, somewhere along the way, tries to use this variable, only to get this message:

NameError: name 'my_var' is not defined

Going back to cell 1 and checking the value of my_var, he gets the same error.
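As the entry title hints, notebook-scoped package installation is a classic way to lose session state. A hedged reproduction sketch, assuming Databricks' documented behavior of resetting the Python interpreter after %pip commands:

# Cell 1: define a variable
my_var = 1

# Cell 2: installing a notebook-scoped package can reset the interpreter,
# clearing previously defined variables
# %pip install pandas

# Cell 3: this now raises NameError: name 'my_var' is not defined
print(my_var)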

Read on for the root cause of the issue, as well as a pair of helpful tips from Chen.

Downloading Power Automate Scanner API Data into a Notebook

Gilbert Quevauvilliers creates a notebook:

I was recently working with a customer who had more than 100 app workspaces, and I was running into some challenges when using the Scanner API in Power Automate.

I then discovered this blog post, where they detailed how to download the Scanner API data (DataXbi – admin-scan.py). It was not quite in the format that I needed, so below is my modified code.

The reason I am downloading the Scanner API data into a JSON file is that I find it easier to extract the data I need using Power BI Desktop.
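A stripped-down sketch of the idea (the token, scan id, and output path are placeholders, and this assumes a scan has already been triggered and completed via the Scanner API's other endpoints):

import json
import requests

# Hedged sketch: fetch a completed scan result from the Power BI admin
# Scanner API and persist it as JSON for Power BI Desktop to consume.
token = "<access_token>"  # acquired through your own auth flow
scan_id = "<scan_id>"
url = f"https://api.powerbi.com/v1.0/myorg/admin/workspaces/scanResult/{scan_id}"

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

with open("scan_result.json", "w") as f:
    json.dump(resp.json(), f)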

Click through for the code and how it all works.

Using AI Skills as Cell Magics in Microsoft Fabric Notebooks

Sandeep Pawar takes a look at a new preview capability:

The public preview of AI Skills in Microsoft Fabric was announced yesterday. AI Skills allows Fabric developers to create their own GenAI experience using data in the lakehouse. Unlike Copilot, which is an AI assistant, AI Skills lets users build a validated Q&A application that queries lakehouse data by converting natural language questions into T-SQL queries. It’s only available in paid F64+ SKUs. You can watch the video in Sandeep's post covering Copilot, AI Skills, and Gen AI experiences in Fabric.
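Sandeep wires this up as a custom cell magic. As a generic illustration of that mechanism only (the AI Skill endpoint and request shape below are entirely hypothetical), registering a cell magic in an IPython-based notebook looks like this:

import requests
from IPython.core.magic import register_cell_magic

@register_cell_magic
def aiskill(line, cell):
    """Send the cell body to an AI Skill endpoint as a natural-language
    question. The endpoint (passed on the magic line) and the payload
    shape are hypothetical placeholders."""
    resp = requests.post(line.strip(), json={"question": cell.strip()})
    resp.raise_for_status()
    return resp.json()

# Usage in a later cell:
# %%aiskill https://<your-ai-skill-endpoint>
# Which product had the highest sales last quarter?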

Read on for more details on how it works.

Defining the Default Lakehouse for a Fabric Notebook

Sandeep Pawar sets up a default lakehouse:

I wrote a blog post a while ago on mounting a lakehouse (or, generally speaking, a storage location) to all nodes in a Fabric Spark notebook. This allows you to use the File API file path from the mounted lakehouse.

Mounting a lakehouse using mssparkutils.fs.mount() doesn’t define the default lakehouse of a notebook. To do so, you can use the configure magic as below:

Read on for that command, as well as some notes around using it.
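As a rough preview of the shape of that cell (a sketch based on the documented %%configure schema; the lakehouse name and ids are placeholders):

%%configure
{
    "defaultLakehouse": {
        "name": "Sales_LH",
        "id": "<lakehouse-id>",
        "workspaceId": "<workspace-id>"
    }
}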

Getting the Top N Results in a PySpark Notebook

Gilbert Quevauvilliers only needs the top 1:

How to get the TopN rows using Python in Fabric Notebooks

When working with data, there are sometimes weird and wonderful requirements which must be met in order to get to the desired solution.

In today’s blog post I had a situation where I wanted to get a single row with the highest duration.

Gilbert uses the Spark SQL version, specifically the Python function variant. You could also use Spark SQL and write a query using the LIMIT operator.
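For illustration, a sketch of both approaches, assuming a DataFrame df with a duration column:

from pyspark.sql import functions as F

# DataFrame API variant: sort descending and keep one row
top_row = df.orderBy(F.col("duration").desc()).limit(1)

# Spark SQL variant using the LIMIT operator
df.createOrReplaceTempView("runs")  # "runs" is a placeholder view name
top_row_sql = spark.sql("SELECT * FROM runs ORDER BY duration DESC LIMIT 1")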
