Press "Enter" to skip to content

Category: Microsoft Fabric

Renaming a Column in Microsoft Fabric via Python Notebook

Gilbert Quevauvilliers performs a rename:

I thought it would be good to help others by sharing my learning journey when working with Python notebooks and Microsoft Fabric.

In today’s blog post, I am going to show you how to rename a column. In my experience this came up because I had a column name which had a forward slash “/” in it, which caused the loading of the data for the table to fail because this is a reserved character.

Read on for the code and an example of how it works in action.
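
Gilbert’s full code is in the post; as a rough sketch of the idea (table and column names below are made up for illustration), a rename in a Fabric notebook with PySpark could look something like this:

    # Minimal sketch, not Gilbert's exact code: rename a column whose name
    # contains a reserved character before writing the table to the lakehouse.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical source table with a problematic column name
    df = spark.read.table("raw_sales")

    # Replace "Price/Unit" (contains "/") with a safe name
    df_renamed = df.withColumnRenamed("Price/Unit", "Price_Per_Unit")

    # Write back as a managed Delta table in the lakehouse
    df_renamed.write.mode("overwrite").format("delta").saveAsTable("sales_clean")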

Environmental Deployment in Microsoft Fabric

Kevin Chant takes us through deployment pipelines in Microsoft Fabric:

One question that I frequently get asked is how many workspaces are required. In reality, the answer is that it depends.

However, if you want your solution to be flexible and loosely coupled, I do recommend at the very least one Microsoft Fabric workspace per environment.

That’s also required if you’re using deployment pipelines, as each stage in the pipeline pushes to a unique workspace.
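
As an illustration of the one-workspace-per-environment idea (mine, not from Kevin’s post), the workspaces themselves can be scripted. A minimal sketch using the Power BI REST API might look like the following, assuming you already have an Azure AD access token with the appropriate Power BI scopes and that the workspace names are placeholders:

    # Sketch only: create one workspace per environment for a solution.
    # Assumes ACCESS_TOKEN is an Azure AD token with Power BI API permissions.
    import requests

    ACCESS_TOKEN = "<token acquired via MSAL or similar>"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    for env in ["Dev", "Test", "Prod"]:
        resp = requests.post(
            "https://api.powerbi.com/v1.0/myorg/groups",
            headers=headers,
            json={"name": f"Sales Solution - {env}"},
        )
        resp.raise_for_status()
        print(env, resp.json().get("id"))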

Notes on Data Engineering in Microsoft Fabric

John Miner shares some notes. Part 1 looks at getting started and tables, both managed and unmanaged:

The architectural diagram shows how information flows from a source system into a delta lakehouse, is transformed by programs, and is used by end users. To get source data into the lake, we can use any of three methods to retrieve the data as files:

- pipelines – traditional Azure Data Factory components
- dataflows – wrangling data flows based on Power Query
- shortcuts – the ability to link external storage to the lake

Once the data is in the lake, there are two types of programs that can transform the data files: Spark notebooks and dataflows.
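
As a quick illustration of the managed-versus-unmanaged distinction John covers (this example is mine, not from the article, and the paths and table names are assumptions), in a Fabric Spark notebook a managed table is created from a name alone, while an unmanaged (external) table is registered over a file path you control:

    # Sketch: managed vs unmanaged Delta tables in a Fabric Spark notebook.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.read.format("csv").option("header", "true").load("Files/raw/customers.csv")

    # Managed table: Spark controls the storage location under the lakehouse Tables area.
    df.write.format("delta").mode("overwrite").saveAsTable("customers_managed")

    # Unmanaged (external) table: data stays at the path you choose; only metadata is registered.
    df.write.format("delta").mode("overwrite").save("Files/external/customers")
    spark.sql(
        "CREATE TABLE IF NOT EXISTS customers_unmanaged "
        "USING DELTA LOCATION 'Files/external/customers'"
    )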

Part 2 covers file and folder management:

In practice, I have seen an additional quality zone called raw being used to stage files in their native format before converting them to the delta file format. Please note, the lakehouse uses either shortcuts or pipelines to get files into the lake. We will talk more about bronze, silver and gold zones when I cover full and incremental loading later in this article.

Read on for John’s thoughts.
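
To make the zone idea concrete (my illustration, not John’s code; folder and table names are assumptions), a notebook cell might read a raw file in its native format and persist it as Delta in a bronze table:

    # Sketch: land a raw file in the lake and convert it to a bronze Delta table.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Raw zone: files arrive in their native format (CSV in this example).
    raw_df = (
        spark.read.format("csv")
        .option("header", "true")
        .load("Files/raw/orders/2024-01-15.csv")
    )

    # Bronze zone: persist as Delta with minimal changes, ready for silver/gold processing.
    raw_df.write.format("delta").mode("append").saveAsTable("bronze_orders")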

Viewing DAX in Microsoft Fabric with SemPy

Kevin Chant talks about a recent issue:

Recently I have been helping others get up to speed with Microsoft Fabric, which includes going through some Power BI topics.

One issue that came up was how to show them the DAX used for a measure within a Power BI report that had been published to Microsoft Fabric, in order to link working with measures in Power BI Desktop with working in Microsoft Fabric.

Kevin shows the normal way of doing this, as well as an alternative using the SemPy library.
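
Kevin’s walkthrough has the details; as a rough sketch of the SemPy route (the semantic model name below is a placeholder), the semantic-link library can list measures and their DAX expressions straight from a Fabric notebook:

    # Sketch: list measure definitions for a published semantic model with SemPy.
    # Requires the semantic-link (SemPy) package available in Fabric notebooks.
    import sempy.fabric as fabric

    # "Sales Model" is a placeholder; use the name of your semantic model.
    measures = fabric.list_measures("Sales Model")

    # The returned DataFrame includes measure names and their DAX expressions
    # (the DAX appears in the Measure Expression column).
    print(measures.head())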

Accessing the Purview Portal in Your Fabric Environment

Kevin Chant enables a feature:

In this post I want to cover accessing the new Microsoft Purview portal in your own Microsoft Fabric environment.

To clarify, I mean a Microsoft Fabric environment you have created for your own use, like the one I covered in a previous post.

You can do this in a trial environment thanks to the new capability provided by Microsoft last year to infuse Microsoft Fabric items into Microsoft Purview, which Microsoft covered in a blog post about Microsoft Fabric items in Microsoft Purview.

Read on to see how.

Automating Microsoft Fabric Capacity Scaling via Logic App

Soheil Bakhshi does some scaling:

In a previous post I explained how to manage the capacity costs of a Fabric F capacity (under the Pay-As-You-Go pricing model) using Logic Apps to suspend and resume it.

A customer who read my previous blog asked me “Can we use a similar method to scale up and down before and after specific workloads?”. This blog post answers exactly that.

This is pretty neat, though I wonder how long it takes and how much downtime it produces.
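
Soheil’s post builds the Logic App itself; for a sense of what the underlying call looks like, the capacity SKU can be changed through an Azure Resource Manager request against the Microsoft.Fabric capacity resource. The sketch below assumes the subscription, resource group, capacity name, and API version, all of which you would need to confirm against your own environment:

    # Sketch: scale a Fabric capacity by patching its SKU via Azure Resource Manager.
    # The resource path and api-version are assumptions; verify them for your tenant.
    import requests

    ACCESS_TOKEN = "<Azure AD token for https://management.azure.com>"
    CAPACITY_URI = (
        "https://management.azure.com/subscriptions/<sub-id>"
        "/resourceGroups/<rg-name>/providers/Microsoft.Fabric/capacities/<capacity-name>"
        "?api-version=2023-11-01"
    )

    def scale_capacity(sku_name: str) -> None:
        """Change the capacity SKU, e.g. up before a heavy workload and back down afterwards."""
        resp = requests.patch(
            CAPACITY_URI,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            json={"sku": {"name": sku_name, "tier": "Fabric"}},
        )
        resp.raise_for_status()

    scale_capacity("F8")   # scale up before the workload
    # ... run the workload ...
    scale_capacity("F2")   # scale back down afterwards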

Generating Synthetic Data for Streaming in Microsoft Fabric

Sandeep Pawar builds out some data:

If you want to learn or demo Real-Time Analytics in Microsoft Fabric, you will need a streaming data source. You can use the built-in samples to get started, but there are several data generators which you can use to create custom streaming sample datasets, the Azure Stream Analytics data generator being one of them. You can see them here. In this blog, I will show how to set one up to use with Fabric Eventstream.

Read on for a step-by-step guide.
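
Sandeep’s guide uses the Azure Stream Analytics data generator; as an alternative sketch (not from the post), a few lines of Python can push synthetic events to an Eventstream custom app source, since that source exposes an Event Hubs-compatible connection string. The connection string, entity name, and event shape below are placeholders:

    # Sketch: send synthetic JSON events to a Fabric Eventstream custom app source.
    import json
    import random
    import time

    from azure.eventhub import EventData, EventHubProducerClient

    CONNECTION_STR = "<connection string from the Eventstream custom app source>"
    EVENTHUB_NAME = "<entity name from the same screen>"

    producer = EventHubProducerClient.from_connection_string(
        CONNECTION_STR, eventhub_name=EVENTHUB_NAME
    )

    with producer:
        for _ in range(100):
            event = {
                "deviceId": random.randint(1, 10),
                "temperature": round(random.uniform(18.0, 30.0), 2),
                "timestamp": time.time(),
            }
            batch = producer.create_batch()
            batch.add(EventData(json.dumps(event)))
            producer.send_batch(batch)
            time.sleep(1)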

VARCHAR() in Microsoft Fabric Lakehouses and SQL Endpoints

Gerhard Brueckl models some data:

Defining data types and knowing the schema of your data has always been a crucial factor for performant data platforms, especially when it comes to string datatypes, which can potentially consume a lot of space and memory. For Lakehouses in general (not only Fabric Lakehouses), there is usually only one data type for text data, which is a generic STRING of arbitrary length. In terms of Apache Spark, this is StringType(). While this applies to Spark dataframes, it is not entirely true for Spark tables – here is what the docs say:

Read through for more information on that, as well as how to define a table in a Microsoft Fabric lakehouse using VARCHAR(). The display is a little weird, but Greg Low explains why in the comments.
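
Gerhard’s post has the full details; as a minimal sketch of the idea (the table and column names below are made up), Spark SQL accepts VARCHAR(n) in table definitions even though dataframes only expose a generic string type:

    # Sketch: define a lakehouse table with a VARCHAR column via Spark SQL.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    spark.sql(
        """
        CREATE TABLE IF NOT EXISTS dim_product (
            ProductKey INT,
            ProductName VARCHAR(100)
        )
        USING DELTA
        """
    )

    # Reading the table back shows the column as a string in the dataframe schema,
    # which is the display quirk mentioned above.
    spark.table("dim_product").printSchema()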
