Press "Enter" to skip to content

Category: Microsoft Fabric

Creating a Microsoft Fabric Warehouse with Service Principal

Gilbert Quevauvilliers sets up a new warehouse:

In this blog post I am going to show you how to create a Microsoft Fabric Warehouse, where the owner will be the Service Principal.

As mentioned in the blog post, here are some of the advantages of having the Service Principal as the Warehouse Owner:

  • Using a Service Principal to create the warehouse avoids the issue where the person who created the warehouse leaves the organization and problems arise when the user's account is deleted from Entra ID.
  • You avoid the pain of having to log in with the user account just to keep its password current.
  • The organization owns the warehouse, not an individual user.

I will show you how I created a Warehouse with the owner being a Service Principal, using a Microsoft Fabric Notebook.

Click through for the notebook and additional commentary.
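
To give a sense of the approach: the Fabric REST API exposes a create-warehouse endpoint, and calling it with a token acquired as the service principal makes that principal the item's creator. Here is a minimal sketch, not Gilbert's notebook; the tenant, app, secret, and workspace values are placeholders, and in practice you would pull the secret from Key Vault.

```python
# Minimal sketch: authenticate as a service principal with MSAL, then call
# the Fabric REST API to create a warehouse. All IDs below are placeholders.
import msal
import requests

tenant_id = "<tenant-id>"
client_id = "<service-principal-app-id>"
client_secret = "<client-secret>"  # prefer a Key Vault lookup in practice
workspace_id = "<workspace-id>"

# Acquire an app-only token for the Fabric API
app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
token = app.acquire_token_for_client(
    scopes=["https://api.fabric.microsoft.com/.default"]
)

# Because the token belongs to the service principal, the service principal
# ends up as the creator of the new warehouse
resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/warehouses",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json={"displayName": "SP_Warehouse"},
)
resp.raise_for_status()
```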


Changing the Source Lakehouse in a Power BI Deployment Pipeline

Chris Webb makes a switch:

If you’re using deployment pipelines with Direct Lake semantic models in Power BI, you’ll have found that when you deploy your model from one stage to another, by default the model still points to the Lakehouse it was originally bound to. So, for example, if you deploy your model from your Development stage to your Test stage, the model in the Test stage still points to the Lakehouse in the Development stage. The good news is that you can use the deployment rules feature of deployment pipelines to make sure the model in the Test stage points to a Lakehouse in the Test stage, and in this post I’ll show you how.

Click through for the process.
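
Chris's walkthrough uses the deployment rules UI. If you would rather script the rebind, semantic-link-labs has a Direct Lake helper that does something similar; the function name and parameters below come from recent sempy_labs versions, so treat this as a sketch and verify against the documentation for the version you have installed.

```python
# Hypothetical rebind of a Direct Lake model to the Test stage's lakehouse;
# dataset, workspace, and lakehouse names are made up for the example.
from sempy_labs import directlake

directlake.update_direct_lake_model_lakehouse_connection(
    dataset="SalesModel",               # semantic model in the Test stage
    workspace="Sales [Test]",           # workspace the model lives in
    lakehouse="SalesLakehouse",         # lakehouse it should point to
    lakehouse_workspace="Sales [Test]", # workspace that lakehouse lives in
)
```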


Exploring SQL Databases in Microsoft Fabric

Jared Westover looks at the bright side of life:

Over the past few months, I’ve toyed with Microsoft Fabric, focusing on the Data Factory and Power BI experiences. Everything I’ve developed so far is in the proof-of-concept (POC) phase. Naturally, I’m skeptical about new game-changing features, and Fabric is no exception. Any flashy new tech brings bugs along in the early stages. We’ve all been there: working for weeks on a project, only to have random bugs throw a wrench in everything.

When Microsoft announced SQL databases in Fabric, I was intrigued. After watching the Ignite session, Power AI apps with insights from SQL database in Fabric, a few features instantly stood out, and I want to share my first impressions.

Read on to learn more.


Finding Capacity-Level Fabric Settings with Semantic Link Labs

Sandeep Pawar lists some Microsoft Fabric properties:

Just before the holidays last year, Michael Kovalsky released version 0.8.10 of Semantic Link Labs with a bunch of new helpful functions, among them list_server_properties(), which lists the properties of an Analysis Services instance. As you know, in Fabric, the workspace acts as a server which is tied to a capacity. You define these server properties in the Capacity Settings. As far as I am aware, there wasn’t an API to get these capacity settings for audit/monitoring/debugging purposes. With this new function, you can programmatically get the Semantic Model (i.e., Power BI workload) settings.

Click through for an example.
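
As a quick taste, calling the helper from a Fabric notebook looks something like the sketch below; the workspace parameter is an assumption on my part, so check the v0.8.10 documentation for the exact signature.

```python
# Sketch: list the Analysis Services server properties for the capacity
# backing a workspace; the workspace argument is assumed, not confirmed.
import sempy_labs as labs

df = labs.list_server_properties(workspace="Sales [Test]")
display(df)  # display() is available in Fabric notebooks
```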


Microsoft Fabric and Power Platform Resources

Jon Voege has a collection of links for us:

This week, to round off the year, we try something different. I wanted to throw a shout out to all the community heroes out there, who also help make the most of Microsoft Fabric, through the use of Microsoft Power Platform (and vice versa).

Also, I wanted to highlight some of their contributions, and hopefully give you all a list of resources to peruse.

Click through for more than 20 links, showing how you can work with Power Automate, Power Apps, Power Pages, and data in Dataverse from Microsoft Fabric.


Switching between Python and PySpark Notebooks in Fabric

Sandeep Pawar wants to save some money:

File this under a test I have been wanting to do for some time. If I am exploring some data in a Fabric notebook using PySpark, can I switch between the Python and PySpark engines with minimal code changes in an interactive session? The goal is to use the Python notebook for exploration, to reuse existing PySpark/SparkSQL code, or to develop the logic in a low-compute environment (to save CUs) and then scale it in a distributed Spark environment. Understandably, there will be limitations with this approach given the differences in environments, configs, etc., but can it be done?

Read on for the answer, as well as plenty of notes around it.
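
To illustrate the pattern being tested (this is not Sandeep's code), the same Delta table read can be wrapped so the engine underneath is swappable; the table path and helper function are invented for the example.

```python
# Engine-agnostic read of a Delta table: single-node via the deltalake
# package in a Python notebook, distributed via Spark in a PySpark notebook.
table_path = "abfss://ws@onelake.dfs.fabric.microsoft.com/lh.Lakehouse/Tables/sales"

def read_sales(engine: str):
    if engine == "python":
        # Low-CU path: read the table on a single node
        from deltalake import DeltaTable
        return DeltaTable(table_path).to_pandas()
    elif engine == "pyspark":
        # Distributed path: `spark` is the SparkSession Fabric predefines
        # in Spark notebooks
        return spark.read.format("delta").load(table_path)
    raise ValueError(f"unknown engine: {engine}")

df = read_sales("python")  # develop cheaply, switch to "pyspark" to scale
```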


Scanning Fabric Workspaces via Semantic Link Labs

Sandeep Pawar takes us through the Scanner API:

It’s finally here! Thanks to Michael Kovalsky, one of the most requested and anticipated APIs is now available in Semantic Link Labs (v0.8.10): the Scanner API. The Scanner API in the Fabric Admin REST APIs allows Fabric administrators to retrieve detailed metadata about their organization’s Fabric items, supporting governance and compliance efforts. It provides information such as item names, descriptions, creation dates, lineage, connection strings, etc. It’s not new; we have been using it in Power BI for a long time, but in the Fabric world it’s even more important given the number of items and configurations.

Read on to see what’s available and how this works.
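
For context, the scan itself is a three-step REST flow that Semantic Link Labs wraps for you. Here is a condensed sketch against the documented Admin API endpoints; it assumes you already have an admin-scoped access_token in hand.

```python
# Scanner API flow: start a scan, poll for completion, fetch the results.
import time
import requests

headers = {"Authorization": f"Bearer {access_token}"}  # admin token (assumed)
base = "https://api.powerbi.com/v1.0/myorg/admin/workspaces"

# 1. Kick off a scan for the workspaces you care about
scan = requests.post(
    f"{base}/getInfo?lineage=true&datasourceDetails=true",
    headers=headers,
    json={"workspaces": ["<workspace-id>"]},
).json()

# 2. Poll until the scan finishes
while requests.get(
    f"{base}/scanStatus/{scan['id']}", headers=headers
).json()["status"] != "Succeeded":
    time.sleep(5)

# 3. Fetch the metadata: item names, lineage, connection details, etc.
result = requests.get(f"{base}/scanResult/{scan['id']}", headers=headers).json()
```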


Fabric Benchmarking: Moving CSV Files

Eugene Meidinger breaks out the abacus:

First, a disclaimer: I am not a data engineer, and I have never worked with Fabric in a professional capacity. With the announcement of Fabric SQL DBs, there’s been some discussion on whether they are better for Power BI import than Lakehouses. I was hoping to do some tests, but along the way I ended up on an extensive Yak Shaving expedition.

I have likely done some of these tests inefficiently. I have posted as much detail and source code as I can, and if there is a better way to do any of these, I’m happy to redo the tests and update the results.

Part one focuses on loading CSV files to the Files section of a lakehouse. Future benchmarks will look at CSV-to-Delta conversion and Power BI imports.

I think Eugene did a fine job documenting everything in the process, and it was interesting to see relative price differences between different techniques for uploading a very large CSV file.
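
For a flavor of what one of these upload paths looks like: OneLake exposes an ADLS Gen2-compatible endpoint, so the Azure Storage SDK can land a CSV in a lakehouse's Files section. This is a minimal sketch rather than Eugene's benchmark code; the workspace, lakehouse, and file names are placeholders.

```python
# Upload a local CSV to a lakehouse's Files section via the OneLake
# ADLS Gen2-compatible endpoint; names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("MyWorkspace")  # workspace = container

file_client = fs.get_file_client("MyLakehouse.Lakehouse/Files/big.csv")
with open("big.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```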
