Press "Enter" to skip to content

Category: Microsoft Fabric

It’s Always Permissions (or DNS)

Kristina Mishra takes us through troubleshooting a problem:

Ah, you’ve set up a deployment pipeline and let your people know it’s ready for them to do the thing. Everything looks fine on your end, so you shoot off a message to the group and go about your busy day. (Never mind that your Test environment was set up 4 months ago, Production 3 days ago, and Development was replaced 2 months ago with a new Development environment because your region changed.) You’ve added all the permission groups to each environment and added your “contributors” as Admin to the deployment pipeline (no comment), so everything should be grand.

Famous last words, indeed.


The FabricTools Powershell Module

Kamil Nowinski has a module for us:

In the world of Microsoft Fabric, DevOps is still maturing. Unlike Azure Data Factory (ADF), which has been around long enough to have established tooling – like the #ADFTools I developed 5 years ago – Fabric is new, broad, and complex, in a very positive way!

Microsoft Fabric integrates data engineering, warehousing, real-time analytics, and BI. With this scale, the need for solid DevOps tooling is more critical than ever.

Click through to read a little bit of the history behind the project, as well as what’s currently available. And it’s all free and open-source.


The New Item Creation Experience in Microsoft Fabric

Dan Liu has an announcement:

Have you ever found yourself frustrated by inconsistent item creation? Maybe you’ve struggled to select the right workspace or folder when creating a new item or ended up with a cluttered workspace due to accidental item creation.

We hear you—and we’re excited to introduce the new item creation experience in Fabric! This update is designed to address those pain points head-on. With the new unified creation flow, you’ll have a streamlined way to select the exact workspace and folder where your items belong. More importantly, this improvement brings consistency and cohesion to item creation across all Fabric items, so you can stay organized and productive.

Click through to see what it looks like.


Result Set Caching in Microsoft Fabric Data Warehouse

Emily Tehrani makes an announcement:

Result Set Caching is now available in preview for Microsoft Fabric Data Warehouse and Lakehouse SQL analytics endpoint. This performance optimization works transparently to cache the results of eligible T-SQL queries. When the same query is issued again, it directly retrieves the stored result, instead of recompiling and recomputing the original query. This operation drastically cuts execution time for complex queries. The cache is then automatically managed on the user’s behalf. This lightweight performance boost is most beneficial for workloads, like reports, that issue many repetitive T-SQL queries to the DW and SQL analytics endpoint.

This is something I’ve wished we had on-premises for years and years, especially for data warehouses where you know the data only changes once every x hours or days. You can, of course, do this yourself with the cache-aside pattern and some caching solution, but that implies you have a layer between your end user and the data source that you fully control.
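For illustration, here is a minimal sketch of that cache-aside pattern in Python, assuming a plain DB-API connection (pyodbc or similar) and using an in-memory dictionary as a stand-in for a real cache such as Redis; the names and the TTL are illustrative only.

```python
import hashlib
import json
import time

# In-memory stand-in for a real cache (e.g. Redis); illustrative only.
_cache: dict[str, tuple[float, list]] = {}
CACHE_TTL_SECONDS = 3600  # assume the warehouse data refreshes roughly hourly


def _cache_key(sql: str, params: tuple) -> str:
    """Key the cache on the exact query text plus its parameters."""
    payload = json.dumps({"sql": sql, "params": params}, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def query_with_cache(conn, sql: str, params: tuple = ()) -> list:
    """Cache-aside: check the cache first, fall back to the warehouse, then populate."""
    key = _cache_key(sql, params)
    hit = _cache.get(key)
    if hit is not None:
        cached_at, rows = hit
        if time.time() - cached_at < CACHE_TTL_SECONDS:
            return rows          # cache hit: skip the round trip entirely
        del _cache[key]          # stale entry: evict and recompute

    cursor = conn.cursor()       # any DB-API connection (pyodbc, etc.)
    cursor.execute(sql, params)
    rows = cursor.fetchall()
    _cache[key] = (time.time(), rows)
    return rows
```

The catch, as noted above, is that this layer lives in application code you control between the user and the warehouse; the in-engine result set cache removes that requirement.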


Fronting Fabric APIs with Azure API Management

Ed Lima combines expensive with expensive:

Integrating Azure API Management (APIM) with Microsoft Fabric’s API for GraphQL can significantly enhance your API’s capabilities by providing robust scalability and security features such as identity management, rate limiting, and caching. This post will guide you through the process of setting up and configuring these features.

API Management is a really neat service, though it’s rather costly. That’s my biggest complaint about it, though it is a doozy.
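As a rough sketch of what the client side can look like once APIM fronts a Fabric GraphQL endpoint: the gateway URL, API path, and query below are hypothetical, and Ocp-Apim-Subscription-Key is APIM’s standard subscription header (this assumes your APIM policies handle authentication to Fabric itself).

```python
import requests

# Hypothetical APIM gateway URL fronting a Fabric API for GraphQL item;
# substitute your own gateway hostname and API path.
APIM_ENDPOINT = "https://my-apim-instance.azure-api.net/fabric/graphql"

# APIM's standard subscription-key header; the key itself comes from the APIM portal.
headers = {
    "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
    "Content-Type": "application/json",
}

# Illustrative query; the actual fields depend on the schema you exposed in Fabric.
query = """
query {
  customers(first: 10) {
    items {
      CustomerID
      CustomerName
    }
  }
}
"""

response = requests.post(APIM_ENDPOINT, json={"query": query}, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```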


Custom Libraries in Microsoft Fabric Data Engineering

Gerhard Brueckl isn’t content with the defaults:

When working with Spark or data engineering in general in Microsoft Fabric, you will sooner or later come to the point where you need to reuse some of the code that you have already written in another notebook. Best practice is to put these code pieces into a central place from where they can be referenced and reused. This way you can make sure all notebooks always use the very same code, and it is also easy to develop, update, and test the common functions.

As Gerhard mentions, having common notebooks with utilities is fine when you’re getting started with development, but being able to centralize functions in proper libraries can make that code a lot more useful, not just in the context of a single notebook.

I believe that this does allow for arbitrary code execution, so someone with sufficient permissions to create a notebook and import code from arbitrary locations would be able to execute that code. I think there are ways of limiting this risk (such as not allowing your Fabric hosts to connect to any remote servers other than ones you explicitly allow), but it’s something I’d have to puzzle through.
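As a hypothetical example of the sort of helper worth centralizing, here is a small PySpark module that could live in a shared library (in Fabric, typically packaged as a wheel and attached to an environment) rather than being copy-pasted across notebooks; the package and function names are made up.

```python
# my_fabric_utils/cleaning.py -- a hypothetical shared module, packaged as a
# wheel and attached to a Fabric environment instead of copied into each notebook.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def standardize_columns(df: DataFrame) -> DataFrame:
    """Lower-case and snake_case all column names so every notebook agrees on naming."""
    renamed = df
    for col in df.columns:
        renamed = renamed.withColumnRenamed(col, col.strip().lower().replace(" ", "_"))
    return renamed


def add_load_metadata(df: DataFrame, source: str) -> DataFrame:
    """Stamp rows with the source system and load timestamp for lineage."""
    return (df.withColumn("source_system", F.lit(source))
              .withColumn("loaded_at", F.current_timestamp()))
```

A notebook would then just run `from my_fabric_utils.cleaning import standardize_columns` instead of maintaining its own copy of the function.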


Refreshing SQL Analytics Endpoint Metadata in Fabric

Ancy Philip makes an announcement:

We’re excited to announce that the long-awaited refresh SQL analytics endpoint metadata REST API is now available in preview. You can now programmatically trigger a refresh of your SQL analytics endpoint to keep tables in sync with any changes made in the parent artifact, ensuring that you can keep your data up to date as needed.

Click through to see how it works.
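As a hedged sketch of how a call might look from Python: the route below follows the pattern described in the preview announcement (workspace ID plus SQL analytics endpoint ID, ending in refreshMetadata), but since this is a preview API you should confirm the exact path and response shape against the Fabric REST API reference. The IDs are placeholders and authentication here uses azure-identity.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder IDs; substitute your own workspace and SQL analytics endpoint IDs.
WORKSPACE_ID = "<workspace-guid>"
SQL_ENDPOINT_ID = "<sql-analytics-endpoint-guid>"

# Acquire a token for the Fabric REST API audience.
credential = DefaultAzureCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

# Route assumed from the preview announcement; verify against the current docs.
url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/sqlEndpoints/{SQL_ENDPOINT_ID}/refreshMetadata"
)

response = requests.post(url, headers={"Authorization": f"Bearer {token}"}, timeout=60)
response.raise_for_status()
# The refresh may run asynchronously, returning 202 with an operation to poll.
print(response.status_code)
```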


Microsoft Fabric Mirroring and Live Monitoring

Teo Lachev is waiting for a message:

A current project called for mirroring a Google BigQuery dataset to Fabric. This feature is currently in private preview so don’t try to find it. However, the tips I share here should be applicable to other available mirroring scenarios, such as mirroring from Azure SQL Database.

One of the GBQ tables was a transaction fact table with some 130 million rows. The issue was that the mirroring window would show this table as a normally replicating table with a green Running status, but we waited and waited and nothing was happening…

Read on to learn more, including how Teo was able to get a better idea of how the initial sync was progressing.


Debugging Fabric UDFs in Visual Studio Code

Sunitha Muthukrishna takes us through a debugging exercise:

Debugging your code is important for identifying and mitigating issues when you’re working with user data functions in Microsoft Fabric. You want to make sure everything works as it should, and that’s where local debugging comes in: it lets you catch problems in your code without messing with the live environment. In this blog post, I will walk you through the steps to make local debugging easier and faster.

Click through to see what you’ll need, as well as the process to debug a function locally.
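For context, a user data function itself is ordinary Python, which is what makes breakpoint debugging in VS Code practical. The sketch below follows the shape of the preview samples (a fabric.functions module, a UserDataFunctions() registrar, and an @udf.function() decorator); treat those names as assumptions to verify against the current SDK, and the function body is made up.

```python
# function_app.py -- shaped like the Fabric user data functions preview samples;
# the fabric.functions module, UserDataFunctions() registrar, and @udf.function()
# decorator are assumptions here and may differ in your SDK version.
import fabric.functions as fn

udf = fn.UserDataFunctions()


@udf.function()
def format_greeting(name: str) -> str:
    """A trivial function: set a breakpoint on the next line while debugging locally."""
    cleaned = name.strip().title()
    return f"Hello, {cleaned}! Welcome to Fabric User Data Functions."
```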
