Press "Enter" to skip to content

Category: Microsoft Fabric

Top Microsoft Fabric Features from 2025

Nikola Ilic builds an end of year list:

Microsoft Fabric just turned two a couple of weeks ago (at Ignite in November, to be more precise). As the product is still very much a “work in progress”, we have seen literally hundreds of new features in the last 365 days. Obviously, not all of them are equally important – some simply fixed obvious issues in existing workloads or tried to catch up either with competitors or with functionality we had in older Microsoft data platform solutions, whereas others targeted super niche use cases.

Therefore, in this article, I’ll try to distill what I consider the biggest announcements around Microsoft Fabric in 2025.

Read on for three caveats, followed by the list and quite a few additional nominees.

A Look at Fabric IQ

Teo Lachev shares some thoughts on Fabric IQ:

At Ignite in November 2025, Microsoft introduced Fabric IQ. I made a note to look beyond the marketing hype and check whether Fabric IQ makes any sense. The next thing I know, around the holidays I’m talking about ontologies with an enterprise strategy manager from an airline company and a McKinsey consultant.

Ontology – A branch of philosophy, ontology is the study of being that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. In computer science and AI, ontology refers to a set of concepts and categories in a subject area or domain that shows their properties and the relations between them.

So, what better way to spend the holidays than to play with new shaky software?

Read on for Teo’s standard format of the good, the bad, and the ugly.

Thoughts on Power BI Pro/PPU to Fabric

Teo Lachev shares some advice:

Performance is difficult to translate because Power BI Pro/PPU run in a shared capacity, meaning compute resources (v‑cores) are pooled across many tenants and dynamically allocated, whereas Fabric capacities are dedicated, meaning that Microsoft grants specific resources expressed as a number of cores and memory. Therefore, Fabric performance is predictable while Pro/PPU performance might not be, although I have yet to hear from a client complaining about unpredictable performance.

Read on for some high-level thoughts on performance and cost.

Accessing Microsoft Graph API via Fabric Data Factory

Paul Hernandez makes a connection:

This article is an updated version of my 2022 post on using Synapse pipelines to retrieve security groups and their members through the Microsoft Graph API. Some customers recently asked for a Microsoft Fabric–based approach, and I also noticed that many developers are still defaulting to Python clients to interact with Graph. While Python works perfectly fine, this walkthrough demonstrates how you can accomplish the same using a parameterized Copy Data activity inside a Fabric Data Factory pipeline.

Read on to see how.

Connecting Microsoft Fabric to Azure DevOps via Service Principal

Yaron Pri Gal doesn’t need no steenkin’ passwords:

Following the Azure DevOps Service Principal & Cross-Tenant Support (Generally Available) announcement for Microsoft Fabric Git Integration with Azure DevOps (ADO), this blog post serves as a guide to connecting Fabric workspaces to Azure DevOps repositories using a service principal.

Fabric Git Integration is the foundation for organizations implementing fully automated CI/CD pipelines, enabling seamless movement of assets across Development, Test, and Production environments.

Currently, Fabric Git Integration supports two major Git providers: Azure DevOps and GitHub. This blog post addresses the new service principal capability for Azure DevOps.

Click through for more info and a link to Microsoft Learn that contains the instructions.

DATE_BUCKET() Now GA in Fabric Data Warehouse

Jovan Popovic makes an announcement:

We have introduced a new DATE_BUCKET() function in the Fabric Data Warehouse SQL language that makes reporting and analytics even easier.

In this blog post, you’ll discover how it simplifies time-based reporting and makes grouping dates effortless.

My experience is that DATE_BUCKET() takes a bit of effort to get used to, as it is not an intuitive function. That said, it can be really powerful for dealing with time series data. It is also available in SQL Server, as of SQL Server 2022.
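As a quick illustration, here is a minimal sketch (the table and column names are hypothetical) that groups rows into 15-minute buckets:

-- DATE_BUCKET(datepart, number, date [, origin]) returns the start of the
-- bucket the date falls into, measured from the origin (default 1900-01-01).
SELECT
    DATE_BUCKET(minute, 15, e.EventTime) AS BucketStart,
    COUNT(*) AS EventCount
FROM dbo.Events AS e
GROUP BY DATE_BUCKET(minute, 15, e.EventTime)
ORDER BY BucketStart;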

Microsoft Fabric Lakehouse Schemas now GA

Ted Vilutis makes an announcement:

Schema lakehouses are now Generally Available. By using schemas in lakehouses, users can arrange their tables more efficiently and make it easier to find data. When creating new lakehouses, schema-enabled lakehouses will now be the default choice. However, users still have the option to create lakehouses without a schema if they prefer.

Read on to see how they work, as well as a bug(?) around pinned lakehouses.
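By way of example, a minimal Spark SQL sketch of the pattern, assuming a hypothetical sales schema and orders table in a schema-enabled lakehouse (run from a notebook attached to it), looks like this:

-- Create a schema and place a table in it; tables are then addressed
-- by schema-qualified names instead of living in a single flat namespace.
CREATE SCHEMA IF NOT EXISTS sales;

CREATE TABLE IF NOT EXISTS sales.orders
(
    OrderId   INT,
    OrderDate DATE,
    Amount    DECIMAL(10, 2)
);

SELECT OrderDate, SUM(Amount) AS TotalAmount
FROM sales.orders
GROUP BY OrderDate;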

The Good and Bad of Microsoft Fabric Variable Libraries

Jon Lunn digs in:

One of the big issues with Deployment Pipelines in Fabric, or as I call them Disappointment Pipelines, has been the inability to parameterise connections. You do have deployment rules in the pipelines, but they are limited in functionality and don’t support pipeline parameters (boo!), so if you needed to push and change items between workspaces in a typical Development, Test, and Production workspace scenario, you had to configure the connections manually, which is a massive pain. Variable Libraries should make the deployment experience a lot easier.

Read on to see how they work, as well as some of the existing pain points around them.

OneLake Security ReadWrite Access

Kiefer Sheldon practices least privilege:

Many data teams face the same challenge: balancing the need for open collaboration with the responsibility of protecting sensitive information. As organizations grow, data often lives across multiple domains—some containing critical or confidential datasets—while partner teams may only need access to a subset of that information.

Until recently, maintaining this balance often meant trade-offs. Teams had to choose between a fragmented storage setup and overexposing data just to keep their workflows running smoothly.

Read on to see how this works.
