Press "Enter" to skip to content

Category: Cloud

Accessing Delta Lake Tables as Iceberg Data

Matthew Hicks makes an announcement:

We’re thrilled to announce an exciting new Preview capability in OneLake: you can now automatically read Delta Lake tables using Apache Iceberg compatible readers, with no need for migration, copying, or manual conversion. This enhancement gives data engineers and analytics teams unprecedented flexibility in how they access and interact with their data.

This is pretty neat, given that Iceberg is the other popular open table format for data lakes.
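If you want a feel for what an Iceberg-compatible read could look like once OneLake exposes the metadata, here is a minimal sketch using PyIceberg. The workspace, lakehouse, and metadata paths are hypothetical, and credential/filesystem configuration for OneLake is omitted.

```python
# Minimal sketch (not from the announcement): reading a table whose Iceberg
# metadata is exposed under its OneLake path, using PyIceberg's StaticTable.
# The workspace, lakehouse, and metadata file locations below are placeholders,
# and authentication/filesystem setup for abfss is assumed to be configured.
from pyiceberg.table import StaticTable

metadata_path = (
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Tables/sales/metadata/v2.metadata.json"
)

table = StaticTable.from_metadata(metadata_path)

# Scan a couple of columns into an Arrow table for a quick look.
arrow_table = table.scan(selected_fields=("order_id", "amount")).to_arrow()
print(arrow_table.slice(0, 10))
```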

Comments closed

Goodbye Default Contributor Role in Fabric Workspace Identities

Varun Jain makes a security announcement:

Fabric workspace identity is an automatically managed service principal that can be associated with a Fabric workspace. Fabric workspaces with a workspace identity can securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts through trusted workspace access for OneLake shortcuts. Fabric items can use the identity when connecting to resources that support Microsoft Entra authentication. Fabric uses workspace identities to obtain Microsoft Entra tokens without the customer having to manage any credentials. 

Previously, a workspace identity was automatically assigned the workspace contributor role and had access to workspace items.  

Read on to see what’s changing, why, and what you can do instead.
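If you do need the identity to keep Contributor-level access, one explicit option is a role assignment through the Fabric REST API's workspace role-assignment endpoint. Here's a hedged sketch; the GUIDs are placeholders and the request shape should be checked against the current API reference.

```python
# Hedged sketch: explicitly granting a workspace identity (a service principal)
# a role on a workspace via the Fabric REST API, rather than relying on the old
# automatic Contributor assignment. The GUIDs are placeholders, and the request
# body should be verified against the current Fabric REST API documentation.
import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"                   # placeholder
WORKSPACE_IDENTITY_OBJECT_ID = "11111111-1111-1111-1111-111111111111"   # placeholder

# Acquire a token for the Fabric API scope (assumes the caller has rights
# to manage role assignments on the workspace).
token = DefaultAzureCredential().get_token(
    "https://api.fabric.microsoft.com/.default"
).token

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/roleAssignments",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "principal": {"id": WORKSPACE_IDENTITY_OBJECT_ID, "type": "ServicePrincipal"},
        "role": "Contributor",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```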

Comments closed

Bad Request Error Running PowerShell in Azure DevOps

Koen Verbeeck wants good requests:

I needed to run a PowerShell cmdlet in an Azure DevOps pipeline. The cmdlet in question was New-AzRoleAssignment, but the cmdlet itself isn’t important. What is important is that I needed to pass the object ID of a service principal to the command. Even though I was pretty sure the syntax and everything was correct, I got an “Operation returned an invalid status code ‘BadRequest’” error when the PowerShell script ran (inside an Azure PowerShell task):

Read on to see how Koen diagnosed and resolved the issue.

Comments closed

Writing Back to a Fabric Data Warehouse via UDF

Jon Vöge continues a series on write-back options into Microsoft Fabric:

In that article, we took advantage of some of the built-in sample code from the User Data Function editor, as well as some great code examples from Sujata: Example User data functions for Translytical task flows · GitHub

The problem? All of these samples use SQL Databases in Fabric as the backend item.

Jon switches this from a SQL database into a Fabric Data Warehouse, and notes some of the challenges along the way.
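Setting the UDF plumbing aside, the underlying write-back pattern against a warehouse is a plain T-SQL insert over its SQL endpoint. Here's a rough pyodbc sketch (not Jon's code); the endpoint, database, table, and authentication mode are all assumptions.

```python
# Rough sketch: writing a row back to a Fabric Data Warehouse over its SQL
# endpoint with pyodbc. The server, database, and table names are placeholders;
# authentication here assumes an interactive Entra ID login.
import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-warehouse-sql-endpoint>.datawarehouse.fabric.microsoft.com;"  # placeholder
    "Database=MyWarehouse;"                                                     # placeholder
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# Fabric warehouses accept standard T-SQL INSERT statements.
cursor.execute(
    "INSERT INTO dbo.FeedbackWriteback (ItemId, Comment) VALUES (?, ?)",
    (42, "Updated from the write-back flow"),
)
conn.commit()
conn.close()
```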

Comments closed

Event Streaming in Microsoft Fabric

Rayis Imayev streams some data:

In my post last week (https://datanrg.blogspot.com/2025/06/salesforce-cdc-data-integration.html), I talked about Salesforce Change Data Capture (CDC) event data streaming, where the initial event destination was file storage in Azure. But what if we anticipate a higher volume of incoming Salesforce source data or the addition of a new data feed? This could create the need for an alternative method of managing incoming events.

Read on to learn more.
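For a sense of the producer side, an eventstream with a custom endpoint source exposes an Event Hubs-compatible connection string, so pushing a CDC-style event can look roughly like the sketch below; the connection string, entity name, and payload are placeholders.

```python
# Minimal sketch: sending a CDC-style event into a Fabric eventstream through
# its custom-endpoint (Event Hubs-compatible) source. The connection string,
# event hub name, and payload are placeholders.
import json
from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<eventstream custom endpoint connection string>"  # placeholder
EVENTHUB_NAME = "<entity name from the custom endpoint>"            # placeholder

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

event = {"object": "Account", "changeType": "UPDATE", "recordId": "001XX0000000001"}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(event)))
    producer.send_batch(batch)
```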

Comments closed

Trying out Fabric Unified Admin Monitoring

Reitse Eskens tries out a tool for monitoring Microsoft Fabric installations:

Let me set the scene quickly for you. You’re working for an organisation where Fabric is in the process of being adopted or it’s already fully in use. Regardless of the number of capacities, workspaces, etc., you’re interested in what’s going on in your Fabric environment. You have questions like “Who is using the reports?”, “Who is changing settings in the Admin panel?” and “How is my capacity being used?”.

Read on for a single tool that can answer these sorts of questions.

Comments closed

Incremental Copy Job in Microsoft Fabric now GA

Ye Xu has an announcement:

Copy job has been a go-to tool for simplified data ingestion in Microsoft Fabric, offering a seamless data movement experience from any source to any destination. Whether you need batch or incremental copying, it provides the flexibility to meet diverse data needs while maintaining a simple and intuitive workflow.

We continuously refine Copy job based on customer feedback, enhancing both functionality and user experience. In this update, we’re introducing several key improvements designed to streamline your workflow and boost efficiency.

Click through to see what’s new.

Comments closed