Press "Enter" to skip to content

Category: ETL / ELT

Cloud Connections in Microsoft Fabric

Dennes Torres makes a connection:

I wrote about cloud connections when they were at a very early stage.

Cloud connections have evolved and are now shareable. The “regular” connection is now called a “personal connection.”

The problem with personal connections is that they make teamwork difficult. A personal connection belongs to you, and other developers can’t use it. When a different developer needs to work with the same objects, they are required to create their own connection.

Using cloud connections, we can create a single, reusable connection to the data source and share it with all the developers in the team.

Read on to learn more about how they work now that the feature is a bit more mature.


Using the Log Analytics Ingestion API with Sentinel

Abhishek Sharan explains how the Log Ingestion API works when feeding data into a Log Analytics workspace:

In this blog, I’m going to delve into ingesting and transforming application logs into a Log Analytics workspace using the Log Ingestion API. Before we jump into the details, let’s explore the different ingestion techniques, which depend on the type of application log.

  • If applications and services log information to text files instead of standard logging services such as the Windows Event Log or Syslog, Custom Text Logs can be leveraged to ingest those text file logs into a Log Analytics workspace.
  • If applications and services log information to JSON files instead of standard logging services such as the Windows Event Log or Syslog, Custom JSON Logs can be leveraged to ingest those JSON file logs into a Log Analytics workspace.
  • If an application knows how to send data to an API, then you can leverage the Log Ingestion API to send the data to a Log Analytics workspace.

Custom Text/JSON Logs are out of scope for this blog; I might write a separate blog dedicated to those techniques later. Here, my focus will be on streaming data to a Log Analytics workspace using the Log Ingestion API and transforming the data for optimal usage.

Note: This blog aims to demonstrate how to ingest logs using the log ingestion API. To keep things straightforward, I’ll refer to our public documentation.

Click through for details on the process.
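
For a sense of what the call looks like in practice, here is a minimal Python sketch using the azure-monitor-ingestion SDK. The endpoint, DCR immutable ID, stream name, and log schema below are placeholder assumptions to be swapped for values from your own data collection rule, not anything from Abhishek's post.

# Minimal sketch: push custom logs through the Log Ingestion API.
# The endpoint, rule_id, stream name, and log schema are placeholders --
# substitute the values from your own data collection rule (DCR).
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # data collection endpoint (assumed)
rule_id = "dcr-00000000000000000000000000000000"               # DCR immutable ID (placeholder)
stream_name = "Custom-MyAppLogs_CL"                            # stream declared in the DCR (assumed)

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# Each record must match the schema of the declared stream; the DCR's
# transformation then reshapes records before they land in the table.
logs = [
    {"TimeGenerated": "2024-06-01T12:00:00Z", "Application": "orders-api", "Message": "order created"},
    {"TimeGenerated": "2024-06-01T12:00:05Z", "Application": "orders-api", "Message": "payment failed"},
]

client.upload(rule_id=rule_id, stream_name=stream_name, logs=logs)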


Mounting Azure Data Factory in Fabric Data Factory

Andy Leonard takes up a factory job:

Thanks to the hard work of the Microsoft Fabric Data Factory Team, it’s now possible to mount an Azure Data Factory in Fabric Data Factory. This post describes one way to mount an existing Azure Data Factory in Fabric Data Factory. In this post, we will:

  • Mount an existing Azure Data Factory in Fabric Data Factory
  • Open the Azure Data Factory in Fabric Data Factory
  • Test-execute two ADF pipelines
  • Modify and publish an ADF pipeline

Read on to see how it all works. One of the odd things about Microsoft Fabric—and its predecessor, Azure Synapse Analytics—is the penchant for similar-but-not-quite-the-same services. Yes, we have Data Factory…but it’s not quite the same. Yes, we have Azure Data Explorer (and KQL)…but it’s not quite the same. I get that there are reasons for this (such as not having a resource group with a dozen separate services hanging around), but I’m sure it’s a bit frustrating working on several separate code bases and trying to keep them all approximately in sync.


Trying out the Databricks For-Each Task

Chen Hirsh goes in a loop:

Databricks recently added a for-each task to their workflow capability. Workflows are Databricks jobs; like Data Factory pipelines or SQL Server Agent jobs, they are schedulable pipelines made up of a number of tasks that together implement some business logic.

Theoretically, the long-awaited for-each task should make running multiple processes easier. For example, one of the things I often do is run a list of notebooks, each processing a different table, with no dependencies between them. At the moment I use parallel notebooks: https://docs.databricks.com/en/notebooks/notebook-workflows.html#run-multiple-notebooks-concurrently

As you will see later, this use case is not supported yet. But before that, let’s see what we can do with the for-each task.

Read on to see what it currently can do, and what it cannot.
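
To give a flavor of the feature's shape, here is a minimal sketch that defines a for-each task with the Databricks Python SDK. The job name, notebook path, cluster ID, and table list are placeholder assumptions, not anything from Chen's post.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # picks up credentials from the environment

# A for-each task wraps an inner task and runs it once per input value;
# the inner task can reference the current value as {{input}}.
job = w.jobs.create(
    name="process-tables",  # placeholder job name
    tasks=[
        jobs.Task(
            task_key="for_each_table",
            for_each_task=jobs.ForEachTask(
                inputs='["customers", "orders", "invoices"]',  # JSON array of iteration values
                concurrency=3,  # iterations allowed to run at once
                task=jobs.Task(
                    task_key="process_one_table",
                    existing_cluster_id="0000-000000-abcdefgh",  # placeholder cluster
                    notebook_task=jobs.NotebookTask(
                        notebook_path="/Workspace/etl/process_table",  # placeholder path
                        base_parameters={"table_name": "{{input}}"},
                    ),
                ),
            ),
        )
    ],
)
print(f"Created job {job.job_id}")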


Data Ingestion with Microsoft Fabric Copy Jobs

Reitse Eskens spends a bunch of time at the copier:

The Copy Job is essentially an abstraction of a pipeline that reads data from the source system and writes it into either a Lakehouse or a Warehouse. It really is ingesting data and nothing else. In my opinion, that is what Copy Data flows are meant to do, and they are very good at it too.

The big challenge we all keep facing is how to create incremental loads. We have to build some sort of metadata database where we keep the latest ID, date, or other column we use to discern the increment. In our flow, we need to get that value, compare it against the source system, and get the differences. The biggest task is finding out whether records have been deleted.

With the Copy Job, a large part of this task is taken out of your hands. The Copy Job has a configuration GUI (or wizard) that helps you out quite quickly. So let’s not waste any more characters and dig in!

Read on to see how it works and its capabilities and limitations. The key question, as always, is whether your workload fits into the wheelhouse. If so, this sounds really useful. If not, it’s a proper struggle.
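
To make the watermark bookkeeping concrete, here is a minimal hand-rolled Python sketch of the pattern a Copy Job automates. The table names, columns, and connection strings are placeholder assumptions, and delete detection is deliberately left out, since that is the part Reitse calls the biggest task.

import pyodbc

def incremental_load(src_conn_str: str, dest_conn_str: str) -> None:
    """Watermark-based incremental load; all object names are placeholders."""
    src = pyodbc.connect(src_conn_str)
    dest = pyodbc.connect(dest_conn_str)

    # 1. Read the watermark: the highest ModifiedDate we loaded last time.
    watermark = dest.execute(
        "SELECT last_modified FROM etl.watermarks WHERE table_name = ?",
        "dbo.Orders").fetchone()[0]

    # 2. Fetch only the rows the source changed since that watermark.
    changed = src.execute(
        "SELECT OrderID, Amount, ModifiedDate FROM dbo.Orders "
        "WHERE ModifiedDate > ?", watermark).fetchall()

    # 3. Upsert by key (delete-then-insert keeps the sketch short).
    for order_id, amount, modified in changed:
        dest.execute("DELETE FROM dbo.Orders WHERE OrderID = ?", order_id)
        dest.execute(
            "INSERT INTO dbo.Orders (OrderID, Amount, ModifiedDate) VALUES (?, ?, ?)",
            order_id, amount, modified)

    # 4. Advance the watermark so the next run starts where this one ended.
    if changed:
        dest.execute(
            "UPDATE etl.watermarks SET last_modified = ? WHERE table_name = ?",
            max(r[2] for r in changed), "dbo.Orders")
    dest.commit()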


Cosmos DB HTAP into Azure Synapse Analytics and Microsoft Fabric

Paul Hernandez doesn’t want to write ETL jobs:

In the ever-evolving landscape of data management and analytics, choosing the right tools and approaches is crucial for optimizing performance and achieving business goals. Two prominent solutions that have gained traction are Azure Synapse Link for Azure Cosmos DB and Mirroring in Microsoft Fabric. Both offer unique benefits and cater to different needs, making it essential to understand their differences and use cases.

Read on to see how each of these works, as well as a quick demonstration of efficacy.
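
For a flavor of the Synapse Link side, here is a minimal PySpark sketch that queries a Cosmos DB container's analytical store from a Synapse notebook, so no ETL job touches the transactional store. The linked service, container, and column names are placeholder assumptions; spark is the ambient session a Synapse notebook provides.

# Minimal sketch: read Cosmos DB's analytical store (HTAP) via Synapse Link.
df = (spark.read
      .format("cosmos.olap")                                  # analytical-store connector
      .option("spark.synapse.linkedService", "CosmosDbLink")  # linked service name (assumed)
      .option("spark.cosmos.container", "orders")             # container name (assumed)
      .load())

# Analytical queries hit the column store, not the RU-billed transactional store.
df.filter(df["status"] == "shipped").groupBy("country").count().show()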


Type 2 SCDs in Microsoft Fabric

Reza Rad has changes to make:

In the previous article, I explained SCD (Slowly Changing Dimension) and its different types. In this article, I’ll show you how to implement SCD Type 2 (one of the most common types) using Microsoft Fabric and Power BI. This article includes using Lakehouse, Dataflow, Warehouse, Data Pipeline, SQL Stored Procedures, Power BI Semantic model, and report in Microsoft Fabric.

Click through to learn more about the structure of a type-2 slowly changing dimension, as well as how you can store and load this information to track changes over time.
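
For the mechanics in miniature, here is a small self-contained Python sketch of the type-2 bookkeeping (expire the current row, insert a new version). It illustrates the pattern only and is not Reza's Fabric implementation; the column names are assumptions.

from datetime import date

def apply_scd2(dimension, incoming, key, tracked, today=None):
    """Expire changed rows and append new versions (SCD Type 2).
    dimension and incoming are lists of dicts; key is the business key;
    tracked lists the columns whose changes create a new version."""
    today = today or date.today()
    current = {row[key]: row for row in dimension if row["is_current"]}
    for new in incoming:
        old = current.get(new[key])
        if old is None or any(old[c] != new[c] for c in tracked):
            if old is not None:
                old["is_current"] = False   # close out the old version
                old["valid_to"] = today
            dimension.append({**new,        # open a new current version
                              "valid_from": today,
                              "valid_to": None,
                              "is_current": True})
    return dimension

# A customer moves city, producing a second row for the same business key.
dim = [{"customer_id": 1, "city": "Utrecht",
        "valid_from": date(2020, 1, 1), "valid_to": None, "is_current": True}]
apply_scd2(dim, [{"customer_id": 1, "city": "Amsterdam"}], "customer_id", ["city"])
for row in dim:
    print(row)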


Cross-Workspace Data Transfer in Microsoft Fabric

Reitse Eskens moves some data around:

When you open Fabric, the first thing you need to do is choose a so-called workspace. This serves as a container for all your Fabric items. You can have one or more workspaces, and the design is entirely up to you, from one workspace to rule them all to one workspace for each set of items (Lakehouse, Warehouse, Semantic Model and Report, Pipeline, Notebook, etc.). Until yesterday (the day this blog post came online), it was impossible to use a pipeline to get data across different workspaces.

You could work around it with tricks like shortcuts, but it feels more natural (or maybe I’m just old ;)) to be able to read data from workspace 1 and write it into workspace 2. So let’s see how this works and where capacity is used!

Click through to see it in action.


Configuring Microsoft Fabric Data Mirroring for Snowflake

Koen Verbeeck copies some data:

We have a couple of Snowflake databases and would like to have that data available in Microsoft Fabric as well. Is there an easy solution to get the data quickly in Fabric? We don’t have many technical people on staff, so writing complex ETL is not an option.

Read on for more information on how it works. Mind you, you’re probably still writing the T and some of the L after using mirroring.
