Press "Enter" to skip to content

February 20, 2024

Combining Kafka and Flink

Gautam Goswami shares some thoughts:

In short, event streaming is the process of collecting data in real time as streams of events from sources such as databases, sensors, and software applications. Apache Flink is a powerful open-source engine built with real-time data processing and analytics in mind. For situations where quick insights and minimal processing latency are critical, it offers a consistent and effective platform for managing continuous streams of data.

I’ve found it interesting that Confluent people have spent a lot of time over the past several months talking up Apache Flink and Kafka+Flink combinations.
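To make the combination concrete, here is a minimal Flink SQL sketch that exposes a Kafka topic as a streaming table and aggregates it in one-minute windows. The topic name, schema, and broker address are all hypothetical.

-- Expose a Kafka topic as a Flink streaming table (hypothetical topic and schema)
CREATE TABLE orders (
    order_id   STRING,
    amount     DOUBLE,
    order_time TIMESTAMP(3),
    WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    'topic' = 'orders',
    'properties.bootstrap.servers' = 'localhost:9092',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
);

-- Continuously compute per-minute order totals as new events arrive
SELECT window_start, SUM(amount) AS total_amount
FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' MINUTES))
GROUP BY window_start, window_end;

Over a bounded source this would be a one-off batch query; over Kafka, Flink keeps it running and emits results as each window closes.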


Checking for the Existence of Multiple Values in SQL

Lukas Eder does a performance test:

But what if you want to check if there are at least 2 (or N) rows? In that case, you cannot use EXISTS, but have to revert to using COUNT(*). However, instead of just counting all matches, why not add a LIMIT clause as well? So, if you want to check if actors called WAHLBERG have played in at least 2 films, instead of this: […]

Lukas compares performance between two query options in Postgres, Oracle, SQL Server, and MySQL. Because Oracle has weird behavior in the test, Lukas shares a third option that works well for it.
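To sketch the two shapes being compared, here is a rough Postgres-flavored version against the Sakila sample schema the post uses; the exact syntax varies by dialect. The naive query counts every match, while the derived-table version lets the database stop scanning after the second row.

-- Naive: counts every matching row before comparing
SELECT COUNT(*) >= 2 AS has_at_least_two
FROM actor a
JOIN film_actor fa ON fa.actor_id = a.actor_id
WHERE a.last_name = 'WAHLBERG';

-- With LIMIT: the inner query can stop after finding two rows
SELECT COUNT(*) = 2 AS has_at_least_two
FROM (
    SELECT 1
    FROM actor a
    JOIN film_actor fa ON fa.actor_id = a.actor_id
    WHERE a.last_name = 'WAHLBERG'
    LIMIT 2
) t;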


Feature Toggles for Databases

Phil Factor discusses feature toggles for the database:

In software development, feature toggles are used to selectively turn features on and off. They are, for example, used to restrict some newly introduced features to a select group to see how those features work. While this concept has long been used for user-facing application code, it is also a useful practice for database code.

In databases, feature toggles are used for several loosely related purposes. One common use is to separate, or decouple, rollout of database features from deployment of new versions of the application code. This allows developers to release software faster and with less risk.

The idea of using roles to determine whether a given user should see a particular feature makes sense, though I was a bit confused at first because I was thinking about feature flags: behavioral changes you slowly turn on for groups of people. This is a bit different.
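As a rough T-SQL sketch of that role-based idea (the procedure, table, and role names here are my own invention, not from Phil's article): gate the new code path on the caller's role membership.

-- Route members of a preview role to the new code path;
-- everyone else keeps getting the existing behavior.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    IF IS_ROLEMEMBER('feature_new_orders') = 1
        SELECT OrderID, OrderTotal
        FROM dbo.OrderSummary_v2      -- new implementation
        WHERE CustomerID = @CustomerID;
    ELSE
        SELECT OrderID, OrderTotal
        FROM dbo.OrderSummary         -- existing implementation
        WHERE CustomerID = @CustomerID;
END;

Toggling the feature then amounts to adding or removing users from the role, with no redeployment of the database code.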


Notes on Data Engineering in Microsoft Fabric

John Miner shares some notes. Part 1 looks at getting started and tables, both managed and unmanaged:

The architectural diagram shows how information flows from a source system into a delta lakehouse, where it is transformed by programs and used by end users. To get source data into the lake, we can use any of three methods to retrieve the data as files:

pipelines – traditional Azure Data Factory components
dataflows – wrangling data flows based on Power Query
shortcuts – the ability to link external storage to the lake

Once the data is in the lake, there are two types of programs that can transform the data files: Spark notebooks and data flows.
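On the managed-versus-unmanaged distinction, a minimal Spark SQL sketch of the two table types as they might appear in a lakehouse notebook (table names and the path are illustrative):

-- Managed: Spark owns both the metadata and the underlying delta files
CREATE TABLE sales_managed (id INT, amount DOUBLE) USING DELTA;

-- Unmanaged (external): metadata only; the delta files live at a path
-- we control, such as one exposed through a shortcut
CREATE TABLE sales_unmanaged
USING DELTA
LOCATION 'Files/external/sales';

Dropping the managed table deletes its data as well; dropping the unmanaged one removes only the metadata and leaves the files in place.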

Part 2 covers file and folder management:

In practice, I have seen an additional quality zone called raw used to stage files in their native format before converting to a delta file format. Please note that the lakehouse uses either shortcuts or pipelines to get files into the lake. We will talk more about the bronze, silver, and gold zones when I cover full and incremental loading later in this article.
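As one possible shape for that raw zone (the path, file format, and table name are assumptions on my part), a notebook could convert staged files into a bronze delta table like so:

-- Read native-format (here, parquet) files from the raw zone
-- and persist them as a bronze delta table with an audit column
CREATE TABLE bronze_sales
USING DELTA AS
SELECT *, current_timestamp() AS load_ts
FROM parquet.`Files/raw/sales/`;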

Read on for John’s thoughts.


Environmental Deployment in Microsoft Fabric

Kevin Chant takes us through deployment pipelines in Microsoft Fabric:

One question that I frequently get asked is: how many workspaces are required? In reality, the answer is that it depends.

However, if you want your solution to be flexible and loosely coupled, I do recommend at the very least one Microsoft Fabric workspace per environment.

That’s also required if you’re using deployment pipelines, as each stage in the pipeline pushes to a unique workspace.
