
Category: Streaming

Apache Flink 1.16 Released

Godfrey He makes an announcement:

To reduce the cost of migrating Hive to Flink, we introduce HiveServer2 Endpoint and Hive Syntax Improvements in this version:

The HiveServer2 Endpoint allows users to interact with the SQL Gateway using Hive JDBC/Beeline and to integrate Flink into the Hive ecosystem (DBeaver, Apache Superset, Apache DolphinScheduler, and Apache Zeppelin). When users connect to the HiveServer2 endpoint, the SQL Gateway registers the Hive Catalog, switches to the Hive dialect, and uses batch execution mode to execute jobs. With these steps, users can have the same experience as with HiveServer2.

Read on for a pretty large hit list.
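
As a quick illustration of that experience, here is a minimal sketch of talking to the SQL Gateway through its HiveServer2 endpoint with PyHive; the host, port, and query are assumptions, not details from the announcement:

```python
# Minimal sketch: connect to Flink's SQL Gateway via its HiveServer2
# endpoint exactly as if it were a real HiveServer2 instance.
# Assumes the gateway is running locally on the default HiveServer2 port.
from pyhive import hive

conn = hive.Connection(host="localhost", port=10000)
cursor = conn.cursor()

# The gateway has registered the Hive catalog and switched to the Hive
# dialect, so plain HiveQL works; jobs execute in batch mode.
cursor.execute("SHOW TABLES")
for (table_name,) in cursor.fetchall():
    print(table_name)

cursor.close()
conn.close()
```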


Debugging Stream Table Joins

Philip Schmitt dives into a problem:

Joining two topics to aggregate the data is one of the fundamental operations in stream processing. But that’s not to say that it’s simple. Let me show you what can go wrong! This article chronicles my journey to join two Apache Kafka topics—stumbling into and out of various pitfalls. I’m going to show you…

– How to debug co-partitioning with kcat (formerly kafkacat)

– How to avoid the number one pitfall of using kcat

– Stream–table join semantics in action

There’s a lot of useful information in this post.
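
kcat is the right tool for poking at topics interactively, but the first half of the co-partitioning check (do both topics have the same number of partitions?) can also be sketched in Python with confluent-kafka; the broker address and topic names below are made up:

```python
# Rough co-partitioning sanity check: the joined topics must have the
# same partition count. Broker and topic names are placeholders.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
metadata = admin.list_topics(timeout=10)

left, right = "orders", "customers"
left_partitions = len(metadata.topics[left].partitions)
right_partitions = len(metadata.topics[right].partitions)

print(f"{left}: {left_partitions} partitions")
print(f"{right}: {right_partitions} partitions")

if left_partitions != right_partitions:
    print("NOT co-partitioned: partition counts differ")
```

The second half of the check, verifying that the same key lands in the same partition on both topics, still means sampling records per partition, which is where kcat shines.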


Designing Event Streams for Kafka

Dave Shook announces a new course:

Properly designing your events and event streams is essential for any event-driven architecture. Precisely how you design and implement them will significantly affect not only what you can do today, but what you can do tomorrow. Yet for such a critical part of any data infrastructure, most event streaming tutorials gloss over event design.

In the new course on Confluent Developer, events and event streams are put front and center. We’re going to look at the dimensions of event and event stream design and how to apply them to real-world problems. But dimensions and theory are nothing without best practices, so we are also going to take a look at these to help keep you clear of pitfalls and set you up for success. This course also includes hands-on exercises, during which you will work through use cases related to the different dimensions of event design and event streaming.

Click through to learn more about what’s in the course and to check it out–it is free, after all.


Thoughts on Current 2022

Markos Sfikas, et al., recap the Current 2022 conference:

Throughout the conference, the theme of Batch vs. Streaming was apparent. Discussions covered how they can be unified, how batch processing’s performance can/must be improved for real-time applications, and more. There was even a dedicated panel discussion with Adi Polak, Amy Chen, Eric Sammer and Tyler Akidau discussing the state of streaming adoption today and debating whether streaming will ever fully replace batch. You can view some interesting points from the panel discussion in the Twitter thread from Robin Moffatt.

Click through for the full recap.


Streaming Datasets in Power BI

Reza Rad needs data in real time:

Datasets in Power BI can have connection types such as Import, DirectQuery, or Live Connection. However, there is also one specific type of dataset that is different. This type of dataset is called a streaming dataset. A streaming dataset is for real-time dashboards and comes with various setups and configurations. In this video and article, we’ll talk about this type of dataset.

Reza includes a video as well as a very helpful walkthrough.
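
For a flavor of how rows reach a streaming dataset, here is a hedged Python sketch that posts to the dataset's push URL; the URL below is a placeholder (the real one, key included, comes from the dataset's API info page), and the column names are invented:

```python
# Push one row into a Power BI streaming dataset over its REST push URL.
# The URL and row schema are placeholders for illustration only.
from datetime import datetime, timezone

import requests

PUSH_URL = (
    "https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>"
    "/rows?key=<access-key>"
)

rows = [
    {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sensor": "sensor-01",
        "temperature": 21.7,
    }
]

# The endpoint expects a JSON array of rows matching the dataset schema.
response = requests.post(PUSH_URL, json=rows)
response.raise_for_status()
```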


Real-Time Streaming ETL with Kafka and Debezium

Dursun Koc doesn’t have time for batched ETL:

Debezium does not extract data using SQL. It uses database log files to track changes in the database, so it has minimal effect on the source system. For more information about Debezium, please visit their website.

After the data is extracted, we need Kafka Connect to stream it into Apache Kafka so that we can play with it and reshape it as required. And we will be using ksqlDB to reshape the raw data into the form the target system requires. Let’s consider a simple ordering system database in which we have a customer table, a product table, and an orders table, as shown below.

Read on for an overview as well as a link to the GitHub repo where you can try this all out.
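
To make the extraction step a little more concrete, here is a hedged sketch of registering a Debezium Postgres source connector through the Kafka Connect REST API; the hostnames, credentials, and table list are illustrative and not taken from the post:

```python
# Register a Debezium Postgres source connector with Kafka Connect.
# All connection details below are placeholders.
import requests

connector = {
    "name": "orders-source",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",
        "database.dbname": "orders_db",
        # Capture only the three tables from the example schema.
        "table.include.list": "public.customer,public.product,public.orders",
        # On older Debezium versions this setting is database.server.name.
        "topic.prefix": "shop",
    },
}

# Kafka Connect's REST interface listens on port 8083 by default.
response = requests.post("http://localhost:8083/connectors", json=connector)
response.raise_for_status()
```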


Apache Flink Updates

Danny Cranmer announces Flink 1.15.2:

The Apache Flink Community is pleased to announce the second bug fix release of the Flink 1.15 series.

This release includes 30 bug fixes, vulnerability fixes, and minor improvements for Flink 1.15. Below you will find a list of all bug fixes and improvements (excluding improvements to the build infrastructure and build stability). For a complete list of all changes, see JIRA.

We highly recommend all users upgrade to Flink 1.15.2.

In addition to that, Jingsong Lee announces Flink Table Store 0.2.0:

Flink Table Store is data lake storage for ingesting streaming updates/deletes changelogs and for high-performance queries in real time.

As a new type of updatable data lake, Flink Table Store has the following features:

– High-throughput data ingestion while offering good query performance.

– High-performance queries with primary-key filters, as fast as 100 ms.

– Streaming reads are available on lake storage; lake storage can also be integrated with Kafka to provide second-level streaming reads.

Read on for the changes in both platforms.


Watermarking in Spark Structured Streaming

Max Fisher takes us through an important feature for Spark streaming:

When building real-time pipelines, one of the realities that teams have to work with is that distributed data ingestion is inherently unordered. Additionally, in the context of stateful streaming operations, teams need to be able to properly track event time progress in the stream of data they are ingesting for the proper calculation of time-window aggregations and other stateful operations. We can solve for all of this using Structured Streaming.

For example, let’s say we are a team working on building a pipeline to help our company do proactive maintenance on our mining machines that we lease to our customers. These machines always need to be running in top condition so we monitor them in real-time. We will need to perform stateful aggregations on the streaming data to understand and identify problems in the machines.

This is where we need to leverage Structured Streaming and Watermarking to produce the necessary stateful aggregations that will help inform decisions around predictive maintenance and more for these machines.

Read on to see how watermarking works in various scenarios, including when you join together streams.
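
As a rough sketch of the core API, here is what a watermarked, windowed aggregation looks like in PySpark, loosely modeled on the machine-monitoring example; the topic, schema, and column names are assumptions:

```python
# Watermarked 5-minute average temperature per machine, reading from
# Kafka. Requires the spark-sql-kafka package; names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import (
    DoubleType, StringType, StructField, StructType, TimestampType,
)

spark = SparkSession.builder.appName("machine-monitoring").getOrCreate()

schema = StructType([
    StructField("machine_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("event_time", TimestampType()),
])

readings = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "machine-telemetry")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("r"))
    .select("r.*")
)

# The watermark bounds how late an event may arrive (10 minutes here),
# letting Spark eventually drop state for old 5-minute windows.
avg_temps = (
    readings.withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"), col("machine_id"))
    .agg(avg("temperature").alias("avg_temperature"))
)

query = avg_temps.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```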


Securing Kafka Streams

Amani Newton gives us a primer on Apache Kafka security:

The largest companies in the world use Apache Kafka® for their real-time streaming data pipelines and applications. Kafka is the basis for the real-time fraud text alerts from your bank and the network-connected medical devices used in your local hospital. Securing customer or patient data as it flows through the Kafka system is crucial. However, out of the box, Kafka has relatively little security enabled. This blog post previews the free Confluent Developer course that teaches the basics of securing your Apache Kafka-based system.

Click through for the overview.
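
As one small, concrete piece of the picture, here is a sketch of what client-side authentication and in-flight encryption look like with confluent-kafka; the broker address, credentials, and topic are placeholders:

```python
# Produce over SASL_SSL: TLS encrypts traffic in flight, while SASL
# PLAIN authenticates the client. All values here are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker.example.com:9093",
    "security.protocol": "SASL_SSL",
    # SCRAM or mutual TLS are common alternatives to PLAIN.
    "sasl.mechanism": "PLAIN",
    "sasl.username": "alerts-service",
    "sasl.password": "change-me",
})

producer.produce("fraud-alerts", key=b"account-42", value=b"suspicious login")
producer.flush()
```

Authorization (who may read or write which topics) is enforced broker-side with ACLs, so client configuration like this is only half the story.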


From Kafka to Azure Data Explorer with Protobuf Data

Anshul Sharma and Ramachandran G do a bit of converting:

Kafka is increasingly becoming a popular choice for scalable message queueing in large data processing workloads. This makes it very popular in IoT-based ecosystems, where there is a large ingress of data before data processing or data storage. Azure Data Explorer is a very powerful time series and analytics database that suits IoT-scale data ingestion and data querying.

Kafka supports ingestion of data in multiple formats, including JSON, Avro, Protobuf, and String. ADX supports ingestion of data from Kafka in all of these formats. Due to its excellent schema support, extensibility to various platforms, and compression, protobuf (https://developers.google.com/protocol-buffers) is increasingly becoming a data exchange choice in IoT-based systems. The ADX Kafka sink connector leverages the Kafka Connect framework and provides an adapter to ingest data from Kafka in all of these formats.

The following section provides the configuration needed to support ingestion of protobuf data from Kafka into ADX.

Click through for the high-level architecture and a deeper dive into the process.
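
For a sense of the producer side of that pipeline, here is a hedged sketch of publishing protobuf-encoded records to Kafka from Python; the telemetry_pb2 module (generated by protoc from a .proto file) and its field names are hypothetical:

```python
# Produce protobuf-encoded telemetry to Kafka. Assumes protoc has
# generated telemetry_pb2 from a hypothetical telemetry.proto.
from confluent_kafka import Producer

import telemetry_pb2  # hypothetical generated module

event = telemetry_pb2.Telemetry(
    device_id="device-17",
    temperature=38.2,
)

producer = Producer({"bootstrap.servers": "localhost:9092"})
# SerializeToString() emits protobuf's compact binary wire format,
# which the ADX sink connector deserializes on the consuming side.
producer.produce("telemetry-proto", value=event.SerializeToString())
producer.flush()
```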
