Michael Noll digs into Kafka Streams, showing how to enrich data and collect aggregates:
The stream of user click events is considered to be a record stream, where each data record represents a self-contained datum. In contrast, the stream of user geo-location updates is interpreted as a changelog stream, where each data record represents an update (i.e. any previous data records having the same record key will be replaced by the latest update). In Kafka Streams, a record stream is represented via the so-called KStream interface and a changelog stream via the KTable interface. Going from the high-level view to the technical view, this means that our streaming application will demonstrate how to perform a join operation between a KStream and a KTable, i.e. it is an example of a stateful computation. This KStream-KTable join also happens to be Kafka Streams’ equivalent of performing a table lookup in a streaming context, where the table is updated continuously and concurrently. Specifically, for each user click event in the KStream, we will look up the user’s region (e.g. “europe”) in the KTable and then compute the total number of user clicks per region.
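To make the shape of that join concrete, here is a minimal sketch using the Kafka Streams DSL. This is not the article’s exact code: the topic names (“user-clicks”, “user-regions”, “clicks-per-region”) and the serialization choices are assumptions for illustration. For each click event it looks up the user’s current region in the KTable, re-keys the record by region, and sums the click counts:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class ClicksPerRegionSketch {

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "clicks-per-region-sketch");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    StreamsBuilder builder = new StreamsBuilder();

    // Record stream: each event is a self-contained datum (userId -> clicks).
    // Topic name is a placeholder, not from the article.
    KStream<String, Long> userClicks =
        builder.stream("user-clicks", Consumed.with(Serdes.String(), Serdes.Long()));

    // Changelog stream: each record is an update (userId -> current region);
    // a newer record for the same key replaces the older one.
    KTable<String, String> userRegions =
        builder.table("user-regions", Consumed.with(Serdes.String(), Serdes.String()));

    // For every click event, look up the user's region in the continuously
    // updated table, then re-key by region and sum the click counts.
    KTable<String, Long> clicksPerRegion = userClicks
        .leftJoin(userRegions,
            (clicks, region) ->
                KeyValue.pair(region == null ? "UNKNOWN" : region, clicks))
        .map((user, regionAndClicks) -> regionAndClicks)
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
        .reduce(Long::sum);

    clicksPerRegion.toStream()
        .to("clicks-per-region", Produced.with(Serdes.String(), Serdes.Long()));

    new KafkaStreams(builder.build(), props).start();
  }
}
```

A left join is used here (rather than an inner join) so that clicks from users whose region is not yet known in the table are counted under a fallback key instead of being silently dropped.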
Let’s showcase the beginning (input) and the end (expected output) of this data pipeline with some example data.
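The excerpt ends before the article’s actual sample records, but purely illustrative input and output (hypothetical values, not from the article) might look like this:

```
user-clicks  (KStream): ("alice", 13), ("bob", 4), ("alice", 40)
user-regions (KTable):  ("alice", "europe"), ("bob", "americas")
clicks-per-region:      ("europe", 53), ("americas", 4)
```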
The article is fairly detailed, and it covers an interesting topic well.