Let’s look at the application domain in more detail. In the previous blog series on Kongo, a Kafka-focused IoT logistics application, we used Kafka Connect to persist business “violations” to Cassandra for future use. For example, we could have used the data in Cassandra to check and certify that a delivery was free of violations across its complete storage and transportation chain.
An appropriate scenario for a Platform application involving Kafka and Cassandra has the following characteristics (a minimal sketch of the resulting pipeline appears after the list):

- Large volumes of streaming data are ingested into Kafka (at variable rates)
- Data is sent to Cassandra for long-term persistence
- Stream processing is triggered by the incoming events in real time
- Historic data is requested from Cassandra
- Historic data is retrieved from Cassandra
- Historic data is processed, and
- A result is produced.
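To make that flow concrete, here is a minimal sketch of the event-driven loop in Java, assuming the Kafka Java client and the DataStax Java driver 4.x. The `events` topic, `demo` keyspace, `events_by_key` table, and the trivial mean-deviation check standing in for the real processing step are all hypothetical placeholders, not the application’s actual names or algorithm:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PipelineSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "pipeline-demo");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.DoubleDeserializer");

        // Connects to a local Cassandra node by default; "demo" keyspace is hypothetical.
        try (KafkaConsumer<String, Double> consumer = new KafkaConsumer<>(props);
             CqlSession session = CqlSession.builder().withKeyspace("demo").build()) {

            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic

            while (true) {
                ConsumerRecords<String, Double> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, Double> record : records) {
                    // New events are also written to Cassandra for long-term persistence
                    // (e.g. via a Kafka Connect sink, as in the Kongo series).

                    // Each incoming event triggers a request for that key's recent history.
                    ResultSet history = session.execute(
                        "SELECT value FROM events_by_key WHERE key = ? LIMIT 100",
                        record.key()); // hypothetical table

                    // Process the retrieved historic data together with the new value...
                    double sum = 0;
                    int n = 0;
                    for (Row row : history) {
                        sum += row.getDouble("value");
                        n++;
                    }
                    double mean = n > 0 ? sum / n : record.value();

                    // ...and produce a result (a trivial deviation-from-mean check here;
                    // the real application would run its actual detection algorithm).
                    boolean anomalous = Math.abs(record.value() - mean) > 3.0;
                    System.out.printf("key=%s value=%.2f anomalous=%b%n",
                        record.key(), record.value(), anomalous);
                }
            }
        }
    }
}
```

Note how each arriving event drives every subsequent step: Cassandra accumulates the stream for the long term, while the query, processing, and result production all happen on demand, in real time, per event.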