Kafka Offset Management With Spark Streaming

Guru Medasana and Jordan Hambleton explain how to perform Kafka offset management when using Spark Streaming:

Enabling Spark Streaming’s checkpoint is the simplest method for storing offsets, as it is readily available within Spark’s framework. Streaming checkpoints are purposely designed to save the state of the application, in our case to HDFS, so that it can be recovered upon failure.

Checkpointing the Kafka Stream will cause the offset ranges to be stored in the checkpoint. If there is a failure, the Spark Streaming application can begin reading the messages from the checkpoint offset ranges. However, Spark Streaming checkpoints are not recoverable across applications or Spark upgrades and hence not very reliable, especially if you are using this mechanism for a critical production application. We do not recommend managing offsets via Spark checkpoints.
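To make the checkpoint approach concrete, here is a minimal sketch of a checkpoint-backed Kafka direct stream using Spark's Kafka 0.10 integration. The broker address, topic, group id, and checkpoint path are placeholders, and the processing step is a stand-in; this is an illustration of the mechanism the quote describes, not code from the linked article.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

object CheckpointedKafkaStream {
  // Hypothetical checkpoint location (typically on HDFS) and topic name
  val checkpointDir = "hdfs:///user/spark/checkpoints/offset-demo"
  val topics = Seq("events")

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("checkpointed-kafka-stream")
    val ssc = new StreamingContext(conf, Seconds(10))
    // Offsets and DStream state are persisted here on every batch
    ssc.checkpoint(checkpointDir)

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "offset-demo-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
    )

    // Placeholder processing
    stream.map(record => record.value).count().print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // On restart, the context (including the stored Kafka offset ranges)
    // is rebuilt from the checkpoint instead of being created from scratch
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```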

The authors give several options, so check it out and pick the one that works best for you.
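For reference, one alternative that Spark's Kafka 0.10 integration supports directly, independent of the article's own recommendations, is committing each batch's offset ranges back to Kafka after processing. A rough sketch, assuming a direct stream `stream` created as in the example above and with the processing step again a placeholder:

```scala
import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

stream.foreachRDD { rdd =>
  // Capture the offset ranges for this batch before any shuffle repartitions the RDD
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // Placeholder for the actual batch processing
  rdd.map(record => record.value).count()

  // Commit the processed offsets back to Kafka; commitAsync is not transactional,
  // so end-to-end exactly-once still depends on idempotent or transactional output
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
```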

Related Posts

Kafka Streams Basics

Anuj Saxena walks through Kafka Streams and provides a quick example: The features provided by Kafka Streams: highly scalable, elastic, distributed, and fault-tolerant applications; stateful and stateless processing; event-time processing with windowing, joins, and aggregations. We can use the most common, already-defined transformation operations via the Kafka Streams DSL or the lower-level processor API, which allow us […]

Read More

R For Apache Impala

Ian Cook describes implyr, an R interface for Apache Impala: dplyr provides a grammar of data manipulation, consisting of a set of verbs (including mutate(), select(), filter(), summarise(), and arrange()) that can be used together to perform common data manipulation tasks. The implyr package helps dplyr translate this grammar into Impala-compatible SQL commands. This gives R users access to Impala’s scale and speed […]

Read More
