Working With Kafka At Scale

Kevin Feasel

2018-08-15

Hadoop

Tony Mancill has some tips for working with large-scale Kafka clusters:

Unless you have architectural needs that require you to do otherwise, use random partitioning when writing to topics. When you’re operating at scale, uneven data rates among partitions can be difficult to manage. There are three main reasons for this:

  • First, consumers of the “hot” (higher throughput) partitions will have to process more messages than other consumers in the consumer group, potentially leading to processing and networking bottlenecks.

  • Second, topic retention must be sized for the partition with the highest data rate, which can result in increased disk usage across other partitions in the topic.

  • Third, attaining an optimum balance in terms of partition leadership is more complex than simply spreading the leadership across all brokers. A “hot” partition might carry 10 times the weight of another partition in the same topic.

There’s some interesting advice in here.
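For concreteness, here is a minimal sketch of that first tip using the Java producer client. Sending records with a null key lets the default partitioner spread them across partitions (round-robin in older clients, sticky batching per partition since Kafka 2.4) rather than hashing a key to a single potentially hot partition. The broker address, topic name, and message contents below are placeholders, not anything from Mancill's post:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RandomPartitionProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker address for illustration.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // A null key means the default partitioner distributes
                // records across the topic's partitions instead of
                // routing every record with the same key to one partition.
                producer.send(new ProducerRecord<>("events", null, "message-" + i));
            }
        }
    }
}

The trade-off, as Mancill notes, is that you give up key-based ordering guarantees, so this only applies when you don't have an architectural need for keyed partitioning.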
