An Introduction To Kafka

Kevin Feasel

2017-07-14

Hadoop

Prashant Sharma explains the basics of Apache Kafka:

Apache describes Kafka as a distributed streaming platform that lets us:

  1. Publish and subscribe to streams of records.

  2. Store streams of records in a fault-tolerant way.

  3. Process streams of records as they occur.

Kafka is probably the most generally interesting project in the current Hadoop ecosystem, with Spark not too far behind.  By “generally interesting,” I mean in the sense that companies with no vested interest in Hadoop as a whole could still be excited by the prospect of Kafka.
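To make the first of those three capabilities concrete, here is a minimal sketch of publishing a record with the Java producer client. It assumes a broker listening on localhost:9092 and a topic named "events"; both are hypothetical stand-ins, not anything from the linked article.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuickstartProducer {
    public static void main(String[] args) {
        // Assumed local broker and topic name; adjust to your environment.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the "events" topic.  The broker appends it to a
            // replicated log (fault-tolerant storage), and any consumer subscribed to
            // the topic can read and process it as it arrives.
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }
    }
}
```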

