Manish Mishra explains the fundamental abstraction of Spark Streaming:
Before going into the details of the operations available on the DStream API, let us look at the input sources from which we can start a stream. There are multiple sources from which we can get input, e.g. Kafka, Flume, or simply files. For details on the input sources supported by Spark, you can refer to the input sources section of the Spark Streaming documentation. In this blog, we will take the example of Kafka.
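Before diving into the example, here is a minimal sketch of what creating a DStream from Kafka looks like with the spark-streaming-kafka-0-10 integration. The broker address (`localhost:9092`), consumer group (`example-group`), and topic name (`example-topic`) are illustrative assumptions, not values from the original post:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object KafkaDStreamExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaDStreamExample").setMaster("local[2]")
    // Batch interval of 5 seconds: incoming Kafka records are grouped
    // into one RDD (micro-batch) every 5 seconds.
    val ssc = new StreamingContext(conf, Seconds(5))

    // Kafka consumer configuration (broker, group id, and topic are assumptions).
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("example-topic")
    // Create a direct DStream: each micro-batch pulls the new records
    // from the subscribed Kafka topic partitions.
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
    )

    // Extract the record values and print a sample of each micro-batch.
    stream.map(record => record.value).print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Running this requires a Spark installation, the `spark-streaming-kafka-0-10` artifact on the classpath, and a reachable Kafka broker, so it is a sketch of the wiring rather than a standalone script.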
Read on to see an example of pulling data from Kafka and how Spark Streaming converts the incoming records into micro-batches.