The Basics Of RDDs In Apache Spark

Anmol Sarna walks us through some of the basics of Resilient Distributed Datasets in Apache Spark:

  • Resilient, i.e. fault-tolerant: with the help of the RDD lineage graph, Spark can recompute partitions that are missing or damaged due to node failures.

  • Distributed, with data residing on multiple nodes in a cluster.

  • Dataset, a collection of partitioned data.

Now that we know what RDD stands for, let’s try to understand it.

It’s a nice intro to the topic. Even though other data models sit on top of RDDs to make life easier for developers, it’s still important to understand the core model in Spark.
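To make those three properties concrete, here is a minimal Scala sketch, assuming a local SparkSession (the object and app names are illustrative): it builds a partitioned RDD, applies lazy transformations, and prints the lineage graph Spark would replay to recompute a lost partition.

import org.apache.spark.sql.SparkSession

object RddBasics {
  def main(args: Array[String]): Unit = {
    // Local session for experimentation; on a real cluster the partitions
    // below would be spread across executor nodes ("distributed").
    val spark = SparkSession.builder()
      .appName("rdd-basics")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // "Dataset": a collection of data, explicitly split into 4 partitions.
    val numbers = sc.parallelize(1 to 100, numSlices = 4)

    // Transformations are lazy: they only extend the lineage graph.
    val squaredEvens = numbers.filter(_ % 2 == 0).map(n => n * n)

    // "Resilient": this lineage is what Spark replays to recompute any
    // partition lost to a node failure.
    println(squaredEvens.toDebugString)

    // An action finally triggers computation across the partitions.
    println(s"partitions = ${squaredEvens.getNumPartitions}, sum = ${squaredEvens.sum()}")

    spark.stop()
  }
}

Note that nothing executes until the sum() action runs; up to that point Spark has only recorded the chain of transformations, which is exactly what makes recomputation after a failure possible.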

