Adding An Index To A Spark RDD

Kevin Feasel

2016-06-13

Spark

Arijit Tarafdar gives us a good method for adding an index column to a Spark data frame based on a non-unique value:

The basic idea is to create a lookup table of distinct categories indexed by unique integer identifiers. The approach to avoid is to collect the unique categories to the driver, loop through them to assign an index to each and build the lookup table (as a Map or equivalent), and then broadcast the lookup table to all executors. The amount of data that can be collected at the driver is controlled by the spark.driver.maxResultSize configuration, which defaults to 1 GB in Spark 1.6.1. Both collect and broadcast will eventually run into the physical memory limits of the driver and the executors, respectively, beyond a certain number of distinct categories, resulting in a non-scalable solution.
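
To make that concrete, here is a minimal sketch of the collect-and-broadcast pattern the quote warns against. The data frame, column names, and the SparkSession API are my own illustrative assumptions, not code from Arijit's post:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark = SparkSession.builder.appName("BroadcastLookupSketch").getOrCreate()
import spark.implicits._

// A stand-in data frame with a non-unique category column.
val df = Seq("a", "b", "a", "c", "b").toDF("category")

// Step 1: collect the distinct categories to the driver. This is the step
// that stops scaling -- the result must fit within spark.driver.maxResultSize
// and driver memory.
val categories = df.select("category").distinct().as[String].collect()

// Step 2: build the lookup table on the driver and broadcast a full copy
// to every executor, paying the memory cost again on each one.
val lookup = spark.sparkContext.broadcast(categories.zipWithIndex.toMap)

// Step 3: apply the broadcast lookup via a UDF to add the index column.
val toIndex = udf((c: String) => lookup.value(c))
df.withColumn("categoryIndex", toIndex($"category")).show()
```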

The solution is pretty interesting: build a new RDD of the unique values, index it, and then join that set back to the original data (sketched below). If you’re using SQL (including Spark SQL), I would use the DENSE_RANK() window function instead.
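
Here is a minimal sketch of that join-back approach, plus the DENSE_RANK() route via Spark SQL. Again, the data frame and column names are illustrative, not Arijit's actual code:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("ZipWithIndexJoinSketch").getOrCreate()
import spark.implicits._

val df = Seq("a", "b", "a", "c", "b").toDF("category")

// Index the distinct categories without collecting them: zipWithIndex
// counts each partition and assigns every element a unique Long, so the
// values themselves never leave the executors.
val lookup = df.select("category").distinct()
  .as[String].rdd
  .zipWithIndex()
  .toDF("category", "categoryIndex")

// Join the lookup table back to the original data frame.
df.join(lookup, Seq("category")).show()

// The DENSE_RANK() route mentioned above, via Spark SQL. Note the single
// unpartitioned window, which funnels all rows through one task.
df.createOrReplaceTempView("categories")
spark.sql("""
  SELECT category,
         DENSE_RANK() OVER (ORDER BY category) - 1 AS categoryIndex
  FROM categories
""").show()
```

The key difference from the broadcast pattern is that zipWithIndex never materializes the distinct values on the driver, so the lookup table scales with cluster memory rather than driver memory.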

