Brad Llewellyn takes us through more Spark RDD and DataFrame exercises, including joins:
We can make use of the built-in .join() function for RDDs. Similar to the .aggregateByKey() function we saw in the previous post, the .join() function operates on RDDs of 2-element tuples, with the first element being the key and the second element being the value. So, we need to use the .map() function to restructure our RDDs, storing the key in the first element and the original array/tuple in the second. After the join, we end up with an awkward nested structure of arrays and tuples that we need to flatten using another .map() function, leading to a lengthy code snippet.
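As a rough illustration of that pattern, here is a minimal sketch with hypothetical customer/order data (the names and fields are illustrative, not Llewellyn's actual example):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-join-sketch").getOrCreate()
sc = spark.sparkContext

# Each record is a plain tuple; the join key (customer id) is buried inside.
customers = sc.parallelize([(1, "Alice"), (2, "Bob")])
orders = sc.parallelize([(101, 1, 250.0), (102, 2, 75.5), (103, 1, 19.99)])

# .join() needs (key, value) pairs, so we .map() first to pull the key out.
customers_by_id = customers.map(lambda c: (c[0], c))   # (id, (id, name))
orders_by_cust = orders.map(lambda o: (o[1], o))       # (cust_id, (order_id, cust_id, amount))

# The join yields (key, (left_value, right_value)) -- a nested structure.
joined = customers_by_id.join(orders_by_cust)

# A second .map() flattens the nesting back into a usable tuple.
flat = joined.map(lambda kv: (kv[0], kv[1][0][1], kv[1][1][0], kv[1][1][2]))
print(flat.collect())  # [(1, 'Alice', 101, 250.0), (1, 'Alice', 103, 19.99), ...]
```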
This is a place where DataFrames make so much more sense.
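For comparison, a sketch of the same join with DataFrames (same hypothetical data): the key is a named column in the schema, so none of the map/re-map gymnastics are needed.

```python
# Build DataFrames directly; column names replace positional tuple indexing.
customers_df = spark.createDataFrame(
    [(1, "Alice"), (2, "Bob")], ["customer_id", "name"])
orders_df = spark.createDataFrame(
    [(101, 1, 250.0), (102, 2, 75.5), (103, 1, 19.99)],
    ["order_id", "customer_id", "amount"])

# One line, no restructuring before or after the join.
customers_df.join(orders_df, on="customer_id").show()
```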