Ofer Habushi solves a clickstream aggregation problem using Spark:
At this point, an interesting question came up for us: How can we keep the data partitioned and sorted?
That’s a challenge. When we sort the entire data set, we shuffle in order to get sorted RDDs, and that shuffle creates new partitions, which are different from the partitions we got from Step 1. And what if we do the opposite?
Sort first by creation time and then partition the data? We’ll run into the same problem: the repartitioning will cause another shuffle and we’ll lose the sort. How can we avoid that?
Partition → sort = losing the original partitioning
Sort → partition = losing the original sort
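As a concrete sketch of the first case (the session IDs, timestamps, and names below are illustrative assumptions, not the original code), partitioning by session and then sorting by creation time costs two shuffles, and the second one throws away the layout produced by the first:

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object NaivePartitionThenSort {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("naive-partition-then-sort").setMaster("local[*]"))

    // Hypothetical clickstream events: (sessionId, creationTime).
    val events = sc.parallelize(Seq(
      ("s1", 300L), ("s2", 100L), ("s1", 100L), ("s2", 200L)))

    // Shuffle #1: co-locate all events of a session in one partition.
    val bySession = events.partitionBy(new HashPartitioner(4))

    // Shuffle #2: sortBy range-partitions the data by creation time,
    // so events of one session are spread across partitions again.
    val sortedByTime = bySession.sortBy { case (_, ts) => ts }

    // Prints None: the session-based HashPartitioner from shuffle #1 is gone.
    println(sortedByTime.partitioner)
    sc.stop()
  }
}
```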
There’s a solution for that in Spark: repartitionAndSortWithinPartitions. It repartitions the RDD according to a given partitioner and, within each resulting partition, sorts records by their keys, which is more efficient than repartitioning and then sorting because the sorting is pushed down into the shuffle machinery.
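Here is a minimal Scala sketch of that pattern (the session IDs, timestamps, and the SessionPartitioner class are illustrative assumptions, not the original example): the session ID and creation time form a composite key, a custom partitioner routes records by session ID only, and repartitionAndSortWithinPartitions then partitions and sorts in a single shuffle.

```scala
import org.apache.spark.{HashPartitioner, Partitioner, SparkConf, SparkContext}

// Partition by the session ID half of the composite key only, so every
// event of a session lands in the same partition.
class SessionPartitioner(partitions: Int) extends Partitioner {
  private val hash = new HashPartitioner(partitions)
  override def numPartitions: Int = partitions
  override def getPartition(key: Any): Int = key match {
    case (sessionId, _) => hash.getPartition(sessionId)
  }
}

object RepartitionAndSortExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("clickstream-sort").setMaster("local[*]"))

    // Hypothetical clickstream events: (sessionId, creationTime).
    val events = sc.parallelize(Seq(
      ("s1", 300L), ("s2", 100L), ("s1", 100L), ("s2", 200L)))

    // Composite key (sessionId, creationTime): sorting by the key within a
    // partition groups events by session and orders them by creation time.
    val keyed = events.map { case (sessionId, ts) => ((sessionId, ts), ()) }

    // One shuffle: repartition by session ID and sort within each partition.
    val partitionedAndSorted =
      keyed.repartitionAndSortWithinPartitions(new SessionPartitioner(2))

    partitionedAndSorted.keys
      .mapPartitionsWithIndex { (i, it) => it.map(k => s"partition $i -> $k") }
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```

The trick is that the partitioner looks only at the session ID while the key ordering uses the full composite key, so a single shuffle yields both the grouping by session and the per-session time order (the classic secondary-sort pattern).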
This is an interesting solution to an ever-more-common problem.