Justin Brandenburg has a tutorial on combining Python and Spark on the MapR platform:
Looking at the first 5 records of the RDD
kddcup_data.take(5)
This output is difficult to read because we are asking PySpark to show us data in its raw RDD format. PySpark also offers DataFrame functionality. If your Python version is 2.7 or higher, you can use the pandas package; pandas doesn't support Python 2.6, however, so we use the Spark SQL functionality to create DataFrames for exploration.
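The heart of the RDD-to-DataFrame step is a parse function that turns each raw comma-separated record into a row with named fields. The sketch below shows that parsing logic in plain Python so it runs standalone; the field names are taken from the KDD Cup 1999 dataset description, and the comments note where PySpark's `createDataFrame` would come in (this is an illustration, not the tutorial's exact code):

```python
# Sketch of the parsing step behind building a Spark SQL DataFrame from
# the kddcup RDD. In PySpark you would map parse_record over the RDD and
# pass the result to sqlContext.createDataFrame; here we apply it to a
# single raw string so the logic is easy to follow.
from collections import namedtuple

# First four fields of a KDD Cup 1999 connection record.
Connection = namedtuple("Connection", ["duration", "protocol_type", "service", "flag"])

def parse_record(line):
    """Split one comma-separated record and keep the first four fields."""
    fields = line.split(",")
    return Connection(int(fields[0]), fields[1], fields[2], fields[3])

# A sample record in the KDD Cup format (hypothetical values).
sample = "0,tcp,http,SF,181,5450"
row = parse_record(sample)
print(row.protocol_type)  # named access instead of positional indexing
```

With named fields like these, the resulting DataFrame can be explored with readable column references rather than numeric offsets, which is exactly what the raw RDD output lacks.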
The full example is a fairly simple k-means clustering process, which is a great introduction to PySpark.