David Crook shows how to save data to disk from PySpark:
This works on HDInsight v3.5 with Spark 2.0 and Azure Data Lake Storage as the underlying storage system. What is nice about this is that my cluster only has access to its own section of the folder structure. I have the structure root/clusters/dasciencecluster. This particular cluster starts at dasciencecluster, while other clusters may start somewhere else. Therefore my data is saved to root/clusters/dasciencecluster/data/open_data/RF_Model.txt
It’s pretty easy to do, and the Scala code would look suspiciously similar. The Java version of the code would be seven pages long.
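For a concrete starting point, here is a minimal PySpark sketch of the idea: parallelize some text and write it with `saveAsTextFile` to a path that resolves under the cluster's attached storage. The data and the `SaveToADLS` app name are hypothetical stand-ins; only the output path comes from the quote above.

```python
from pyspark import SparkConf, SparkContext

# On HDInsight, the default filesystem is the cluster's attached storage
# (here, Azure Data Lake Storage), so a relative path lands under the
# cluster's own section of the folder structure.
conf = SparkConf().setAppName("SaveToADLS")
sc = SparkContext.getOrCreate(conf)

# Stand-in for whatever text you want to persist, e.g. a trained
# model's debug string (hypothetical example data).
model_summary = ["feature_0 <= 0.5", "feature_1 > 1.2"]

# Note: Spark writes a directory of part files, not a single file,
# so RF_Model.txt will be a folder containing part-00000, etc.
sc.parallelize(model_summary).saveAsTextFile("data/open_data/RF_Model.txt")
```

If you need a single output file rather than a directory of parts, calling `coalesce(1)` on the RDD before saving is the usual workaround, at the cost of funneling the write through one task.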