Tomaz Kastrun continues a series on Apache Spark. Day 3 shows off the CLI and web UI:
In the CLI we will type and run a simple Scala script and observe the behaviour in the web UI.
We will read a text file into an RDD (Resilient Distributed Dataset). The Spark engine resides at:
/usr/local/Cellar/apache-spark/3.2.0 on macOS and
C:\SparkApp\spark-3.2.0-bin-hadoop3.2 on Windows (based on the blog post from December 1).
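To give a flavour of the Day 3 exercise, here is a minimal sketch of reading a text file into an RDD from the Spark shell; the file name sample.txt is just a placeholder, and the job it triggers shows up in the web UI (by default at http://localhost:4040) while the shell is running.

```scala
// Run inside spark-shell, where the SparkContext is already available as sc.
// sample.txt is a hypothetical file in the working directory.
val rdd = sc.textFile("sample.txt")   // lazily creates an RDD[String]
val lineCount = rdd.count()           // an action: triggers a job you can watch in the web UI
println(s"Lines read: $lineCount")
```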
Day 4 compares local mode versus cluster mode:
Finding the best way to write Spark code will depend on the language flavour. As we have mentioned, Spark runs on Windows as well as on macOS and Linux (both UNIX-like systems), and you will need Java installed to run the clusters. Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.1+. The language flavour can also determine which IDE you use.
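To illustrate the local-versus-cluster distinction, here is a rough sketch of how the master setting picks the mode when building a SparkSession; the standalone URL spark://master-host:7077 is a made-up example.

```scala
import org.apache.spark.sql.SparkSession

// Local mode: driver and executors run in a single JVM, using all local cores.
val localSpark = SparkSession.builder()
  .appName("local-mode-example")
  .master("local[*]")
  .getOrCreate()

// Cluster mode: the same application code, pointed at a cluster manager instead.
// val clusterSpark = SparkSession.builder()
//   .appName("cluster-mode-example")
//   .master("spark://master-host:7077")   // hypothetical standalone master URL
//   .getOrCreate()
```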
Day 5 shows the setup of a Spark cluster:
Spark can run by itself or on top of several existing cluster managers, and it currently provides several options for deployment. If you decide to use Hadoop and YARN, you usually need to install everything on the nodes: Java (the JDK), Hadoop, and all of the required configuration. This approach is preferred when setting up several nodes. A good example and explanation is available here. You will also be installing HDFS, which comes with Hadoop.
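As a rough sketch of what running against such a cluster looks like from the application side, the snippet below assumes HADOOP_CONF_DIR points at the cluster's Hadoop configuration and uses a hypothetical HDFS path.

```scala
import org.apache.spark.sql.SparkSession

// Use YARN as the cluster manager (requires HADOOP_CONF_DIR to be set).
val spark = SparkSession.builder()
  .appName("yarn-example")
  .master("yarn")
  .getOrCreate()

// Read a file stored on HDFS, which is installed alongside Hadoop.
val lines = spark.sparkContext.textFile("hdfs:///user/spark/sample.txt")  // hypothetical path
println(s"Lines on HDFS: ${lines.count()}")
```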
Check out all three posts and get caught up on Spark.