Now you'll interact with Spark SQL through the Zeppelin UI, re-using the table definitions you created in the Hive metastore. Later in this post you'll create another table directly in Spark SQL to show how that would have been done there.
Connect to the Zeppelin UI and create a new notebook under the Notebook tab. Run a query to show the tables, and you'll see that the two tables you created in Hive are also available in Spark SQL.
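In a Zeppelin notebook paragraph, that check might look like the sketch below. This assumes the default `%sql` interpreter binding is wired to Spark SQL and that Spark is configured to use the Hive metastore (typically via a `hive-site.xml` on the classpath); your table names will be whatever you created in Hive.

```sql
%sql
-- Lists the tables registered in the shared Hive metastore;
-- the two tables created earlier in Hive should appear here.
SHOW TABLES
```

If the tables don't appear, it usually means Spark SQL is using its own local metastore rather than the Hive one, so it's worth double-checking that configuration before moving on.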
We've touched on a bunch of tools here, but for me the moral of the story is that SQL is a great language for data processing. Spark SQL still has gaps, but it has closed many of them over the past year or so, and I recommend giving it a shot.