I take a look at running a Hadoop query against a big(gish) data set:
Nearly 12 minutes doesn’t sound fantastic, but let’s remember that this is running on a single-node sandbox hosted on my laptop. That’s hardly a fair setup for a distributed processing system. Also, I have done nothing to optimize the files: I’m using compressed, comma-separated text files, have not partitioned the data in any meaningful way, and have taken the easy way out whenever possible. This means that an optimized file structure running on a real cluster with powerful servers behind it could return the data set a lot faster…but for our purposes, that’s not very important. I’m using the same hardware in all three cases, so in that sense this is a fair comparison.
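To make that optimization angle a bit more concrete, here is a minimal sketch of the sort of change I have in mind, using made-up table and column names: the current layout is an external Hive table over delimited text files, while a partitioned ORC table would let the engine skip data it doesn’t need.

```sql
-- Roughly what the unoptimized layout looks like: comma-delimited text files
-- with no partitioning.  Table, column, and path names here are hypothetical.
CREATE EXTERNAL TABLE flights_csv (
    flight_date STRING,
    origin      STRING,
    dest        STRING,
    arr_delay   INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/flights/csv/';

-- One possible optimized layout: columnar ORC storage partitioned by year,
-- so a query which filters on year reads only the relevant partitions.
CREATE TABLE flights_orc (
    flight_date STRING,
    origin      STRING,
    dest        STRING,
    arr_delay   INT
)
PARTITIONED BY (flight_year INT)
STORED AS ORC;

-- Allow dynamic partitioning for the one-time load from the text table.
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

INSERT OVERWRITE TABLE flights_orc PARTITION (flight_year)
SELECT
    flight_date,
    origin,
    dest,
    arr_delay,
    year(to_date(flight_date)) AS flight_year
FROM flights_csv;
```

This is only a sketch under those assumptions, but it shows the general shape of the win: columnar storage plus partition pruning means far less data read per query, even on the same hardware.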
Despite my hemming and hawing, Polybase still performed as well as Hive and kicked sand in the linked server’s face. I have several ideas for how to tune these queries and want to continue down this track, showing various ways to optimize Polybase and Hive queries.