I have a post on Polybase queries against Hadoop which do not generate MapReduce jobs:
The sys.dm_exec_external_work DMV tells us which execution we care about; in this case, I ended up running the same query twice, but decided to look at the first run. From there, I can get step information from sys.dm_exec_distributed_request_steps. This shows that the engine created a table in tempdb called TEMP_ID_14 and streamed results into it. It also created some statistics (though I'm not quite sure where it got the 24 rows from) and then performed a round-robin query: each Polybase compute node queries its temp table and streams the data back to the head node. Even though our current setup has only one compute node, the operation is the same as if we had a dozen Polybase compute nodes.
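For reference, a minimal sketch of the kind of DMV queries described above (run on a live SQL Server instance with Polybase installed; the `QID1234` execution ID is a placeholder — substitute the one you find in the first result set):

```sql
-- Find recent external work items to identify the execution we care about.
SELECT w.execution_id, w.step_index, w.status, w.bytes_processed, w.start_time
FROM sys.dm_exec_external_work AS w
ORDER BY w.start_time DESC;

-- Then pull the distributed request steps for that execution.
-- 'QID1234' is a hypothetical value; use the execution_id from the query above.
SELECT s.execution_id, s.step_index, s.operation_type, s.location_type,
       s.row_count, s.command
FROM sys.dm_exec_distributed_request_steps AS s
WHERE s.execution_id = 'QID1234'
ORDER BY s.step_index;
```

The step list is where you would see the tempdb table creation (e.g. TEMP_ID_14), the statistics creation, and the round-robin read described above.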
Click through for Wireshark-related fun.
Comments closed