Press "Enter" to skip to content

Category: Query Tuning

Dynamic Partition Pruning in Apache Spark 3.0

Anjali Sharma walks us through a nice improvement in Spark SQL coming with Apache Spark 3.0:

Partition pruning in Spark is a performance optimization that limits the number of files and partitions that Spark reads when querying. After partitioning the data, queries that match certain partition filter criteria improve performance by allowing Spark to only read a subset of the directories and files. When partition filters are present, the catalyst optimizer pushes down the partition filters. The scan reads only the directories that match the partition filters, thus reducing disk I/O.

However, in reality data engineers don’t just execute a single query or a single filter; the common case is that they have dimension tables, small tables that they need to join with a larger fact table. In that case we can no longer apply static partition pruning, because the filter is on one side of the join and the table that is more attractive to prune is on the other side. So, we have a problem now.

And that’s where dynamic partition pruning comes into play.
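
To make the scenario concrete, here is a minimal Spark SQL sketch of the dimension-to-fact join described above. The table and column names (sales, stores, store_id, region) are hypothetical; the feature itself is controlled by spark.sql.optimizer.dynamicPartitionPruning.enabled, which is on by default in Spark 3.0.

-- A large fact table partitioned on the join key, plus a small dimension table.
CREATE TABLE sales (amount DOUBLE, store_id INT)
USING parquet
PARTITIONED BY (store_id);

CREATE TABLE stores (store_id INT, region STRING)
USING parquet;

-- The filter is on the dimension table, not on the partition column of sales,
-- so static partition pruning cannot help. At run time, dynamic partition
-- pruning builds a filter from the store_id values that survive the region
-- predicate and reads only the matching partitions of sales.
SELECT s.store_id, SUM(s.amount) AS total_amount
FROM sales AS s
JOIN stores AS st ON s.store_id = st.store_id
WHERE st.region = 'EMEA'
GROUP BY s.store_id;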


Pulling GROUP BY Above a Join

Paul White has fun with a SQL Server optimization:

When the optimizer moves a Group By above a Join it has to preserve the semantics. The new sequence of operations must be guaranteed to return the same results as the original in all possible circumstances.

One cannot just pick up a Group By and arbitrarily move it around the query tree without risking incorrect results.
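
This is not Paul's demo, but a rough sketch of the kind of rewrite involved, using hypothetical Orders and Customers tables; the exact conditions under which it is legal are what the post explores.

-- As written: aggregate Orders first, then join the grouped result to Customers.
SELECT c.CustomerID, c.CustomerName, a.TotalAmount
FROM
(
    SELECT CustomerID, SUM(Amount) AS TotalAmount
    FROM dbo.Orders
    GROUP BY CustomerID
) AS a
JOIN dbo.Customers AS c
    ON c.CustomerID = a.CustomerID;

-- Because CustomerID is unique in Customers, the join can neither add nor
-- remove Orders rows within a CustomerID group, so the optimizer may join
-- first and place the Group By above the join instead:
SELECT c.CustomerID, c.CustomerName, SUM(o.Amount) AS TotalAmount
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerID, c.CustomerName;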

Read on for the demo, including a couple of ingenious tricks the optimizer has up its sleeve.


Exchange Demand Partitioning and Parallel Queries

Joe Obbish takes us through a fun concurrency problem:

Very little has been written about exchange operators with a partitioning type of demand, so I forgive you for not hearing of it before today. There is a brief explanation available here, an example of using demand partitioning to improve some query plans involving partitioned tables, and a Stack Exchange answer for someone comparing round robin and demand partitioning. You have the honor of reading perhaps the fourth blog post about the subject.

Read on for an in-depth look at the problem.


Misleading Query Plan Estimates from Missing Nodes

Hugo Kornelis points out another issue to watch for when checking arrow widths while reading execution plans:

The issue I will talk about today is caused when a node is missing, due to a post-optimization rewrite. I myself have so far only seen this in execution plans where a bitmap filter (created by a Bitmap operator, by a Batch Hash Table Build operator, or by a Hash Match operator with a BitmapCreator property) was pushed into an Index Scan, Clustered Index Scan, or Columnstore Index Scan operator. So that’s what I’ll focus on here. But do keep in mind that there may be other situations where a post-optimization rewrite removes a node.

To understand the root cause of this (and, credit where credit is due, I only understood this myself after getting it explained by Paul White, whose explanation I am now paraphrasing here), we need to dig deeper into the internals of the optimizer.

This is where you put on the internals mining cap and watch out for falling rocks.
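
For orientation, the general shape of query that can produce this pattern looks something like the sketch below. The tables are hypothetical, and whether a bitmap actually appears (and gets pushed into the scan) depends on parallelism, row counts, and the SQL Server version, so treat it as an illustration rather than a repro.

-- A hash join between a filtered dimension (build side) and a large fact
-- table (probe side) can create a bitmap from the build side and push it
-- into the fact table scan, where it shows up in the scan's Predicate as
-- something like PROBE([Opt_Bitmap1005], [f].[ProductKey]).
SELECT f.OrderDateKey, f.SalesAmount
FROM dbo.FactSales AS f
JOIN dbo.DimProduct AS p
    ON p.ProductKey = f.ProductKey
WHERE p.Category = N'Bikes'
OPTION (MAXDOP 4);   -- row-mode bitmaps require a parallel plan; batch mode does not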


Rows Read Versus Rows Returned

Hugo Kornelis explains another issue with arrow widths in tools like SQL Server Management Studio:

The visual of the arrows in an execution plan strongly suggests that they represent the flow of rows from one operator to another. And hence, the width of that arrow strongly appears to be an indication of how many rows are passed between the connected operators. That used to be always the case. But unfortunately, this changed in December 2017 (or earlier, but that was when I first noticed the change).

The root cause is the addition of a new property, returned by Scan and Seek operators: Number of Rows Read (as well as its estimated counterpart: Estimated Number of Rows to be Read). Now don’t get me wrong, those properties are awesome and I’m super happy that they were added to execution plans. When a filter condition is pushed into a scan or seek operator as a Predicate property, I want to be able to see how effective it is, and comparing Actual Number of Rows to Number of Rows Read (or Estimated Number of Rows to Estimated Number of Rows to be Read) helps me assess just that. Which in turn can help me decide whether I should change my indexing, or try to rewrite the query.  But I digress.
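
As a minimal sketch of the comparison Hugo describes (hypothetical table and index), suppose there is a nonclustered index on OrderDate that also carries Quantity as an included column:

-- The seek predicate is the OrderDate range; the Quantity test cannot seek,
-- so it is pushed into the operator as a residual Predicate.
-- Number of Rows Read   = rows examined for the OrderDate range
-- Actual Number of Rows = rows that also satisfied Quantity > 100
-- A large gap between the two suggests the index (or the query) could be improved.
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20190101'
  AND OrderDate <  '20190201'
  AND Quantity  >  100;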

Click through to learn what the issue is. Hugo describes a tricky situation where there are two valuable measures but only one way to show them. If you agree with Hugo’s preferences, here’s a Feedback item for you.


When Arrow Widths Mislead

Hugo Kornelis provides one example of misleading arrow widths in execution plans:

In my previous post, I talked about the Actual Number of Rows and Estimated Number of Rows properties, their visual representation in execution plans, and the most important ways in which this can be used.

But life is not always perfect, and Microsoft likes to remind us of that. Sometimes, these properties report values in weird and confusing ways. Sometimes the arrow width, as the visual representation in the execution plan, misleads us.

Let’s look at one such case.

Read the whole thing and vote for Hugo’s Feedback item to improve this experience.


Row Counts and Arrow Widths in SSMS

Hugo Kornelis takes us through one of the niceties SQL Server Management Studio (and subsequent tools like SentryOne Plan Explorer and Azure Data Studio) has for reading execution plans:

Because both the Actual Number of Rows and the Estimated Number of Rows are very valuable for many use cases, Microsoft has built their primary tool for viewing execution plans (SQL Server Management Studio) to highlight the value of at least one of these two properties, in a visual way, when you look at the graphical representation of an execution plan.

Look at the arrows that show which operator (is called by and) returns rows to which operator, and that therefore can be thought of as representing the data stream. In most execution plans, they are not all the same width. Management Studio uses the width of the arrow as a quick, simple visual representation of the value of Actual Number of Rows (in an execution plan plus run-time statistics, aka “execution plan plus” or “actual execution plan”), or of the value of the Estimated Number of Rows in an execution plan only (aka “estimated execution plan”).

This first post is the happy path post, where the information is correct and useful. Later posts promise to cover cases in which this does not hold.


Issues with Window Functions and Views / Derived Tables

Paul White takes us through some of the difficulties you are liable to see when using window functions in views and derived tables:

Our expectation is that the execution plan for this new query will be exactly the same as before we created the view. The query optimizer should be able to push the product filter specified in the WHERE clause down into the view, resulting in an index seek.

We need to stop and think a bit at this point, however. The query optimizer can only produce execution plans that are guaranteed to produce the same results as the logical query specification. Is it safe to push our WHERE clause into the view?
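
The shape of the problem can be sketched roughly like this (hypothetical view and column names, not Paul's exact demo): a view computes a per-partition window aggregate, and the safety of pushing an outer WHERE clause below it depends on which column the predicate references.

CREATE VIEW dbo.ProductTotals
AS
SELECT
    ProductID,
    TransactionDate,
    Quantity,
    SUM(Quantity) OVER (PARTITION BY ProductID) AS TotalQuantity
FROM dbo.TransactionHistory;
GO

-- Safe to push down: the predicate is on the PARTITION BY column, so removing
-- other products cannot change this product's window aggregate.
SELECT * FROM dbo.ProductTotals WHERE ProductID = 878;

-- Not generally safe to push down: filtering on Quantity before the window
-- function would change which rows feed SUM(...) OVER (...), and so the results.
SELECT * FROM dbo.ProductTotals WHERE Quantity > 10;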

Read the whole thing.


The Halloween Problem, Continued

Paul White continues a series on the Halloween Problem. Part 2 looks at insert and delete statements:

In the first part of this series, we saw how the Halloween Problem applies to UPDATE queries. To recap briefly, the problem was that an index used to locate records to update had its keys modified by the update operation itself (another good reason to use included columns in an index rather than extending the keys). The query optimizer introduced an Eager Table Spool operator to separate the reading and writing sides of the execution plan to avoid the problem. In this post, we will see how the same underlying issue can affect INSERT and DELETE statements.
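
The classic illustration (hypothetical table) is an UPDATE that moves rows through the very index being used to find them:

-- Give a 10% raise to everyone earning under 25,000, locating rows via an
-- index on Salary. Without protection, an updated row can move forward in
-- that index and be found (and raised) again. The Eager Table Spool reads
-- all qualifying rows first, fully separating the read from the write.
UPDATE dbo.Employees
SET Salary = Salary * 1.10
WHERE Salary < 25000;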

Part 3 looks at the train wreck that is the MERGE statement:

In the right circumstances, the SQL Server optimizer can recognize that the MERGE statement is hole-filling, which is just another way of saying that the statement only adds rows where there is an existing gap in the target table’s key.

For this optimization to be applied, the values used in the WHEN NOT MATCHED BY TARGET clause must exactly match the ON part of the USING clause. Also, the target table must have a unique key (a requirement satisfied by the PRIMARY KEY in the present case).

Where these requirements are met, the MERGE statement does not require protection from the Halloween Problem.
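
A minimal sketch of a hole-filling MERGE meeting those requirements (hypothetical tables) might look like this:

-- dbo.Target has a PRIMARY KEY on id, and the inserted key value (s.id) is
-- exactly the value compared in the ON clause, so the statement only fills
-- gaps in the target's key and needs no Halloween Protection.
MERGE dbo.Target AS t
USING dbo.Source AS s
    ON t.id = s.id
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, payload)
    VALUES (s.id, s.payload);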

If only it weren’t busted in so many other ways!

Part 4 wraps up the series:

The SQL Server optimizer has specific features that allow it to reason about the level of Halloween Protection (HP) required at each point in the query plan, and the detailed effect each operator has. These extra features are incorporated into the same property framework the optimizer uses to keep track of hundreds of other important bits of information during its search activities.

Each operator has a required HP property and a delivered HP property. The required property indicates the level of HP needed at that point in the tree for correct results. The delivered property reflects the HP provided by the current operator and the cumulative HP effects provided by its subtree.

This last one goes into some nice detail.


Reading Query Plans in Spark

Daniel Ciocirlan has a primer on query plans in Apache Spark:

Let’s go over some examples of query plans and how to read them. Let’s go back to the one we’ve just shown:

== Physical Plan ==
*(1) Project [(id#0L * 5) AS id#2L]
+- *(1) Range (1, 1000000, step=1, splits=6)

We read this plan backwards, bottom to top.

Spark does have some UI components which make this a bit easier, but you’ll probably end up in a situation where you need to read it in this format.
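
If you want to generate a plan of roughly this shape yourself, something along these lines should work in Spark SQL (the splits value reflects local parallelism, so yours may differ):

-- range(...) is a built-in table-valued function; EXPLAIN prints the physical
-- plan without executing the query.
EXPLAIN SELECT id * 5 AS id FROM range(1, 1000000);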
