Press "Enter" to skip to content

Category: Internals

Sources of Query Plan Duplication

Eitan Blumin has a script for us:

Running the above script with the parameter @RCA set to 0 will output a list of the top query hashes (sorted based on their size in bytes so that you’d see the most impactful queries first) that have multiple different query plan hashes cached for them.

Running the script with the parameter @RCA set to 1 will output the same list, with additional columns that will help you do deeper root cause analysis.

Unlike the similar script provided by Brent, my script counts the number of query plan hashes rather than the query plan handles per query hash. I found that this results in far fewer “false positives”. But you can still set the parameter @CountByPlanHandleInsteadOfPlanHash to 1 if you want it to count based on query plan handles instead.

Click through for the script, details on how it works, and various classes of reason why you might get to this outcome.
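
For a rough sense of the shape of such a check, here is a minimal sketch (not Eitan's script, and without his size-in-bytes ordering or RCA columns), using only the documented plan cache DMV:

-- Minimal sketch (not Eitan's script): query hashes with more than one plan hash cached
SELECT TOP (20)
    qs.query_hash,
    COUNT(DISTINCT qs.query_plan_hash) AS distinct_plan_hashes,
    SUM(qs.execution_count)            AS total_executions,
    SUM(qs.total_worker_time)          AS total_cpu_us
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(DISTINCT qs.query_plan_hash) > 1
ORDER BY distinct_plan_hashes DESC;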


Undocumented Goodies in SQL Server 2022 RC0

Brent Ozar digs through some internals:

Microsoft has an official list of what’s new in 2022 overall, but here I’m specifically focusing on system objects that might be interesting to script developers like you and I, dear reader.

New stored procedure sp_get_table_card_est_and_avg_col_len – I assume the “card” refers to statistics and cardinality, not Hallmark. SQL Server has historically struggled with memory grants because it uses datatype size to budget memory grants, and bigger-than-necessary data types (like NVARCHAR(MAX)) have led to larger-than-necessary grants. It’d be cool if this proc was a down payment to mitigate that problem, but I have a sinking feeling it has to do with external data sources. I would tell you more about it, but when I run it, I get:

Read on for the full list.
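
The memory-grant point is easy to see in a quick demo (illustrative only, not from Brent's post): sort the same strings once as nvarchar(100) and once as nvarchar(max), then compare the Memory Grant property on the two actual execution plans.

SELECT TOP (1000) CONVERT(nvarchar(100), o.name) AS n
FROM sys.all_objects AS o
ORDER BY n;   -- modest memory grant

SELECT TOP (1000) CONVERT(nvarchar(max), o.name) AS n
FROM sys.all_objects AS o
ORDER BY n;   -- same rows, much larger grant, because the estimate is based on the declared size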


Formatting Binary LSN Values

Michael J. Swart does a bit of shuffling:

Typically as developers, we don’t care about these values. But when we do want to dig into the transaction log, we can do so with sys.fn_dblog which takes two optional parameters. These parameters are LSN values which limit the results of sys.fn_dblog. But the weird thing is that sys.fn_dblog is a function whose LSN parameters are NVARCHAR(25).

The function sys.fn_dblog doesn’t expect binary(10) values for its LSN parameters, it wants the LSN values as a formatted string, something like: 0x00000029:00001a3c:0002.

Never fear, though: Michael’s got us covered. Click through for a conversion function.
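
For a rough idea of the shape of that conversion, here is a sketch (not Michael's function): split the binary(10) LSN into its 4-byte, 4-byte, and 2-byte parts and render each as hex.

DECLARE @lsn binary(10) = 0x0000002900001A3C0002;

SELECT '0x'
     + CONVERT(varchar(8), SUBSTRING(@lsn, 1, 4), 2) + ':'   -- VLF sequence number
     + CONVERT(varchar(8), SUBSTRING(@lsn, 5, 4), 2) + ':'   -- log block offset
     + CONVERT(varchar(4), SUBSTRING(@lsn, 9, 2), 2)         -- slot number
       AS lsn_string;
-- Returns 0x00000029:00001A3C:0002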


Z-Ordering with Apache Impala

Zoltan Borok-Nagy and Norbert Luksa show off a performance improvement in Apache Impala:

So we’ll have great search capabilities against the partition columns plus one data column (which drives the ordering in the data files). With our sample schema above, this means we could specify a SORT BY “platform” to enable fast analysis of all Android or iOS users. But what if we wanted to understand how well version 5.16 of our app is doing across platforms and countries?

Can we do more? It turns out that we can. There are exotic orderings out there that can also sort data by multiple columns. In this post, we will describe how Z-order allows ordering of multidimensional data (multiple columns) with the help of a space-filling curve. This ordering enables us to efficiently search against more columns. More on that later.

It looks like a really good technique for nearly-static data, sort of like you’d see with a data warehouse which refreshes once a day.
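
The DDL side looks roughly like this (a sketch assuming Impala 4.0+ Z-order support; the table and columns are illustrative):

CREATE TABLE app_events (
  user_id     BIGINT,
  app_version STRING,
  country     STRING,
  event_time  TIMESTAMP
)
PARTITIONED BY (platform STRING)        -- fast pruning on the partition column
SORT BY ZORDER (app_version, country)   -- Z-order interleaves both columns in the file ordering
STORED AS PARQUET;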


Safe and Unsafe Simple Parameterization

Paul White puts on the safety glasses:

When a statement passes the earlier parser and decoder checks, it arrives at the trivial plan stage as a prepared (parameterized) statement. The query processor now needs to decide if the parameterization attempt is safe.

Parameterization is considered safe if the query processor would generate the same plan for all possible future parameter values. This might seem like a complex determination to make, but SQL Server takes a practical approach.

Read on to learn more about the process.
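
A quick way to see that prepared form (a rough illustration, not from Paul's post; dbo.SomeTable is a hypothetical table): run a query with a literal and then look it up in the plan cache.

SELECT COUNT(*) FROM dbo.SomeTable WHERE SomeId = 42;   -- hypothetical table and column

SELECT cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SomeTable%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';
-- If simple parameterization succeeded, the cached text looks something like
-- (@1 tinyint)SELECT COUNT(*) FROM [dbo].[SomeTable] WHERE [SomeId]=@1
-- and the entry's objtype is 'Prepared'.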


Concurrency Control and VACUUM in Postgres

Paul Randal explains how multi-version concurrency control works in Postgres:

PostgreSQL uses an optimistic isolation system known as Multi-Version Concurrency Control (MVCC). MVCC ensures transactions writing data to the database don’t block concurrent transactions needing to read the data being modified. This works through the magic of row-versioning—PostgreSQL creates versions of rows in the database tables to minimize blocking from concurrent access. As more and more versions are generated, a garbage control mechanism called VACUUM must be used to ensure the tables are properly maintained. In this article, I’ll explain how all this works via a series of examples.

This is quite similar to Read Committed Snapshot Isolation in SQL Server but with a couple of twists, including the need to vacuum tuples.
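
A minimal PostgreSQL sketch of the mechanics (not from Paul's article):

CREATE TABLE mvcc_demo (id int PRIMARY KEY, val text);
INSERT INTO mvcc_demo VALUES (1, 'original');
UPDATE mvcc_demo SET val = 'updated' WHERE id = 1;   -- the old row version becomes a dead tuple

SELECT xmin, xmax, * FROM mvcc_demo;                 -- system columns show which transaction created the visible version

SELECT n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'mvcc_demo';                         -- dead versions accumulate here...

VACUUM mvcc_demo;                                    -- ...until VACUUM reclaims the space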


Buffer Pool Parallel Scans in SQL Server 2022

David Pless talks about an internal optimization in SQL Server 2022:

Operations such as database startup/shutdown, creating a new database, file drop operations, backup/restore operations, Always On failover events, DBCC CHECKDB and DBCC Check Table, log restore operations, and other internal operations (e.g., checkpoint) will all benefit from Buffer Pool Parallel Scan.

In SQL Server 2019 and previous releases, operations that require scanning the buffer pool can be slow, especially on large memory machines such as the M-series Azure SQL virtual machine and large on-premises SQL Server environments. Even log restore operations and availability group failover operations can be impacted. Currently, there’s no way to eliminate this issue prior to SQL Server 2022, and dropping buffers using DBCC DROPCLEANBUFFERS would likely result in some degree of performance degradation as any subsequent query executions will have to reread the data from the database files increasing I/O.

Read on to understand why these operations can be slow on high-memory boxes and how much of a benefit you might get on certain administrative activities.
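
To get a feel for the scale involved, a query like this (not from David's post) counts buffer pool pages per database; on a box with terabytes of RAM that is hundreds of millions of buffers to walk.

SELECT DB_NAME(database_id)     AS database_name,
       COUNT_BIG(*)             AS buffer_count,
       COUNT_BIG(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_count DESC;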


The Power of Hash Match Joins

Jared Poche looks at hash matching:

When I began working at Microsoft, I was very much a novice at performance troubleshooting. There was a lot to learn, and hash match joins were pointed out to me multiple times as the potential cause for a given issue. So, for a while I had it in my head, “hash match == bad”. But this really isn’t the case.

Hash matches aren’t inefficient; they are the best way to join large result sets together. The caveat is that you have a large result set, and that itself may not be optimal. Should it be returning this many rows? Have you included all the filters you can? Are you returning columns you don’t need?

I might throw in one caveat about hash match joins being the best performers for joining two really large datasets: a merge join can be more efficient, so long as both inputs are guaranteed to arrive ordered the same way without an explicit sort operator. That last clause is usually the kicker.
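
If you want to compare the two shapes against your own data, join hints make the difference easy to see in the actual plans (the tables here are hypothetical):

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN);

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (MERGE JOIN);
-- Merge only wins when both inputs already arrive ordered by CustomerID (e.g., from
-- indexes keyed on it); if the optimizer has to add a Sort, the advantage usually disappears.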


Fun with Nested Loops

Jared Poche explains my favorite type of join:

Nested loops joins are the join operator you are likely to see the most often. It tends to operate best on smaller data sets, especially when the first of the two tables being joined has a small data set.

In row mode, the first table returns rows one at a time to the join operator. The join operator then performs a seek\scan against the second table for each row passed in from the first table. It searches that table based on the data provided by the first table, and the columns defined in our ON or WHERE clauses.

Read on for more information about nested loop joins.
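
One way to see the once-per-outer-row behavior (an illustration, not from Jared's post; the tables are hypothetical) is to force the loop join and check the inner seek's Number of Executions in the actual plan.

SELECT o.OrderID, c.CustomerName
FROM dbo.Customers AS c                      -- small outer input
JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
WHERE c.Region = 'NW'
OPTION (LOOP JOIN);
-- In the actual plan, the seek on dbo.Orders shows a Number of Executions equal to the
-- number of rows the Customers side produced: one inner seek per outer row.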


Memory Fractions in SQL Server

Hugo Kornelis explains the notion of memory fractions:

Some time ago a reader reached out to me with a request for help. He showed me a query and accompanying execution plan, and asked if I could help reduce (or, better yet, eliminate) the many hash spills that were killing his performance.

While helping him work through the plan, I was once more reminded of one of my pet peeves with execution plans: we get to see the requested memory for the plan (the Memory Grant and MemoryGrantInfo properties), which is of course based on the estimated total memory usage of operators that are active at the same time. We also get to see the actual memory used by each individual operator (in the Memory Usage property). But there is no way to see how much memory the optimizer estimates for each individual operator.

Read on for a detailed explanation.
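
The plan-level numbers are easy to get at even though the per-operator fractions are not; for currently executing queries, a query like this (not from Hugo's post) shows requested versus granted versus used memory.

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;
-- How a grant is split among the concurrently active operators (the memory fractions)
-- is exactly what the plan does not expose directly.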
