When Table Variables Have Realistic Estimates, Unrealistic Results May Occur

Milos Radivojevic wraps up a series on deferred compilation for table variables by looking at a hack which used to work but no longer does:

With this change, the query is executed very fast, with the appropriate execution plan:

SQL Server Execution Times:
CPU time = 31 ms,  elapsed time = 197 ms.

However, the LOOP hint does not affect estimations or the optimizer's decisions based on them; it simply replaces the join operators chosen by the optimizer with the Nested Loop Joins specified in the hint. SQL Server still expects billions of rows, and therefore the query received a memory grant of more than 2 GB for sorting data, although only 3,222 rows needed to be sorted. The hint helped the optimizer produce a good execution plan (which is great; otherwise this query would run for a very long time and probably never finish), but the high memory grant issue is not solved.

As you might guess, now it’s time for table variables.
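
To make the quoted point concrete, here is a minimal sketch of the kind of hinted query being described; the table names are hypothetical stand-ins, not the article's sample data:

-- Forces Nested Loop Joins for every join in the query. The hint only
-- swaps the physical join operators; the cardinality estimates, and the
-- memory grant sized from them for the Sort, are left unchanged.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o                                   -- hypothetical tables
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
ORDER BY c.CustomerName
OPTION (LOOP JOIN);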

This is an interesting article, with workarounds and counter-workarounds for a nasty estimation problem.
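
If you want to reproduce the table variable side of this, the sketch below (same hypothetical tables as above) contrasts deferred compilation under compatibility level 150 with the legacy fixed one-row estimate, which the documented DISABLE_DEFERRED_COMPILATION_TV hint brings back per query:

DECLARE @Orders TABLE (OrderID INT PRIMARY KEY, CustomerID INT);

INSERT INTO @Orders (OrderID, CustomerID)
SELECT OrderID, CustomerID
FROM dbo.Orders;                                       -- hypothetical source

-- Compatibility level 150 (SQL Server 2019): deferred compilation
-- estimates @Orders using its actual row count at first execution.
SELECT o.OrderID, c.CustomerName
FROM @Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- The same query with the pre-2019 behavior: a fixed estimate of
-- one row for @Orders, regardless of how many rows it holds.
SELECT o.OrderID, c.CustomerName
FROM @Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (USE HINT('DISABLE_DEFERRED_COMPILATION_TV'));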
