Project Normalization In UDFs

Dmitry Pilugin looks into how the optimizer (using the 2014-and-on cardinality estimator) processes user-defined functions:

If we remember, for CE 120 it was a one-row estimate, and in this case the server decided that it is cheaper to use a nonclustered index and then do a lookup into the clustered one. Not very effective if we remember that our predicate returns all rows.

In CE 130 there was a 365-row estimate, which is too expensive for a key lookup, so the server decided to do a clustered index scan.

But wait: in the second plan the estimate is also 1 row!

That fact seemed very curious to me, and that's why I'm writing this post. To find the answer, let's look in more detail at how the optimization process goes.

This was an interesting look at how the optimizer handles scalar user-defined functions.
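
To make the scenario concrete, here is a minimal sketch of the kind of query under discussion: a scalar UDF used as a predicate, run once under compatibility level 120 and once under 130 so the two cardinality estimators can be compared. The table, column, and function names below are placeholders, not the actual objects from Pilugin's post.

-- A minimal sketch, not the exact setup from the linked post: a scalar UDF
-- used as a predicate. Table, column, and function names are placeholders.
CREATE FUNCTION dbo.udf_AlwaysTrue (@d date)
RETURNS bit
AS
BEGIN
    RETURN 1;   -- the post's predicate likewise matches every row
END;
GO

-- Compare the plans under the two estimators by switching the database
-- compatibility level (120 = SQL Server 2014 CE, 130 = SQL Server 2016 CE).
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 120;   -- re-run with 130
GO

SELECT *
FROM dbo.SomeTable AS t                       -- assumed table with a date column
WHERE dbo.udf_AlwaysTrue(t.SomeDate) = 1;

Under 120 the one-row guess makes the nonclustered index seek plus key lookup plan look cheap; under 130 the larger guess (365 rows in the post's example) tips the costing toward a clustered index scan, which is what the quoted plans show.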
