Randolph West walks us through a performance troubleshooting issue with a twist:
So we branch the code in source control, and start writing a helper class to manage the data for us closer to the application. We throw in a SqlDataAdapter, use the Fill() method to bring back all the rows from the query in one go, and then use a caching layer to keep it in memory in case we need it again. SQL Server’s part in this story has now faded into the background. This narrow table consumes a tiny 8 MB of RAM, and having two or more copies in memory isn’t the end of the world for our testing. So far, so good again.

We run the new code, first stepping through to make sure that it still does what it used to, massaging here and there so that in the end, a grid is populated on the application with the results of the query. Success! We then compile it in Release mode, and run it without any breakpoints to do some performance testing.
And then we find that it runs at exactly the same speed to produce exactly the same report, using our caching and SqlDataAdapter, and we’ve wasted another hour of our time waiting for the grid and report. Where did we go wrong?
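To make the moving parts concrete, here is a rough sketch of the pattern Randolph describes: SqlDataAdapter.Fill() pulling the whole result set into a DataTable, fronted by a bare-bones in-memory cache. The connection string, query, and cache shape are placeholders for illustration, not his actual code.

```csharp
using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;

// Sketch of the "fill it once, cache it near the application" approach.
public static class ReportDataCache
{
    // Naive cache keyed by query text; a real application would add
    // expiration and invalidation.
    private static readonly ConcurrentDictionary<string, DataTable> Cache =
        new ConcurrentDictionary<string, DataTable>();

    public static DataTable GetOrLoad(string connectionString, string query)
    {
        return Cache.GetOrAdd(query, key =>
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var adapter = new SqlDataAdapter(key, connection))
            {
                // Fill() opens the connection if needed, runs the query,
                // and loads every row into the DataTable in one go.
                adapter.Fill(table);
            }
            return table;
        });
    }
}
```

The point of the story, of course, is that this layer on its own doesn't change how long the report takes to appear.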
As we get better at tuning, we start to make assumptions based on prior experience. That is, on net, a good thing, but as Randolph shows, those assumptions can still be wrong.