Brent Ozar recommends a sanity check when tuning queries:
When I’m tuning queries, the normal answer is to make the query perform better – either via changing the T-SQL, adding hints, or adding indexes so that the data’s better prepared for the query.
However, sometimes when I’m looking at the output of sp_BlitzCache, I scroll across to the Average Rows column and double-check that the query’s actually returning a reasonable number of rows out in the wild.
Click through for some horror stories and the benefits of pagination. Brent's last case was a scenario in which people received 1000 rows and nobody ever complained. I've worked in systems where the customers genuinely did want and need every row, and even there, pagination still works: 85-90% of customers only ever look at the first page or two.
Now, I wish that OFFSET/FETCH actually performed well. Sadly, it's generally a dog once you work with larger datasets and get past the first few pages. That's because, if you want to show rows 10,000-10,050, you first need to scan rows 1-9,999 and throw that data away, then grab the 50 rows you need. There are some clever sorting tricks you can use to reduce the pain on repeated fetch operations, and I show one of them in this demo script (scroll down to the OFFSET/FETCH demo, number 6 in the list). The prior key approach is, in my experience, the best performer, but it assumes you only move in one direction and don't allow users to select arbitrary page numbers or jump straight to the end.
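To make the difference concrete, here's a minimal sketch of both approaches against a hypothetical dbo.Users table keyed on Id. The table, column names, and page size are my illustration, not Brent's demo script.

/* OFFSET/FETCH: easy to write, but to return rows 10,001-10,050
   SQL Server still reads the first 10,000 rows in key order and
   throws them away. */
SELECT Id, DisplayName
FROM dbo.Users
ORDER BY Id
OFFSET 10000 ROWS
FETCH NEXT 50 ROWS ONLY;

/* Prior-key (keyset) paging: the application remembers the last Id
   it displayed and seeks straight to the next page. This stays fast
   at any depth, but it only moves forward and can't jump to an
   arbitrary page number. */
DECLARE @LastId INT = 10050;  -- highest Id from the previous page

SELECT TOP (50) Id, DisplayName
FROM dbo.Users
WHERE Id > @LastId
ORDER BY Id;

The trade-off is exactly the one described above: OFFSET/FETCH supports arbitrary page numbers at a growing scan cost, while the prior-key seek is cheap everywhere but limits how users can navigate.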