Both plans use the same basic set of structures, scans against both tables, to arrive at the data. The cost estimates between the two plans are very different, though, with the targeted queries having a much lower estimated cost.
Performance-wise, interestingly enough, the first query, which returns only the 10 rows, averages 157ms of execution time, while the query grouping directly on SalesPersonID averages about 190ms. The reads tell a slightly different story, though: 17,428 on the generic query versus 5,721 on the specific query. So maybe a server under load would see a significant performance difference. However, let’s deal with what we have in front of us and say that, at least for these tests, the catch-all GROUP BY query performs well.
Grant’s recommendation is to split this out into several procedures, and if you’re having performance problems, that’s a solid move. I’m a bit more likely to keep them (especially in warehousing reports), but it’s nice to have options.
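To make the discussion concrete, here is a rough sketch of the kind of catch-all GROUP BY pattern under discussion. This is my own toy example against AdventureWorks, not the code from the post, and the procedure and parameter names are made up.

```sql
-- Hypothetical catch-all procedure: one parameter drives the grouping column.
-- The alternative Grant recommends would be one targeted procedure per grouping.
CREATE PROCEDURE dbo.SalesSummary
    @GroupBy VARCHAR(20)   -- e.g. 'SalesPerson' or 'Territory'
AS
BEGIN
    SELECT
        CASE @GroupBy
            WHEN 'SalesPerson' THEN CAST(soh.SalesPersonID AS VARCHAR(20))
            WHEN 'Territory'   THEN CAST(soh.TerritoryID AS VARCHAR(20))
        END AS GroupingValue,
        SUM(soh.TotalDue) AS TotalSales
    FROM Sales.SalesOrderHeader AS soh
    GROUP BY
        CASE @GroupBy
            WHEN 'SalesPerson' THEN CAST(soh.SalesPersonID AS VARCHAR(20))
            WHEN 'Territory'   THEN CAST(soh.TerritoryID AS VARCHAR(20))
        END;
END;
```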
The important thing to understand with window functions is that there is a frame, the set of rows currently in view, at any point in time as the data is being scanned or processed. I’m not sure what the best term to use is.
Let’s look at the same data set Kathi used. For simplicity, I’ll use a few images of her dataset, but I’ll examine the SalesOrderID. I think that can be easier than looking at the amounts.
Here’s the base dataset for two customers, separated by CustomerID and ordered by the OrderDate. I’ve included amount, but it’s really not important.
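As a minimal illustration of that idea, here is a hedged sketch over AdventureWorks-style columns (my own query, not Kathi’s): the frame is the set of rows the function can see for the current row, here everything from the start of the customer’s partition up through the current order.

```sql
SELECT soh.CustomerID,
       soh.OrderDate,
       soh.SalesOrderID,
       soh.TotalDue,
       -- Running total within the current customer's partition;
       -- the ROWS clause defines the frame explicitly.
       SUM(soh.TotalDue) OVER (
           PARTITION BY soh.CustomerID
           ORDER BY soh.OrderDate
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS RunningAmount
FROM Sales.SalesOrderHeader AS soh
ORDER BY soh.CustomerID, soh.OrderDate;
```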
Steve goes into detail and explains what’s going on each step of the way. Window functions are extremely useful; check them out if you’re not already familiar with them.
A new developer has been troubleshooting a sometimes-slow stored procedure, and wants you to review their progress so far. Tell me what could go wrong here.
This one’s a bit tougher than some of the early interview questions, and I think you can see that based on the responses in the comments.
This shows you that the FLOAT had to be converted to VARCHAR. You can see the same thing if you try it with various versions of the INT or DATE data types as well. Like I said earlier, it’s no big deal with INT or even DATE; those come back in a fairly expected format (INTs look exactly the same, and DATEs come back as ‘YYYY-MM-DD’). FLOAT and REAL, however, are floating-point types, so they don’t always convert the same way. If you do the conversion deliberately you get this:
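(The post shows the actual output; here is a hedged sketch of my own illustrating the same behavior, with a value I picked for the example.)

```sql
DECLARE @f FLOAT = 1234567.89;

-- Implicit conversion, e.g. assigning the FLOAT to a VARCHAR variable,
-- defaults to at most six significant digits, scientific notation if needed.
DECLARE @v VARCHAR(30) = @f;
SELECT @v AS implicit_result;                     -- 1.23457e+006

-- Deliberate conversions with an explicit style:
SELECT CONVERT(VARCHAR(30), @f, 0) AS style_0,    -- max 6 digits, scientific if needed
       CONVERT(VARCHAR(30), @f, 2) AS style_2;    -- always 16 digits, always scientific
```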
Understand your data types; otherwise, they might come back to hurt you later.
This is logically equivalent to the first version, but I find it makes for more readable code. It just looks cleaner.
For those of us who are lazy, er, looking to maximize efficiency, this could save a whole lot of keystrokes.
This is true, but if you’re on SQL Server 2012 or later, check out CONCAT for concatenation, as it handles NULL values more elegantly.
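As a quick illustration (my own example, not from the post): the + operator yields NULL when any operand is NULL, while CONCAT treats NULL as an empty string.

```sql
DECLARE @FirstName  VARCHAR(50) = 'Jane',
        @MiddleName VARCHAR(50) = NULL,
        @LastName   VARCHAR(50) = 'Doe';

SELECT @FirstName + ' ' + @MiddleName + ' ' + @LastName AS plus_operator,   -- NULL
       CONCAT(@FirstName, ' ', @MiddleName, ' ', @LastName) AS concat_fn;   -- Jane  Doe
```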
It has to be said: if performance is important (and it should be), you’ll know that user-defined scalar T-SQL functions can kill your query, so you might want to implement the above in the form of a CROSS APPLY instead.
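Here is a rough sketch of that substitution, using a made-up scalar function name and my own toy query rather than the code from the post: push the expression into a CROSS APPLY so it runs as an inline expression instead of a per-row scalar UDF call.

```sql
-- Scalar UDF version (dbo.FullName is hypothetical), typically slow at scale:
--   SELECT dbo.FullName(p.FirstName, p.LastName) AS FullName
--   FROM Person.Person AS p;

-- CROSS APPLY version: the same expression, evaluated inline.
SELECT fn.FullName
FROM Person.Person AS p
CROSS APPLY (
    SELECT CONCAT(p.FirstName, ' ', p.LastName) AS FullName
) AS fn;
```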
Here’s where I advocate for two tables on every instance: a tally table and a date table. The date table should look like a date dimension, including every potentially interesting piece of information about a date, such as whether it’s a weekday. It might not perform quite as fast as Daniel’s solution (I’d have to test to know for sure), but it’s definitely easier. If you can’t get a viable date table on your instances for whatever reason, Daniel’s solution does work and does not require any additional objects.
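For a sense of what I mean, here is a minimal sketch of a date table (the table and column names are my own; a real date dimension would carry many more attributes):

```sql
CREATE TABLE dbo.DateDim
(
    TheDate       DATE        NOT NULL PRIMARY KEY,
    DayOfWeekName VARCHAR(10) NOT NULL,
    IsWeekday     BIT         NOT NULL,
    IsHoliday     BIT         NOT NULL
);

-- With that in place, "is this date a weekday?" becomes a simple lookup:
SELECT d.TheDate, d.IsWeekday
FROM dbo.DateDim AS d
WHERE d.TheDate = '20160615';
```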
There’s a really nice mathematical way that I found on Stack Overflow, using the golden ratio (from Wikipedia), and it’s actually set-based. I don’t have the requisite mathematical skills to evaluate its correctness or precision. Note that it returns a FLOAT result; the upside is that you can calculate much higher values, while the downside is the loss of precision that comes with FLOAT.
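The underlying idea is the rounding form of Binet’s formula, F(n) = round(phi^n / sqrt(5)), where phi is the golden ratio. Here is a minimal sketch of how that looks set-based in T-SQL (my own example, not the Stack Overflow code):

```sql
DECLARE @phi FLOAT = (1 + SQRT(5.0)) / 2;   -- the golden ratio, ~1.6180339887

-- Fibonacci numbers computed directly, one row per n, no loop or recursion.
SELECT n,
       ROUND(POWER(@phi, n) / SQRT(5.0), 0) AS fibonacci_n
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS t(n);
```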
This is a bit of a brain teaser, but if you learn the techniques, they can definitely come in handy in the future. I like the third solution because it’s a reminder that writing code is just as much about understanding the domain as it is about understanding the syntax.
If NULL is used anywhere in an expression, the full expression will be NULL. This makes sense for most arithmetic operations, because NULL means unknown.
Here is an example with NULL plus/minus ONE:
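(A minimal sketch of my own showing the behavior:)

```sql
DECLARE @i INT = NULL;

SELECT @i + 1 AS null_plus_one,    -- NULL
       @i - 1 AS null_minus_one;   -- NULL
```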
There’s a consistency to NULL operations, but it’s sometimes a weird consistency.
Probably the most common distinct aggregate is COUNT(DISTINCT). In some respects, it’s similar to the window function DENSE_RANK(). DENSE_RANK(), as opposed to ROW_NUMBER(), only increments the row counter when the ordering column(s) actually change from one row to the next, meaning that we can use DENSE_RANK() as a form of windowed distinct count.
This is a very interesting approach to the problem. Read the whole thing.
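As a toy illustration of the idea (my own example, not the article’s code, and assuming the ordering column has no NULLs): adding the forward and reverse DENSE_RANKs and subtracting 1 gives a per-partition distinct count as a windowed value.

```sql
SELECT o.CustomerID,
       o.OrderMonth,
       DENSE_RANK() OVER (PARTITION BY o.CustomerID ORDER BY o.OrderMonth ASC)
     + DENSE_RANK() OVER (PARTITION BY o.CustomerID ORDER BY o.OrderMonth DESC)
     - 1 AS DistinctMonths    -- equivalent to COUNT(DISTINCT OrderMonth) per customer
FROM (VALUES
        (1, 201601), (1, 201601), (1, 201602),
        (2, 201601), (2, 201603), (2, 201603)
     ) AS o(CustomerID, OrderMonth);
```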
We have said that a NULL value for s.DateOfSale is not in the range we are interested in. This means the rows with NULLs in the s.DateOfSale column (our employees yet to make a sale) will be filtered out. It will also filter out employees with sales in months other than March. We have converted the LEFT JOIN into an INNER JOIN.
James’s fix is to move the filter to the join clause, which eliminates the implicit inner join. When I see a condition like this in a code review, the first question on my mind is whether the correct fix is James’s fix, or whether the developer really meant to do an inner join. There’s a potential performance gain from using an inner join over a left outer join (because the optimizer can drive from either table, giving it a larger number of potential execution plans) if it turns out you really do want to filter out those rows and not just make the join criterion more specific.
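Here is a stripped-down sketch of the two variants, loosely following the post’s employee/sales example (the table and column names beyond s.DateOfSale are my own):

```sql
-- Filter in the WHERE clause: rows with a NULL DateOfSale fail the predicate,
-- so the LEFT JOIN silently behaves like an INNER JOIN.
SELECT e.EmployeeID, s.SaleID, s.DateOfSale
FROM dbo.Employee AS e
LEFT JOIN dbo.Sale AS s
    ON s.EmployeeID = e.EmployeeID
WHERE s.DateOfSale >= '20160301'
  AND s.DateOfSale <  '20160401';

-- James's fix: move the filter into the ON clause, so employees with no
-- March sales still appear, with NULLs in the Sale columns.
SELECT e.EmployeeID, s.SaleID, s.DateOfSale
FROM dbo.Employee AS e
LEFT JOIN dbo.Sale AS s
    ON s.EmployeeID = e.EmployeeID
   AND s.DateOfSale >= '20160301'
   AND s.DateOfSale <  '20160401';
```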