A new developer has been troubleshooting a sometimes-slow stored procedure, and wants you to review their progress so far. Tell me what could go wrong here.
This one’s a bit tougher than some of the early interview questions, and I think you can see that based on the responses in the comments.
Those responses show that the FLOAT had to be converted to VARCHAR. You can see the same thing if you try it with various versions of the INT or DATE data types as well. Like I said earlier, it's no big deal with INT or even DATE; those come back in a fairly expected format (INTs look exactly the same, and DATEs come back as ‘YYYY-MM-DD’). FLOAT and REAL, however, are floating-point types, so they don’t always convert the same way. If you do the conversion deliberately you get this:
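As a rough sketch of what the deliberate conversion looks like (the value here is mine, just for illustration), compare a plain CAST with CONVERT using an explicit style:

```sql
DECLARE @f FLOAT = 123456789.123456789;

SELECT @f                          AS original_value,
       CAST(@f AS VARCHAR(30))     AS default_cast,   -- default style: scientific notation, ~6 significant digits
       CONVERT(VARCHAR(30), @f, 2) AS style_2;        -- style 2: scientific notation with 16 digits
```

The default style quietly throws away precision, which is exactly the kind of surprise the interview question is poking at.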
Understand your data types; otherwise, it might come back to hurt you later.
This is logically equivalent to the first version of the code, but I find it makes for more readable code. It just looks cleaner.
For those of us who are lazy (that is, looking to maximize efficiency), this could save a whole lot of keystrokes.
This is true, but if you’re on SQL Server 2012 or later, check out CONCAT for concatenation, as it handles NULL values more elegantly.
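To see the difference, here's a quick illustration (variable names are mine): the `+` operator propagates NULL through the whole expression, while CONCAT treats NULL as an empty string.

```sql
DECLARE @first  VARCHAR(20) = 'Jane',
        @middle VARCHAR(20) = NULL,
        @last   VARCHAR(20) = 'Doe';

SELECT @first + ' ' + @middle + ' ' + @last  AS plus_operator, -- NULL: one NULL nulls the whole string
       CONCAT(@first, ' ', @middle, ' ', @last) AS concat_fn;  -- 'Jane  Doe': NULL becomes ''
```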
It has to be said, if performance is important (and it should be), you’ll know that user-defined scalar T-SQL functions can kill your query. So you might want to implement the above in the form of a CROSS APPLY instead.
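As a sketch of what that refactor looks like (the function, table, and column names here are hypothetical, not from the original post):

```sql
-- Scalar UDF version: the function is evaluated row by row,
-- which can also inhibit parallelism.
SELECT dbo.FormatName(e.FirstName, e.LastName) AS FullName
FROM dbo.Employees AS e;

-- Inline equivalent via CROSS APPLY: the expression stays set-based,
-- and you still get a reusable named column (x.FullName).
SELECT x.FullName
FROM dbo.Employees AS e
CROSS APPLY (SELECT e.FirstName + ' ' + e.LastName AS FullName) AS x;
```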
Here’s where I advocate for two tables on every instance: a tally table and a date table. The date table should look like a date dimension, including every potentially-interesting piece of information about a date—including if it’s a weekday. It might not perform quite as fast as Daniel’s solution (I’d have to test to know for sure), but it’s definitely easier. If you can’t get a viable date table on your instances for whatever reason, Daniel’s solution does work and does not require any additional objects.
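With a date dimension in place, the "is it a weekday" logic becomes a simple lookup instead of a calculation. A minimal sketch, assuming a table and columns along these lines (the names are illustrative):

```sql
-- Count the weekdays in March 2025 using a pre-built date dimension.
-- dbo.DateDim, TheDate, and IsWeekday are assumed names.
SELECT COUNT(*) AS WeekdayCount
FROM dbo.DateDim
WHERE TheDate >= '20250301'
  AND TheDate <  '20250401'
  AND IsWeekday = 1;
```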
There’s a really nice mathematical way that I found on Stack Overflow, using the golden ratio (from Wikipedia), and it’s actually set-based. I don’t have the requisite mathematical skills to evaluate its correctness or precision. Note that it returns a FLOAT result; the upside is that you can calculate much higher values, the downside is the loss of precision that comes with FLOAT.
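For the curious, here's a minimal sketch of the idea (not the exact Stack Overflow code): Binet's formula says the nth Fibonacci number is phi^n / sqrt(5), rounded to the nearest integer, where phi is the golden ratio. That makes it trivially set-based:

```sql
-- Set-based Fibonacci via Binet's formula. Returns FLOAT, so
-- precision degrades as n grows.
WITH n AS (
    SELECT TOP (20) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
    FROM sys.all_objects
)
SELECT i,
       ROUND(POWER((1 + SQRT(5.0)) / 2, i) / SQRT(5.0), 0) AS fib
FROM n
ORDER BY i;
```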
This is a bit of a brain teaser but if you learn the techniques, they can definitely come in handy in the future. I like the third solution because it’s a reminder that writing code is just as much about understanding the domain as it is understanding the syntax.
If NULL is used in an expression, the full expression evaluates to NULL. This makes sense for most arithmetic operations, because NULL means unknown.
Here is an example with NULL plus/minus ONE:
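Something like this, for instance:

```sql
SELECT NULL + 1 AS null_plus_one,   -- NULL: unknown plus one is still unknown
       NULL - 1 AS null_minus_one;  -- NULL: same logic in the other direction
```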
There’s a consistency to NULL operations, but it’s sometimes a weird consistency.
Probably the most common distinct aggregate is COUNT(DISTINCT). In some respects, it’s similar to the windowed function DENSE_RANK(). DENSE_RANK(), as opposed to ROW_NUMBER(), will only increment the row counter when the ordering column(s) actually change from one row to the next, meaning that we can use DENSE_RANK() as a form of windowed distinct count.
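The classic form of the trick (table and column names here are illustrative) is to add an ascending and a descending DENSE_RANK over the same column: the ascending rank counts distinct values at or below the current one, the descending rank counts those at or above it, and together they overcount by exactly one.

```sql
-- Windowed "COUNT(DISTINCT ProductID)" per customer, which the
-- COUNT(DISTINCT ...) OVER (...) syntax does not support directly.
-- Caveat: NULLs in the ordering column will skew the result.
SELECT CustomerID,
       ProductID,
       DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY ProductID ASC)
     + DENSE_RANK() OVER (PARTITION BY CustomerID ORDER BY ProductID DESC)
     - 1 AS DistinctProducts
FROM dbo.Orders;
```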
This is a very interesting approach to the problem. Read the whole thing.
We have said that a NULL value for s.DateOfSale is not in the range we are interested in. This means the rows with NULLs in the s.DateOfSale column (our employees yet to make a sale) will be filtered out. It will also filter out employees with sales in months other than March. We have converted the LEFT JOIN into an INNER JOIN.
James’s fix is to move the filter to the join clause, which eliminates the implicit inner join. When I see a condition like this in a code review, the first question on my mind is whether the correct fix is James’s fix or whether the developer really meant to do an inner join. There’s a potential performance gain from using an inner join over a left outer join (due to being able to drive from either table and thus having a larger number of potential execution plans) if it turns out you really do want to filter all rows and not just make the join criterion more specific.
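The two shapes look like this (table names are my own; `s.DateOfSale` is from the example above):

```sql
-- A WHERE clause on the right table silently turns the LEFT JOIN
-- into an inner join: unmatched rows produce a NULL DateOfSale,
-- which fails the predicate and filters the row out.
SELECT e.Name, s.Amount
FROM dbo.Employees AS e
LEFT JOIN dbo.Sales AS s ON s.EmployeeID = e.EmployeeID
WHERE MONTH(s.DateOfSale) = 3;

-- Moving the filter into the ON clause keeps the outer join:
-- employees with no March sales still appear, with NULL sale columns.
SELECT e.Name, s.Amount
FROM dbo.Employees AS e
LEFT JOIN dbo.Sales AS s ON s.EmployeeID = e.EmployeeID
                        AND MONTH(s.DateOfSale) = 3;
```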
So here, the JSON and STRING_SPLIT methods took about 10 seconds each, while the Numbers table, CLR, and XML approaches took less than a second. Perplexed, I investigated the waits, and sure enough, the four methods on the left incurred significant LATCH_EX waits (about 25 seconds) not seen in the other three, and there were no other significant waits to speak of.
And since the latch waits were greater than total duration, it gave me a clue that this had to do with parallelism (this particular machine has 4 cores). So I generated test code again, changing just one line to see what would happen without parallelism:
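The post doesn't reproduce the change here, but one plausible one-line tweak for ruling out parallelism is a MAXDOP hint (the query below is just an illustration, not the original test harness):

```sql
DECLARE @list VARCHAR(100) = 'a,b,c,d,e';

SELECT value
FROM STRING_SPLIT(@list, ',')
OPTION (MAXDOP 1);  -- force a serial plan to take parallelism out of the picture
```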
There’s a lot going on in that post, so I recommend checking it out.
AdventureWorks seems just fine to me: only four instances of procedures calling procedures. Then I looked at the database I work with most, where hundreds of procedures (representing 15% of the procedures) call other procedures. On the other end of the spectrum is Stack Overflow; I understand that they don’t use stored procedures at all.
Check out the comments for more notes.