The app’s plan was cached the day before. But wait a second! My assumption was that it had recompiled this morning due to the updated stats.
Also check out Aaron Morelli’s link to an older blog post on how the plan cache works.
Comments closed
Comments closed

Kendra Little has a few warning signs for backups:
5. Your log backups run every 30 minutes. I have yet to find a company with log backups running every 30 minutes who was actually OK with losing 30+ minutes of data. Maybe you are part of the company where it’s actually true, but if you’re not 100% sure, get someone to sign off on it. With an ink pen. Really.
Funnily enough, I’ve experienced exactly this, except the business side was flabbergasted that I wanted to take transaction log backups so quickly—they had a 24-hour RPO, so why bother with such frequent backups? I kept a straight face and explained that if I had my druthers, I’d take a transaction log backup every 1-3 minutes.
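For reference, a tight log backup cadence is just a matter of scheduling something like the following in a SQL Agent job — a minimal sketch, where the database name and backup path are placeholders:

```sql
-- Hypothetical database and path. Schedule this every 1-5 minutes in an
-- Agent job; your worst-case data loss (RPO) is roughly the interval
-- between successful log backups, plus the time to take the last one.
BACKUP LOG [Sales]
    TO DISK = N'\\backupshare\Sales\Sales_log.trn'
    WITH COMPRESSION, CHECKSUM;
```

This assumes the database is in the FULL (or BULK_LOGGED) recovery model; in SIMPLE recovery, log backups aren't possible at all.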
Comments closed

Steve Jones compares writing a function versus using CROSS APPLY to write the same function:
The conclusion I’d take here is that CROSS APPLY ought to be a tool you keep in the front of your toolbox and use when you must execute a function for each row of a set of tables. This is one of the T-SQL techniques that I never learned early in my career (it wasn’t available), and I haven’t used much outside of looking for execution plans, but it’s a join capability I will certainly look to use in the future.
I’m one of the biggest fans of the APPLY operator out there—my favorite talk is based on it, even. But in this case, I’m going to say that writing “CROSS APPLY” really didn’t do anything here—times are similar enough that I’d be suspicious that the database engine is doing the same thing both times.
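To make the comparison concrete — with hypothetical object names, not Steve's actual test harness — the two shapes look like this:

```sql
-- Hypothetical table and functions for illustration. When the optimizer
-- inlines the TVF, both forms can compile to essentially the same plan,
-- which would explain near-identical timings.

-- Scalar-style call, evaluated once per row:
SELECT o.OrderID, dbo.CalcTax(o.Subtotal) AS Tax
FROM dbo.Orders AS o;

-- The same work expressed with CROSS APPLY against an inline TVF:
SELECT o.OrderID, t.Tax
FROM dbo.Orders AS o
    CROSS APPLY dbo.CalcTaxTVF(o.Subtotal) AS t;
```

The real payoff from APPLY tends to show up when the function is an inline table-valued function replacing a scalar UDF that was blocking parallelism, not when both versions already optimize to the same plan.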
Comments closed

Gail Shaw has a follow-up with more query “tricks” that aren’t fooling anyone:
In a similar vein to last week’s blog post… I heard an interesting comment recently. “Change that Column != 2 to a Column > 2 or Column < 2 combination, it can use indexes better.”
Sounds like something that clearly needs testing!
Not shockingly, this did nothing to make the query run faster or use fewer resources. There are ways to rewrite queries to improve performance while maintaining the same result structure (a common example being rewriting a query that uses a cursor or WHILE loop to perform one set-based operation instead), but Gail’s point is vital: test your changes, and if you claim a rewrite will perform better, make sure it actually performs better.
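The “trick” in question, sketched with hypothetical table and column names, is easy to test yourself:

```sql
-- The two predicates are logically equivalent for non-NULL values, and the
-- optimizer generally treats them the same way. Table and column names
-- here are placeholders.
SELECT SomeColumn
FROM dbo.SomeTable
WHERE SomeColumn <> 2;

SELECT SomeColumn
FROM dbo.SomeTable
WHERE SomeColumn > 2 OR SomeColumn < 2;

-- Compare the two before believing either claim:
SET STATISTICS IO, TIME ON;
```

Check the actual execution plans and the STATISTICS IO output for both forms; if the plans and reads match, the rewrite bought you nothing but a harder-to-read predicate.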
Comments closed

Dan Holmes points out a quirk of estimated row counts with CLR-based functions:
That is an enormous amount of data. What if you needed to sort that? What if you joined this to another table or view and a spool was required? What if it was a hash join and a memory grant was required? The demand that this seemingly innocuous statement placed on your server could be overwhelming.
The memory grant could create system variability that is very difficult to find. There is a thread on MSDN that I started which exposes what prompted this post. (The plan that was causing much of the problem is at this link.)
It’s important to keep in mind the good-enough “big round figures” that SQL Server uses for row estimation when statistics are unavailable (e.g., a linked server to Hive or a CLR function like in the post). These estimates aren’t always correct, and there are edge cases like the one in the post in which the estimates are radically wrong and begin to affect your server.
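A minimal sketch of where this bites, assuming a hypothetical CLR table-valued function:

```sql
-- dbo.MyClrTvf is a placeholder for a CLR TVF. Because the optimizer has
-- no statistics on what a CLR function will return, it falls back on a
-- fixed round-number guess for its cardinality (historically on the order
-- of 1,000 rows for CLR TVFs), no matter how many rows actually stream back.
SELECT t.ID, f.Value
FROM dbo.SomeTable AS t
    CROSS APPLY dbo.MyClrTvf(t.ID) AS f   -- estimate is a fixed guess
ORDER BY f.Value;                         -- sort memory grant sized off that guess
```

Compare the Estimated Number of Rows on the TVF operator in the plan with the actual row count; when the function returns millions of rows against a small fixed estimate (or vice versa), sorts and hash joins get memory grants sized for the wrong world.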
Comments closed

Aaron Bertrand uses a Visual Studio Online outage to talk about query hints:
These are all things that may have been necessary under the old estimator, but are likely just tying the optimizer’s hands under the new one. This is a query that could have, and should have, been tested in their dev / staging / QA environments under the new cardinality estimator long before they flipped the switch in production, and probably could have gone through a series of tests where different combinations of those hints and options could have been removed. This is something for which that team can only blame themselves.
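If you do need a bridge while you test, there are narrower tools than a pile of per-query hints. A sketch, with a hypothetical query body:

```sql
-- Real hint; the table names are placeholders. On SQL Server 2016 SP1 and
-- later, this pins a single query to the legacy cardinality estimator
-- while the database as a whole runs on the new one:
SELECT o.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
GROUP BY o.CustomerID
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

On earlier 2014/2016 builds, `OPTION (QUERYTRACEON 9481)` serves the same per-query purpose. Either way, the hint is a scoped, documented escape hatch you remove after testing, not a permanent collection of hand-tied knots.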
Also check out Aaron Morelli’s comment on the post.
Comments closed

Dave Mattingly goes spatial on Cleveland:
From here, we can:
zoom in for more detail
hover over a building, road, or other feature to see its name or other column
display a label on the results
apply filters to only show parts of the data
change the widths of the features by changing the STBuffer
do lots of other cool stuff
Spatial types and display in SQL Server have always been a weak point for me, so I enjoy seeing the work of somebody who is very good at it.
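For a taste of the STBuffer trick from the list above, a minimal sketch (the coordinates are an arbitrary Cleveland-ish point, not data from Dave's demo):

```sql
-- Buffer a point by 100 units and view the result on the Spatial Results
-- tab in Management Studio. SRID 0 keeps this a plain planar example.
DECLARE @g geometry = geometry::STGeomFromText('POINT(-81.69 41.50)', 0);

SELECT @g.STBuffer(100) AS BufferedShape;
```

Increasing the STBuffer argument fattens lines and points into visible shapes, which is exactly how you widen features for display.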
Comments closed

SQL Wayne cautions you to think about drive failure:
Now is when the MTBF comes in. If all of the drives were from the same batch, then they have approximately the same MTBF. One drive failed. Thus, all of the drives are not far from failure. And what happens when the failed drive is replaced? The RAID controller rebuilds it. How does it rebuild the new drive? It reads the existing drives to recalculate the checksums and rebuild the data on the new drive. So you now have a VERY I/O intensive operation going on with heavy read activity on a bunch of drives that are probably pushing end of life.
This is where it’s important to keep spares and cycle out hardware.
Comments closed

David Smith gives us a tutorial on credit card fraud detection:
If you have a database of credit-card transactions with a small percentage tagged as fraudulent, how can you create a process that automatically flags likely fraudulent transactions in the future? That’s the premise behind the latest Data Science Deep Dive on MSDN. This tutorial provides a step-by-step guide to using the R language and the big-data statistical models of the RevoScaleR package in SQL Server 2016 R Services to build and use a predictive model to detect fraud.
This looks to be a follow-up from the fraud detection series.
Comments closed

Ed Pollack digs into the plan cache:
The data in the plan cache is not static, and will change over time. Execution plans, along with their associated query and resource metrics, will remain in memory for as long as they are deemed relevant. Plans can be removed from cache when there is memory pressure, when they age out (they get stale), or when a new plan is created, rendering the old one obsolete. Keep this in mind as we are looking around: the info we find in the plan cache is transient and indicative of a server’s current and recent activity; it does not reflect a long-term history. As a result, be sure to do thorough research on the plan cache prior to making any significant decisions based on that data.
The plan cache is one of the best ways of figuring out what’s going on in your SQL Server instances, but there’s a little bit of complexity to it.
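A typical starting query against the cache — these DMVs are real, and the sort column is just one reasonable choice:

```sql
-- Top cached queries by total logical reads. Remember Ed's caveat: this
-- reflects only what's currently in cache, not long-term history.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    st.text       AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_logical_reads DESC;
```

Swap the ORDER BY for `total_worker_time` or `total_elapsed_time` depending on whether you're hunting I/O, CPU, or duration problems.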
Comments closed