Press "Enter" to skip to content

Month: February 2016

Simple Recovery Mode And Transaction Logs

Kenneth Fisher discusses changing your database’s recovery model to Simple in order to clean out a transaction log:

Clearing out a full transaction log is a common problem. A quick search will find you dozens of forum entries and blog posts. Because of that I’m not going to talk about the correct methods of dealing with a transaction log full error. What I want to discuss is why you shouldn’t use the following method.

And Kenneth also hits the one legitimate use:  dire emergency.  If this is a normal part of some process (e.g., warehouse loading), bite the bullet and either live in Simple recovery mode (understanding the risks) or get the disk space to keep it in Full mode.  Switching back and forth—especially if you aren’t taking full backups immediately after switching back—is a good way to get yourself burned.
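
To make the risk concrete, here’s a minimal T-SQL sketch of the switch itself (the database name and backup path are placeholders).  The important part is the full backup after switching back to Full, which re-establishes the log chain so that subsequent log backups are usable for point-in-time recovery.

-- Emergency only: switching to SIMPLE lets the log truncate on checkpoint,
-- but it also breaks the log backup chain.
ALTER DATABASE SalesDW SET RECOVERY SIMPLE;

-- ...emergency log-clearing work happens here...

ALTER DATABASE SalesDW SET RECOVERY FULL;

-- A new full (or differential) backup is needed before log backups are
-- good for point-in-time recovery again.
BACKUP DATABASE SalesDW TO DISK = N'X:\Backups\SalesDW_full.bak' WITH INIT;
BACKUP LOG SalesDW TO DISK = N'X:\Backups\SalesDW.trn';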


Custom SSAS Assemblies

Jens Vestergaard shows us custom assemblies in SSAS:

The example in this post will be the well known Hello World example in the context of SSAS, and I trust this will illustrate the possibilities with this technique well enough, for you to apply your own solution, to your challenges.

If you’re at all familiar with CLR in the database engine, this looks to be the Analysis Services equivalent.  Hopefully it doesn’t have the same “We can’t possibly use this!” taboo that CLR seems to have in the database engine world.


Backup Problems

Kendra Little has a few warning signs for backups:

5. Your log backups run every 30 minutes. I have yet to find a company with log backups running every 30 minutes who was actually OK with losing 30+ minutes of data. Maybe you are part of the company where it’s actually true, but if you’re not 100% sure, get someone to sign off on it. With an ink pen. Really.

Funnily enough, I’ve experienced exactly this, except the business side was flabbergasted that I wanted to take transaction log backups so quickly—they had a 24-hour RPO, so why bother with such frequent backups?  I kept a straight face and explained that if I had my druthers, I’d take a transaction log backup every 1-3 minutes.
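
If you’re not sure what your real exposure is, msdb already has the history to answer the question.  A rough sketch, assuming backup history hasn’t been purged:

-- Minutes since the last log backup of each database; compare this to the
-- RPO somebody has actually signed off on.
SELECT bs.database_name,
       MAX(bs.backup_finish_date) AS last_log_backup,
       DATEDIFF(MINUTE, MAX(bs.backup_finish_date), GETDATE()) AS minutes_exposed
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'   -- log backups only
GROUP BY bs.database_name
ORDER BY minutes_exposed DESC;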


CROSS APPLY

Steve Jones compares writing a function versus using CROSS APPLY to write the same function:

The conclusion I’d take here is that CROSS APPLY ought to be a tool you keep in the front of your toolbox and use when you must execute a function for each row of a set of tables. This is one of the T-SQL  techniques that I never learned early in my career (it wasn’t available), and I haven’t used much outside of looking for execution plans, but it’s a join capability I will certainly look to use in the future.

I’m one of the biggest fans of the APPLY operator out there—my favorite talk is based on it, even.  But in this case, I’m going to say that writing “CROSS APPLY” really didn’t do anything here—times are similar enough that I’d be suspicious that the database engine is doing the same thing both times.
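
For reference, the shape of the comparison looks something like this (table, column, and function names are made up, not Steve’s).  The interesting check is the actual execution plan for each query; when timings are this close, the plans usually reveal whether the optimizer is doing the same work either way.

-- Hypothetical scalar UDF: invoked once per outer row.
CREATE FUNCTION dbo.fn_OrderTotal (@OrderID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(Quantity * UnitPrice)
            FROM dbo.OrderLines
            WHERE OrderID = @OrderID);
END;
GO

-- The same logic as an inline table-valued function.
CREATE FUNCTION dbo.itvf_OrderTotal (@OrderID int)
RETURNS TABLE
AS RETURN
    SELECT SUM(Quantity * UnitPrice) AS OrderTotal
    FROM dbo.OrderLines
    WHERE OrderID = @OrderID;
GO

-- Scalar function per row:
SELECT o.OrderID, dbo.fn_OrderTotal(o.OrderID) AS OrderTotal
FROM dbo.Orders AS o;

-- CROSS APPLY against the inline TVF:
SELECT o.OrderID, t.OrderTotal
FROM dbo.Orders AS o
CROSS APPLY dbo.itvf_OrderTotal(o.OrderID) AS t;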


Query Tricks Which Don’t Trick

Gail Shaw has a follow-up with more query “tricks” that aren’t fooling anyone:

In a similar vein to last week’s blog post… I heard an interesting comment recently. “Change that Column != 2 to a Column > 2 or Column < 2 combination, it can use indexes better.”

Sounds like something that clearly needs testing!

Not shockingly, this did nothing to make the query run faster or use fewer resources.  There are ways to rewrite queries to improve performance while maintaining the same result structure (a common example being rewriting a query that uses a cursor or WHILE loop as a single set-based operation), but Gail’s point is vital:  test your changes, and make sure that if you say something will perform better, it actually does perform better.
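
The test itself is cheap to run.  A sketch of the comparison, with placeholder table and column names; capture the actual plans and I/O statistics for both forms instead of trusting the folklore:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Original predicate:
SELECT COUNT(*)
FROM dbo.SomeTable
WHERE SomeColumn <> 2;

-- The "trick" rewrite (logically equivalent):
SELECT COUNT(*)
FROM dbo.SomeTable
WHERE SomeColumn > 2 OR SomeColumn < 2;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;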


Cardinality Estimation And String Splits

Dan Holmes points out a quirk of estimated row counts with CLR-based functions:

That is an enormous amount of data.  What if you needed to sort that?  What if you joined this to another table or view and a spool was required?  What if it was a hash join and a memory grant was required?  The demand that this seemingly innocuous statement placed on your server could be overwhelming.

The memory grant could create system variability that is very difficult to find.  There is a thread on MSDN that I started which exposes what prompted this post.  (The plan that was causing much of the problem is at this link.)

It’s important to keep in mind the “big round figures” that SQL Server falls back on for row estimation when statistics are unavailable (e.g., a linked server query against Hive, or a CLR function as in the post).  These estimates are good enough most of the time, but there are edge cases like this one in which they will be radically wrong and begin to affect your server.
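
As a rough illustration (dbo.SplitString is a hypothetical CLR or multi-statement string splitter, not Dan’s function): run something like the following with the estimated plan turned on and compare the function’s estimated row count to the three rows it actually produces.

DECLARE @list nvarchar(100) = N'1,2,3';

SELECT o.OrderID
FROM dbo.Orders AS o
INNER JOIN dbo.SplitString(@list, N',') AS s
    ON o.OrderID = CONVERT(int, s.Item);
-- With no statistics on the function's output, the optimizer falls back on a
-- fixed guess (for example, 1000 rows for a CLR TVF), and that guess drives
-- downstream join choices and memory grants.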


Cardinality Estimation And Hints

Aaron Bertrand uses a Visual Studio Online outage to talk about query hints:

These are all things that may have been necessary under the old estimator, but are likely just tying the optimizer’s hands under the new one. This is a query that could have, and should have, been tested in their dev / staging / QA environments under the new cardinality estimator long before they flipped the switch in production, and probably could have gone through a series of tests where different combinations of those hints and options could have been removed. This is something for which that team can only blame themselves.

Also check out Aaron Morelli’s comment on the post.
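
The testing Aaron describes doesn’t require flipping an entire environment back and forth, either: on SQL Server 2014 you can compare the two estimators per query.  A sketch with placeholder tables:

-- Under database compatibility level 120, this compiles with the new CE:
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20160101'
GROUP BY c.CustomerID;

-- The same query forced back to the legacy estimator for comparison
-- (trace flag 9481 is documented; it does require elevated permissions):
SELECT c.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20160101'
GROUP BY c.CustomerID
OPTION (QUERYTRACEON 9481);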


Investigating Cleveland

Dave Mattingly goes spatial on Cleveland:

From here, we can:

  • zoom in for more detail

  • hover over a building, road, or other feature to see its name or other column

  • display a label on the results

  • apply filters to only show parts of the data

  • change the widths of the features by changing the STBuffer

  • do lots of other cool stuff

Spatial types and display in SQL Server have always been a weak point for me, so I enjoy seeing the handiwork of somebody who is very good at it.
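
If you want to try the STBuffer idea from the list above, it only takes a few lines in SSMS; the Spatial Results tab renders the output.  A tiny sketch with arbitrary coordinates:

-- Buffer the same line by different amounts to draw it at different widths.
DECLARE @road geometry = geometry::STLineFromText('LINESTRING (0 0, 40 25, 90 30)', 0);

SELECT @road.STBuffer(2) AS Shape, 'narrow' AS Width
UNION ALL
SELECT @road.STBuffer(5), 'wide';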


Drive Failure

SQL Wayne cautions you to think about drive failure:

Now is when the MTBF comes in. If all of the drives were from the same batch, then they have approximately the same MTBF. One drive failed. Thus, all of the drives are not far from failure. And what happens when the failed drive is replaced? The RAID controller rebuilds it. How does it rebuild the new drive? It reads the existing drives to recalculate the checksums and rebuild the data on the new drive. So you now have a VERY I/O intensive operation going on with heavy read activity on a bunch of drives that are probably pushing end of life.

This is where it’s important to keep spares and cycle out hardware.
