
Cycle Your Logs

Andrea Allred shows how to cycle error logs to prevent them from growing out of hand:

What is a good size?  I usually try to get it to roll over around 10 MB.  I use a monitoring tool and when the large error log alert is triggered, I have it run sp_cycle_errorlog for me so mine always stay a healthy size.  You don’t need fancy tools to do this though.  If you know about how fast your logs grow, you can set up a SQL Agent job to run it on a schedule to keep your logs healthy.

How many logs should I keep? This is completely up to you, but since I keep my logs so small, I try to keep 15 of them.  Why so many? I do it so I can go back and see issues further back if needed. You can adjust the number you keep by right-clicking on SQL Server Logs in SSMS and selecting “Configure”.

I’d personally prefer to keep more logs—at least 32-45 days’ worth—but that’s going to depend upon the environment.
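For reference, the cycling call is a one-liner, and the retention count behind that “Configure” dialog lives in the registry, so it can be scripted as well. A minimal sketch, mirroring Andrea’s 15-log setup:

    -- Close the current error log and start a new one
    EXEC sp_cycle_errorlog;

    -- The SQL Server Agent error log has its own counterpart
    EXEC msdb.dbo.sp_cycle_agent_errorlog;

    -- Retention (what the SSMS "Configure" dialog sets): keep 15 logs
    EXEC xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'Software\Microsoft\MSSQLServer\MSSQLServer',
        N'NumErrorLogs',
        REG_DWORD,
        15;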


Daily Briefings

Brent Ozar shows a way to see high-priority sp_Blitz items across an environment:

Step 3: run sp_Blitz @IgnorePrioritiesAbove = 50, @CheckUserDatabaseObjects = 0. This gets you the fast headline news, especially when used with the improved priorities in the latest version in our SQL Server download kit.

It’s important to have one or two high-level systems you can glance at to see the top-level health of an environment.  This could be one of those systems.
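The call from step 3, exactly as quoted, is easy to drop into a morning checklist:

    -- High-priority findings only, skipping per-database object checks for speed
    EXEC sp_Blitz
        @IgnorePrioritiesAbove = 50,
        @CheckUserDatabaseObjects = 0;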


Spring Cleaning

Tara Kizer gives some advice on proactive administrative measures:

If a database has a high number of Virtual Log Files (VLFs), it can impact the speed of transaction log backups and database recovery. I once had a database with 75,000 VLFs. I didn’t even know what a VLF was at the time (hence having so many). After rebooting the server, a mission critical database with extremely high SLAs took 45 minutes to complete recovery. We were in the process of opening a support case with Microsoft when the database finally came online. The next day, I contacted a Microsoft engineer and learned about VLFs.

There are a number of helpful tidbits here, so check it out.
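If you want to check your own VLF counts, it only takes a line or two; a quick sketch (sys.dm_db_log_info requires SQL Server 2016 SP2 or later, and older versions can fall back to DBCC LOGINFO):

    -- SQL Server 2016 SP2 and later: count VLFs in the current database's log
    SELECT COUNT(*) AS vlf_count
    FROM sys.dm_db_log_info(DB_ID());

    -- Older versions: returns one row per VLF for the current database
    DBCC LOGINFO;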


ALTER DATABASE

Dave Mason notes that ALTER DATABASE must run in its own autocommit transaction and cannot be rolled back:

If you create that trigger and subsequently change the recovery model, you’ll get an error. But the recovery model doesn’t get changed back to its original setting. As the TSql comment notes, ROLLBACK doesn’t work in this context. And that’s a shame too. Sometimes the recovery model gets changed by accident, by a third-party vendor’s software installation, or by someone that isn’t familiar with the ramifications. I was trying to prevent that. <Grumble> At least I knew it wasn’t allowed. I just didn’t know why. After seeing this Connect item, “why” didn’t matter.

Dave goes on to point out that under certain circumstances, ALTER DATABASE will simply fail, so there are preventative checks; we just can’t build our own.
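To see the behavior for yourself, here is a sketch of the kind of server-level DDL trigger Dave describes (the trigger name is hypothetical): the ROLLBACK raises an error, yet the recovery model change sticks.

    -- Hypothetical trigger attempting to block recovery model changes
    CREATE TRIGGER ddl_block_alter_database
    ON ALL SERVER
    FOR ALTER_DATABASE
    AS
    BEGIN
        -- ROLLBACK is not honored for ALTER DATABASE here; it throws
        -- an error, but the change itself is not undone
        ROLLBACK;
    END;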


NUMA Node Count

Denny Cherry shows us how to find the number of NUMA nodes a SQL Server has:

The next way to see your NUMA configuration is to look at the ERRORLOG file. Every time SQL Server is started, it will log the NUMA configuration towards the beginning of the ERRORLOG file. That line will look something like this example.

Server Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.

Denny shows four different ways of getting this answer, and then provides some general advice on the maximum number of nodes you probably want.
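One of the DMV routes boils down to a single query; a sketch (the filter excludes the hidden node reserved for the dedicated admin connection):

    -- Count NUMA nodes, excluding the hidden DAC node
    SELECT COUNT(*) AS numa_nodes
    FROM sys.dm_os_nodes
    WHERE node_state_desc <> N'ONLINE DAC';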


Monitoring MapR With ELK

Mathieu Dumoulin shows how to feed MapR metrics into ElasticSearch and monitor with Kibana:

There are several ways to keep the data updated: a cron job, a Linux daemon running as a service, or a stream tool such as StreamSets.

The easiest way might be to run the task as a cron job with an interval of one to thirty seconds depending on monitoring needs. This may be suitable for a proof of concept, a small test cluster, or even a production cluster. The main drawback of using a cron job is that control over the execution is limited to running the script, and resources aren’t shared, meaning we are opening and closing a connection to Elasticsearch as well as doing the work to call the REST endpoint for each invocation.

Kibana makes for some pretty dashboards.


Minion CheckDB Beta

Sean McCown has released a beta of Minion CheckDB:

We’ve had many of you asking to be part of the Minion CheckDB beta and now is the time. We’re putting the finishing touches on the 1st beta and it’s looking great with some fabulous features.
So this is the open call for beta users. If you’d like to meet Codex before anyone else then send me an email.
We have some requirements though. We don’t want dead beta users. This is your chance to shape the product and we want to hear from you. So if you’re serious about putting the product through its paces then we definitely want you. So you should be ready to provide real feedback, report bugs as you find them, and work with us to fix them.

Considering that I’ve bothered Sean about this at every SQL Saturday the two of us have been at this year, I’d better get moving and join the beta.


DBA Scripts Update

Rob Sewell has updated his DBA-Database project:

By making use of the dbo.InstanceList in my DBA database I am able to target instances by SQL Version, OS Version, Environment, Data Centre, System, Client, or any other variable I choose. An agent job that runs every night will automatically pick up the instances and the scripts that are marked as needing installing. This is great when people release updates to the above scripts, allowing you to target the development environment and test before they get put onto live.

I talked to a lot of people in Hannover and they all suggested that I place the scripts on GitHub, and after some how-to instructions from a few people (thank you, Luke) I spent the weekend updating and cleaning up the code, and you can now find it on GitHub here.

Check out his solution, especially if you do not already have an administrative database on your instances.
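As a rough illustration of the targeting Rob describes, a query along these lines would drive the nightly agent job; the column names here are assumptions based on his description, so check the actual dbo.InstanceList schema in the project:

    -- Hypothetical targeting query; column names assumed from the description
    SELECT ServerName, InstanceName
    FROM dbo.InstanceList
    WHERE Environment = N'Development'  -- e.g., test script updates here first
      AND NotContactable = 0;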


Disable Performance Counters

Dan Taylor shows how to disable SQL Server performance counters using a registry setting:

About 6 or 7 years ago, I had an issue where my SQL Server performance counters were not available when looking within performance monitor. I went through the steps of unloading and reloading the counters according to Microsoft documentation and by looking at many of the blog posts out there. The performance counters still would not show.

My next step was to dig into the registry and look at the performance counters entry for SQL Server.

I’d have to imagine that the number of good use cases for disabling performance counters is low, but it’s occasionally necessary for troubleshooting.
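The registry value in question is Windows’ standard per-service performance counter toggle rather than anything SQL Server-specific. A hedged sketch for a default instance (named instances use a different service key, xp_regwrite is undocumented, and regedit or PowerShell may be preferable; confirm the details against Dan’s post):

    -- Disable the instance's performance counters (1 = disabled, 0 = enabled)
    EXEC xp_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'SYSTEM\CurrentControlSet\Services\MSSQLSERVER\Performance',
        N'Disable Performance Counters',
        N'REG_DWORD',
        1;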


“RAID” And Backups

Kenneth Fisher explains that you can set up backup strategies similar to different RAID types:

RAID 0

Splitting the backup data between multiple files.

This is actually a fairly common way to speed up a large backup. IO is one of the slowest parts of the whole process, so by splitting the backup into multiple files we can reduce our backup time by quite a bit. Having multiple files can also speed up your restore, but you do have to make sure that all of the files are available. If you lose one file, the whole backup is useless.

This is done by listing multiple locations for the backup.
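A minimal sketch of that syntax (database name and file paths are hypothetical); note that every file listed is required at restore time:

    -- Stripe one backup across three files; all three are needed to restore
    BACKUP DATABASE SalesDB
    TO  DISK = N'E:\Backups\SalesDB_1.bak',
        DISK = N'F:\Backups\SalesDB_2.bak',
        DISK = N'G:\Backups\SalesDB_3.bak'
    WITH COMPRESSION, CHECKSUM;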

This is an interesting use of RAID as metaphor.
