Category: Administration

Robocopy Is Your Friend

Jes Borland wants us to stop copying and pasting files between servers:

IT professionals (and amateurs), it’s time we had a chat. It’s time to stop dragging and dropping (or copying and pasting) files between servers and/or workstations.

It’s clumsy. It’s childish. It uses memory on the server.

Oh, and there’s a really easy tool to copy files built into Windows – Robocopy.

The syntax is pretty easy, and Robocopy handles small files well.  Check out Nic Cain's comment, though, if you're going to copy large files in production.
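
For anyone who has never run it, a minimal sketch of the syntax might look like the following; the server names, share, and log path are made up, and the switches shown are just common defaults worth reading up on before using them in production.

    REM Copy backup files between shares in restartable mode (/Z), retrying
    REM twice (/R:2) with a five-second wait (/W:5), using eight copy
    REM threads (/MT:8), and logging the run for later review (/LOG).
    robocopy \\SourceServer\Backups \\TargetServer\Backups *.bak /Z /R:2 /W:5 /MT:8 /LOG:C:\Temp\robocopy.log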

More On Widening Identity Columns

Aaron Bertrand has part 3 in his series on identity columns:

This post investigated two potential workarounds to either buy you time before changing your existing IDENTITY column, or abandon IDENTITY altogether right now in favor of a SEQUENCE. If neither of these workarounds is acceptable to you, please watch for part 4, where we’ll tackle this problem head-on.

This is your weekly reminder to plan for appropriate data sizes.
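As a rough sketch of the SEQUENCE route Aaron describes (the sequence, table, and column names here are hypothetical), the idea is to drive the surrogate key from a standalone bigint sequence instead of an IDENTITY property:

    -- A bigint sequence takes the place of IDENTITY and will not need widening later.
    CREATE SEQUENCE dbo.OrderID_Seq AS bigint START WITH 1 INCREMENT BY 1 CACHE 1000;

    CREATE TABLE dbo.Orders
    (
        OrderID   bigint    NOT NULL
            CONSTRAINT DF_Orders_OrderID DEFAULT (NEXT VALUE FOR dbo.OrderID_Seq),
        OrderDate datetime2 NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
    );

Inserts that leave OrderID out of the column list pick up the next sequence value automatically.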

Capturing SQL Server Perfmon Counters

Andy Galbraith shows how to collect and store Perfmon counters:

As you can see, Page Life Expectancy (PLE) on this graph dips, gradually climbs, and then dips again.  With a collection every five minutes you may not catch the exact peak – all you know is that the PLE was 50,000 at 12:55am and then only 100 at 1:00am on 03/13.  It may have climbed higher than that before it dipped, but by 1:00am it had dipped down to around 100 (coincidentally at 1am the CheckDB job had kicked off on a large database).

If you really need to know (in this example) exactly how high PLE gets before it dips, or exactly how low it dips, or at what specific time it valleys or dips, you need to actively watch or set up a collector with a more frequent collection.  You will find that in most cases this absolute value isn’t important – it is sufficient to know that a certain item peaks/valleys in a certain five minute interval, or that during a certain five minute interval (“The server was slow last night at 3am”) a value was in an acceptable/unacceptable range.

Andy also gives us a set of counters he uses by default and how to set up automated counter collection.  Left to the reader is integrating that into an administrator’s workflow.
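This isn't Andy's collector, but as a quick illustration of where a value like PLE comes from, SQL Server exposes its own perfmon counters through a DMV; a scheduled job that inserts rows like this into a history table every few minutes is the essence of the approach:

    -- Point-in-time read of Page Life Expectancy from SQL Server's
    -- perfmon counter DMV; a collector job would INSERT this into a
    -- history table on a schedule.
    SELECT [object_name],
           counter_name,
           cntr_value    AS page_life_expectancy_seconds,
           SYSDATETIME() AS collected_at
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy'
      AND [object_name] LIKE N'%Buffer Manager%';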

Database Maintenance

SQLWayne describes his maintenance routines:

The most critical thing as a SQL Server DBA is to ensure that your databases can be restored in the event of the loss of a server for whatever reason: disk crash, fire in the server room, tribble invasion, whatever.  To do this, not only do you have to back up your databases, you also have to test restores!  Create a database and restore the backups of your production DB to it.  It’s the safest way to make sure that everything works.  This test restore can be automated to run every night, but that’s outside the scope of what I want to talk about right now.

There are lots of places where problems can creep in; this is just one part of how you’ll need to monitor your systems.  This is how I’ve done things for a number of years, and thus far it has served me well.

Depending upon your instance count, average database size, maintenance windows, etc. etc. etc., some of these things may or may not work, but the principle is the same:  protect the data, and automate your processes to protect that data.  This is a good article to read for ideas, and then from there dig into other administrative blog posts, videos, and books to become better versed in the tools and techniques available to protect your data.
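A bare-bones version of the test restore described above might look something like this; the database name, backup path, and logical file names are all placeholders you would swap for your own:

    -- Restore the latest production backup to a throwaway database,
    -- then run CHECKDB to prove the backup is actually usable.
    RESTORE DATABASE RestoreTest
        FROM DISK = N'\\BackupShare\Prod\Prod_Full.bak'
        WITH MOVE N'Prod_Data' TO N'D:\Data\RestoreTest.mdf',
             MOVE N'Prod_Log'  TO N'L:\Log\RestoreTest.ldf',
             REPLACE, STATS = 10;

    DBCC CHECKDB (RestoreTest) WITH NO_INFOMSGS;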

Widening Indexed Identity Columns

Aaron Bertrand shows what happens when you try to widen an identity integer column associated with an index or computed column:

Summary: We will need to drop and re-create any indexes, clustered or not, that reference the IDENTITY column – in the key or the INCLUDE. If the IDENTITY column is part of the clustered index, this means all indexes, since they will all reference the clustering key by definition. And disabling them isn’t enough.

Getting column sizes right at the beginning is your best bet.  Stay tuned for other alternatives.
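To make the scope of that concrete, the bare drop-and-re-create dance looks roughly like this; the table, column, and index names are hypothetical, and a real script would also have to deal with foreign keys, replication, and downtime:

    -- Every index that references the IDENTITY column has to go first...
    DROP INDEX IX_Orders_CustomerID ON dbo.Orders;
    ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;

    -- ...then the column itself can be widened...
    ALTER TABLE dbo.Orders ALTER COLUMN OrderID bigint NOT NULL;

    -- ...and everything gets rebuilt afterward.
    ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID);
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) INCLUDE (OrderID);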

Simple Recovery Mode And Transaction Logs

Kenneth Fisher discusses changing your database’s recovery model to Simple in order to clean out a transaction log:

Clearing out a full transaction log is a common problem. A quick search will find you dozens of forum entries and blog posts. Because of that I’m not going to talk about the correct methods of dealing with a transaction log full error. What I want to discuss is why you shouldn’t use the following method.

And Kenneth also hits the one legitimate use:  dire emergency.  If this is a normal part of some process (e.g., warehouse loading), bite the bullet and either live in Simple recovery mode (understanding the risks) or get the disk space to keep it in Full mode.  Switching back and forth—especially if you aren’t taking full backups immediately after switching back—is a good way to get yourself burned.
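For reference, the dire-emergency sequence amounts to the following (the database name, logical log file name, and backup path are placeholders); the key point is that switching to SIMPLE breaks the log backup chain, so the full backup at the end is not optional:

    -- Emergency only: this throws away point-in-time recovery until a new backup chain exists.
    USE YourDatabase;
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
    CHECKPOINT;
    DBCC SHRINKFILE (YourDatabase_log, 1024);   -- target size in MB
    ALTER DATABASE YourDatabase SET RECOVERY FULL;

    -- Restart the log chain immediately; log backups will fail until a
    -- full (or differential) backup is taken.
    BACKUP DATABASE YourDatabase TO DISK = N'\\BackupShare\YourDatabase_Full.bak';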

Drive Failure

SQLWayne cautions you to think about drive failure:

Now is when the MTBF comes in. If all of the drives were from the same batch, then they have approximately the same MTBF. One drive failed. Thus, all of the drives are not far from failure. And what happens when the failed drive is replaced? The RAID controller rebuilds it. How does it rebuild the new drive? It reads the existing drives to recalculate the checksums and rebuild the data on the new drive. So you now have a VERY I/O intensive operation going on with heavy read activity on a bunch of drives that are probably pushing end of life.

This is where it’s important to keep spares and cycle out hardware.

Automating SQL Server Installations

Joey D’Antoni argues in favor of automating SQL Server installations:

Anyway, on to my original topic—I know most shops aren’t Comcast, where we were deploying multiple servers a week. However, automating your installation process has a lot of other benefits—you don’t have to worry about misconfigurations, unless you put them into your script. It also forces you into adopting a standard drive letter approach across all of your servers. So when something breaks, you know exactly where to go. And more importantly, you save 15 minutes of clicking Next.

For any shop that deploys SQL Server more than a couple of times every few years, automating the installation process is a smart move.  Even if you rarely deploy, the consistency benefits make it worthwhile.
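The mechanics are simple enough; a common starting point (though certainly not the only approach) is a quiet command-line install driven by a configuration file, which setup.exe will generate for you on the Ready to Install page of a manual run. The path below is made up:

    REM Unattended install driven by a saved configuration file.
    setup.exe /Q /IACCEPTSQLSERVERLICENSETERMS /ConfigurationFile=C:\Install\ConfigurationFile.ini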

Shrinking TempDB

Tara Kizer shows how to shrink a recalcitrant tempdb:

I came across this solution recently when I had to shrink tempdb. I tried shrinking each of the 8 data files plus CHECKPOINTs, repeatedly. It would not budge. I almost threw in the towel and emailed out that the space issue would be fixed during our next patching window, but then I found David Levy’s reply. DBCC FREEPROCCACHE worked like a charm.

Word of warning:  understand what FREEPROCCACHE does before running it.  In an emergency like the scenario Tara describes, the benefit outweighs the cost, but do be aware that there is a cost.
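For context, the sequence Tara lands on boils down to something like this; tempdev is the default logical name of the first tempdb data file, so adjust the file names and target sizes for your instance:

    USE tempdb;

    -- The normal route: checkpoint, then shrink each data file.
    CHECKPOINT;
    DBCC SHRINKFILE (tempdev, 1024);   -- target size in MB

    -- If the files will not budge, clearing the plan cache can release the
    -- internal objects pinning the space, at the cost of every query on the
    -- instance recompiling afterward.
    DBCC FREEPROCCACHE;
    DBCC SHRINKFILE (tempdev, 1024);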
