
Category: Backups

Backup Strategy

Andy Galbraith has some advice for backups:

The workaround to this is to back the databases up with one tool and then to *copy* those backup files to the other tool. The best recommendation I have in these situations is always to run “regular” SQL Server Agent job backups to a file location (either a local or network drive) and then to have your third-party tool do a file backup of the actual BAK/TRN files from that location *rather* than running the “database agent” on the third-party tool to back up the database directly.

In this model the third-party tool never directly touches SQL Server – in this client’s environment you would run SQL Server Agent jobs similar to the current maintenance plan (although something better than maintenance plans, such as Ola Hallengren’s scripts or maybe MinionBackup) to back up to actual disk (in this case the B: drive) and then use Arcserve to back up the files that have been written to the B: drive.

There’s good advice here around not using multiple tools to take backups: it eliminates the risk of needing to track down backups from two separate devices in order to restore to a point in time (e.g., if some of the log backups are on one device and some on the other).
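As a minimal sketch of the first half of that pattern (database name, paths, and options here are hypothetical stand-ins), the Agent job would run something like:

-- back up to local disk; the third-party tool then picks up the
-- .bak/.trn files from B:\Backups as ordinary files
BACKUP DATABASE [SalesDB]
TO DISK = N'B:\Backups\SalesDB_Full.bak'
WITH CHECKSUM, COMPRESSION, INIT;

BACKUP LOG [SalesDB]
TO DISK = N'B:\Backups\SalesDB_Log.trn'
WITH CHECKSUM, COMPRESSION;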


Thinking About The Gitlab Outage

Brent Ozar shares his thoughts on the recent Gitlab outage:

You can read more about the details in GitLab’s outage timeline doc, which they heroically shared while they worked on the outage. Oh, and they streamed the whole thing live on YouTube with over 5,000 viewers.

There are so many amazing lessons to learn from this outage: transparency, accountability, processes, checklists, you name it. I’m not sure that you, dear reader, can actually put a lot of those lessons to use, though. After all, your company probably isn’t going to let you live stream your outages. (I do pledge to you that I’m gonna do my damnedest to do that ourselves with our own services, though.)

There are some good pointers in here.


Getting Ahead Of Corruption

Mike Walsh has some recommendations before you have corrupt databases:

So. Always. Always. ALWAYS choose to be proactive and prepared. Don’t wait for corruption to catch you! When we do our SQL Server health assessments, seeing the findings that together mean you aren’t prepared for corruption is a huge red flag. Partially it is because as a consultant, I end up seeing corruption a lot – and it is always “after the fact” and usually from clients who either chose or, more likely didn’t realize they were choosing, the option with less preparation.

So this post won’t really talk about recovering from corruption. It will focus on prevention and preparedness. A follow-on post will talk about some initial steps to take if you get a report of corruption.

If you already know how you’ll solve the problem (and ideally, have a step-by-step runbook so you don’t miss anything), corruption is more of an annoyance than a catastrophe.
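Being prepared largely means finding corruption before your users do. A minimal sketch of the proactive piece (database name hypothetical; schedule and scope are up to you):

-- regular consistency checks catch corruption early, ideally paired
-- with alerts on errors 823/824/825 and periodic restore tests
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;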


Restoring A Database To A Different Location

Mike Fal shows how to restore a database to a different location using PowerShell:

The cmdlet is straightforward in its use. Fundamentally, all we need to declare is an instance, database name, and backup file. However, if we don’t declare anything else, the cmdlet will try and restore the database files to their original locations. Keep in mind this is no different than how a normal RESTORE DATABASE command works.

This is where we make our lives easier with PowerShell. First off, to move files using Restore-SqlDatabase, we need to create a collection of RelocateFile objects. Don’t let the .Net-ness of this freak you out. All we’re doing is creating something that has the logical file name and the new physical file name. In other words, it’s just an abstraction of the MOVE statement in RESTORE DATABASE.
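For comparison, the RelocateFile collection is abstracting MOVE clauses like these in plain T-SQL (logical and physical file names here are hypothetical):

RESTORE DATABASE [Test]
FROM DISK = N'B:\Backups\Test.bak'
WITH MOVE N'Test' TO N'D:\Data\Test.mdf',
     MOVE N'Test_log' TO N'L:\Logs\Test_log.ldf',
     RECOVERY;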

Read the whole thing.


Backing Up To Azure Storage

Neil Gelder shows how to back up directly to Azure blob storage:

The URL is the one from the container we made a note of and the credential is the one we created in the last step.

Now if we return to the container screen in the Azure Console and refresh the screen you’ll see your backup file like below

My personal preference here would be to back up locally and then have a job migrate backups to Azure or S3.  That storage is 1-3 cents per GB per month (and even cheaper if you’re willing to store the data in Glacier), so for most small to mid-sized organizations running databases in the tens of gigs, it’s a great way of getting around only being able to store a week or two’s worth of backups on-site.
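For reference, the direct-to-URL approach the post demonstrates looks roughly like this (storage account, container, and credential names are hypothetical, and this sketch uses the access-key style of credential):

-- credential holding the storage account name and access key
CREATE CREDENTIAL [AzureBackupCred]
WITH IDENTITY = N'mystorageaccount',
     SECRET = N'<storage account access key>';

-- back up straight to blob storage
BACKUP DATABASE [SalesDB]
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = N'AzureBackupCred', COMPRESSION;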


Restoring An Encrypted Master Database

Dave Mason clarifies how to restore the master database to a new instance when the master database backup is encrypted:

This strikes me as an odd chicken-and-egg problem. I’d need to create the certificate to decrypt the [master] backup on the instance I’m restoring [master] to…and the certificate is stored in [master], which I’d be overwriting. As weird as it sounds, this is exactly what needs to happen. Maybe it’s not as complicated as it sounds.
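The general shape of the fix, with hypothetical file names and on the assumption that the certificate and its private key were backed up to files (and that [master] on the new instance already has a database master key):

-- recreate the certificate that encrypted the backup
CREATE CERTIFICATE BackupCert
FROM FILE = N'B:\Keys\BackupCert.cer'
WITH PRIVATE KEY (FILE = N'B:\Keys\BackupCert.pvk',
                  DECRYPTION BY PASSWORD = N'<private key password>');

-- then, with the instance started in single-user mode (-m),
-- restore master over the top; the instance shuts down afterward
RESTORE DATABASE [master]
FROM DISK = N'B:\Backups\master_encrypted.bak'
WITH REPLACE;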

Read on for the solution.  You might also want to check out that one time he met Larry Bird.


COPY_ONLY Backups

Kenneth Fisher explains what the COPY_ONLY flag means for a backup:

So to put that in simple terms: I have a database, Test. I take a full backup, changes happen, I take a differential backup, changes happen, I take a differential backup, etc., ignoring all of the log backups that are happening if the database is in FULL recovery, of course.

Currently, both differentials contain everything that has happened between the time of the full backup and the time the differential was taken. But what happens if I take a second full backup between the first and second differential? Now, that second differential will only contain data between the second full backup and the differential.
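That base reset is exactly what the COPY_ONLY flag avoids: a COPY_ONLY full backup doesn’t become the new differential base, so an out-of-band copy is safe to take. A minimal sketch (database name and path hypothetical):

-- extra full backup that does NOT reset the differential base
BACKUP DATABASE [Test]
TO DISK = N'B:\Backups\Test_copyonly.bak'
WITH COPY_ONLY;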

Read on for more.


Last Log Backup Time

Kevin Hill comments on a recent Connect item from Ola Hallengren, along with Ola’s proposed solution:

Add a new column to sys.dm_db_log_space_usage or sys.database_recovery_status called LastLogBackupTime.

I LOVE this idea…back up the T-log more frequently during busy times, less often during off hours.  At my current client, there is almost nothing happening outside of a 12 hour workday window, so this would be perfect here.

Now, I am possibly misunderstanding Ola’s request or the intent…and that’s ok.  A query against the msdb..backupset table already surfaces this info via a relatively short amount of code:
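A sketch of the idea (not necessarily the exact query from the post):

SELECT bs.database_name,
       MAX(bs.backup_finish_date) AS LastLogBackupTime
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'  -- 'L' = transaction log backup
GROUP BY bs.database_name;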

Click through for more details as well as Ola’s Connect item.
