Press "Enter" to skip to content

Category: Backups

Parallel Backups Using PowerShell Workflows

Sander Stad wants to use PowerShell workflows to back up databases in parallel:

It’s possible to use a workflow to execute your backups. You have to take into consideration that there is a downside. If you execute all the backups at once you’ll probably get issues with throughput if you’re dependent on a slow network for example.

You could always add another parameter to filter out specific databases to make sure you execute it as a specific set.

Sander does include his script, so check that out.

Finding Database Restoration Times

Kenneth Fisher wants to know the last time his database was restored:

I frequently need to know where backups went and I restore those backups for operational recovery on a regular basis. Would you believe in 20+ years as a DBA I can count the number of database restores for a disaster on my fingers? (Which is good because taking off your shoes at the office is considered bad form.)

I think the important take-away from this post is that you should leave your shoes on at work.  You don’t know what kind of disgusting things are in that carpet.  Also, read on to learn where to find database restoration history.
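
If you want something to run right away, here is a minimal sketch against msdb's standard history tables. It is not necessarily Kenneth's exact query, and it assumes nobody has been clearing out restore history:

    -- Most recent restores recorded in msdb, newest first.
    SELECT  rh.destination_database_name,
            rh.restore_date,
            rh.restore_type,           -- D = database, L = log, F = file, ...
            rh.user_name,
            bmf.physical_device_name   -- the backup file that was restored
    FROM msdb.dbo.restorehistory AS rh
        JOIN msdb.dbo.backupset AS bs
            ON bs.backup_set_id = rh.backup_set_id
        JOIN msdb.dbo.backupmediafamily AS bmf
            ON bmf.media_set_id = bs.media_set_id
    ORDER BY rh.restore_date DESC;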

SQL Server 2016 CU1 Bugfixes

Andrew Pruski notes an important bugfix in SQL Server 2016 CU1:

SQL Server 2016 CU1 has been released and one thing I noticed was:

FIX: Canceling a backup task crashes SQL Server 2014 or 2016

That’s pretty nasty. When I originally clicked on the link I was expecting to see a pretty precise set of circumstances detailed in which that bug can occur, but no, apparently not. Cancelling any backup task can lead to this happening.

Andrew then argues in favor of waiting for SPs before deploying new versions of software, having been burned on it in the past.  I don’t agree with that philosophy; regardless, I recommend reading his post.

Restoring With Standby

Kenneth Fisher describes the WITH STANDBY option for database restorations:

It then leaves the database in a read-only state. So now you can restore the database back to a point in time (even mid log backup). Check the data. Restore a few minutes into the future. Check the data again. Over and over again until you are where you need to be. It’s still going to be tedious but better than doing the full restore over again each time you need to check, right?

On top of that we can use the same idea and combine it with log shipping. Now you not only have a spare in case of a DR situation, but that spare can be read-only most of the time (except when actually restoring). You can use it to run reports, ad-hoc queries, etc. (Don’t use it for CHECKDBs.)

Those are a few good uses of the WITH STANDBY option.
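
To make the loop concrete, here is a hedged sketch of the pattern; the database, file, and undo-file names are placeholders, and the STOPAT time is whatever point you want to check next:

    -- Start from the full backup, leaving the database ready for more restores.
    RESTORE DATABASE [YourDB]
        FROM DISK = N'C:\Backups\YourDB_full.bak'
        WITH NORECOVERY, REPLACE;

    -- Apply a log backup to a point in time, but leave the database readable
    -- via a standby (undo) file.  Check the data, then repeat with a later
    -- STOPAT until you find the point you need.
    RESTORE LOG [YourDB]
        FROM DISK = N'C:\Backups\YourDB_log.trn'
        WITH STANDBY = N'C:\Backups\YourDB_undo.dat',
             STOPAT  = '2016-07-26 10:15:00';

    -- Once you are where you need to be, bring the database fully online.
    RESTORE DATABASE [YourDB] WITH RECOVERY;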

HBase Incremental Backup And Restore

Carter Shanklin and Vladimir Rodionov discuss incremental backup and restore coming to HBase & Phoenix:

If your tables are large it may not be possible to restore them under a different name due to space constraints. The really powerful thing about HBase backups is they are stored in WAL files that can be parsed using a simple interface that can be consumed either in Java or using the “hbase wal” utility.

Consider this scenario: A customer rep deleted some data because he thought it was unimportant. A week later the customer is upset because the data was important and you need to restore these few pieces of information. With HBase backups all you need to do is parse through the backups with a WAL reader and extract the historical values, which you can then add back in. With other databases you would have to bring another database instance online and load the backups into it. Having backups in open, well-understood formats unlocks many powerful opportunities and can bring recovery times down from days to minutes.

Read on if you manage a Hadoop cluster with HBase (or you’re likely to administer one soon).

Finding Failed Backups And Jobs

Thomas Rushton builds a nasty query to answer an important question:

Assumptions

  1. Backup jobs that do full backups don’t overlap
  2. There’s nothing else doing full backups
  3. erm…
  4. that’s it

The reason I was thinking about this is that we have occasional-but-annoyingly-frequent backup job failures, wherein most of the databases back up just fine, but the odd one fails. (SharePoint box, I’m looking at you…) Rather than trawling through the error logs to find out which particular database didn’t back up successfully, I wanted a query to do the heavy lifting. Yes, I’m a lazy lone DBA…

The end result is a pivoted query showing days in which full backups fail.  There’s a lot of information in there, so that might be something I’d want to visualize in Excel or R, changing cell colors for failed jobs so they stand out better.  Nevertheless, check this out, especially if you don’t have a solution in place to monitor backups.
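
Thomas's pivoted query is worth reading in full; as a lighter-weight starting point, here is a sketch (not his query) that simply flags databases whose most recent full backup is missing or more than a day old:

    -- Databases with no full backup, or whose last full backup is over 24 hours old.
    SELECT  d.name AS database_name,
            MAX(bs.backup_finish_date) AS last_full_backup
    FROM sys.databases AS d
        LEFT JOIN msdb.dbo.backupset AS bs
            ON bs.database_name = d.name
           AND bs.type = 'D'          -- 'D' = full database backup
    WHERE d.name <> N'tempdb'
    GROUP BY d.name
    HAVING MAX(bs.backup_finish_date) IS NULL
        OR MAX(bs.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())
    ORDER BY d.name;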

TDE And Backup Compression

Erik Darling notes that databases using Transparent Data Encryption now support backup compression:

First, the database without a Max Transfer Size at the bottom was a full backup I took with compression, before applying TDE. It took a little longer because I actually backed it up to disk. All of the looped backups I took after TDE was enabled, and Max Transfer Size was set, were backed up to NUL. This was going to take long enough to process without backing up to Hyper-V VM disks and blah blah blah.

The second backup, just like the blog man said: no compression happens when you specify 65536 as the Max Transfer Size.

You can see pretty well that the difference between compressed backup sizes with and without TDE is negligible.

Check it out, including the table Erik put together.  I’m glad that backup compression is now supported, although I’m kind of curious how they can do that while retaining encrypted backups—are they decrypting data, writing to backup (and compressing), and then encrypting the backup?  That’d be worth checking out with a hex editor.
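
If you want to try this yourself, the key detail is the transfer size; a minimal sketch (the database name is a placeholder) looks like this:

    -- Compressed backup of a TDE-enabled database (SQL Server 2016 and later).
    -- MAXTRANSFERSIZE must be greater than 65536 or the backup will not compress.
    BACKUP DATABASE [YourTDEDatabase]
        TO DISK = N'NUL'               -- throwaway target, as in Erik's tests
        WITH COMPRESSION,
             MAXTRANSFERSIZE = 131072;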

Dealing With Backup Files

Kenneth Fisher discusses backup files, especially when they mysteriously double in size:

Did you know a single backup file can contain multiple database backups? When you back up a database to a file, if that file already exists, then by default the backup will be appended to the existing file, causing the file to increase in size. If it’s the same database (and yes, you can have a single file containing backups from multiple databases) then the file size will double or more. This behavior is controlled by the INIT/NOINIT clause of the BACKUP DATABASE command. NOINIT (the default) tells SQL to append the new backup to the existing file. INIT tells SQL to overwrite the existing backup files. Note the header of the file is not initialized.

This is good information to know.
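
To see the behavior for yourself, here is a short sketch (paths and database name are placeholders):

    -- NOINIT (the default): appends this backup to whatever is already in the
    -- file, so the file keeps growing.
    BACKUP DATABASE [YourDB]
        TO DISK = N'C:\Backups\YourDB.bak'
        WITH NOINIT;

    -- INIT: overwrites the existing backup sets in the file instead of appending.
    BACKUP DATABASE [YourDB]
        TO DISK = N'C:\Backups\YourDB.bak'
        WITH INIT;

    -- List the backup sets currently stored in the file.
    RESTORE HEADERONLY FROM DISK = N'C:\Backups\YourDB.bak';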

SQL Server Backup Buffers

SQLsasquatch (whose name I now know, but I like the handle so much that I’ll keep using it) has a few questions about backup buffers:

I wonder:
- If ‘max server memory’ wasn’t being overridden by a more reasonable target (because max server memory is 120 GB on a 32 GB VM), would the behavior still be the same before reaching target? I bet it would be.
- Is this behavior specific to not having reached target? Or when reaching target would backup buffers be allocated, potentially triggering a shrink of the bpool, then returned to the OS afterward requiring the bpool to grow again?
- What other allocations are returned directly to the OS rather than given back to the memory manager to add to free memory? I bet CLR does this, too. Really large query plans? Nah, I bet they go back into the memory manager’s free memory.
- Does this make a big deal? I bet it could. Especially if a system is prone to develop persistent foreign memory among its NUMA nodes. In most cases, it probably wouldn’t matter.

Good questions, to which I have zero answers.

Dueling Log Backup Jobs

Robert Davis ran into HADR_WORK_QUEUE waits recently:

Our 3rd party monitoring solution collects blocking information, but not for system threads. There was no additional information available for this blocking incident, but I could see that the system thread was a background process with the command “UNKNOWN TOKEN” and was sitting in a wait type of “HADR_WORK_QUEUE”. It was clearly the worker thread for the AG of a specific database.

A little later, we had blocking again involving that same thread, but this time, the AG worker thread was blocking the log backup thread. Seemed logical that if the worker thread could block the log backup, then the log backup could have also blocked the worker thread, but still it did not make sense to me.

This is one of those cases in which the answer makes perfect sense after the fact, but can be maddening until then.
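
If you ever need to chase one of these down without help from a monitoring tool, a quick sketch against the DMVs will show blocking that involves system sessions as well as user sessions:

    -- Requests that are blocked, or are doing the blocking, including system sessions.
    SELECT  r.session_id,
            s.is_user_process,
            r.blocking_session_id,
            r.command,       -- e.g. BACKUP LOG, or UNKNOWN TOKEN for background tasks
            r.wait_type,     -- e.g. HADR_WORK_QUEUE
            r.wait_time,
            DB_NAME(r.database_id) AS database_name
    FROM sys.dm_exec_requests AS r
        JOIN sys.dm_exec_sessions AS s
            ON s.session_id = r.session_id
    WHERE r.blocking_session_id <> 0
       OR r.session_id IN (SELECT blocking_session_id
                           FROM sys.dm_exec_requests
                           WHERE blocking_session_id <> 0);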
