
Category: Backups

Issues With TDE And Backup Compression

Ned Otter describes the troubled history of the union of Transparent Data Encryption and backup compression:

The history of TDE and backup compression is that until SQL 2016, they were great features that didn’t play well together – if TDE was in play, backup compression didn’t work well, or at all.

However, with the release of SQL 2016, Microsoft aimed to have these two awesome features get along better (the blog post announcing this feature interoperability is here). Then there was this “you need to patch” post, due to edge cases that might cause your backup to not be restored. So if you haven’t patched in a while, now would be a good time to do so, because Microsoft says those issues have been resolved (although that seems to be disputed here).

My sympathies definitely lie toward backup compression over TDE if forced to choose between the two.
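
If you do pair them on SQL Server 2016 or later, note that a backup of a TDE-enabled database only actually compresses when you specify a MAXTRANSFERSIZE larger than 65536. A minimal sketch, with a made-up database name and path:

    BACKUP DATABASE EncryptedDB
    TO DISK = N'D:\Backups\EncryptedDB.bak'
    WITH COMPRESSION,
         MAXTRANSFERSIZE = 131072,  -- any value above 65536 enables compression for TDE databases
         CHECKSUM;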


Reducing Backup Sizes

Ned Otter gives us some options for reducing the size of full backups:

SPARSE columns

IF a column contains mostly NULLs, then depending on the data type, you can achieve space savings by using the SPARSE property (documentation here). SPARSE columns can be used with filtered indexes to theoretically reduce storage space and increase query performance. But there are a boatload of gotchas, such as issues with query plan caching (filtered indexes), and the fact that if you use SPARSE columns, neither the table nor its indexes can have any form of compression (the documentation is clear about not supporting table compression, but does not mention index compression being an issue – but it is).

As the documentation clearly states, when converting a column from non-sparse to sparse, the following steps are taken:

  1. Adds a new column to the table in the new storage size and format
  2. For each row in the table, updates and copies the value stored in the old column to the new column
  3. Removes the old column from the table schema
  4. Rebuilds the table (if there is no clustered index) or rebuilds the clustered index to reclaim space used by the old column

For large tables with even a few columns that you wanted to convert to SPARSE, this process would take forever, because you must do this for each column you want to convert.

I don’t like sparse columns at all, but I do like the rest of Ned’s options.
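
For reference, declaring a sparse column and pairing it with a filtered index looks roughly like this (the table and column names here are invented for illustration):

    ALTER TABLE dbo.Orders
        ADD CouponCode varchar(20) SPARSE NULL;

    -- the filtered index only includes rows that actually have a value
    CREATE NONCLUSTERED INDEX IX_Orders_CouponCode
        ON dbo.Orders (CouponCode)
        WHERE CouponCode IS NOT NULL;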


Test Your Restore Process

Clive Strong tells a tale about a mental flub while restoring a backup:

Our automated restore process works really nicely. We take full backups on Saturday and differential backups through the week. We also take log backups through the day, but we were not going to be restoring those for this task. We have a number of internal platforms we restore to in full (or in part following a cut-down process), which gives us good validation of our backup files on a regular basis. We also have regular test restores from tape just for good measure.

However, a while ago I was asked to build a new server and restore the databases up to a specific date. We didn’t need a point-in-time restore, just to a specific day, so I pulled the full and differentials and wrote the script to do the restore for me. The script restored the full backup and the differential backup for Sunday, Monday, Tuesday, Wednesday and Thursday. I gave it the once-over and executed the script. A while later I came back and, unexpectedly, it was still running. I eventually left the office and noted that it finished in the early hours, having run for many hours longer than I had anticipated.

Read on for Clive’s more detailed explanation of the whoopsie moment.
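
Whatever the exact whoopsie, it’s a good excuse to repeat that differential backups are cumulative: restoring to a given day needs only the full backup plus the single most recent differential, not every differential in between. A minimal sketch of that pattern, with made-up names and paths:

    RESTORE DATABASE SalesDB
        FROM DISK = N'D:\Backups\SalesDB_Full_Saturday.bak'
        WITH NORECOVERY, REPLACE;

    -- the Thursday differential already contains everything since Saturday's full
    RESTORE DATABASE SalesDB
        FROM DISK = N'D:\Backups\SalesDB_Diff_Thursday.bak'
        WITH RECOVERY;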


Restoration And That CHECKDB Message

Mike Fal investigates an interesting message in the SQL Server error log after a database restoration:

Recently I was doing some work with a friend around some database restores. It was pretty routine stuff. However, after one restore my friend came across something in the SQL Error Log that caught him by surprise. As part of the restore, there was a CHECKDB message for the restored database.

My friend’s first reaction was “why is SQL Server doing a DBCC CHECKDB as part of the restore?” He was concerned, because CHECKDB is a pretty hefty operation and this could really impact the restore time if he had to wait on a CHECKDB to complete. But the other confusing thing was that the date for the CHECKDB didn’t match up with the restore timing.

Click through to learn the answer.


Database Backups And GDPR

Grant Fritchey digs into one of the more contentious areas of GDPR:

Nothing within Article 17 talks about backups, offsite storage, readable secondaries, log shipping, or any of that stuff. In fact, there’s nothing technical there at all. No help to tell you what to do about this question.

Now, each article has expansions that further detail the information within the article called recitals. In the case of the right to be forgotten, there are two, Recital 55 and Recital 66. Recital 55 has nothing for us, at all. Recital 66 does talk about the fact that, because we’re dealing in an online world, the best available technical means should be used to deal with the fact that a person’s data may be in more than one location and we’ll need to clean that up.

And that’s it.

In fact, you can search the GDPR and not find the word, backup.

Read on for Grant’s thoughts, including what he argues is a defensible position (though we won’t know for sure until the bureaucracy runs its course).


Encrypt Those Database Backups

David Fowler shows how easy it is to encrypt database backups in SQL Server:

We all go to great lengths to make sure that our databases are secure (or at least I really hope that we do), we make sure that only authorised users have access and then only to the databases and levels that they need.  We’re careful that all our passwords conform to crazy complexity rules.  We’re a pretty security minded lot aren’t we?

But what about our backups? Most of the time they’re just sitting on either a local disk on the server itself or some share somewhere. Wherever they are, what happens if someone manages to get hold of our backup files? They can easily just restore them onto a server where they have permissions and voila, all our super secret data is now theirs.

They could even just open the files with a hex editor and read the data; it’s really not a difficult thing to do.

By default, SQL Server makes absolutely no effort to encrypt or in any way protect our backups.  That’s pretty bad in my mind.

Native backup encryption became available in SQL Server 2014, and it works in Standard Edition; Enterprise Edition customers could already get encrypted backups before that via Transparent Data Encryption. At this point, I don’t think there are good reasons for not encrypting backups on production systems.
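
If you haven’t set it up before, the moving parts are a database master key in master, a certificate, and an ENCRYPTION clause on the backup itself. A minimal sketch with placeholder names, password, and path (and remember to back up the certificate, or the encrypted backup is useless):

    USE master;

    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<use a strong password here>';

    CREATE CERTIFICATE BackupEncryptCert
        WITH SUBJECT = 'Certificate for backup encryption';

    BACKUP DATABASE SalesDB
        TO DISK = N'D:\Backups\SalesDB.bak'
        WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupEncryptCert),
             COMPRESSION, CHECKSUM;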


Backing Up Azure SQL Databases

Arun Sirpal enumerates the options we have for backups of Azure SQL Databases:

If you have a business requirement which has a need to retain database backups for longer than 35 days, then you have an option to use long-term backup retention. This feature utilises the Azure Recovery Services Vault where you can store up to 10 years’ worth of backups for up to 1000 databases per vault and 25 vaults per subscription.

There are some guidelines that you need to follow to successfully set this up:

  • Your vault MUST be in the same region, subscription and resource group as your logical SQL Server; if not, you will not be able to set this up.

  • Register the vault to the server.

  • Create a protection policy.

  • Apply the above policy to the databases that require long-term backup retention.

Arun also looks at restoration options.


Backup And Restore With Move

John Morehouse has a script for database migration onto a differently configured server:

Now, not every environment is the same. Instances get configured differently or things change just due to the nature of the business. In a previous life I would routinely have to back up a database and restore it to another server. However, the server I was using to restore to had a different drive configuration. It happens. Anyway, I wanted a script that would give me

  • A backup statement
  • A restore statement with the appropriate MOVE switch

This would allow me to easily execute the backup statement and then on the target server execute the restore statement.  Since the restore statement already had the appropriate MOVE switch provided, I didn’t have to manually compose the statement.

Click through for the script.
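
The end result is a pair of statements along these general lines (the database name, logical file names, and paths below are invented; the real logical names come from the source database’s metadata, e.g. RESTORE FILELISTONLY or sys.master_files):

    BACKUP DATABASE SalesDB
        TO DISK = N'\\backupshare\sql\SalesDB.bak'
        WITH COMPRESSION, CHECKSUM;

    -- on the target server, remap the files to its drive layout
    RESTORE DATABASE SalesDB
        FROM DISK = N'\\backupshare\sql\SalesDB.bak'
        WITH MOVE N'SalesDB'     TO N'E:\SQLData\SalesDB.mdf',
             MOVE N'SalesDB_log' TO N'F:\SQLLogs\SalesDB_log.ldf',
             RECOVERY;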


Backup Management Is More Than Taking Backups

Kenneth Fisher makes a great point regarding backups:

I’ve said before that backups are at once one of the easiest things DBAs do, one of the most important, and one of the most complicated. Take a full backup, restore it. Pretty simple, right? And yet it’s vital when accident or corruption requires recovering data. And as simple as it can be on the surface, the more you dig, the more there is to know, and the more complicated it can become. Well, one of those complications is the backup of the backup files. I mean, assuming you are using native backups, that full backup is sitting on a drive somewhere, and hopefully that drive gets backed up, right?

Why? Well, for performance purposes you probably back up your databases locally. To a drive attached to the server. Now you may not, heck you could be backing up to Azure, but for the sake of this argument let’s say you are. Part of a careful disaster recovery plan is making sure you have access to those backups. I’ve heard stories of entire data centers going underwater (literally). You need to at least have a copy of your backups in a separate system, separate location from production.

The proliferation of S3/Blob Storage for “warm” backups and Glacier/Cool Blob Storage for “cold” backups has made it much cheaper to retain longer-term backups.
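
As one example of getting a copy off the box entirely, SQL Server’s backup-to-URL feature can write directly to Azure Blob Storage; the storage account, container, and SAS token below are placeholders:

    -- with a SAS token (SQL Server 2016+), the credential name must match the container URL
    CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqlbackups]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
             SECRET = '<SAS token, without the leading ?>';

    BACKUP DATABASE SalesDB
        TO URL = N'https://mystorageaccount.blob.core.windows.net/sqlbackups/SalesDB.bak'
        WITH COMPRESSION, CHECKSUM;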
