Category: Backups

Why Your Transaction Log Is Full: LOG_BACKUP

Jen McCown explains why you might get the error message “The transaction log for database ‘<your database>’ is full due to ‘LOG_BACKUP’”:

Your transaction log is full. Both Microsoft and about 100 articles and blogs have covered this topic, but let’s take a quick look anyway. Because, you know, it comes up all the time.

Summary:

  1. This error message points to a lack of log backups.

  2. Confirm the cause using sys.databases (check the log_reuse_wait_desc column).

  3. Start backing up the log.

  4. You can shrink the log if necessary.

  5. A note on the SIMPLE recovery model, and why it’s often a terrible idea.

This is a good summary of the problem and various solutions.
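
To make steps 2 through 4 concrete, here is a minimal sketch; the database name, log file name, and backup path are all hypothetical:

    -- Step 2: confirm the log is waiting on a log backup
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = N'YourDatabase';   -- LOG_BACKUP means the log needs backing up

    -- Step 3: start backing up the log (ideally on a schedule)
    BACKUP LOG YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_log.trn';

    -- Step 4: if the log file has ballooned, shrink it back to a sensible size
    USE YourDatabase;
    DBCC SHRINKFILE (N'YourDatabase_log', 1024);   -- target size in MB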


User-Defined Restore Points In Azure SQL DW

Kevin Ngo announces a new feature in Azure SQL Data Warehouse:

Previously, SQL DW supported only automated snapshots guaranteeing an eight-hour recovery point objective (RPO). While this snapshot policy provided high levels of protection, customers asked for more control over restore points to enable more efficient data warehouse management, leading to quicker recovery in the event of any workload interruptions or user errors.

Now, with user-defined restore points, in addition to the automated snapshots, you can initiate snapshots before and after significant operations on your data warehouse. With more granular restore points, you can ensure that each restore point is logically consistent, limiting the impact and reducing the recovery time of restoring the data warehouse should that be needed. User-defined restore points can also be labeled so they are easy to identify afterwards.

Creating a user-defined restore point is a one-liner in PowerShell, and it’s something you could do after each warehouse load, for example.
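
For reference, that one-liner looks something like the following; the cmdlet comes from the current Az.Sql module (older scripts would use the AzureRm equivalent), and the resource names here are hypothetical:

    # Take a labeled, user-defined restore point on a SQL DW database
    New-AzSqlDatabaseRestorePoint -ResourceGroupName "MyRG" `
        -ServerName "mydwserver" -DatabaseName "MyDW" `
        -RestorePointLabel "before-nightly-load"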


Restoring Point-In-Time To Another Azure SQL Managed Instance

Jovan Popovic announces an improvement to Azure SQL Database Managed Instances:

Azure SQL Database Managed Instance enables you to create a database as a copy of another database at some point in time in the past. This is known as the point-in-time restore feature, and until now you could perform point-in-time restore only within the same instance.

The latest release of Azure SQL Database Managed Instance enables you to perform point-in-time restore of a database from one instance to another. This might be useful if you need to be sure that you could easily restore a database to another instance if there is some issue on the original instance, or if you need a database for testing or auditing purposes on a test instance and you want to use a copy of an existing database from another server.

Click through for the current requirements and limitations, as well as a sample.
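
To give a feel for the shape of the command before you click through, here is a hedged sketch using the Az.Sql PowerShell module; the cmdlet and parameter names reflect the current Az module rather than whatever the original sample uses, and all resource names and the timestamp are hypothetical:

    # Restore a point-in-time copy from one Managed Instance to another
    Restore-AzSqlInstanceDatabase -FromPointInTimeBackup `
        -ResourceGroupName "SourceRG" `
        -InstanceName "source-managed-instance" `
        -Name "WorkloadDB" `
        -PointInTime (Get-Date "2018-06-27T08:51:39Z") `
        -TargetInstanceDatabaseName "WorkloadDB_copy" `
        -TargetInstanceName "target-managed-instance" `
        -TargetResourceGroupName "TargetRG"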


Issues With TDE And Backup Compression

Ned Otter describes the troubled history of the union of Transparent Data Encryption and backup compression:

The history of TDE and backup compression is that until SQL 2016, they were great features that didn’t play well together – if TDE was in play, backup compression didn’t work well, or at all.

However, with the release of SQL 2016, Microsoft aimed to have these two awesome features get along better (the blog post announcing this feature interoperability is here). Then there was this “you need to patch” post, due to edge cases that might cause your backup to not be restored. So if you haven’t patched in a while, now would be a good time to do so, because Microsoft says those issues have been resolved (although that seems to be disputed here).

My sympathies definitely lie toward backup compression over TDE if forced to choose between the two.
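
If you do combine the two on a patched SQL Server 2016 or later instance, one documented wrinkle is worth showing: for TDE-enabled databases, backup compression only engages when MAXTRANSFERSIZE is set above 65536. A minimal sketch, with the database name and path as placeholders:

    -- Compressed backup of a TDE-protected database (SQL Server 2016+)
    -- Without MAXTRANSFERSIZE > 65536, a TDE database's backup won't compress
    BACKUP DATABASE YourTdeDatabase
    TO DISK = N'D:\Backups\YourTdeDatabase.bak'
    WITH COMPRESSION, MAXTRANSFERSIZE = 1048576;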


Reducing Backup Sizes

Ned Otter gives us some options for reducing the size of full backups:

SPARSE columns

If a column contains mostly NULLs, then depending on the data type, you can achieve space savings by using the SPARSE property (documentation here). SPARSE columns can be used with filtered indexes to theoretically reduce storage space and increase query performance. But there are a boatload of gotchas, such as issues with query plan caching (filtered indexes), and the fact that if you use SPARSE columns, neither the table nor its indexes can have any form of compression (the documentation is clear about not supporting table compression, but does not mention index compression being an issue – but it is).

As the documentation clearly states, when converting a column from non-sparse to sparse, the following steps are taken:

  1. Adds a new column to the table in the new storage size and format
  2. For each row in the table, updates and copies the value stored in the old column to the new column
  3. Removes the old column from the table schema
  4. Rebuilds the table (if there is no clustered index) or rebuilds the clustered index to reclaim space used by the old column

For large tables with even a few columns that you wanted to convert to SPARSE, this process would take forever, because you must do this for each column you want to convert.

I don’t like sparse columns at all, but I do like the rest of Ned’s options.
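
For the curious, the sparse-column-plus-filtered-index pattern Ned describes looks like this in miniature; the table and column names are made up:

    -- A mostly-NULL column declared SPARSE: NULL values consume no storage
    CREATE TABLE dbo.Widgets
    (
        WidgetID   int IDENTITY(1,1) PRIMARY KEY,
        Name       nvarchar(100) NOT NULL,
        LegacyCode varchar(20) SPARSE NULL
    );

    -- A filtered index covering only the rows that actually have a value
    CREATE NONCLUSTERED INDEX IX_Widgets_LegacyCode
        ON dbo.Widgets (LegacyCode)
        WHERE LegacyCode IS NOT NULL;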


Test Your Restore Process

Clive Strong tells a tale about a mental flub while restoring a backup:

Our automated restore process works really nicely. We take full backups on Saturday and differential backups through the week. We also take log backups through the day, but we were not going to be restoring those for this task. We have a number of internal platforms we restore to in full (or in part, following a cut-down process), which gives us good validation of our backup files on a regular basis. We also have regular test restores from tape, just for good measure.

However, a while ago I was asked to build a new server and restore the databases up to a specific date. We didn’t need a point-in-time restore, just one to a specific day, so I pulled the full and differentials and wrote the script to do the restore for me. The script restored the full backup and the differential backups for Sunday, Monday, Tuesday, Wednesday, and Thursday. I gave it the once-over and executed the script. A while later, I came back and found it was, unexpectedly, still running. I eventually left the office, and noted that it had finished in the early hours, having run for many hours longer than I had anticipated.

Read on for Clive’s more detailed explanation of the whoopsie moment.
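
As a refresher on the mechanics involved: differential backups are cumulative, so restoring to a given day needs only the full backup plus that day’s differential, not every differential in between. A minimal sketch, with hypothetical file names:

    -- Restore the Saturday full, leaving the database able to accept more restores
    RESTORE DATABASE YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_full_Saturday.bak'
    WITH NORECOVERY;

    -- Differentials are cumulative, so only the target day's differential is needed
    RESTORE DATABASE YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_diff_Thursday.bak'
    WITH RECOVERY;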


Restoration And That CHECKDB Message

Mike Fal investigates an interesting message in the SQL Server error log after a database restoration:

Recently I was doing some work with a friend around some database restores. It was pretty routine stuff. However, after one restore my friend came across something in the SQL Error Log that caught him by surprise. As part of the restore, there was a CHECKDB message for the restored database.

My friend’s first reaction was “why is SQL Server doing a DBCC CHECKDB as part of the restore?” He was concerned, because CHECKDB is a pretty hefty operation and this could really impact the restore time if he had to wait on a CHECKDB to complete. But the other confusing thing was that the date for the CHECKDB didn’t match up with the restore timing.

Click through to learn the answer.


Database Backups And GDPR

Grant Fritchey digs into one of the more contentious areas of GDPR:

Nothing within Article 17 talks about backups, offsite storage, readable secondaries, log shipping, or any of that stuff. In fact, there’s nothing technical there at all. No help to tell you what to do about this question.

Now, each article has expansions, called recitals, that further detail the information within the article. In the case of the right to be forgotten, there are two: Recital 65 and Recital 66. Recital 65 has nothing for us, at all. Recital 66 does talk about the fact that, because we’re dealing in an online world, the best available technical means should be used to deal with the fact that a person’s data may be in more than one location and we’ll need to clean that up.

And that’s it.

In fact, you can search the GDPR and not find the word “backup”.

Read on for Grant’s thoughts, including what he argues is a defensible position (though we won’t know for sure until the bureaucracy runs its course).


Encrypt Those Database Backups

David Fowler shows how easy it is to encrypt database backups in SQL Server:

We all go to great lengths to make sure that our databases are secure (or at least I really hope that we do): we make sure that only authorised users have access, and then only to the databases and levels that they need. We’re careful that all our passwords conform to crazy complexity rules. We’re a pretty security-minded lot, aren’t we?

But what about our backups? Most of the time they’re just sitting on either a local disk on the server itself or some share somewhere. Wherever they are, what happens if someone manages to get hold of our backup files? They can easily restore them onto a server where they have permissions and, voila, all our super secret data is now theirs.

They could even just open the files with a hex editor and read the data; it’s really not a difficult thing to do.

By default, SQL Server makes absolutely no effort to encrypt or in any way protect our backups.  That’s pretty bad in my mind.

If you’re on Standard Edition, backup encryption became available with SQL Server 2014. For Enterprise Edition, it was available earlier. At this point, I don’t think there are good reasons for not encrypting backups on production systems.
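
As a quick sketch of how little it takes, the following sets up a certificate in master and takes an encrypted backup; the certificate name, password, database name, and path are all placeholders, and in real life you would also back up the certificate and its private key somewhere safe:

    -- One-time setup: a master key and a backup certificate in master
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<use a strong password>';
    CREATE CERTIFICATE BackupCert
        WITH SUBJECT = N'Backup encryption certificate';

    -- Encrypted backup of the database
    BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase.bak'
    WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);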
