Press "Enter" to skip to content

Category: Backups

Smart Transaction Log Backups

Tracy Boggiano has started a new series on smart transaction log backups. Part one covers taking smarter log backups in SQL Server 2017:

SQL Server 2017 introduced two fields to help with taking smart backups. One was for taking smarter log backups: for this, we have the DMV sys.dm_db_log_stats, which has two fields, log_backup_time and log_since_last_backup_mb. With the combination of these two fields, we can put some logic into the jobs I use for Ola’s scripts, which use my config tables from my GitHub repository. To support this change, we will be adding three new fields to the DatabaseBackupConfig table:

  • SmartBackup

  • LogBackupTimeThresholdMin

  • LogBackupSizeThresholdMB

Click through for scripts.
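As a rough illustration of the kind of check this enables (not Tracy's actual config-table logic), here is a minimal T-SQL sketch against sys.dm_db_log_stats. The thresholds and database name are placeholders, and note that the size column appears in the current documentation as log_since_last_log_backup_mb:

-- Sketch: decide whether a log backup is warranted, based on elapsed time
-- and on log generated since the last log backup. Threshold values and the
-- database name are illustrative only.
DECLARE @LogBackupTimeThresholdMin int = 30;
DECLARE @LogBackupSizeThresholdMB  int = 512;

SELECT DB_NAME(ls.database_id) AS database_name,
       ls.log_backup_time,
       ls.log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'YourDatabase')) AS ls
WHERE DATEDIFF(MINUTE, ls.log_backup_time, SYSDATETIME()) >= @LogBackupTimeThresholdMin
   OR ls.log_since_last_log_backup_mb >= @LogBackupSizeThresholdMB;

-- If this returns a row, the job would go on to run the log backup
-- (for example, Ola's DatabaseBackup procedure with @BackupType = 'LOG').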


Backups And Distributed File Shares

Wayne Sheffield ran into a new oddity recently:

I was working on a client’s site today, setting up database backup routines. Part of which is to perform a database backup, and verify that everything went okay. I had Windows Explorer open to the location that the backup was going to. When the backup finished, I navigated over to Windows Explorer… and I have a missing database backup. There wasn’t a file in the directory for the backup that I had just performed.

After double and triple checking that I was looking at the same path that I had backed up the database to, I went in search of the network sysadmin to help me figure it out.

Read on for the solution.


Additional Restore-DbaDatabase Functionality

Stuart Moore shows off a few examples of advanced Restore-DbaDatabase usage:

No matter how hard the dbatools team tries, there’s always someone who wants to do things we’d never thought of. This is one of the great things about getting feedback directly from a great community. Unfortunately, a lot of these ideas are either too niche to implement or would require a lot of complex code for a single use case.

As part of the Restore-DbaDatabase stack rewrite, I wanted to make it easier for users to get their hands dirty within the Restore stack: not necessarily by diving into the core code and the world of GitHub Pull Requests, but by manipulating the data flowing through the pipeline using standard PowerShell techniques, all the while letting the stack do the heavy lifting without custom code.

Click through for several examples.


Backups Are Faster With SQL Server 2017

Parikshit Savjani explains how the SQL Server team was able to use indirect checkpoints to improve backup performance:

In an RDBMS, whenever tables get larger, one of the techniques to tune and optimize scans on those tables is to partition them. With indirect checkpoints, we do the same.

With indirect checkpoints, for every database which has target_recovery_time set, a dirty page manager and dirty page list are created. The dirty page list is further partitioned by scheduler, allowing the dirty page tracking to scale further. This decouples the dirty page scan for a given database from the size of the buffer pool and allows the scan to scale and be much faster than the automatic checkpoint algorithm.

As Bob Dorr mentions in his blog here, a new database creation process in SQL Server 2016 requires a scan of only 250 buffers, as opposed to 500 million buffers with the former algorithm. This is the rationale for making indirect checkpoint the default: it is a much more scalable algorithm for tracking dirty pages in the buffer pool compared to automatic checkpoints.

Read on to see how this technology led to faster backups.
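Indirect checkpoints are controlled by the per-database TARGET_RECOVERY_TIME setting. As a quick sketch (the database name is a placeholder), you can see which databases use them and enable them like so:

-- 0 means the database still uses automatic checkpoints; a nonzero value is
-- the indirect checkpoint recovery target, in seconds.
SELECT name, target_recovery_time_in_seconds
FROM sys.databases;

-- Enable indirect checkpoints; 60 seconds matches the default for databases
-- created on SQL Server 2016 and later.
ALTER DATABASE [YourDatabase] SET TARGET_RECOVERY_TIME = 60 SECONDS;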


Using The Restore-DbaDatabase Pipeline

Stuart Moore describes the updated Restore-DbaDatabase cmdlet:

The biggest change is that Restore-DbaDatabase is now a wrapper around 5 public functions. The 5 functions are:

  • Get-DbaBackupInformation
  • Select-DbaBackupInformation
  • Format-DbaBackupInformation
  • Test-DbaBackupInformation
  • Invoke-DbaAdvancedRestore

These can be used individually for advanced restore scenarios; I’ll go through some examples in a later post.

Stuart then provides additional information on each of the steps, explaining at a high level how things work.


Backup-Related Instance Settings

Monica Rathbun explains a few instance-level backup properties:

Default backup media retention in days. Now the first thing that comes to my mind is “hey, this is a cleanup job” SCORE! Thinking that maybe this will auto-delete old backups. After all, isn’t that what retention means? NOPE, not in this case.

In this case it’s just the number of days before a backup medium can be OVERWRITTEN. If the DBA goes to overwrite the media before those days have passed, it will give a warning message. You’ll note that in every backup action you do, the RETAINDAYS option is filled in. In this case it will always reflect 90 now that we have changed it. In general, this is a pointless option to me. I don’t normally OVERWRITE backup media. To me this was more relevant when tapes were used and disks were harder to come by, so I leave it alone.

Read on for more settings.
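For context, the setting shows up in two places: the instance-level default via sp_configure and the per-backup RETAINDAYS option. A minimal sketch, with the 90-day value, path, and database name as placeholders:

-- Instance-level default ('media retention', in days). This only guards
-- against overwriting a backup set too early; it does not delete old files.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'media retention', 90;
RECONFIGURE;

-- Per-backup equivalent: RETAINDAYS warns if you try to overwrite this set
-- before 90 days have passed.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.bak'
WITH RETAINDAYS = 90;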


Restoration With Replacement

Joey D’Antoni tests whether RESTORE WITH REPLACE is functionally different from dropping a database and performing a restoration:

I recently read something that said using the RESTORE WITH REPLACE command could be faster than dropping a database and then performing a RESTORE, because the shell of the file could be used and therefore skip file initialization. I did not think that was the case, but Books Online wasn’t clear about the situation, so I went ahead and built a quick test case, using ProcMon from Sysinternals. If you aren’t familiar with the Sysinternals tools, you should be; they are a good way to get under the hood of your Windows Server to see what’s going on, and if you’re old like me, you probably used PSEXEC to “telnet” into a Windows server to restart a service before RDP was a thing.

Read on to see how the processes compare.
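The two approaches being compared look roughly like this in T-SQL (database name and path are placeholders):

-- Option 1: drop the database, then restore it; the data and log files are
-- recreated from scratch.
DROP DATABASE [YourDatabase];
RESTORE DATABASE [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH RECOVERY;

-- Option 2: restore over the existing database in a single statement; WITH
-- REPLACE overwrites the existing database and its files.
RESTORE DATABASE [YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH REPLACE, RECOVERY;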


Backup Compression And Encryption

Arun Sirpal shows the combined effects of backup encryption and backup compression in SQL Server 2017:

Do not forget about the certificate! You will see this warning: “The certificate used for encrypting the database encryption key has not been backed up.” Imagine if you need to recover the backup and you can’t: you will get the dreaded thumbprint error.

Msg 33111, Level 16, State 3, Line 25
Cannot find server certificate with thumbprint '0x78FAB5A2A5D593FD3C4E163C90B745F70AB51233'.
Msg 3013, Level 16, State 1, Line 25
RESTORE DATABASE is terminating abnormally.

So make sure you respect this certificate (and the key): back them up and re-create them on the target server for a successful restore.

In SQL Server 2016 and 2017, there’s no reason not to encrypt backups; the marginal cost is practically nil even if you’re low enough on disk space that you need to do backup compression.
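As a minimal sketch of the moving parts Arun describes (certificate name, passwords, and paths are placeholders), the certificate backup is the step that saves you from the thumbprint error above:

-- One-time setup in master: a database master key and a certificate to
-- encrypt backups with.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'<strong password here>';
CREATE CERTIFICATE BackupCert WITH SUBJECT = N'Backup encryption certificate';

-- Back up the certificate and its private key immediately; without them, the
-- restore on another server fails with the Msg 33111 thumbprint error.
BACKUP CERTIFICATE BackupCert
TO FILE = N'X:\Keys\BackupCert.cer'
WITH PRIVATE KEY (FILE = N'X:\Keys\BackupCert.pvk',
                  ENCRYPTION BY PASSWORD = N'<strong password here>');

-- A backup that is both compressed and encrypted.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);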


Bugs With Backup Compression And TDE

Parikshit Savjani provides recommendations on combining backup compression with Transparent Data Encryption:

In past months, we discovered some edge scenarios related to backup compression for TDE databases causing backups or restores to fail; hence, our recommendations have been:

  • Avoid using striped backups with TDE and backup compression.

  • If your database has virtual log files (VLFs) larger than 4GB then do not use backup compression with TDE for your log backups. If you don’t know what a VLF is, start here.

  • Avoid using WITH INIT for now when working with TDE and backup compression. Instead, use WITH FORMAT.

  • Avoid using backup checksum with TDE and backup compression.
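Put together, a log backup that follows those recommendations might look like this sketch (database name and path are placeholders): a single, non-striped target file, WITH FORMAT instead of WITH INIT, and no CHECKSUM:

BACKUP LOG [YourTdeDatabase]
TO DISK = N'X:\Backups\YourTdeDatabase.trn'
WITH FORMAT, COMPRESSION;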

Brent Ozar explains the risk:

When you install a new version of SQL Server, you get new features – and sometimes, you’re not told about them. For example, when 2016’s TDE compression came out, nobody told you, “If you back up across multiple files, your backups might suddenly be compressed.” You didn’t know that you had a new thing to test – after all, I don’t know a lot of DBAs who have the time to test that the new version of SQL Server successfully performs restores. They restore their production databases into the new version, test a few things, and declare victory – but testing restores FROM the new version’s backups isn’t usually on that list.

Keep up to date on those patches.


Testing Backups With dbatools

Constantine Kokkinos shows off a dbatools cmdlet to test the last full backup:

This:

  • Defines a list of two servers (PowerShell lists are as easy as “”,””)

  • Pipes them to the Test-DbaLastBackup command.

  • Which then:

    • Gathers information about the last full backups for all of your databases on that instance.

    • Restores the backups to the Destination with a new name. If no Destination is specified, the originating server will be used.

    • The database is restored as “dbatools-testrestore-$databaseName” by default, which you can change with the -Prefix parameter.

    • The internal file names are also renamed to prevent conflicts with the original database.

    • A DBCC CHECKTABLE is then performed.

    • And the test database is finally dropped.

Pretty snazzy.
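For a sense of what the cmdlet is automating per database, here is a rough T-SQL equivalent of the steps above. Logical file names, paths, and the single-file layout are simplified placeholders; the cmdlet handles the renaming, striped backups, and cleanup for you:

-- Restore the last full backup under a throwaway name, relocating the files
-- so they do not collide with the original database's files.
RESTORE DATABASE [dbatools-testrestore-YourDatabase]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH MOVE N'YourDatabase'     TO N'X:\Data\dbatools-testrestore-YourDatabase.mdf',
     MOVE N'YourDatabase_log' TO N'X:\Data\dbatools-testrestore-YourDatabase_log.ldf',
     RECOVERY;

-- Integrity check (shown here as a full CHECKDB for brevity).
DBCC CHECKDB (N'dbatools-testrestore-YourDatabase') WITH NO_INFOMSGS;

-- Finally, drop the throwaway copy.
DROP DATABASE [dbatools-testrestore-YourDatabase];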
