Press "Enter" to skip to content

Category: Backups

Automatic Backups on a Data Lake or Lakehouse

Dave Ruijter backs that thing up:

Out of the box, Azure Data Lake Storage Gen2 provides redundant storage. Therefore, the data in your Data Lake(house) is resilient to transient hardware failures within a datacenter through automated replicas. This ensures durability and high availability. In this blog post, I provide a backup strategy to further protect your data from accidental deletions, data corruption, or any other data failures. This strategy works for Data Lake as well as Data Lakehouse implementations. It uses native Azure services; no additional tools, software, or licenses are required.

Read on for a detailed strategy.


Performing a Restore to SQL Managed Instance

Arun Sirpal shows us how to perform a backup and restoration from an on-premises SQL Server to Azure SQL Managed Instance:

So in the last blog we confirmed via some analysis that we could move to SQL MI; now it is time to actually do a backup and restore via URL to move the data.

Quite simply, you need to BACKUP TO URL (an Azure Storage container), and the setup requirement is that you create a SQL credential that holds the SAS token – this is what allows authentication to the container to take place.

Click through for the process.
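If you just want the shape of it, a minimal sketch looks something like the following (storage account, container, and SAS token are placeholders, not Arun's actual setup):

-- The credential name must match the container URL; the SAS token goes in as the
-- secret (drop the leading '?' from the token you copy out of the portal).
CREATE CREDENTIAL [https://yourstorageaccount.blob.core.windows.net/yourcontainer]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
    SECRET = '<your SAS token, without the leading ?>';

-- Back up straight to the container. COPY_ONLY keeps the regular backup chain intact
-- while you test; CHECKSUM and STATS give you validation and progress output.
BACKUP DATABASE [YourDatabase]
TO URL = 'https://yourstorageaccount.blob.core.windows.net/yourcontainer/YourDatabase_full.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM, STATS = 10;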


The Reason for Tail Log Backups

Chad Callihan explains why we need tail log backups:

When you are migrating a database from one server to another, how can you be sure to back up all transactions? Sure, you can notify the client and let them know “there will be a short outage at 8AM so please stay out of the application at that time.” Can you really trust that? Of course not. Let’s demonstrate the steps needed to include all transactions with the tail-log backup.

Protip: if you build your application such that nobody wants to use it, you can migrate the database much more easily. Assuming you don’t want to follow that outstanding advice, Chad has you covered.
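As a rough sketch of the migration scenario (database name and paths are made up), the key is taking the tail-log backup WITH NORECOVERY so the source database is left in a restoring state and can't accept any more transactions:

-- On the old server: capture everything since the last log backup and leave the
-- database in RESTORING state so no new transactions sneak in behind you.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_tail.trn'
WITH NORECOVERY, INIT;

-- On the new server: apply the tail after the full/diff/log chain, then recover.
RESTORE LOG [YourDatabase]
FROM DISK = N'D:\Backups\YourDatabase_tail.trn'
WITH RECOVERY;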


Troubleshooting a Slow Restore

Sean Gallardy performs corporate dentistry:

This came with very little to no data available, and to be quite honest, saying “slow restore” doesn’t really mean much. The initial analysis needs to be an actual set of concrete data that describes the issue, what is normal, and what outliers, if any, exist. Since we have none, we can’t even start to analyze anything, so we need to clarify the problem statement and understand a little more about the issue.

This is an interesting dive into the problem and a good example of how to work with “We won’t let you see/do that” as a consultant. Incidentally, if you haven’t heard of WPR (Windows Performance Recorder), it comes with the Windows Performance Toolkit.
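If you need to put concrete numbers on “slow” while a restore is still running, a query along these lines (my sketch, not something from Sean's post) pulls progress straight from the engine:

-- Progress of any in-flight backup or restore, as reported by the engine.
SELECT r.session_id,
       r.command,
       r.percent_complete,
       r.estimated_completion_time / 60000.0 AS est_minutes_remaining
FROM sys.dm_exec_requests AS r
WHERE r.command IN (N'RESTORE DATABASE', N'BACKUP DATABASE', N'RESTORE LOG', N'BACKUP LOG');

Run it a few times and you at least have a baseline to compare against, which is exactly the kind of concrete data the post asks for.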


Planning a Restore Strategy

Jonathan Kehayias inverts the paradigm:

Do you know how long it takes to RESTORE each database from a FULL backup? Does your run book include critical configuration options like Instant File Initialization or backup compression that can impact how long it takes to restore a database? Does the database have an excessively large transaction log file that will require zero initialization upon creation if the existing database files are not there? Does the database use Transparent Data Encryption (TDE), which will prevent it from being able to instant initialize the size of the data files for the database?

Until you test your RESTORE time for a FULL backup, you don’t know whether you can meet your RTO using backups, and you may need one of the previously mentioned HA/DR features in SQL Server. However, in an absolute worst-case scenario where recovery can only occur from backups, it is still important to know how long each step of the process is going to take so that you can set expectations appropriately. Your minimum recovery time is going to be at least the time required to restore the most recent FULL backup.

Click through for a lot of great advice in this vein.
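If you've never timed that first step, even a rough test like this gives you a number to hold against your RTO (names, paths, and logical file names here are placeholders; check the real logical names with RESTORE FILELISTONLY first):

-- Restore a copy of the most recent full backup under a different name and note
-- the elapsed time in the messages output (STATS prints progress as it goes).
RESTORE DATABASE [YourDatabase_RestoreTest]
FROM DISK = N'D:\Backups\YourDatabase_full.bak'
WITH MOVE N'YourDatabase'     TO N'D:\Data\YourDatabase_RestoreTest.mdf',
     MOVE N'YourDatabase_log' TO N'L:\Log\YourDatabase_RestoreTest.ldf',
     STATS = 5;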


TDE and Backup Compression

Andy Levy learns the truth:

For years, I thought that native backups of databases using Transparent Data Encryption (TDE) couldn’t be compressed. Between TDE being limited to Enterprise Edition until SQL Server 2019 and my own lack of experience with TDE in prior positions, I hadn’t really experimented with this myself. Some people have even gone so far as to skip compression in their backup jobs for TDE-enabled databases because there’s no need to burn those CPU cycles if you won’t get any compression, right?

But a curious thing happened after I upgraded a portion of my environment to SQL Server 2019 in late 2020. I observed that scheduled backups were compressing for some of my TDE-enabled databases, most notably the newer instances. And when I took ad hoc backups in any environment, they were compressed. So why wasn’t it working everywhere?

Read on for the explanation, though one correction: MAXTRANSFERSIZE is 1MB by default only when the database is not encrypted using TDE (and you aren’t backing up to a tape drive). If the database is encrypted using TDE, the default max transfer size is 64KB, and I think that’s what got Andy.
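In practice, the commonly cited workaround on builds where that 64KB default gets in the way is to set MAXTRANSFERSIZE explicitly. A sketch, with a made-up database name:

-- With TDE, a 64KB transfer size blocks backup compression; asking for anything
-- larger (here 1MB) lets the compression kick in.
BACKUP DATABASE [YourTdeDatabase]
TO DISK = N'D:\Backups\YourTdeDatabase_full.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 1048576, CHECKSUM, STATS = 10;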
