Press "Enter" to skip to content

Category: Backups

Estimating SQL Server Backup Sizes

Rebecca Lewis gives us a rule of thumb:

How big will my backup file be?  Very good question.  And of course, it depends… especially if you’re using compression.

I was working on something today that required a backup beforehand for safety measures, and the server had limited storage. So, I needed to estimate the backup file size to confirm whether I had enough space for the bak file.

Click through for the script. It does, of course, include some simplifications and your actual numbers can turn out a bit different, but for a quick and dirty estimate of disk space needed, this isn’t bad at all.
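
The post's script isn't reproduced here, but the underlying idea is roughly this: an uncompressed full backup comes out close to the used pages in the data files (plus a bit of log), and if you compress, recent backup history gives you a ratio to apply. A minimal sketch of that idea, not Rebecca's script, using hypothetical assumptions about the database you run it in:

-- Approximate size of an uncompressed full backup: sum of used pages
-- in the current database (plus a little transaction log on top).
SELECT SUM(used_pages) * 8 / 1024.0 AS estimated_full_backup_mb
FROM sys.allocation_units;

-- If you back up with compression, recent history in msdb suggests a ratio.
SELECT TOP (5)
       database_name,
       backup_size / 1048576.0            AS backup_mb,
       compressed_backup_size / 1048576.0 AS compressed_mb
FROM msdb.dbo.backupset
WHERE database_name = DB_NAME()
  AND type = 'D'   -- full backups only
ORDER BY backup_finish_date DESC;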

Backups Aren’t Enough

Kevin Hill lays out a common but very important argument:

Many IT leaders and system admins think, “We have full backups every night. We’re covered.” But when the time comes to restore, they discover:

· The backup file is corrupt.
· The storage location is inaccessible.
· The restore process takes way longer than expected.
· The recovery model wasn’t configured properly.
· The point-in-time restore doesn’t actually bring back the data they need.

At that point, it’s not a “backup strategy.” It’s a data loss incident.

The solution is to test those backups, and Kevin provides some guidance on how, as well as additional important parts of the story.
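
Kevin covers the how in his post; as a bare-minimum illustration of what "test your backups" means in practice, the pattern is to verify the media, restore it somewhere disposable, and check the result. A sketch with hypothetical database, logical file, and path names:

-- Quick sanity check: the backup media is readable and complete.
-- (The CHECKSUM option here assumes the backup was taken WITH CHECKSUM.)
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\Sales_FULL.bak'
WITH CHECKSUM;

-- The real test: restore to a throwaway name and run corruption checks.
RESTORE DATABASE Sales_RestoreTest
FROM DISK = N'X:\Backups\Sales_FULL.bak'
WITH MOVE N'Sales'     TO N'X:\RestoreTest\Sales_RestoreTest.mdf',
     MOVE N'Sales_log' TO N'X:\RestoreTest\Sales_RestoreTest_log.ldf',
     RECOVERY;

DBCC CHECKDB (Sales_RestoreTest) WITH NO_INFOMSGS, ALL_ERRORMSGS;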

Backup Storage Billing for Fabric SQL Databases

Amar Digamber Patil makes an announcement:

While compute and data storage are already included in the Fabric capacity-based billing model, after April 1, 2025, backup storage will also be billed. However, customers will only be charged for backup storage that exceeds the allocated database size.

Click through to see what’s changing and how to get ahead of this. I’m not sure there are any ways to reduce that backup price, short of managing the data in your database and not having enormous amounts of transaction log activity.

Saving an Azure Database for PostgreSQL Backup to a Storage Account

Josephine Bush wants an extra copy of the backup:

This may or may not be helpful in the long term, but since I’m doing it to be super cautious, I figured I would blog about it. We migrated to Flex last week, and to be abundantly cautious, we’re putting the last single server backup into cold storage. You could also use this same process to offload Flex if you were going to delete a server and want to save a final backup or have some use case for saving backups to storage longer term.

Read on for the process. It's not as simple as running a command or two, but Josephine does take us through it.

Seeding an Availability Group via T-SQL Snapshot Backup

Anthony Nocentino jump-starts an availability group:

In this post, the fifth in our series, I want to illustrate an example of using the T-SQL Snapshot Backup feature in SQL Server 2022 to seed Availability Groups (AGs) with storage-based snapshots. Efficiently seeding an Availability Group is essential for maintaining high availability and ensuring effective disaster recovery. With the introduction of T-SQL Snapshot Backup in SQL Server 2022, snapshots can now be created at the storage layer. This advancement significantly speeds up the initialization of secondary replicas, particularly in environments that handle large databases.

This post will walk through a PowerShell script that effectively seeds an AG using T-SQL Snapshot Backup, dbatools, and Pure Storage FlashArray.

Click through for the script and the process.
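
Anthony's walkthrough is PowerShell with dbatools against a Pure Storage FlashArray; stripped down to just the T-SQL pieces, the flow looks roughly like the sketch below, with hypothetical database, path, and AG names, and with the array snapshot itself happening outside SQL Server:

-- Primary replica: freeze write I/O for the database.
ALTER DATABASE Sales SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

-- (Take the storage snapshot of the database's volumes here, on the array.)

-- Record the metadata-only backup, which also releases the I/O freeze.
BACKUP DATABASE Sales
TO DISK = N'X:\Backups\Sales_snapshot.bkm'
WITH METADATA_ONLY, FORMAT;

-- Secondary replica: once the snapshot volumes are cloned and mounted,
-- restore the metadata backup WITH NORECOVERY and join the database to the AG.
RESTORE DATABASE Sales
FROM DISK = N'X:\Backups\Sales_snapshot.bkm'
WITH METADATA_ONLY, NORECOVERY;

ALTER DATABASE Sales SET HADR AVAILABILITY GROUP = SalesAG;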

Point in Time Database Restoration in SQL Server

Andy Brownsword rolls back the clock:

A few weeks ago I demonstrated the simplicity of performing point-in-time database restores in Azure Managed Instance. Whilst that has a lovely front-end, it can be just as easy with a proc call on your traditional SQL instance.

Let’s see how to use two open source tools to achieve this simplicity.

Click through for Andy’s solution. I would also make mention of Minion Backup, which does a great job with the database backup portion and handles about 90% of the database restoration portion well.
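
Andy's route goes through a proc call built on open source tooling; for reference, the equivalent with nothing but native commands is a full restore WITH NORECOVERY followed by log restores with STOPAT. A sketch with hypothetical names and an arbitrary target time:

-- Restore the most recent full backup from before the target time.
RESTORE DATABASE Sales
FROM DISK = N'X:\Backups\Sales_FULL.bak'
WITH NORECOVERY, REPLACE;

-- Apply log backups in order, stopping at the moment you want back.
RESTORE LOG Sales
FROM DISK = N'X:\Backups\Sales_LOG_01.trn'
WITH NORECOVERY, STOPAT = N'2025-04-01T10:30:00';

-- The final log restore recovers the database at the target time.
RESTORE LOG Sales
FROM DISK = N'X:\Backups\Sales_LOG_02.trn'
WITH RECOVERY, STOPAT = N'2025-04-01T10:30:00';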

Using MIRROR TO in a Backup Operation

Andy Yun tries out an option:

About a month ago, I was having a conversation with a Pure Storage customer about their SQL Server backups. They were using Ola Hallengren’s scripts but mentioned something about writing the backups to multiple locations at once. I presumed they were executing the backup operation as normal, then using another script of some sort to copy the backup files elsewhere. That’s when I learned that Ola has a parameter called MirrorDirectory, which exposes the MIRROR TO option in a native BACKUP command.

Click through to learn more about this command and the implications. I’d say that this sounds neat but also isn’t something I’d probably recommend.
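
For context, the native syntax writes the same backup to both destinations in a single statement; a sketch with hypothetical paths (MIRROR TO is an Enterprise Edition feature, and a new mirrored media set has to be created WITH FORMAT):

-- One backup operation, two copies of the media set.
BACKUP DATABASE Sales
TO DISK = N'X:\Backups\Sales_FULL.bak'
MIRROR TO DISK = N'\\BackupShare\Sales\Sales_FULL.bak'
WITH FORMAT, COMPRESSION, CHECKSUM;

Part of why it tends to be a hard sell: if either destination is unavailable, the entire backup operation fails.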

Backup to URL via Managed Identity in SQL Server 2022

Joey D’Antoni doesn’t trust user logins:

Backing up databases to the cloud is not a new thing. Microsoft introduced the BACKUP TO URL functionality in SQL Server 2012 SP1 CU2. I’m not going to tell you how long ago that was. Still, it wasn’t last month, and Microsoft recently celebrated the 15th anniversary of Azure, so you can get an idea. When the feature started, it was minimal: you could only back up a database of up to a single terabyte and couldn’t stripe over multiple files. Additionally, you had to use the access key to the storage account, which gave complete control over the storage account; that wasn’t a good thing.

Read on for a quick overview of the feature and guidance on how it all works.
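
The managed identity flavor Joey describes comes down to a credential named after the container URL, which then lets BACKUP TO URL authenticate without storage keys or SAS tokens. A sketch with a hypothetical storage account and container (the instance's managed identity still needs blob-write permissions on that account):

-- The credential name must match the container URL; 'Managed Identity'
-- tells SQL Server 2022 to authenticate as the instance's managed identity.
CREATE CREDENTIAL [https://mystorageacct.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'Managed Identity';

-- Striping across multiple URLs is supported, unlike the feature's early days.
BACKUP DATABASE Sales
TO URL = N'https://mystorageacct.blob.core.windows.net/sqlbackups/Sales_FULL_1.bak',
   URL = N'https://mystorageacct.blob.core.windows.net/sqlbackups/Sales_FULL_2.bak'
WITH COMPRESSION, CHECKSUM, FORMAT;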

Multi-Storage Array Database Snapshots in SQL Server

Anthony Nocentino continues a series:

In this post, the fourth in our series, I want to share an example demonstrating SQL Server 2022’s T-SQL Snapshot Backup feature in a scenario where a database spans multiple storage arrays. If you’re dealing with multi-array environments, you’ll appreciate how this technique freezes database write I/O to take coordinated snapshots across volumes on two FlashArrays. In this post, I’ll walk you through the process, point out some of the script’s key elements, and show you how long the write I/O pause takes.

Click through to learn how it all works when you have a database spanning multiple volumes.

Automating DBCC CHECKDB on Full Backups

Shane O’Neill riffs off of a script:

Like many shops out there, we can’t run corruption checking on our main production database instance. So, then, what do we do? We take the backups and restore them to a test instance, and then run corruption checking on those restored databases.

At least this way we can test that the backups we take can be restored, as well.

But, I don’t want to spend every day manually restoring and corruption checking these databases, so let’s automate this bit…

Click through for a script that Shane built off of a Madeira Data Solutions script that Eitan Blumin put together.
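
The linked scripts do the orchestration; the first moving part is usually a query along these lines against the production instance's msdb to find the newest full backup file per database (a sketch, with the restore and DBCC CHECKDB steps then running on the test instance):

-- Latest full backup file for each database, to feed the restore step.
SELECT bs.database_name,
       bmf.physical_device_name,
       bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bmf.media_set_id = bs.media_set_id
WHERE bs.type = 'D'
  AND bs.backup_finish_date = (SELECT MAX(bs2.backup_finish_date)
                               FROM msdb.dbo.backupset AS bs2
                               WHERE bs2.database_name = bs.database_name
                                 AND bs2.type = 'D');

-- Per database on the test instance: RESTORE ... WITH REPLACE, then
-- DBCC CHECKDB WITH NO_INFOMSGS, ALL_ERRORMSGS, then drop the copy.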
