Category: Backups

Missing Backup Directory When Trying To Upgrade SQL Server

Lori Brown walks us through the solution to an error she experienced:

I was recently performing an in-place upgrade of SQL 2008 R2 to SQL 2014 on one of my client’s servers. I have done a ton of successful and uneventful in-place upgrades and was surprised when the upgrade failed with the error message: “Failed to create a new folder ‘X:\SQLBackups’. The specified path is invalid (for example, it is on an unmapped drive).” Over the years, this client had changed from using a local drive for all backups to sending backups to a network share, so the X: drive no longer existed.
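
For background, the instance’s default backup directory (the value the upgrade tries to validate) lives in the registry, and one common way to repoint it to a valid path, which may or may not be Lori’s fix, is xp_instance_regwrite; the UNC path below is a placeholder.

```sql
-- Repoint the instance's default backup directory to a valid path.
-- This may or may not be Lori's fix; the UNC path is a placeholder.
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
    N'BackupDirectory',
    REG_SZ,
    N'\\FileServer\SQLBackups';
```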

Read on for the solution.

Creating Timelines With dbatools

Marcin Gminski shows how to pull SQL Agent and backup history out of SQL Server and display it as a visual history timeline:

Currently, the output from the following commands is supported:

  • Get-DbaAgentJobHistory
  • Get-DbaBackupHistory

You run the above commands as you normally would but pipe the output to ConvertTo-DbaTimeline, the same way as you would with any other ConvertTo-* PowerShell function. The output is a string that, most of the time, you will save as a file using Out-File in order to open it in a browser.

The result is an HTML page which looks pretty good.
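
For instance, a minimal sketch (the instance name and output path are placeholders): pull a week of Agent job history and render it as an HTML timeline.

```powershell
# A minimal sketch: pull the last week of Agent job history, convert it to
# an HTML timeline, and write it to a file to open in a browser.
# 'SQL01' and the output path are placeholders.
Get-DbaAgentJobHistory -SqlInstance 'SQL01' -StartDate (Get-Date).AddDays(-7) |
    ConvertTo-DbaTimeline |
    Out-File -FilePath 'C:\temp\AgentJobTimeline.html'
```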

Finding The Last Database Restore Time

Lori Brown shows how you can see the last time a particular database was restored:

I have a client that uses a lot of disconnected log shipping on a few servers. They do this because they want a reporting copy of databases that are actually hosted by software vendors. So, disconnected log shipping it is! We pull down files that have been FTP’d to the corporate FTP site, apply those logs locally, and leave the databases in standby so that they can be queried.

(If you want to review how to set up disconnected log shipping complete with FTP download scripts, all SQL jobs and log shipping monitoring, then hop on over to https://www.sqlrx.com/setting-up-disconnected-log-shipping/ to check it out!)

Of course, every so often there is a glitch, and one or all databases can get out of sync pretty easily. When we receive a notification that tlogs have not been restored in a while, the hunt is on to see what happened and get it corrected. Since I have disconnected log shipping set up, I can’t get a log shipping report from SSMS that will tell me the errors and the last log applied. So, I made a query that will give me the most recent file that was restored, along with the file name, date, database, etc. I can pull back all databases or, by uncommenting a line, filter to a single database or several.

Lori also includes a helpful script.
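
For flavor, here is a minimal sketch (not Lori’s script) of the kind of msdb query involved, joining restore history to the backup file that was applied:

```sql
-- A minimal sketch (not Lori's script): most recent restores per database,
-- with the backup file that was applied, from msdb's history tables.
SELECT rh.destination_database_name,
       rh.restore_date,
       rh.restore_type,             -- D = database, L = log, etc.
       bmf.physical_device_name     -- the file that was restored
FROM msdb.dbo.restorehistory AS rh
    INNER JOIN msdb.dbo.backupset AS bs
        ON bs.backup_set_id = rh.backup_set_id
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
--WHERE rh.destination_database_name IN (N'MyDb1', N'MyDb2') -- optional filter
ORDER BY rh.restore_date DESC;
```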

What Happens With Data Compression + Backup Compression

Jess Pomfret tests what happens when you enable backup compression for databases with already-compressed tables in SQL Server:

What happens if I use data compression and backup compression, do I get double compression?

This is a great question, and without diving too deeply into how backup compression works, I’m going to do a simple experiment on the WideWorldImporters database. I’ve restored this database to my local SQL Server 2016 instance and I’m simply going to back it up several times under different conditions.

After restoring the database, it’s about 3GB in size, so our testing will be on a reasonably small database. It would be interesting to see how the results change as the database size increases; perhaps that’s a future blog post.

Click through for the answer.
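
If you want to replicate the experiment at small scale, something along these lines would do; this is a sketch, not Jess’s exact method, and the backup paths are placeholders.

```sql
-- Back up the same database with and without backup compression...
BACKUP DATABASE WideWorldImporters
    TO DISK = N'C:\Backups\WWI_NoCompression.bak'
    WITH INIT, NO_COMPRESSION;

BACKUP DATABASE WideWorldImporters
    TO DISK = N'C:\Backups\WWI_Compression.bak'
    WITH INIT, COMPRESSION;

-- ...then compare the sizes msdb recorded for each.
SELECT TOP (2) database_name,
       backup_size / 1048576.0            AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = N'WideWorldImporters'
ORDER BY backup_start_date DESC;
```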

Alerting On Missed Log Backups

Jamie Wick shows us how to build out an alert if we’ve exceeded a threshold for time since the last log backup:

The primary reason most transaction logs start to (unexpectedly) grow is that the transaction log hasn’t been backed up. This is usually caused by one of two situations: the log backup job failed, or the log backup job didn’t start. Most options for backing up SQL transaction logs (3rd party software, SQL Agent job, or maintenance plan) will include an alert/notification mechanism for when the job fails. Where some backup options are lacking is in notifying that a (log) backup that should have been taken, wasn’t. If the backup (or management) software doesn’t include an alert for missing backups, SQL Server (2008 & newer) has a solution. Policy Based Management (PBM) includes a policy (Last Log Backup Date) that will send an email notification when the last backup date/time exceeds a specified threshold.

A good use for a woefully underappreciated tool.
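
Jamie’s post uses PBM itself; as a rough illustration of the check underneath, a query along these lines (a sketch, not Jamie’s policy) surfaces databases whose last log backup is missing or stale:

```sql
-- A sketch of the underlying check (the 60-minute threshold is arbitrary):
-- find non-SIMPLE databases whose newest log backup is missing or stale.
SELECT d.name,
       MAX(bs.backup_finish_date) AS last_log_backup
FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS bs
        ON bs.database_name = d.name
       AND bs.type = 'L'            -- log backups only
WHERE d.recovery_model_desc <> 'SIMPLE'
GROUP BY d.name
HAVING MAX(bs.backup_finish_date) IS NULL
    OR MAX(bs.backup_finish_date) < DATEADD(MINUTE, -60, GETDATE());
```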

Azure SQL Data Warehouse Restore Points

Arun Sirpal explains how backups work with Azure SQL Data Warehouse:

The question is: how are backups done with Azure SQL DW?

It is very different from Azure SQL DB (which you would expect). Azure SQL DW has a totally different architecture to its classic database counterpart. Restore points are the key here. Automatic ones are taken throughout the day and are kept for seven days only. Worst case, the gap between restore points will be eight hours, giving an eight-hour RPO (Recovery Point Objective).

You can also create manual restore points, as Arun shows.
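
A manual restore point is one call, assuming the AzureRM.Sql PowerShell module (current as of this post); the resource group, server, and database names below are placeholders.

```powershell
# Create a user-defined restore point before a risky operation.
# Resource group, server, and data warehouse names are placeholders.
New-AzureRmSqlDatabaseRestorePoint -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'mydwserver' `
    -DatabaseName 'MyDataWarehouse' `
    -RestorePointLabel 'Before-Deployment'
```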

It’s 10 O’Clock; Do You Know Where Your Backups Are?

Adrian Buckman has a script which makes sure your backups are where msdb says they are:

Here is the information that the script provides:

  1. Warn of restores over the top of the database since its last FULL backup.

  2. Show database snapshots currently against the database.

  3. Show the last FULL, DIFF, and LOG backup for the database, including the backup durations and backup age.

  4. Backup file information such as backup start/finish time, file path, first LSN, and last LSN; a status column which states whether the log chain is intact (based on the first and last LSNs) and whether the file exists on disk; and finally a file exists column which will tell you if the file still exists on disk.

This is a great script if you take transaction log backups frequently (typically a good idea).
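
As a taste of the msdb lookup behind item 3 above (a sketch, not Adrian’s script):

```sql
-- Latest FULL, DIFF, and LOG backup per database, with the file path
-- msdb recorded for each (a sketch, not Adrian's script).
WITH LastBackups AS (
    SELECT bs.database_name,
           bs.type,                  -- D = full, I = differential, L = log
           bs.backup_finish_date,
           bmf.physical_device_name,
           ROW_NUMBER() OVER (PARTITION BY bs.database_name, bs.type
                              ORDER BY bs.backup_finish_date DESC) AS rn
    FROM msdb.dbo.backupset AS bs
        INNER JOIN msdb.dbo.backupmediafamily AS bmf
            ON bmf.media_set_id = bs.media_set_id
    WHERE bs.type IN ('D', 'I', 'L')
)
SELECT database_name, type, backup_finish_date, physical_device_name
FROM LastBackups
WHERE rn = 1
ORDER BY database_name, type;
```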

SQL Server Backup Restoration With LOADHISTORY

Kenneth Fisher explains what the LOADHISTORY option means when you run a RESTORE VERIFYONLY command:

So, first of all, it only works with RESTORE VERIFYONLY. RESTORE VERIFYONLY does some basic checking on a backup to make sure that it can be read and understood by SQL Server. Please note, it does not mean that the backup can be restored. It will check things like the checksum, available disk space (if you specify a location), the header, and that the backup set is actually complete and readable. Basically, it checks enough to see if the restore will start, but it could still hit errors later on.

As for what LOADHISTORY actually does: it writes an entry to the restore history table. You can tell which record this is because the restore_type is set to V. Really, the only benefit here (as I see it) is that you can do reporting on what backups you’ve verified.

Click through for a demo.
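
In the meantime, the shape of the command itself is simple (the backup path is a placeholder):

```sql
-- Verify the backup and record the verification in msdb's restore history.
RESTORE VERIFYONLY
    FROM DISK = N'C:\Backups\MyDb.bak'
    WITH LOADHISTORY;

-- Verified backups show up with restore_type = 'V'.
SELECT destination_database_name, restore_date, restore_type
FROM msdb.dbo.restorehistory
WHERE restore_type = 'V';
```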

Cross-Server Database Restoration With Minion Backup

Jen McCown walks us through how to restore a database on a different server with Minion Backup:

Today we’ll look at configuring a common, repeatable scenario: take the latest backup of MyDB from ProdServer1 and restore it to DevServer1. There are four basic steps to the setup and execution:

  1. Configure Minion Backup and let it run on ProdServer1. Restoring with MB requires at least one full backup taken by MB. (Note that you don’t need Minion Backup on DevServer1 for this scenario.)

  2. Configure restore settings paths. You know how sometimes a restore requires one or more “WITH MOVE” clauses? Configure this once for your source-target pair, and MB takes care of it from then on. (For those unfamiliar, a plain T-SQL illustration of WITH MOVE follows this list.)

  3. Configure the restore tuning settings (optional). Oh yes, we tune our backups AND our restores!

  4. Generate and run the restore statements.
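
For anyone who hasn’t written one by hand, here is a plain T-SQL illustration of the WITH MOVE clauses that step 2 has MB generate for you; this is not Minion Backup syntax, and all names and paths are placeholders.

```sql
-- Plain T-SQL, not Minion Backup syntax: relocate data and log files
-- because DevServer1's drive layout differs from ProdServer1's.
RESTORE DATABASE MyDB
    FROM DISK = N'\\ProdServer1\Backups\MyDB_Full.bak'
    WITH MOVE N'MyDB'     TO N'D:\Data\MyDB.mdf',
         MOVE N'MyDB_log' TO N'L:\Logs\MyDB_log.ldf',
         REPLACE, RECOVERY;
```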

It’s a good walkthrough if you’re a Minion Backup user.  If you’re not and you’re not particularly happy with your backup solution, I recommend giving it a try.

Backing Up Azure Data Lake Store Data

Hugo Almeida has some hints for backing up Azure Data Lake Store data using Azure Data Factory:

Our Hadoop HDP IaaS cluster on Azure uses Azure Data Lake Store (ADLS) as its data repository and accesses it through an application user created in Azure Active Directory (AAD). Check this tutorial if you want to connect your own Hadoop to ADLS.

Our ADLS is getting bigger, and we’re working on a backup strategy for it. ADLS provides locally-redundant storage (LRS); however, this does not prevent our application from corrupting data or accidentally deleting it. Since Microsoft hasn’t published a new version of ADLS with a clone feature, we had to find a way to back up all the data stored in our data lake.

We’re going to show you how to do a full ADLS backup with Azure Data Factory (ADF). ADF does not preserve permissions; however, our Hadoop client can only access the AzureDataLakeStoreFilesystem (adl) through Hive with a “hive” user, and we can generate these permissions before the backup.

Read the whole thing if you’re thinking of using Azure Data Lake Store.
