I had a recent project to enable backup encryption on all our servers. A question then came up from the storage team: would this require additional space? By that point I had already enabled encryption on all our test servers, so I wrote a query that compares the average size of backups before encryption to the average size after. Keep in mind that we keep only two weeks of history in our backup tables, so this is a fair comparison. If you don’t have maintenance tasks to clean up your backup history, you should add backup_start_time to the WHERE clauses to get more accurate numbers, and set up a maintenance task to keep your msdb backup history in check.
Unfortunately, Tracy leaves us in suspense regarding whether they did increase.
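Tracy’s actual query is behind the link, but a minimal sketch of the idea, using msdb.dbo.backupset and its key_algorithm column (NULL for unencrypted backups; populated from SQL Server 2014 onward), might look like this. The 14-day window is an assumption mirroring the two weeks of retained history:

```sql
-- Sketch: average full backup size, unencrypted vs. encrypted,
-- from msdb backup history. Thresholds and window are assumptions.
SELECT
    bs.database_name,
    CASE WHEN bs.key_algorithm IS NULL
         THEN 'Unencrypted' ELSE 'Encrypted' END AS backup_kind,
    AVG(bs.backup_size / 1048576.0) AS avg_backup_size_mb,
    COUNT(*) AS backup_count
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'D'   -- full backups only
  AND bs.backup_start_time >= DATEADD(DAY, -14, GETDATE())
GROUP BY bs.database_name,
         CASE WHEN bs.key_algorithm IS NULL
              THEN 'Unencrypted' ELSE 'Encrypted' END
ORDER BY bs.database_name;
```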
I needed to get the WideWorldImporters sample database for a project and noticed that there was a BACPAC available. I downloaded it and needed to restore this as a database. At least, that’s what many people would think.
However, if you go to the restore dialog, and select Device and then pick your location, there’s no filter for a .bacpac. In fact, if you choose one, it won’t restore. You’ll get the “no backupset selected” error.
Read on for a step-by-step guide showing how to do this.
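The short version is that a BACPAC is imported rather than restored. One way, assuming the SqlPackage CLI is installed (SSMS’s “Import Data-tier Application…” wizard is the GUI equivalent), is a sketch like this, with path and server names as assumptions:

```shell
# Import a BACPAC as a new database via SqlPackage (not RESTORE).
# File path, server, and database names below are assumptions.
SqlPackage /Action:Import \
    /SourceFile:"WideWorldImporters-Standard.bacpac" \
    /TargetServerName:"localhost" \
    /TargetDatabaseName:"WideWorldImporters"
```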
What are the possibilities with this new field? We can now check how many extents have changed since the last full backup and decide whether a full backup is really needed or whether we can live with a differential backup, achieving smarter backup plans.
Changing our full backup jobs to first check this field and decide whether the backup will be full or differential can save space and maintenance time for databases that aren’t updated very often.
Read on to learn more about this new column, which will be available in SQL Server 2017.
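The column in question is modified_extent_page_count in sys.dm_db_file_space_usage. A rough sketch of the check, with the 70% threshold purely as an assumption:

```sql
-- Sketch (SQL Server 2017): how much of the database has changed since
-- the last full backup. Run in the target database.
SELECT
    DB_NAME() AS database_name,
    SUM(modified_extent_page_count) AS modified_pages,
    SUM(allocated_extent_page_count) AS allocated_pages,
    SUM(modified_extent_page_count) * 100.0
        / NULLIF(SUM(allocated_extent_page_count), 0) AS modified_pct
FROM sys.dm_db_file_space_usage;
-- A backup job could branch on modified_pct:
--   >= 70 -> take a full backup
--   <  70 -> take a differential backup
```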
In sys.dm_db_log_stats, you will find a new column, log_since_last_log_backup_mb, which can be used in your backup script to trigger a transaction log backup when the log generated since the last backup exceeds a threshold value. With smart transaction log backups, the transaction log backup size would be consistent and predictable, avoiding autogrowth from bursts of transactional activity on the database. The resulting pattern from the transaction log backups would be similar to below.
The new sys.dm_db_log_stats DMV looks to be quite useful.
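A minimal sketch of the “smart” log backup the excerpt describes might look like the following; the 100 MB threshold, database name, and backup path are assumptions:

```sql
-- Sketch (SQL Server 2017+): back up the log only when more than a
-- threshold of log has been generated since the last log backup.
DECLARE @log_mb float;

SELECT @log_mb = log_since_last_log_backup_mb
FROM sys.dm_db_log_stats(DB_ID(N'WideWorldImporters'));

IF @log_mb > 100   -- threshold is an assumption
    BACKUP LOG WideWorldImporters
        TO DISK = N'X:\Backups\WideWorldImporters_log.trn';
```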
The default, and the way my AG was configured, was Prefer Secondary. As the image shows, this means backups will be made on the secondary, unless the secondary is unavailable, in which case, they will be made on the primary.
There are a few things to note when you use this setting:
- Full backups made on the secondary are copy-only backups. This means they won’t reset the differential bitmap, and your differentials will continue to grow until a full backup is made on the primary.
- Differential backups cannot be made on the secondary.
- Transaction log backups can be made on the secondary, and they do clear the log, so your log file will not continue to grow.
Read on for more details.
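A backup job can honor this preference by asking the instance whether it is the preferred replica; sys.fn_hadr_backup_is_preferred_replica does exactly that. A sketch, with the database and path as assumptions:

```sql
-- Sketch: only run the backup on the replica the AG preference selects.
-- Log backups are legal on secondaries and clear the log, as noted above.
IF sys.fn_hadr_backup_is_preferred_replica(N'SalesDB') = 1
    BACKUP LOG SalesDB
        TO DISK = N'X:\Backups\SalesDB_log.trn';
```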
Here’s a list of things you can do to make restoring a full backup go faster:
- Ensure that instant file initialization is enabled on the SQL Server instance performing the restore operation, to avoid spending time zero-initializing any data files that must be created. This can save hours of downtime for very large data files.
- If possible, restore over the existing database – don’t delete the existing files. This avoids having to create and potentially zero initialize the files completely, especially the log file. Be very careful when considering this step, as the existing database will be irretrievably destroyed once the restore starts to overwrite it.
- Consider backup compression, which can speed up both backup and restore operations, and save disk space and storage costs.
It’s a straightforward explanation, and Paul provides a few more tips for speeding up restorations.
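A few of these tips can be sketched in T-SQL; the instant_file_initialization_enabled column is exposed in SQL Server 2016 SP1 and later, and the database and path names are assumptions:

```sql
-- Check whether instant file initialization is on ('Y' means data files
-- skip zero-initialization; SQL Server 2016 SP1+).
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;

-- Compressed backups are smaller and usually restore faster.
BACKUP DATABASE SalesDB
    TO DISK = N'X:\Backups\SalesDB.bak'
    WITH COMPRESSION;

-- Restore over the existing database (destructive: the old database
-- is gone as soon as the restore starts overwriting it).
RESTORE DATABASE SalesDB
    FROM DISK = N'X:\Backups\SalesDB.bak'
    WITH REPLACE;
```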
The backup mechanism allows an administrator to create a physically separate copy of index files and configuration metadata for a Solr collection. Any subsequent change to a Solr collection state (e.g. removing documents, deleting index files or changing collection configuration) has no impact on the state of this backup. As part of disaster recovery, the restore operation creates a new Solr collection and initializes it to the state represented by a Solr collection backup.
It’s probably safest to treat data in Solr as secondary data, in the sense that you should be able to rebuild the entire data set from scratch instead of Solr being a primary data store. I’m not a big fan of the author using the term “disaster recovery” instead of just “recovery” or “backup restoration” (as they’re different concepts), but it’s worth the read.
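For reference, the mechanism described is exposed through Solr’s Collections API. A sketch, where the host, collection names, and backup location are assumptions (the location must be visible to all Solr nodes):

```shell
# Back up a collection's index files and configuration metadata.
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=tech_backup&collection=techproducts&location=/backups/solr"

# Restore creates a NEW collection initialized from the backup.
curl "http://localhost:8983/solr/admin/collections?action=RESTORE&name=tech_backup&collection=techproducts_restored&location=/backups/solr"
```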
Why Is MySQL Platform as a Service Important?
I am going to answer this question. There are a lot of advantages to creating, using, and developing against data storage within a PaaS offering. One of the biggest for me is backups. Microsoft automatically takes backups of the MySQL databases you create within Azure. These are real, full backups, and Microsoft validates them. As I write this, you have the ability to restore your entire database to any point in time, at five-minute intervals, over the preceding 35 days. By programming against a MySQL database within Azure, you gain protection of the information you’re storing within your database, and you don’t have to do anything to benefit from this. It’s all part of the service.
Read the whole thing.
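As a hedged illustration of that point-in-time restore, the Azure CLI exposes it roughly like this (resource group, server names, and timestamp are assumptions, and the exact command depends on the service generation); note that a restore always creates a new server:

```shell
# Sketch: point-in-time restore of an Azure Database for MySQL server.
az mysql server restore \
    --resource-group MyResourceGroup \
    --name mydemoserver-restored \
    --source-server mydemoserver \
    --restore-point-in-time "2017-09-01T13:10:00Z"
```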
Notice the “100 percent…” message has details about “bytes processed”. Since my data is around 13MB, this tells me that the progress indicators are all about the data-transfer step of RESTORE. Notice the time gap between the messages “Waiting for Log zeroing…” and “Log Zeroing is complete”. That gap is around 2 minutes: exactly the time between the 100% Complete message in the SSMS window and the final restore message.
From this evidence I can conclude that a transaction log file large enough to take a long time to initialize is a possible cause of the behavior where the 100% Complete message appears quickly but the overall RESTORE takes much longer to finish.
There’s a lot worth reading packed into this post, as you’d expect from Bob. Read the whole thing.
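One way to observe the zeroing phase yourself is with trace flags 3004 and 3605, which surface backup/restore internal steps in the error log; this is a sketch for a test instance only, with the database and path as assumptions:

```sql
-- TF 3004 prints backup/restore internals; TF 3605 routes them to the
-- error log. Use on a test instance, not production.
DBCC TRACEON(3004, 3605, -1);

RESTORE DATABASE SalesDB
    FROM DISK = N'X:\Backups\SalesDB.bak'
    WITH STATS = 10;   -- progress covers the data-copy phase only

DBCC TRACEOFF(3004, 3605, -1);
-- The error log will now show the "Zeroing ..." messages for the log file.
```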
I worked on an interesting issue today where a user couldn’t restore a backup. Here is what this customer did:
- backed up a database from an on-premises server (2008 R2)
- copied the file to an Azure VM
- tried to restore the backup on the Azure VM (2008 R2 with exact same build#)
But he got the following error:
Msg 3241, Level 16, State 0, Line 4
The media family on device ‘c:\temp\test.bak’ is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 4
RESTORE HEADERONLY is terminating abnormally.
We verified that he could restore the same backup on the local machine (on-premises). Initially I thought the file must have been corrupted during transfer. We used a different method to transfer the file and also zipped it, but the behavior was the same. When we backed up a database from the same Azure VM and tried to restore it, the restore succeeded.
Click through for Jack’s findings as well as a couple workarounds.
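Without spoiling the findings, one sanity check worth running in any “restores at the source but not at the destination” case is to confirm the copied file is byte-identical to the original; on Windows, certutil ships in-box (the path is taken from the error message above):

```shell
# Compute a SHA256 hash of the backup on both machines and compare.
# A mismatch means the transfer corrupted or truncated the file.
certutil -hashfile c:\temp\test.bak SHA256
```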