This strikes me as an odd chicken-and-egg problem. I’d need to create the certificate to decrypt the [master] backup on the instance I’m restoring [master] to…and the certificate is stored in [master], which I’d be overwriting. As weird as it sounds, this is exactly what needs to happen. Maybe it’s not as complicated as it sounds.
Read on for the solution. You might also want to check out that one time he met Larry Bird.
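One common approach, sketched here with hypothetical names, paths, and passwords (not necessarily the exact steps from the linked post), is to recreate the certificate from its file backups in the new instance's current master, then restore the encrypted master backup over it:

```sql
-- Sketch: the certificate that encrypted the master backup is recreated
-- from its file backups (all names, paths, and passwords are hypothetical).
USE master;

-- A database master key must exist in master to protect the imported
-- private key; create one if it is not already there.
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongMasterKeyPassword';

CREATE CERTIFICATE MasterBackupCert
    FROM FILE = 'C:\Certs\MasterBackupCert.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\Certs\MasterBackupCert.pvk',
        DECRYPTION BY PASSWORD = 'StrongPrivateKeyPassword'
    );

-- With the instance started in single-user mode (-m), the encrypted master
-- backup can now be decrypted and restored; the restored master carries its
-- own copy of the certificate going forward.
RESTORE DATABASE master
    FROM DISK = 'C:\Backups\master_encrypted.bak'
    WITH REPLACE;
```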
So, to put that in simple terms: I have a database, Test. I take a full backup, changes happen, I take a differential backup, changes happen, I take another differential backup, and so on. (Ignoring, of course, all of the log backups that would also be happening if the database is in FULL recovery.)
In that scenario, both differentials contain everything that has changed between the time of the full backup and the time the differential was taken. But what happens if I take a second full backup between the first and second differentials? Now the second differential will only contain the changes made between that second full backup and the differential itself.
Read on for more.
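To make the moving differential base concrete, here is a minimal sketch of that sequence against the Test database from the quote (paths are hypothetical):

```sql
BACKUP DATABASE Test TO DISK = 'C:\Backups\Test_Full_1.bak';   -- full backup #1

BACKUP DATABASE Test TO DISK = 'C:\Backups\Test_Diff_1.bak'
    WITH DIFFERENTIAL;                                          -- changes since full #1

BACKUP DATABASE Test TO DISK = 'C:\Backups\Test_Full_2.bak';   -- full backup #2 resets the differential base

BACKUP DATABASE Test TO DISK = 'C:\Backups\Test_Diff_2.bak'
    WITH DIFFERENTIAL;                                          -- changes since full #2, not full #1

-- Note: a full backup taken WITH COPY_ONLY would not reset the differential base.
```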
And his proposed solution: Add a new column to sys.dm_db_log_space_usage or sys.database_recovery_status called LastLogBackupTime.
I LOVE this idea…back up the T-log more frequently during busy times, less often during off hours. At my current client, there is almost nothing happening outside of a 12-hour workday window, so this would be perfect here.
Now, I am possibly misunderstanding Ola's request or the intent…and that's okay. A query against the msdb..backupset table already surfaces this information with a relatively small amount of code.
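The post's query isn't reproduced here, but a sketch along the same lines might look like this:

```sql
-- Sketch (not the original post's query): most recent log backup per
-- database, pulled from msdb backup history.
SELECT
    d.name                     AS database_name,
    MAX(bs.backup_finish_date) AS last_log_backup_time
FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS bs
        ON bs.database_name = d.name
       AND bs.type = 'L'          -- 'L' = transaction log backup
GROUP BY d.name
ORDER BY d.name;
```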
Click through for more details as well as Ola’s Connect item.
When SQL Server goes to restore the file, it reads part of the header. As part of this, the process detects the DEK and tries to decrypt that key. However, since the new instance does not have the certificate, the decryption fails and an error is thrown, even though the key isn't actually needed because the data isn't encrypted.
The issue here is the DEK still exists in the source database.
Read the whole thing for the solution.
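As an illustration, a quick way to see whether a DEK is still hanging around on the source instance is to query the sys.dm_database_encryption_keys DMV (a sketch, not the post's exact diagnostic):

```sql
-- Databases that still have a database encryption key, even if the data
-- itself is no longer encrypted (encryption_state = 1 means "unencrypted"
-- but a DEK is still present).
SELECT
    DB_NAME(dek.database_id) AS database_name,
    dek.encryption_state,
    dek.encryptor_type,
    dek.create_date
FROM sys.dm_database_encryption_keys AS dek;
```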
Because, let's face it, whole books are written on the subject, and yet it's one of the very first things a DBA should learn. Because it is one of those subjects everyone has to learn one way or another, I had a large number of responses (which explains my delay in getting this rollup out; sorry about that). However, the large number of responses makes this list an excellent course on backup and recovery. It's by no means comprehensive, but if you read each of these posts you will have a great start on what's necessary and what's possible.
Click through for links to 25 blog posts on the topic.
The trickiest part of wiring a circuit like this is detecting a button press. Most logic boards don't know whether an input circuit should poll at a high or low level. That's where pull-ups come in. Above, you can see we set one of the pins for the button to be a pull-up (or an input if we were using another board). That means the pin is held at a known voltage level, and the board watches for the button to pull it the other way. The other important thing is our debounce. With physical switches, one button press can actually register as many, because as soon as the switch completes (or interrupts) the circuit, the contacts chatter and send a burst of signals. A debounce is like a referee saying “only count one press in this window,” filtering out the extra “presses” from that lingering noise.
Once we detect our button press, we call the function below. All it does is read the current LED pin values, see which one is currently lit, and then light the next one.
Go from understanding general purpose input/output pins to calling SMO via a web service all in one post. If you’ve got an itch for a weekend project, have at it.
I have also set the script to accept a database name parameter. If a name is provided, then only the backup history for that database is returned; if the parameter is left NULL, then the backup history for all databases is returned. Additionally, I added a number-of-days parameter to limit the scope of the report to a specific range of days.
Among the data points returned in this script, you will note the duration of the backup, the date, and even the size of the backup. All of these attributes can help me forecast future storage requirements, both for backup storage and for the data volume. Additionally, by knowing the duration of the backup and the trend of that duration, I can adjust maintenance schedules accordingly.
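The author's script isn't reproduced here, but a sketch in the same spirit, with the two parameters described above, might look like this:

```sql
-- Sketch of a backup history report (not the original script).
DECLARE @DatabaseName sysname = NULL;   -- NULL returns history for all databases
DECLARE @NumberOfDays int     = 30;     -- how far back to report

SELECT
    bs.database_name,
    bs.type                                                        AS backup_type,  -- D = full, I = differential, L = log
    bs.backup_start_date,
    bs.backup_finish_date,
    DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)  AS duration_seconds,
    bs.backup_size / 1048576.0                                     AS backup_size_mb,
    bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE (@DatabaseName IS NULL OR bs.database_name = @DatabaseName)
  AND bs.backup_start_date >= DATEADD(DAY, -@NumberOfDays, GETDATE())
ORDER BY bs.database_name, bs.backup_start_date DESC;
```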
These types of reports are quite useful. Aside from giving you useful information on database backups, it should also be a reminder to have a process which deletes older backup data over time so that your msdb database doesn’t grow to unsustainable sizes.
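For that cleanup, msdb ships with a stored procedure that prunes old backup history; a minimal example (the 90-day retention is an arbitrary choice):

```sql
-- Delete backup history older than 90 days from msdb.
DECLARE @OldestDate datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @OldestDate;
```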
I have not yet met a setup where applying compression was not an option. Obviously this carries a CPU cost while the backup is executing and will affect the rest of the tasks running on the server (even if you have your data and backup directories on different drives). But in my experience, the impact is negligible.
This may not be the case with the encryption option, as it has a much larger footprint on the server. Use it with some caution in production, and test on smaller subsets of the data if in doubt.
Another thing to keep in mind, as always when dealing with encryption: remember the password. There is no way to retrieve the data without the proper password.
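In SQL Server terms, a compressed and certificate-encrypted backup looks roughly like this (the certificate name and path are hypothetical, and the certificate must already exist in master):

```sql
-- Sketch: compression plus backup encryption in one statement.
BACKUP DATABASE Test
    TO DISK = 'C:\Backups\Test_Compressed_Encrypted.bak'
    WITH COMPRESSION,
         ENCRYPTION (
             ALGORITHM = AES_256,
             SERVER CERTIFICATE = BackupCert
         );

-- Back up the certificate and its private key somewhere safe as well;
-- without them (and the password protecting the private key file),
-- the backup cannot be restored.
```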
My goal is to be able to rebuild any cube from the relational database, but even with that goal in mind, it is smart to have backups.
In the example, the database performed a checkpoint at noon, and a backup was taken at that time. The restore process brings back all of the transactions up to the point the database was backed up. After the database has been restored, the recovery process will roll forward transactions 2 and 4 because they had been committed to the transaction log before the point of failure. Since transactions 3 and 5 did not commit before the system failure, the undo process will roll them back to keep the data in a consistent state.
Read the whole thing.
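That roll forward/roll back only happens when the database is recovered, which is why a multi-step restore holds recovery off until the last step. A sketch with hypothetical paths:

```sql
-- Restore sequence sketch: recovery (redo of committed work, undo of
-- uncommitted work) runs only at the final WITH RECOVERY step.
RESTORE DATABASE Test
    FROM DISK = 'C:\Backups\Test_Full.bak'
    WITH NORECOVERY, REPLACE;

RESTORE LOG Test
    FROM DISK = 'C:\Backups\Test_Log_1.trn'
    WITH NORECOVERY;

RESTORE LOG Test
    FROM DISK = 'C:\Backups\Test_Log_2.trn'
    WITH RECOVERY;   -- redo/undo happens here, bringing the database online in a consistent state
```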
Now, all of the above may be review for you, but a much more important part of this story is that you need to be TESTING your backups. I’ve seen many customers who have been happily taking backups and storing them on some drive somewhere, and then when disaster strikes and they actually need to restore them, they can’t – maybe they had been backing up corruption all along, or the backups were failing but they were ignoring alerts, or they weren’t taking log backups frequently enough to meet their RPO, or they were only taking full backups.
Testing backups is vital; just because the backup process reported success doesn’t mean that you’ll necessarily be able to restore that backup when the time comes that you need it. It’s also good to drill people on restoration skills, as things get a bit more stressful when three levels of management are standing behind your chair asking you what’s taking so long.
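As a starting point for that testing, a sketch (logical file names and paths are hypothetical): RESTORE VERIFYONLY is a quick readability check, but the real test is restoring to a scratch database and running DBCC CHECKDB against it.

```sql
-- Level 1: confirm the backup file is readable and complete.
-- This does NOT prove the data inside is free of corruption.
RESTORE VERIFYONLY
    FROM DISK = 'C:\Backups\Test_Full.bak';

-- Level 2: restore to a scratch database and check it for corruption.
RESTORE DATABASE Test_RestoreCheck
    FROM DISK = 'C:\Backups\Test_Full.bak'
    WITH MOVE 'Test'     TO 'D:\Scratch\Test_RestoreCheck.mdf',
         MOVE 'Test_log' TO 'D:\Scratch\Test_RestoreCheck_log.ldf',
         RECOVERY;

DBCC CHECKDB ('Test_RestoreCheck') WITH NO_INFOMSGS;
```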