Currently, the output from the following commands is supported:
You run the above commands as you normally would, but pipe the output to
ConvertTo-DbaTimeline, the same way you would with any other
ConvertTo-* PowerShell function. The output is a string that, most of the time, you will save to a file using the
Out-File command in order to open it in a browser.
Then, with the ConvertTo-DbaTimeline cmdlet, you can convert that into an HTML page that looks pretty good.
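As a rough sketch of the workflow (assuming the dbatools module is installed; the instance name and output path here are placeholders, not from the original post):

```powershell
# Pull the last week of Agent job history and render it as an HTML timeline.
Get-DbaAgentJobHistory -SqlInstance sql01 -StartDate (Get-Date).AddDays(-7) |
    ConvertTo-DbaTimeline |
    Out-File -FilePath C:\temp\JobTimeline.html -Encoding UTF8

# Open the result in the default browser.
Invoke-Item C:\temp\JobTimeline.html
```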
I have a client that uses a lot of disconnected log shipping on a few servers. They do this because they want to have a copy of their database that is actually hosted by software vendors for reporting purposes. So, disconnected log shipping it is! We pull down files that have been FTP’d to the corporate FTP site and apply those logs locally and leave the databases in standby so that they can be queried.
(If you want to review how to set up disconnected log shipping complete with FTP download scripts, all SQL jobs and log shipping monitoring, then hop on over to https://www.sqlrx.com/setting-up-disconnected-log-shipping/ to check it out!)
Of course, every so often there is a glitch, and one or all databases can get out of sync pretty easily. When we receive a notification that transaction logs have not been restored in a while, the hunt is on to see what happened and get it corrected. Since I have disconnected log shipping set up, I can’t get a log shipping report from SSMS that will tell me the errors and the last log applied. So, I made a query that will give me the most recent file that was restored, along with the file name, date, database, etc. I can pull back all databases or, by uncommenting a line, filter on one or more databases.
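This is not Lori's script, but a minimal sketch of the kind of msdb query that finds the most recently restored log file per database:

```sql
-- Most recent restored log per database, with the source file name.
SELECT rh.destination_database_name AS DatabaseName,
       rh.restore_date,
       bmf.physical_device_name      AS RestoredFromFile
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
  ON bs.backup_set_id = rh.backup_set_id
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE rh.restore_type = 'L'   -- log restores only
  AND rh.restore_date = (SELECT MAX(r2.restore_date)
                         FROM msdb.dbo.restorehistory AS r2
                         WHERE r2.destination_database_name = rh.destination_database_name)
-- AND rh.destination_database_name IN (N'MyDb1', N'MyDb2')  -- uncomment to filter
ORDER BY rh.destination_database_name;
```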
Lori also includes a helpful script.
What happens if I use data compression and backup compression? Do I get double compression?
This is a great question, and without diving too deeply into how backup compression works I’m going to do a simple experiment on the WideWorldImporters database. I’ve restored this database to my local SQL Server 2016 instance and I’m simply going to back it up several times under different conditions.
After restoring the database it’s about 3GB in size, so our testing will be on a reasonably small database. It would be interesting to see how the results change as the database size increases, perhaps in a future blog post.
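The core of such an experiment can be sketched like this (backup paths are placeholders; the exact conditions tested in the post may differ):

```sql
-- Back up the same database with and without backup compression,
-- then compare the sizes recorded in msdb.
BACKUP DATABASE WideWorldImporters
    TO DISK = N'C:\Backups\WWI_NoCompression.bak'
    WITH NO_COMPRESSION, INIT;

BACKUP DATABASE WideWorldImporters
    TO DISK = N'C:\Backups\WWI_Compression.bak'
    WITH COMPRESSION, INIT;

SELECT TOP (2)
       backup_finish_date,
       backup_size / 1048576.0            AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM msdb.dbo.backupset
WHERE database_name = N'WideWorldImporters'
ORDER BY backup_finish_date DESC;
```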
Click through for the answer.
The primary reason most transaction logs start to (unexpectedly) grow is that the transaction log hasn’t been backed up. This is usually caused by one of two situations: the log backup job failed, or the log backup job didn’t start. Most options for backing up SQL transaction logs (third-party software, a SQL Agent job, or a maintenance plan) will include an alert/notification mechanism for when the job fails. Where some backup options are lacking is in notifying that a (log) backup that should have been taken, wasn’t. If the backup (or management) software doesn’t include an alert for missing backups, SQL Server (2008 and newer) has a solution. Policy Based Management (PBM) includes a policy (Last Log Backup Date) that will send an email notification when the last backup date/time exceeds a specified threshold.
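If PBM isn't available, the same check can be approximated with a query against msdb; this is a sketch, and the 60-minute threshold is an assumption you would tune to your own log backup schedule:

```sql
-- Databases in FULL or BULK_LOGGED recovery whose last log backup
-- is older than 60 minutes, or that have never had one.
SELECT d.name,
       d.recovery_model_desc,
       MAX(bs.backup_finish_date) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS bs
  ON bs.database_name = d.name
 AND bs.type = 'L'
WHERE d.recovery_model_desc IN ('FULL', 'BULK_LOGGED')
  AND d.name <> N'model'
GROUP BY d.name, d.recovery_model_desc
HAVING MAX(bs.backup_finish_date) IS NULL
    OR MAX(bs.backup_finish_date) < DATEADD(MINUTE, -60, GETDATE());
```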
A good use for a woefully underappreciated tool.
The question is: how are backups done with Azure SQL DW?
It is very different from Azure SQL DB (as you would expect); Azure SQL DW has a totally different architecture from its classic database counterpart. Restore points are the key here. Automatic ones are taken throughout the day and are kept for seven days only. In the worst case, the time between restore points will be eight hours, giving an eight-hour RPO (Recovery Point Objective).
You can also create manual restore points, as Arun shows.
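A sketch of creating one with PowerShell, assuming the AzureRM.Sql module of that era; the resource names and label are placeholders:

```powershell
# Create a user-defined restore point on an Azure SQL Data Warehouse.
New-AzureRmSqlDatabaseRestorePoint `
    -ResourceGroupName "MyResourceGroup" `
    -ServerName "mydwserver" `
    -DatabaseName "MyDataWarehouse" `
    -RestorePointLabel "BeforeNightlyLoad"
```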
Here is the information that the script provides:
Warn of restores over the top of the database since its last FULL backup
Show database snapshots currently against the database
Show the last FULL, DIFF and LOG backup for the database including the backup durations and backup age.
Backup file information, such as backup start/finish time, file path, first LSN, and last LSN; a status column which states whether the log chain is intact, based on the first and last LSNs and on whether the file exists on disk; and finally a file-exists column which will tell you whether the file still exists on disk.
This is a great script if you take transaction log backups frequently (typically a good idea).
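The core of that kind of report can be sketched from msdb directly (this is a simplified stand-in, not the linked script):

```sql
-- Last FULL (D), DIFF (I) and LOG (L) backup per database, with age.
SELECT bs.database_name,
       bs.type,                                                 -- D = full, I = diff, L = log
       MAX(bs.backup_finish_date)                              AS last_backup,
       DATEDIFF(HOUR, MAX(bs.backup_finish_date), GETDATE())   AS age_hours
FROM msdb.dbo.backupset AS bs
WHERE bs.type IN ('D', 'I', 'L')
GROUP BY bs.database_name, bs.type
ORDER BY bs.database_name, bs.type;
```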
So, first of all, it only works with RESTORE VERIFYONLY. RESTORE VERIFYONLY does some basic checking on a backup to make sure that it can be read and understood by SQL Server. Please note, it does not mean that the backup can be restored. It will check things like the checksum, available disk space (if you specify a location), the header, and whether the backup set is actually complete and readable. Basically, it checks enough to see whether a restore would start, but the restore could still hit errors later on.
As for what LOADHISTORY actually does: it writes an entry to the restore history table. You can tell which record this is because the restore_type column is set to 'V'. Really, the only benefit here (as I see it) is that you can report on which backups you’ve verified.
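Putting the two together as a sketch (the backup path is a placeholder):

```sql
-- Verify a backup and record the verification in msdb.
RESTORE VERIFYONLY
    FROM DISK = N'C:\Backups\MyDb.bak'
    WITH LOADHISTORY;

-- Report on verified backups: LOADHISTORY rows have restore_type = 'V'.
SELECT rh.destination_database_name,
       rh.restore_date,
       bmf.physical_device_name
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
  ON bs.backup_set_id = rh.backup_set_id
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bmf.media_set_id = bs.media_set_id
WHERE rh.restore_type = 'V';
```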
Click through for a demo.
Today we’ll look at configuring a common, repeatable scenario: take the latest backup of MyDB from ProdServer1 and restore it to DevServer1. There are four basic steps to the setup and execution:
Configure Minion Backup and let it run on ProdServer1. Restoring with MB requires at least one full backup taken by MB. (Note that you don’t need Minion Backup on DevServer1 for this scenario.)
Configure restore settings paths. You know how sometimes a restore requires one or more “WITH MOVE” clauses? Configure this once for your source-target pair, and MB takes care of it from then on.
Configure the restore tuning settings (optional). Oh yes, we tune our backups AND our restores!
Generate and run the restore statements.
It’s a good walkthrough if you’re a Minion Backup user. If you’re not and you’re not particularly happy with your backup solution, I recommend giving it a try.
Our Hadoop HDP IaaS cluster on Azure uses Azure Data Lake Store (ADLS) as its data repository and accesses it through an application user created in Azure Active Directory (AAD). Check this tutorial if you want to connect your own Hadoop cluster to ADLS.
Our ADLS is getting bigger, and we’re working on a backup strategy for it. ADLS provides locally-redundant storage (LRS); however, this does not prevent our application from corrupting data or accidentally deleting it. Since Microsoft hasn’t published a version of ADLS with a clone feature, we had to find a way to back up all the data stored in our data lake.
We’re going to show you how to do a full ADLS backup with Azure Data Factory (ADF). ADF does not preserve permissions; however, our Hadoop client can only access the AzureDataLakeStoreFilesystem (adl) through Hive with a “hive” user, and we can regenerate these permissions after the backup.
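The shape of the ADF copy activity involved looks roughly like this (a sketch only; dataset names are placeholders, and the linked post has the full pipeline):

```json
{
  "name": "CopyADLSBackup",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "AzureDataLakeStoreSource", "recursive": true },
    "sink":   { "type": "AzureDataLakeStoreSink", "copyBehavior": "PreserveHierarchy" }
  },
  "inputs":  [ { "name": "AdlsSourceDataset" } ],
  "outputs": [ { "name": "AdlsBackupDataset" } ]
}
```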
Read the whole thing if you’re thinking of using Azure Data Lake Store.
I’ve been having a little play around with AWS recently, and I was looking at S3 (AWS’s cloud storage) when I thought to myself: I wonder if it’s possible to back up an on-premises SQL Server database directly to S3?
When we want to back up directly to Azure, we can use the TO URL clause in our backup statement. Although S3 buckets can also be accessed via a URL, I couldn’t find a way to back up directly to that URL. Most of the solutions on the web have you backing up your databases locally, with a second step of the job using PowerShell to copy those backups up to your S3 buckets. I didn’t really want to do it that way; I want to back up directly to S3 with no middle steps. We like to keep things as simple as possible here at SQL Undercover: the more moving parts you’ve got, the more chance for things to go wrong.
So I needed a way for SQL Server to be able to directly access my buckets. I started to wonder if it’s possible to map a bucket as a network drive. A little hunting around and I came across this lovely tool, TNTDrive. TNTDrive will let us do exactly that and with the bucket mapped as a local drive, it was simply a case of running the backup to that local drive.
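With the bucket mounted, the backup itself is an ordinary TO DISK statement; the drive letter, path, and database name below are placeholders (note that the SQL Server service account must be able to see the mapped drive):

```sql
-- The S3 bucket is mapped as the Z: drive (via TNTDrive),
-- so the backup targets it like any local disk.
BACKUP DATABASE MyDatabase
    TO DISK = N'Z:\SQLBackups\MyDatabase.bak'
    WITH COMPRESSION, CHECKSUM, INIT;
```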
Quite useful if your servers are in a disk crunch. In general, I’d probably lean toward keeping on-disk backups and creating a job to migrate those backups to S3.