Category: Log Shipping

Low-Downtime Migration Techniques from SQL Server 2017 to 2022

Yohei Kawajiri describes three techniques for performing a SQL Server migration:

It is possible to configure a SQL Server Always On availability group with a primary replica running on SQL Server 2017 and a secondary replica running on SQL Server 2022, but there are important considerations and limitations to keep in mind: 

  1.  Backward Compatibility: SQL Server supports having replicas on different versions, but the primary replica must run a version older than or equal to that of the secondary replicas. Therefore, having SQL Server 2017 as the primary and SQL Server 2022 as the secondary is valid. 
  2.  Database Upgrade Path: When you decide to upgrade the primary replica to a newer version, you need to follow a specific upgrade path to ensure minimal downtime and data integrity. 

Building an availability group? Yeah, makes a lot of sense. Performing log shipping? Sure, I could see that working. Database mirroring? I did not expect to read that one, mostly because it’s been deprecated for a decade.
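
On the availability group route, here is a minimal T-SQL sketch of a mixed-version group created from the SQL Server 2017 primary. The server names, endpoint URLs, and database name are all hypothetical, and it assumes the prerequisites (HADR enabled, mirroring endpoints created, database in FULL recovery) are already in place:

    -- Hypothetical two-replica group: 2017 primary, 2022 secondary.
    CREATE AVAILABILITY GROUP [AG_Migration]
    FOR DATABASE [SalesDB]
    REPLICA ON
        N'SQL2017NODE' WITH (
            ENDPOINT_URL = N'TCP://sql2017node.contoso.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL),
        N'SQL2022NODE' WITH (
            ENDPOINT_URL = N'TCP://sql2022node.contoso.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL);

Once the 2022 secondary is synchronized, a manual failover completes the migration. Keep in mind that this is a one-way trip: after the failover, the newer primary cannot ship data back to the 2017 replica.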

Using the Log Replay Service to Migrate to Azure SQL MI

Rob Carrol makes a move:

The Log Replay Service (LRS) is a new Azure service that allows you to migrate your databases from SQL Server on-premises, SQL Server on Azure Virtual Machines, Amazon EC2, Amazon RDS for SQL Server, or Google Compute Engine to Azure SQL Managed Instance. LRS is a free cloud service that uses log shipping technology to enable custom migrations of databases from SQL Server 2008 through 2022.

Read on for some configuration options and tips on how to use the service.
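
LRS consumes backup files you stage in Azure Blob Storage, so the on-premises half of the migration is ordinary backup commands. Here is a hedged sketch of the kind of backups LRS replays; the storage account, container, and database names are made up, and a credential for the storage account is assumed to exist already:

    -- Stage a full backup plus log backups in a blob container for LRS to replay.
    -- LRS expects backups taken WITH CHECKSUM.
    BACKUP DATABASE [SalesDB]
    TO URL = N'https://migrationstore.blob.core.windows.net/lrs/SalesDB_full.bak'
    WITH CHECKSUM, COMPRESSION;

    BACKUP LOG [SalesDB]
    TO URL = N'https://migrationstore.blob.core.windows.net/lrs/SalesDB_log_001.trn'
    WITH CHECKSUM;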

Standby Mode in Log Shipping

Garry Bargsley will not simply stand by:

Did you know that log-shipped databases do not have to sit there collecting dust, going unused? There are certain situations where replication or Availability Groups are not the answer to the question of separating out your reporting traffic.

You don’t need to build an actual data warehouse, you don’t want the complexity of a Windows cluster for Availability Groups, and you cannot use replication because your tables do not have primary keys.

We can use the long-trusted Log Shipping technology, tweak a few settings, and get a read-only copy of a database.

Click through to see how you can make use of those log shipped databases.
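
The mechanical heart of the trick is restoring each log backup WITH STANDBY rather than WITH NORECOVERY, which leaves the database readable between restores. A minimal sketch, with hypothetical paths and database name:

    -- Apply a log backup but keep the secondary readable afterward.
    -- The standby (undo) file holds uncommitted transactions rolled back for readers.
    RESTORE LOG [SalesDB]
    FROM DISK = N'\\FILESHARE\LogShip\SalesDB_202401010800.trn'
    WITH STANDBY = N'D:\LogShip\SalesDB_undo.dat';

In a log shipping configuration you set this through the restore job's standby option rather than running RESTORE by hand; since each restore needs exclusive access, readers have to be disconnected whenever the next log is applied.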

Monitoring Log Shipping with T-SQL

Lori Brown tracks log shipping operations:

For Log Shipping, some information is only available on the primary or only on the secondary. That means that I had to set up a linked server on the primary to connect to the secondary. I do not want to create any OPENROWSET queries for this, since that would require that Ad Hoc Distributed Queries be enabled. I am not a fan of opening that up, for the following reasons:

1) It can allow buffer overflow bugs to compromise systems.

2) It can allow a compromised server to connect to a non-compromised server. 

Read on to see what Lori prefers instead.
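
For a flavor of the alternative: the secondary-side status lives in msdb tables such as log_shipping_monitor_secondary, which a four-part linked server name can reach without enabling anything ad hoc. The linked server name here is hypothetical:

    -- Pull restore status from the secondary over a linked server (no OPENROWSET).
    SELECT secondary_database,
           last_copied_file,
           last_restored_file,
           last_restored_date
    FROM [SECONDARY_SRV].msdb.dbo.log_shipping_monitor_secondary;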

SQL Server 2019 CU 16 and Log Shipping

Lee Markum notes a change in SQL Server 2019 CU 16:

As a data professional managing SQL Servers, you need to be thinking about Windows and SQL Server patching.

You want to stay up to date, but you also have to be careful because sometimes staying fully up to date comes with the risk that a new patch might break a feature you are using. This is possibly the case for SQL Server 2019 CU 16. If you’re using Log Shipping on a database that has TDE enabled and compression is enabled on those backups, then you need to take heed.

I’ll walk you through what can happen and what I’ve seen work to resolve the problem.

This is the tricky part of getting rid of service packs: when I think cumulative update, I don’t think “Here’s a thing that could break backwards compatibility with other SQL Server instances which have not been patched.” Read on to see an example of this in action.
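
If you are caught by this, one commonly cited mitigation (not necessarily the fix Lee settles on) is to stop compressing the log shipping backups until both instances are on matching builds. The database name below is hypothetical:

    -- Disable backup compression for one log shipped database
    -- (0 = disable, 1 = enable, 2 = use the server default).
    EXEC master.dbo.sp_change_log_shipping_primary_database
        @database = N'SalesDB',
        @backup_compression = 0;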

Log Replay for Azure SQL Managed Instance

Joey D’Antoni has some quick notes on the Log Replay Service:

Recently, I’ve started on a project where we are migrating a customer to Azure SQL Managed Instance, which now supports a few different migration paths. You can simply back up and restore from a database backup, but you can’t apply a log or differential backup to that database. You can also use the Database Migration Service, but that requires a lot of infrastructure and Azure configuration. The log replay service, or LRS, is the functional equivalent of log shipping to your new managed instance database. While log shipping is a well-known methodology for both database migrations and disaster recovery, the implementation here is a little different. Let’s talk about how it works.

Click through to see how it differs.
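
The restriction Joey calls out is easy to picture: a native restore on managed instance works from a full backup alone, and you cannot chain log or differential backups onto it, which is exactly the gap LRS fills. A sketch with hypothetical names:

    -- Native restore on Azure SQL Managed Instance: one full backup, no log chain.
    RESTORE DATABASE [SalesDB]
    FROM URL = N'https://migrationstore.blob.core.windows.net/backups/SalesDB_full.bak';

    -- Not supported on managed instance: WITH NORECOVERY and follow-up log restores,
    -- so you cannot top the database up with later backups afterward.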

Alerting on Log Shipping Failures

Andrea Allred is waiting for an e-mail:

I don’t want emails that tell me everything is OK; I only want them when things are bad, and I want them to be helpful emails. Not only did I want an alert, I wanted an email with actual information that I can use to make my decisions. Decisions like: can I just apply a few logs to get caught up, or did everything burn down and I need to pull a full backup plus all the logs to be up and running again?

This was a task for some super fancy alerts on my agent job.

Click through to see how.
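
As a rough illustration of the idea (not Andrea's exact implementation), an agent job step on the secondary can read the monitor table and send mail only when something is behind. The mail profile, recipient, and threshold are all hypothetical:

    -- Email only when restores have fallen behind, with enough detail to act on.
    DECLARE @body nvarchar(max);

    SELECT @body = CONCAT(N'Restores are behind on ', secondary_database,
                          N'. Last restored file: ', last_restored_file,
                          N' at ', CONVERT(nvarchar(30), last_restored_date, 120))
    FROM msdb.dbo.log_shipping_monitor_secondary
    WHERE DATEDIFF(MINUTE, last_restored_date, GETDATE()) > 30;

    IF @body IS NOT NULL
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = N'DBA Mail',
            @recipients   = N'dba-team@contoso.com',
            @subject      = N'Log shipping is behind',
            @body         = @body;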

Log Shipping Configuration with Powershell

Lee Markum needs to do log shipping in bulk:

I recently needed to configure log shipping for multiple databases at once as part of a migration project. I turned to PowerShell to do this.

But before we get to that part, this post assumes that you’ve done the upfront work to create shares for the backups to write to and for the backups to be copied to. This will involve providing the right permissions for the SQL Server service accounts involved in Log Shipping. If you are not familiar with this, that’s perfectly fine. Check out this article in MS Docs first.

Click through for the next steps in the process, including a well-timed Get-Help call.
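
However you script it, log shipping configuration ultimately comes down to the same system stored procedures, and the primary-side piece below is the step a bulk PowerShell approach loops over per database. The share path, job name, and retention values are hypothetical:

    -- Primary-side setup for one database; retention and threshold values are minutes.
    EXEC master.dbo.sp_add_log_shipping_primary_database
        @database = N'SalesDB',
        @backup_directory = N'\\FILESHARE\LogShip\SalesDB',
        @backup_share = N'\\FILESHARE\LogShip\SalesDB',
        @backup_job_name = N'LSBackup_SalesDB',
        @backup_retention_period = 4320,
        @backup_threshold = 60,
        @history_retention_period = 5760;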

Using Log Shipping With Delayed Recovery

Kenneth Igiri shows how you can combine log shipping with delayed recovery to prevent data loss due to user mistakes:

Now let’s talk about the key benefit of this delay. In the scenario where a user inadvertently drops a table, we can recover the data quickly from the secondary database as long as the delay period has not elapsed. In this example, we drop the table Sales.Orderlines on BOTH databases and verify that the table no longer exists in BOTH databases.

It’s a fairly expensive technique but potentially still cheaper than restoring from backups.
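
The delay itself is just a setting on the secondary: the copy job runs as usual, but the restore job waits the configured number of minutes before applying each file. A sketch, with a hypothetical database name and delay:

    -- Hold each log backup for four hours before the restore job applies it,
    -- leaving a window to recover from an accidental DROP on the primary.
    EXEC master.dbo.sp_change_log_shipping_secondary_database
        @secondary_database = N'SalesDB',
        @restore_delay = 240;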

Log Shipping Tests With dbachecks

Sander Stad has a bonus post in his log shipping series:

We want everyone to know about this module. Chrissy LeMaire reached out to me and asked if I could write some tests for the log shipping part and I did.

Because I wrote the log shipping commands for dbatools, I was excited about creating a test that could be implemented into this module for everyone to use.

That test is also quite easy to use, as Sander demonstrates.
