Category: HA / DR

High Availability Announcements from Microsoft

Allan Hirt looks at a couple of announcements from Microsoft:

I’m going to discuss what I feel are the biggest game changers. I knew licensing was changing as I had conversations with Microsoft around this months ago. I was not sure what the final result was going to be, but I’m fairly pleased. Is it perfect? No, but it’s much better than it was.

You’ll definitely want to read Allan’s thoughts on Microsoft’s SQL Server licensing changes, as well as a private preview of Azure Shared Disks.


MultiSubNetFailover and Availability Groups

William Assaf strongly advises setting MultiSubNetFailover=True in connection strings when using Availability Groups:

I received an email from a client who is having issues with third-party applications connecting to their three-subnet SQL Server Availability Group. After an exchange with Microsoft Support, they discovered that the applications weren’t specifying MultiSubNetFailover = True in their connection strings. As a result, because RegisterAllProvidersIP = 1 in the cluster, connections were randomly experiencing high latency upon connecting, as client-side DNS queries over time had a 66% chance of returning the wrong IP from the listener.

They set RegisterAllProvidersIP = 0, but before you take that as advice, keep reading.

Read on to learn why this is not a first-best solution.
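
To make this concrete, here’s a hedged sketch of both sides of the issue in PowerShell: the connection string fix William recommends, plus the cluster-side workaround the client applied. The listener, database, and cluster resource names are all placeholders.

```powershell
# Client-side fix: MultiSubnetFailover=True tells the driver to try all of
# the listener's registered IPs in parallel instead of waiting for each
# wrong one to time out. "AGListener1" and "AppDb" are placeholder names.
$connectionString = "Server=tcp:AGListener1,1433;Database=AppDb;" +
    "Integrated Security=SSPI;MultiSubnetFailover=True"

$connection = New-Object System.Data.SqlClient.SqlConnection -ArgumentList $connectionString
$connection.Open()
$connection.Close()

# The cluster-side workaround the client applied (read William's post before
# copying it): register only the active IP in DNS. The resource name here
# is a placeholder; use the listener resource's actual name.
Get-ClusterResource -Name "AGListener1" |
    Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0
```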


Building a SQL Cluster Lab

Ryan Adams has started a series on building a Windows cluster in Hyper-V and layering SQL Server on top of it:

Before we start to build a SQL Cluster Lab, let’s look at the desired result. You will build a 3-node cluster replicating an environment that has two data centers. As a result, the first two nodes will reside in data center 1 and the third node in data center 2. We are creating this architecture because it is the most common architecture I see for Availability Groups. It is multi-subnet and can solve for both HA and DR.

You will notice the domain controller in the middle. That piece is certainly not representative of a production environment. However, we are using it in our lab for several different functions and being a router is one of them.

Part 1 is the only part that’s currently up, but this looks like it will be a good one. Go buy a couple more sticks of RAM for your PC and get reading.
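
If you’re following along at home, the VM scaffolding goes up quickly with the Hyper-V PowerShell module. This is only a minimal sketch under assumed names, not Ryan’s build script; the VM names, memory sizes, paths, and switch name are all placeholders.

```powershell
# Sketch: create the four lab VMs (a domain controller/router plus three
# cluster nodes). Every name, size, and path below is a placeholder.
foreach ($name in "DC1", "SQLNode1", "SQLNode2", "SQLNode3") {
    New-VM -Name $name `
           -Generation 2 `
           -MemoryStartupBytes 4GB `
           -NewVHDPath "D:\HyperV\$name\$name.vhdx" `
           -NewVHDSizeBytes 60GB `
           -SwitchName "SQLLabSwitch"
}
```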


Accelerated Database Recovery

Andy Mallon explains the concept of Accelerated Database Recovery:

Accelerated Database Recovery (ADR) is a new feature intended to speed up the recovery process, which can be very slow, particularly when there are long-running, large transactions. ADR is not just for recovery after a crash, but also helps in other scenarios where the transaction log needs to be recovered, including Availability Group secondary redo and Failover Cluster Instance failovers.

This is one of the most interesting new features in SQL Server 2019.
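
If you want to try it out, ADR is enabled per database with a single ALTER DATABASE option in SQL Server 2019 and later. Here’s a minimal sketch using dbatools’ Invoke-DbaQuery; the instance and database names are placeholders.

```powershell
# Enable Accelerated Database Recovery for one database (SQL Server 2019+).
# Instance and database names are placeholders.
Invoke-DbaQuery -SqlInstance "sql2019a" -Query @"
ALTER DATABASE [AppDb] SET ACCELERATED_DATABASE_RECOVERY = ON;
"@
```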


Disaster Recovery With Kafka Deployments

Yeva Byzek walks us through a disaster recovery scenario when running Apache Kafka:

Imagine:

Disaster strikes—catastrophic hardware failure, software failure, power outage, denial of service attack or some other event causes one datacenter with an Apache Kafka® cluster to completely fail. Yet Kafka continues running in another datacenter, and it already has a copy of the data from the original datacenter, replicated to and from the same topic names. Client applications switch from the failed cluster to the running cluster and automatically resume data consumption in the new datacenter based on where it left off in the original datacenter. The business has minimized downtime and data loss resulting from the disaster, and continues to run its mission critical applications.

Ultimately, enabling the business to continue running is what disaster recovery planning is all about, as datacenter downtime and data loss can result in businesses losing revenue or entirely halting operations. To minimize the downtime and data loss resulting from a disaster, enterprises should create business continuity plans and disaster recovery strategies.

Distributed data sources can still succumb to disaster, and many of the same policies that people learn when working with relational databases apply to things like Kafka as well.


Testing Azure SQL Database Failover

Arun Sirpal shows us how easy it is to fail over Azure SQL Database to another region:

So what happens now if I connect to the read/write endpoint? (I test this via SSMS)

The dreaded IP address / create a new firewall rule message. Why? Well, this setup utilized a “server”-level firewall rule, and the server in the US did NOT have the IP address mapped in; you can see from the screenshot in the post that there are no firewall rules configured.

Fixing this is easy: you could just add the IP address on the secondary server as another server-level rule, but you should seriously consider using a database-level firewall rule instead, as that setup will get replicated to the secondary server, making the failover experience smoother.

Read on to see how to set this up.
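
For reference, a database-level rule is created with the sp_set_database_firewall_rule procedure, run inside the user database itself, which is why it travels to the geo-secondary along with the data. A minimal sketch; the server, database, rule name, and IP address are placeholders.

```powershell
# Create a database-level firewall rule in Azure SQL Database. Because the
# rule is stored in the database, it replicates to the geo-secondary.
# Server, database, credential, rule name, and IP are placeholders.
$cred = Get-Credential
Invoke-DbaQuery -SqlInstance "myserver.database.windows.net" `
    -SqlCredential $cred -Database "AppDb" -Query @"
EXECUTE sp_set_database_firewall_rule
    @name = N'AllowAppServer',
    @start_ip_address = '203.0.113.10',
    @end_ip_address   = '203.0.113.10';
"@
```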


Simplified Disaster Recovery With dbatools

Chrissy LeMaire shows how you can make DR a lot easier with dbatools:

When we talk about Disaster Recovery or DR, it’s often coupled with the term High Availability or HA. Here are some definitions from my graduate course on HADR.

high availability

  • Deals with minor outages, and failover solutions are automated
  • The goal is to restore full system functionality in a short time

disaster recovery

  • Deals with major outages such as natural and man-made disasters
  • Focuses on manual processes and procedures to restore systems back to their original state
  • Characterized by a phased approach to restoring the primary site

In the context of SQL Server, HA would be Availability Groups (AG), Failover Cluster Instances (FCI), Log Shipping, and more. I won’t be addressing High Availability in this post, however.

Chrissy has a demo of everything in action, including running a series of tests to ensure that your DR site actually has everything.
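
To give a flavor of the approach, here is a hedged two-command sketch, not Chrissy’s exact demo; the instance names and network share are placeholders.

```powershell
# Sketch: seed the DR instance, then verify it, using dbatools.
# Instance names and the network share are placeholders. Older dbatools
# versions call -SharedPath -NetworkShare instead.
Start-DbaMigration -Source "sql-prod" -Destination "sql-dr" `
    -BackupRestore -SharedPath "\\fileserver\dr-migration"

# Prove the backups actually restore: this restores each database's last
# backup chain to the destination as a test copy and runs DBCC CHECKDB.
Test-DbaLastBackup -SqlInstance "sql-prod" -Destination "sql-dr"
```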


HA/DR With Azure SQL Database

James Serra explains the High Availability and Disaster Recovery options available when using Azure SQL Database:

High Availability (HA) – Keeping your database up 100% of the time with no data loss during common problems.  Redundancy at system level, focus on failover, addresses single predictable failure, focus is on technology.  SQL Server IaaS would handle this with:

  • Always On Failover cluster instances
  • Always On Availability Groups (in same Azure region)
  • SQL Server data files in Azure

Disaster Recovery (DR) – Protection if major disaster or unusual failure wipes out your database.  Use of alternate site, focus on re-establishing services, addresses multiple failures, includes people and processes to execute recovery.  Usually includes HA also.  SQL Server IaaS would handle this with:

  • Log Shipping
  • Database Mirroring
  • Always On Availability Groups (different Azure regions)
  • Backup to Azure

Click through for more details.


Backing Up Cloudera Search Data

Eva Nahari explains different techniques to back up Cloudera Search data, as well as how to set up disaster recovery:

If you have the raw data in HDFS (which most do, and which you should!), the most straightforward way to have a hot-warm disaster recovery setup is to use our Backup and Disaster Recovery tool. It allows you to set up regular incremental updates between two clusters. You then have the option of using MapReduce Indexer or Spark Indexer to regularly index the raw data in your recovery cluster and append to a running Solr service in that same recovery cluster. This way you can easily switch over from one Solr service to the backup Solr service if you experience downtime in the original cluster.

The lag would depend on the network between the clusters and how frequently you transfer data between them. To some extent it would also depend on how much time you need (i.e., how many resources you have available) to complete the MapReduce or Spark indexing workload and append it (using the Cloudera Search GoLive feature) into Solr’s active indexes on the recovery site.

Read on for several options.


Log Shipping With dbatools

Sander Stad has started a series on using dbatools to help set up log shipping.  Part one walks through the basics and setup:

Technically you don’t need multiple servers to set up log shipping. You can set it up with just a single SQL Server instance. In an HA solution this wouldn’t make sense, but technically it’s possible.

Having a separate server acting as the monitoring server ensures that when one of the servers goes down, the logging of the actions still takes place.

Having a separate network share for both the backup and copy makes it easier to set up security and decide which accounts can access the backups. The backup share needs to be readable and writable by the primary instance and readable by the secondary instance.
The copy share needs to be accessible and writable for only the secondary instance.
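
In dbatools terms, everything described above collapses into a single cmdlet call. A minimal sketch with placeholder instance names and shares; check Get-Help Invoke-DbaDbLogShipping for the full parameter set.

```powershell
# Sketch: set up log shipping for one database with dbatools.
# Instance names and UNC paths are placeholders.
Invoke-DbaDbLogShipping -SourceSqlInstance "sql1" `
    -DestinationSqlInstance "sql2" `
    -Database "AppDb" `
    -BackupNetworkPath "\\fileserver\logship" `
    -CopyDestinationFolder "\\sql2\logship-copy" `
    -GenerateFullBackup -CompressBackup -Force
```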

Part two is all about checking the status of a log shipping implementation:

Monitoring your log shipping processes is important. You need the synchronization status of the log shipped databases.

The log shipping process consists of three steps: Backup, Copy, and Restore. Log shipping tracks the status of these processes.
It registers the last transaction log backup, the last file copied, and the last file restored. It also keeps track of the time since the last backup, copy, and restore.

But that’s not all. Log shipping also checks if the threshold for the backup and restore has been exceeded.
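
The status check from part two is nearly a one-liner as well. A sketch, assuming a placeholder instance name and that the Status property reads “All OK” when healthy, as in Sander’s examples.

```powershell
# Sketch: check log shipping health and surface any database whose
# backup, copy, or restore threshold has been exceeded.
Test-DbaDbLogShipStatus -SqlInstance "sql2" |
    Where-Object { $_.Status -ne "All OK" }
```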

Log shipping is an underrated piece of the HA/DR puzzle, and Sander shows how easy dbatools makes it to configure.
