Press "Enter" to skip to content

Category: HA / DR

So You Want To Monitor Kafka…

Ben Summer reviews a Kafka catastrophe and explains how to avoid it:

Here at New Relic, the Edge team is responsible for the pipelines that handle all the data coming into our company. We were an early adopter of Apache Kafka, which we began using to power this data pipeline. Our initial results were outstanding. Our cluster handled any amount of data we threw at it; it showed incredible fault tolerance and scaled horizontally. Our implementation was so stable for so long that we basically forgot about it. Which is to say, we totally neglected it. And then one day we experienced a catastrophic incident.

Our main cluster seized up. All graphs, charts, and dashboards went blank. Suddenly we were totally in the dark — and so were our customers. The incident lasted almost four hours, and in the end, an unsatisfactory number of customers experienced some kind of data loss. It was an epic disaster. Our Kafka infrastructure had been running like a champ for more than a year and suddenly it had ground to a halt.

This happened several years ago, but to this day we still refer to the incident as the “Kafkapocalypse.”

Ben has a couple of interesting stories and some good rules of thumb for maintaining a Kafka cluster.


Building A Disaster Recovery Checklist

Robert Davis has a checklist for you:

If a disaster struck tomorrow, are you ready for it? Are you sure you thought of everything? I cannot count the number of times something happened, and I thought that I should have been able to anticipate that. It can be a big benefit to get another viewpoint on your disaster plan. No matter how well you think you’ve covered yourself, someone who thinks about things differently than you do can often see something you don’t.

Here’s a checklist of things you might not have considered. This can be your other set of eyes.

This is not a to-do list or a process guide when disaster strikes; rather, it’s a checklist of things you should have in place before a disaster occurs.


Planning For A Good Disaster

Randolph West loves it when a (disaster recovery) plan comes together:

My first question to the attendee was what the Service Level Agreement (SLA) says. As we know from previous posts, a disaster recovery strategy is dictated by business requirements, not technical ones. The Recovery Point Objective (how much data loss is acceptable) and Recovery Time Objective (how much time there is to bring everything back) will guide my proposal.

He told me that the SLA was 24 hours, so I started writing on the white board while I was thinking aloud.

On average, a fast storage layer can read and write around 200 MB/s, so it would take 5.12 seconds to write 1 GB, or just under 85 minutes to restore the database back to disk, not counting the log file or crash recovery. I never assume that Instant File Initialization is enabled, plus I won’t know how big the transaction log file is going to be, and that needs to be zeroed out.

I like this post a lot because it lets us get a glimpse into Randolph’s thought process and gives some hard numbers that you should have in mind.
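As a rough sketch of the same back-of-the-envelope math: the quoted figure of just under 85 minutes implies roughly 1 TB of data at 200 MB/s. The sizes below are illustrative placeholders, not numbers from the post.

    -- Back-of-the-envelope restore estimate, assuming ~1 TB of data and ~200 MB/s
    -- of storage throughput (both numbers are illustrative).
    DECLARE @DataSizeMB   decimal(18,2) = 1000000;  -- ~1 TB of data files to restore
    DECLARE @LogSizeMB    decimal(18,2) = 100000;   -- log file that must be zeroed out
    DECLARE @ThroughputMB decimal(18,2) = 200;      -- sequential read/write speed in MB/s

    SELECT CAST(@DataSizeMB / @ThroughputMB / 60.0 AS decimal(10,1)) AS DataRestoreMinutes,
           CAST(@LogSizeMB  / @ThroughputMB / 60.0 AS decimal(10,1)) AS LogZeroingMinutes,
           CAST((@DataSizeMB + @LogSizeMB) / @ThroughputMB / 60.0
                AS decimal(10,1))                                    AS TotalMinutes;  -- still excludes crash recovery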


Backup And Recovery With Hadoop

Tim Spann explains how to perform backup/recovery operations and disaster recovery using Hadoop:

You can mirror datasets with Falcon. Mirroring is a very useful option for enterprises and is well-documented. This is something that you may want to get validated by a third party. See the following resources:

Tim shows several recovery options, making this useful reading if you use Hadoop as a source system for anything (or if you can’t afford to have it down for two or three days while you recover data).


Log Shipping On Linux

Andrew Pruski digs into how to set up log shipping for SQL Server on Linux:

What I’m going to do is set up two instances of SQL Server running on Linux and log ship one database from one to the other. So the first thing I did was get two VMs running Ubuntu 16.04.1 LTS, which can be downloaded from here.

Once both servers were set up (remember to enable SSH), I then went about getting SQL Server installed. I’m not going to go through the install in this post, as the process is fully documented here. Don’t forget to also install the SQL Tools; the full guide is here.

Read on for the guide, but also be sure to read his disclaimer.
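Once both instances are up, the core log shipping loop itself is just backup, copy, restore. Here’s a minimal sketch under assumptions of my own (the database name and Linux paths are placeholders, and the file copy between servers via scp or a shared mount is left out); Andrew’s post covers the real setup:

    -- On the primary (placeholder database name and paths):
    BACKUP DATABASE [LogShippedDB]
        TO DISK = N'/var/opt/mssql/backup/LogShippedDB_full.bak' WITH INIT;
    BACKUP LOG [LogShippedDB]
        TO DISK = N'/var/opt/mssql/backup/LogShippedDB_log.trn' WITH INIT;

    -- Copy the files to the secondary, then on the secondary:
    RESTORE DATABASE [LogShippedDB]
        FROM DISK = N'/var/opt/mssql/backup/LogShippedDB_full.bak' WITH NORECOVERY;
    RESTORE LOG [LogShippedDB]
        FROM DISK = N'/var/opt/mssql/backup/LogShippedDB_log.trn' WITH NORECOVERY;
    -- Schedule the log backup / copy / restore cycle with SQL Agent jobs or cron.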


Asynchronous Mirroring And Asynchronous AGs

Tara Kizer contrasts asynchronous database mirroring with asynchronous Availability Groups:

ASYNCHRONOUS DATABASE MIRRORING

For asynchronous Database Mirroring, all we need is two servers: the principal at the primary site and the secondary at the DR site. Set up async mirroring between the two, and you’re done. If the secondary server goes down, production still continues. The transaction log isn’t clearing when the log backups occur because the principal still needs to send those log records to the secondary. As long as you have enough disk space where the transaction log resides to support this until the secondary server comes back online, production still continues. Of course, if you run out of disk space, users will start receiving errors. But that can take some time, and oftentimes that’s enough time to get the secondary back online.

Availability Groups are more difficult to implement than database mirroring, but the flipside is that they’re much more powerful.
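If you run async mirroring, it’s worth watching for the situation Tara describes, where the principal’s log can’t clear because the partner is unreachable. A quick monitoring sketch (the database name is a placeholder):

    -- 'DATABASE_MIRRORING' here means the log is waiting on the mirror partner.
    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'YourMirroredDB';

    -- Log size and percent used for every database on the instance.
    DBCC SQLPERF (LOGSPACE);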


Running CheckDB On Log Shipping Subscribers

Brent Ozar shows how to support running DBCC CHECKDB on a log shipping subscriber:

At first glance, if you set up log shipping the way most folks do, it looks like you can’t run CHECKDB on a subscriber.

On SQL Server Enterprise Edition, you can try working around it by creating a database snapshot – but that won’t work either.

Here’s the trick: your database needs to be in standby mode instead of norecovery. When you run restores with the standby option, you’re able to query the database in between restores.

This doesn’t obviate the need for running CHECKDB on a primary, but it can offload some of that work some of the time.
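A minimal sketch of the standby-mode restore plus CHECKDB that Brent describes; the database name and file paths are placeholders:

    -- On the log shipping secondary: restore with STANDBY instead of NORECOVERY
    -- so the database stays readable between restores.
    RESTORE LOG [LogShippedDB]
        FROM DISK = N'/var/opt/mssql/backup/LogShippedDB_log.trn'
        WITH STANDBY = N'/var/opt/mssql/backup/LogShippedDB_undo.tuf';

    -- The database is now read-only and queryable, so CHECKDB can run against it.
    DBCC CHECKDB (N'LogShippedDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;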


Dealing With Azure Service Disruptions

Vincent-Philippe Lauzon discusses various potential solutions to Azure service disruptions:

What does happen in Azure is service disruption. Sometimes those are due to hardware failures, but most of the time they are a software problem: an upgrade to Azure software (Microsoft’s software) gone wrong. They happen occasionally and are typically short-lived, but if business continuity is a hard requirement, those are the ones you should protect your solution against.

In this article I’ll cover resilience for virtual machines. I’ll assume that you are using Azure Traffic Manager or another DNS service to fail over the traffic from a primary to a secondary region.

He covers “stateless” VMs (like web servers or app servers) as well as “stateful” VMs (like database servers) and explains disaster recovery options for each.


Preparing For Disaster Recovery

Kendra Little has a 30-minute video and explanation for how to prepare for a failover event:

The fact that you’re thinking about this is great!

You’re right, there are two major types of failovers that you have to think about:

  • Planned failover, when you can get to the original production system (at least for a short time)
  • Unplanned failover, when you cannot get to it

Even when you’re doing a planned failover, you don’t have time to go in and script out settings and jobs and logins and all that stuff.

Timing is of the essence, so you need minimal manual actions.

And you really should have documentation so that whoever is on call can perform the failover, even if they aren’t you.

The short answer is: test, test, test. Test where it can’t hurt, and then test where it can. But do read/watch the whole thing.
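If you’re putting that documentation together, a quick inventory of the server-level objects that don’t travel with the databases is a decent starting point. These queries are my own sketch, not from Kendra’s post:

    -- Logins that need to exist (with matching SIDs) on the DR server.
    SELECT name, type_desc
    FROM   sys.server_principals
    WHERE  type IN ('S', 'U', 'G')      -- SQL logins, Windows logins, Windows groups
      AND  name NOT LIKE '##%';         -- skip internal certificate-based principals

    -- SQL Agent jobs that have to be recreated (and possibly disabled) on the DR side.
    SELECT name, enabled
    FROM   msdb.dbo.sysjobs;

    -- Linked servers are another commonly forgotten dependency.
    SELECT name
    FROM   sys.servers
    WHERE  is_linked = 1;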


Availability Groups And Failover Cluster Instances

Derik Hammer digs into a more complex architecture for HA+DR:

The Availability Group will handle the data synchronization between the data centers. Asynchronous mode is recommended due to potential network latency hindering primary site performance. With the combination of these two features, you meet HA with the FCI’s automatic failover locally and DR with manual failover of the Availability Group between sites.

Derik’s conclusion is important:  this isn’t the type of thing a brand new accidental DBA should try to build; there are a lot of moving parts here.  But if you want a robust solution and have the support people available to handle the relative complexity, this is a good option.
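For reference, pointing the cross-site replica at asynchronous commit (and manual failover, which is all an FCI-hosted replica supports) looks something like this; the availability group and replica names are placeholders:

    ALTER AVAILABILITY GROUP [ProdAG]
        MODIFY REPLICA ON N'DRNODE\FCIINSTANCE'
        WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

    ALTER AVAILABILITY GROUP [ProdAG]
        MODIFY REPLICA ON N'DRNODE\FCIINSTANCE'
        WITH (FAILOVER_MODE = MANUAL);  -- automatic AG failover isn't supported for FCI replicas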
