Press "Enter" to skip to content

Category: Availability Groups

High Availability Changes in SQL Server 2016

Martin Surasky looks at what SQL Server 2016 changed with respect to High Availability options:

AlwaysOn Availability Groups, first introduced in SQL Server 2012, are a feature that is conceptually similar to database mirroring. I’m going to assume you already know what AlwaysOn Availability Groups are in general, their main purpose, and how they differ (in essential aspects, at least) from other replication technologies such as Database Mirroring.

In SQL Server 2014, the most significant enhancement to availability groups was the increase in the number of supported secondary replicas from four to eight. SQL Server 2016 includes a number of new enhancements.

The biggest thing about it, as I recall, was stability: I wouldn’t have recommended too many places go into production with Availability Groups in 2012, but by 2016, many of the biggest bugs were ironed out.
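
As a quick aside, if you want to see how many replicas an existing Availability Group is actually carrying, the AG catalog views make it a one-query check. This is a minimal sketch; the group and replica names will naturally vary by environment:

    -- List each Availability Group and its replicas.
    SELECT ag.name AS availability_group,
           ar.replica_server_name,
           ar.availability_mode_desc,
           ar.failover_mode_desc
    FROM sys.availability_groups AS ag
    JOIN sys.availability_replicas AS ar
        ON ar.group_id = ag.group_id
    ORDER BY ag.name, ar.replica_server_name;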

Creating a Failover Cluster Instance with Shared Storage

Ryan Adams wraps up a video series on setting up a SQL Server lab environment:

You are going to create a SQL Server Failover Cluster Instance in Part 4 of our series on how to build a SQL Cluster Lab. The FCI will only be installed on Node1 and Node2. FCIs require shared storage, so you will make your domain controller an iSCSI target. Lastly, you will create your FCI using the iSCSI drives you presented to the cluster.

Click through for links to the entire series.

Building a SQL Cluster: Availability Groups

Ryan Adams continues a series on building a SQL Server cluster as a lab:

You are going to create a multi-subnet Availability Group in Part 3 of our series on how to build a SQL Cluster Lab. First, you grant rights to the Cluster Name Object in Active Directory and then install your AG. You will also ensure the AG endpoints are using the dedicated network created for them in Part 1. Here are links to the other articles in the series.

Click through for the video.
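
One step worth highlighting from the video is binding the AG endpoints to the dedicated replication network. As a minimal sketch of what that looks like in T-SQL (the IP address and port below are placeholders, not the values Ryan uses):

    -- Create the AG endpoint and bind it to the dedicated replication IP.
    -- 10.10.10.11 and port 5022 are placeholder values for this sketch.
    CREATE ENDPOINT [Hadr_endpoint]
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (10.10.10.11))
        FOR DATABASE_MIRRORING (ENCRYPTION = REQUIRED ALGORITHM AES, ROLE = ALL);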

High Availability Announcements from Microsoft

Allan Hirt looks at a couple of announcements from Microsoft:

I’m going to discuss what I feel are the biggest game changers. I knew licensing was changing as I had conversations with Microsoft around this months ago. I was not sure what the final result was going to be, but I’m fairly pleased. Is it perfect? No, but it’s much better than it was.

You’ll definitely want to read Allan’s thoughts on Microsoft’s SQL Server licensing changes, as well as a private preview of Azure Shared Disks.

Plan Forcing and Availability Groups

Milos Radivojevic shows an example where plan forcing might not work:

In my previous post about Query Store, you saw that persistence of forced plans does not imply that they are used. This post brings another example where forced plans are still in the system, with no failures, but they are not applied against a desired query.

For this demo, we need a database that is part of the availability group.

Click through to see why a failover might cause your forced plans to stop being applied.
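
For context, forcing a plan in Query Store is a single procedure call, and the catalog views report whether forcing has failed; this is a minimal sketch with placeholder IDs, and Milos’s point is precisely that these failure counters can stay at zero even when the forced plan is not being used:

    -- Force a specific plan for a query (the IDs are placeholders for this sketch).
    EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

    -- Check which plans are forced and whether forcing has ever failed.
    SELECT plan_id, query_id, is_forced_plan,
           force_failure_count, last_force_failure_reason_desc
    FROM sys.query_store_plan
    WHERE is_forced_plan = 1;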

MultiSubNetFailover and Availability Groups

William Assaf strongly advises setting MultiSubNetFailover=True in connection strings when using Availability Groups:

I received an email from a client who is having issues with third-party applications connecting to their three-subnet SQL Server Availability Group. After an exchange with Microsoft Support, they discovered that the applications weren’t specifying MultiSubNetFailover = True in their connection strings. As a result, because RegisterAllProvidersIP = 1 in the cluster, connections were randomly experiencing high latency upon connecting, as client-side DNS queries over time had a 66% chance of returning the wrong IP from the listener.

They set RegisterAllProvidersIP = 0, but before you take that as advice, keep reading.

Read on to learn why this is not a first-best solution.
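
To see why clients can get several answers from DNS in the first place, you can list every IP address registered for a listener; the fix itself, MultiSubNetFailover=True, is simply a keyword in the client connection string rather than a server-side setting. A minimal sketch of that listener query:

    -- List every IP address registered for each AG listener.
    -- With RegisterAllProvidersIP = 1, all of these are published in DNS at once.
    SELECT agl.dns_name, agl.port,
           ip.ip_address, ip.state_desc
    FROM sys.availability_group_listeners AS agl
    JOIN sys.availability_group_listener_ip_addresses AS ip
        ON ip.listener_id = agl.listener_id;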

Building a SQL Cluster Lab

Ryan Adams has started a series on building a Windows cluster in Hyper-V and layering SQL Server on top of it:

Before we start to build a SQL Cluster Lab, let’s look at the desired result. You will build a 3-node cluster replicating an environment that has two data centers. As a result, the first two nodes will reside in data center 1 and the third node in data center 2. We are creating this architecture because it is the most common architecture I see for Availability Groups. It is multi-subnet and can solve for both HA and DR.

You will notice the domain controller in the middle. That piece is certainly not representative of a production environment. However, we are using it in our lab for several different functions and being a router is one of them.

Part 1 is the only part which is currently up, but this looks like it will be a good one. Go buy a couple more sticks of RAM for your PC and get reading.

Making SQL Agent Jobs AG-Aware

Stuart Moore shows how you can use dbatools to make SQL Agent jobs Availability Group-aware:

What do I mean by Availability Group aware? When running on an Availability Group, one SQL Server instance ‘owns’ the database at any point in time, but the SQL Agent jobs have to be replicated across all of the instances in the cluster. So you want to make sure that your SQL Server Agent jobs only do work on the instance that currently owns the Availability Group.

Doing this is pretty simple. Below is a piece of T-SQL that checks if the current SQL Server instance is the primary instance in the AG. If it isn’t, then we exit with an error.
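
The full script is in Stuart’s post; a minimal sketch of that style of check, using a hypothetical database named SalesDB that belongs to the AG, would look something like this:

    -- First job step: bail out unless this instance currently owns the AG database.
    -- 'SalesDB' is a placeholder database name for this sketch.
    IF ISNULL(sys.fn_hadr_is_primary_replica(N'SalesDB'), 0) <> 1
    BEGIN
        RAISERROR(N'This instance is not the primary replica for SalesDB - exiting.', 16, 1);
    END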

Read on to see how, and how you can use dbatools to automate this work.

Upgrading Servers in an Availability Group

Thomas Rushton has a checklist for upgrading servers connected by an Availability Group:

This is a checklist that was put together and followed for the upgrade of a pair of physical SQL Server 2012 servers hosting a single availability group containing several terabytes of data, all with minimal downtime.

The availability group was configured with synchronous commit and automatic failover.

Minimizing downtime here is great, but it’s not automatic: you still need to do work on your end to get this right.
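
The heart of a rolling upgrade like this is a controlled manual failover once the patched secondary has caught up. As a minimal sketch (the AG and replica names are placeholders, and this is only a fragment, not Thomas’s actual checklist):

    -- On the current primary: switch the replica to manual failover for the upgrade window.
    ALTER AVAILABILITY GROUP [MyAg]
        MODIFY REPLICA ON N'NODE1' WITH (FAILOVER_MODE = MANUAL);

    -- On the upgraded, synchronized secondary: take over as the new primary.
    ALTER AVAILABILITY GROUP [MyAg] FAILOVER;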
