Press "Enter" to skip to content

Category: Replication

Slow Merge Replication Initialization

Gonzalo Bissio covers one reason why initializing merge replication might be slow:

For a week (yes… one week) they had been waiting for the replication re-initialization to finish, after introducing some changes on the system that required reinitializing replication. The rate of records between the publisher and the subscribers was too low (20 records per second). Since this application is used globally, they write records in all of the regions (and merge replication then manages them).

Read on to see what Gonzalo’s company had going on and how they fixed it.
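If you just want a back-of-the-envelope number for that delivery rate, a count taken twice at the subscriber will do. This is only a rough spot check, not anything from Gonzalo's post, and dbo.Orders is a placeholder for one of the published articles.

-- Rough throughput check, run at the subscriber: count rows in a replicated
-- table, wait a minute, count again, and work out rows per second.
-- dbo.Orders is a placeholder article name.
DECLARE @before bigint, @after bigint;

SELECT @before = COUNT_BIG(*) FROM dbo.Orders;
WAITFOR DELAY '00:01:00';
SELECT @after = COUNT_BIG(*) FROM dbo.Orders;

SELECT (@after - @before) / 60.0 AS rows_per_second;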

Merge Replication on Linux

Jignesh Raiyani shows how you can set up merge replication on SQL Server on Linux:

In this article, we will discuss how to deploy SQL Server Merge Replication in a Linux environment. Many SQL Server features are not available on Linux, but replication is one that Microsoft does support there. Before starting anything, let’s address this question: what is Merge Replication?

Merge Replication is a data synchronization process from one database (the Publisher) to other databases (the Subscribers) and vice versa. The synchronization is tracked by the distributor (the distribution database), which manages data synchronization between the Publisher and the Subscribers.

Merge replication is also an unending nightmare of pain, but you do what you gotta do.
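For reference, the T-SQL for standing up a merge publication is the same on Linux as on Windows. The sketch below is not Jignesh's exact script; the database, publication, article, and subscriber names are placeholders, and a real deployment also needs a configured distributor and agent credentials.

-- Minimal merge publication sketch (placeholder names throughout),
-- run in the publication database on the publisher.
EXEC sp_replicationdboption
    @dbname  = N'Sales',
    @optname = N'merge publish',
    @value   = N'true';

EXEC sp_addmergepublication
    @publication = N'SalesMerge',
    @retention   = 14,
    @sync_mode   = N'native';

-- Snapshot Agent for the publication (credentials omitted here).
EXEC sp_addpublication_snapshot @publication = N'SalesMerge';

EXEC sp_addmergearticle
    @publication   = N'SalesMerge',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders',
    @type          = N'table';

-- Push subscription to a (placeholder) subscriber.
EXEC sp_addmergesubscription
    @publication       = N'SalesMerge',
    @subscriber        = N'LINUXSUB01',
    @subscriber_db     = N'Sales',
    @subscription_type = N'push';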

Tracking Transactional Replication Status

Pamela Mooney has a script to validate that transactional replication is up to date:

You may sometimes have reports or other processes that are dependent on transactional replication being current.  If that is the case, you will probably need a mechanism to check and see if, in fact, replication is caught up.  Here is my solution to that, without having to resort to Replication Monitor all the time. The bonus?  This could be inserted into conditional workflows to help streamline processes (i.e., validate publications before moving on to Step 2 of a process).

To do this, I chose to make three stored procedures: one to check all publications on a server, one to check just one publication, and one central sproc to rule them all.  You simply execute the master stored procedure, and based on the parameters you feed it, it decides which of the other two to execute.

Read on for those scripts.
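If all you need is a quick latency spot check rather than Pamela's full workflow, tracer tokens are the lightweight option. This is a sketch with a placeholder publication name, run in the publication database on the publisher.

-- Post a tracer token and, after giving the agents a moment, see how long
-- each hop took. N'SalesPub' is a placeholder publication name.
DECLARE @token int;
EXEC sys.sp_posttracertoken
    @publication     = N'SalesPub',
    @tracer_token_id = @token OUTPUT;

WAITFOR DELAY '00:00:30';

EXEC sys.sp_helptracertokenhistory
    @publication = N'SalesPub',
    @tracer_id   = @token;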

Replicating ACID Tables in Hive

Ashutosh Bapat shows off some of the improvements in Apache Hive replication:

Transactional tables in Hive support ACID properties. Unlike non-transactional tables, data read from transactional tables is transactionally consistent, irrespective of the state of the database. Of course, this imposes specific demands on the replication of such tables, which is why Hive replication was designed with the following assumptions:

1. A replicated database may contain more than one transactional table with cross-table integrity constraints.
2. A target may host multiple databases, some replicated and some native to the target. The databases replicated from the same source may have transactional tables with cross-database integrity constraints.
3. A user should be able to run read-only workloads on the target and should be able to read transactionally consistent data.
4. Since in Hive a read-only transaction requires a new transaction-id, the transaction-ids on the source and the target of replication may differ. Thus transaction-ids cannot be used for reading transactionally consistent data across the source and the replicated target.

Read on to learn why these assumptions are in place and what they mean for replication.

Tearing Down and Rebuilding Replication

Pamela Mooney takes us through tearing down replication, restoring a database, and rebuilding transactional replication with scripts:

If you use replication, you have had to restore a replicated database; you have doubtless been paged to do it. You have experienced the ineffable joy of tearing down replication-dependent indexed views (if you have them), blowing away replication, doing the restore, putting replication and indexing back together again, and finally redeploying your indexed views.  I know I have.

In fact, I’ve done it enough times that I didn’t want to do it anymore. So, you may ask, did I go to a different modality of replicating my data?  Did I go to Availability Groups or mirroring instead?  No.  I actually like replication.  It’s invaluable when you need to write code around real-time data (especially from a third party database), but you aren’t able to index the original copy.  It’s been around for a long time and is well vetted, and pretty forgiving, once you understand how it works.  So, no need to reinvent the wheel. I decided to automate replication instead.

This is specific to transactional replication. There’s a whole ‘nother kettle of fish for merge replication.
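For context, the manual version of that teardown and rebuild looks roughly like the sketch below. This is not Pamela's automation: the names are placeholders, and it skips the indexed-view work, the snapshot agent, and the agent security that a real rebuild needs.

-- 1. Drop the subscription and publication before the restore (placeholder names).
EXEC sp_dropsubscription @publication = N'SalesPub', @subscriber = N'SUB01', @article = N'all';
EXEC sp_droppublication  @publication = N'SalesPub';

-- 2. Restore the database; replication metadata is not preserved by default.
-- RESTORE DATABASE Sales FROM DISK = N'C:\Backups\Sales.bak' WITH REPLACE;

-- 3. Recreate the publication, article(s), and subscription afterward.
EXEC sp_addpublication @publication = N'SalesPub', @repl_freq = N'continuous', @status = N'active';
EXEC sp_addarticle
    @publication   = N'SalesPub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';
EXEC sp_addsubscription
    @publication       = N'SalesPub',
    @subscriber        = N'SUB01',
    @destination_db    = N'Sales',
    @subscription_type = N'push';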

Transactional Replication Tips

Nate Johnson has a few things which might make SQL Server transactional replication easier for you:

For what seems like years, I’ve bemoaned the fact that SQL Transactional Replication doesn’t come with a “Just Trust Me” option. I’ll explain more about what I mean in a moment. The other thing I’ve complained about is that there’s no “Pause” button — which is not entirely accurate, since obviously you could just stop the distribution and subscription agents. But specifically what I mean is, it’s not easy to ‘put it on hold so you can make some schema changes to one of the tables that’s being replicated’, and then easily “Resume” it after you’re done with said changes.

Well, I’m happy to say that now I have both of these tools/methodologies in my arsenal!

Read on for those tips and a couple more.
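The "just stop the agents" flavor of the pause button can be as simple as stopping the distribution agent's SQL Agent job on the distributor, making the schema change, and starting it again. The job name below is a placeholder; real distribution agent jobs follow a Publisher-PublisherDB-Publication-Subscriber naming pattern.

-- Pause delivery to the subscriber (placeholder job name).
EXEC msdb.dbo.sp_stop_job  @job_name = N'MYPUB-Sales-SalesPub-MYSUB-1';

-- ...apply the schema change on the publisher here...

-- Resume delivery.
EXEC msdb.dbo.sp_start_job @job_name = N'MYPUB-Sales-SalesPub-MYSUB-1';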

Kafka and MirrorMaker

Renu Tewari describes what MirrorMaker does for Kafka today and what is coming with version 2:

Apache Kafka has become an essential component of enterprise data pipelines and is used for tracking clickstream event data, collecting logs, gathering metrics, and serving as the enterprise data bus in microservices-based architectures. Kafka is essentially a highly available and highly scalable distributed log of all the messages flowing in an enterprise data pipeline. Kafka supports internal replication to provide data availability within a cluster. However, enterprises require that the data availability and durability guarantees span entire cluster and site failures.

The solution, thus far, in the Apache Kafka community was to use MirrorMaker, an external utility, that helped replicate the data between two Kafka clusters within or across data centers. MirrorMaker is essentially a Kafka high-level consumer and producer pair, efficiently moving data from the source cluster to the destination cluster and not offering much else. The initial use case that MirrorMaker was designed for was to move data from clusters to an aggregate cluster within a data center or to another data center to feed batch or streaming analytics pipelines. Enterprises have a much broader set of  use cases and requirements on replication guarantees.

Read on for the list of benefits and upcoming features.

Conflict Tracking in Merge Replication

Ranga Babu shows the two different models for conflict detection with merge replication:

Conflict Detection:
Conflict detection depends on the type of tracking we configure for the article.
Row-level tracking: if data changes are made to any column on the same row at both ends, it is considered a conflict.
Column-level tracking: if data changes are made to the same column at both ends, the change is qualified as a conflict.

Read on for a detailed demonstration of the two.
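For reference, the tracking model is chosen per article when it is added to the publication; in the hypothetical example below, the @column_tracking parameter of sp_addmergearticle switches between the two.

-- Placeholder publication and table names.
EXEC sp_addmergearticle
    @publication     = N'SalesMerge',
    @article         = N'Customer',
    @source_owner    = N'dbo',
    @source_object   = N'Customer',
    @type            = N'table',
    @column_tracking = N'false';   -- 'false' = row-level tracking, 'true' = column-level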

When SQL Server Replication Ignores Tables

Matt Slocum takes us through a tricky replication scenario (hint, they all are):

There are occasions when Updates, Inserts, and Deletes on a replicated table do not replicate out to the Subscriber.  You’ve verified that the table is listed in the Articles included in the Publication, and that there is at least one Subscription on the Publication. 

The strange thing is that there are likely other tables in the same Publication that are properly being replicated to the same Subscriber.

What is happening here?  Why is replication ignoring this table?

Read on to see Matt’s explanation and fix.
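Before digging into the root cause, a couple of quick sanity checks in the published database will show how the article and its subscriptions are wired up. These are the standard replication helper procedures, not Matt's fix, and the names are placeholders.

-- Placeholder publication and article names, run in the published database.
EXEC sp_helparticle      @publication = N'SalesPub', @article = N'Orders';
EXEC sp_helpsubscription @publication = N'SalesPub', @article = N'Orders';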

The Distribution Database and AGs

Andy Mallon looks at a wrinkle Availability Groups adds to replication:

This has always worked super, and has been a go-to query for me for years. But when looking at a SQL Server 2017 distributor with the distribution database in an Availability Group, that wasn’t working. All the publications had a publisher_id of 1, but in sys.servers, server_id 1 was some random linked server, and definitely not the publisher. But replication was working great. Maybe replication was set up on the other AG replica, and server_id 1 came from there.

Nope. On the other replica, it was the same story. Server_id 1 was a random linked server, and nothing to do with replication at all, let alone the publisher. But replication was working perfectly. A teammate fooling around with it in dev confirmed that if he updated the publisher_id to match the server_id we thought it should join to, replication stopped working. So, that publisher_id of 1 was correct. Or special. But also definitely different than what I’ve seen in prior versions of SQL Server.

Read on to see what Andy learned.
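The sort of query Andy describes looks roughly like the sketch below (assuming the default distribution database name). On a standalone distributor the join resolves publisher_id to the publisher's entry in sys.servers; with the distribution database in an Availability Group, that mapping is what breaks down.

-- Map publications back to the publisher name via sys.servers.
SELECT p.publication,
       p.publisher_id,
       s.name AS publisher_name
FROM distribution.dbo.MSpublications AS p
LEFT JOIN sys.servers AS s
    ON s.server_id = p.publisher_id;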
