In this article, we will discuss how to deploy SQL Server Merge Replication in a Linux environment. Microsoft has not brought every SQL Server feature to Linux, but replication is available there. Before starting anything, let’s address this question – what is Merge Replication?
Merge Replication is a data synchronization process that moves data from one database (the Publisher) to other databases (Subscribers) and vice versa. The synchronization is tracked and managed by the Distributor, via the distribution database, which coordinates data movement between the Publisher and the Subscribers.
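To make the moving parts concrete, configuring a merge publication in T-SQL looks roughly like this sketch (the database, publication, and table names are placeholders, not from the linked article; the system procedures are the standard merge replication ones):

```sql
-- Run at the Publisher: enable the database for merge publication.
EXEC sp_replicationdboption
    @dbname  = N'SalesDb',        -- placeholder database name
    @optname = N'merge publish',
    @value   = N'true';

-- Create the merge publication.
EXEC sp_addmergepublication
    @publication = N'SalesMergePub';

-- Add a table to the publication as an article.
EXEC sp_addmergearticle
    @publication   = N'SalesMergePub',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders';
```

A Snapshot Agent run and subscriptions at each Subscriber would follow; the article walks through the full Linux-specific setup.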
Merge replication is also an unending nightmare of pain, but you do what you gotta do.
You may sometimes have reports or other processes that are dependent on transactional replication being current. If that is the case, you will probably need a mechanism to check and see if, in fact, replication is caught up. Here is my solution to that, without having to resort to Replication Monitor all the time. The bonus? This could be inserted into conditional workflows to help streamline processes (i.e., validate publications before moving on to Step 2 of process).
To do this, I chose to make three stored procedures: the first checks all publications on a server, the second checks a single publication on a server, and the third is one central sproc to rule them all. You simply execute the master stored procedure, and based on the parameters you feed it, it decides which of the other two to execute.
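The actual scripts are in the linked post; the core of such a check might look something like this sketch, which sums undelivered commands per publication from the standard distribution database tables (the "caught up" threshold of zero is illustrative):

```sql
-- Run on the Distributor, against the distribution database.
USE distribution;

SELECT
    p.publication,
    SUM(ds.UndelivCmdsInDistDB) AS undelivered_commands
FROM dbo.MSdistribution_status AS ds
JOIN dbo.MSarticles AS a
    ON a.article_id = ds.article_id
JOIN dbo.MSpublications AS p
    ON p.publication_id = a.publication_id
GROUP BY p.publication;
-- A publication is "caught up" when undelivered_commands = 0.
```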
Read on for those scripts.
Transactional tables in Hive support ACID properties. Unlike non-transactional tables, data read from transactional tables is transactionally consistent, irrespective of the state of the database. Of course, this imposes specific demands on replication of such tables, which is why Hive replication was designed with the following assumptions:
1. A replicated database may contain more than one transactional table with cross-table integrity constraints.
2. A target may host multiple databases, some replicated and some native to the target. The databases replicated from the same source may have transactional tables with cross-database integrity constraints.
3. A user should be able to run read-only workloads on the target and should be able to read transactionally consistent data.
4. Since in Hive a read-only transaction requires a new transaction-id, the transaction-ids on the source and the target of replication may differ. Thus transaction-ids can not be used for reading transactionally consistent data across source and replicated target.
Read on to learn why these assumptions are in place and what they mean for replication.
If you use replication, you have doubtless been paged to restore a replicated database. You have experienced the ineffable joy of tearing down replication-dependent indexed views (if you have them), blowing away replication, doing the restore, putting replication and indexing back together again, and finally redeploying your indexed views. I know I have.
In fact, I’ve done it enough times that I didn’t want to do it anymore. So, you may ask, did I go to a different modality of replicating my data? Did I go to Availability Groups or mirroring instead? No. I actually like replication. It’s invaluable when you need to write code around real-time data (especially from a third party database), but you aren’t able to index the original copy. It’s been around for a long time and is well vetted, and pretty forgiving, once you understand how it works. So, no need to reinvent the wheel. I decided to automate replication instead.
This is specific to transactional replication. There’s a whole ‘nother kettle of fish for merge replication.
For what seems like years, I’ve bemoaned the fact that SQL Transactional Replication doesn’t come with a “Just Trust Me” option. I’ll explain more about what I mean in a moment. The other thing I’ve complained about is that there’s no “Pause” button. That’s not entirely accurate, since obviously you could just stop the distribution and subscription agents. But what I specifically mean is that it’s not easy to put replication on hold so you can make some schema changes to one of the replicated tables, and then easily “Resume” it after you’re done with said changes.
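The blunt version of "Pause" mentioned above, stopping the agents by hand, can be scripted through SQL Agent, something like this sketch (the job name is a placeholder; real distribution agent job names encode the publisher, database, publication, and subscriber):

```sql
-- "Pause": stop the distribution agent job (placeholder job name).
EXEC msdb.dbo.sp_stop_job
    @job_name = N'MYPUB-MyDb-MyPublication-MYSUB-1';

-- ... make schema changes to the replicated table here ...

-- "Resume": start the agent again.
EXEC msdb.dbo.sp_start_job
    @job_name = N'MYPUB-MyDb-MyPublication-MYSUB-1';
```

The linked post describes a less blunt approach than this.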
Well, I’m happy to say that now I have both of these tools/methodologies in my arsenal!
Read on for those tips and a couple more.
Apache Kafka has become an essential component of enterprise data pipelines and is used for tracking clickstream event data, collecting logs, gathering metrics, and serving as the enterprise data bus in microservices-based architectures. Kafka is essentially a highly available and highly scalable distributed log of all the messages flowing through an enterprise data pipeline. Kafka supports internal replication for data availability within a cluster. However, enterprises require data availability and durability guarantees that span entire-cluster and site failures.
The solution, thus far, in the Apache Kafka community was to use MirrorMaker, an external utility that helps replicate data between two Kafka clusters within or across data centers. MirrorMaker is essentially a Kafka high-level consumer and producer pair, efficiently moving data from the source cluster to the destination cluster and not offering much else. The initial use case that MirrorMaker was designed for was to move data from clusters to an aggregate cluster within a data center, or to another data center to feed batch or streaming analytics pipelines. Enterprises have a much broader set of use cases and requirements on replication guarantees.
Read on for the list of benefits and upcoming features.
The conflict detection depends on the type of tracking we configure for the article.
– Row-level tracking: if data changes are made to any column on the same row at both ends, it is considered a conflict.
– Column-level tracking: if data changes are made to the same column at both ends, the change is qualified as a conflict.
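The tracking type is chosen per article when it is added to the merge publication; a minimal sketch, with placeholder publication and table names:

```sql
-- Column-level tracking: a conflict only when the SAME column
-- changes at both Publisher and Subscriber.
EXEC sp_addmergearticle
    @publication     = N'MyMergePub',
    @article         = N'Orders',
    @source_owner    = N'dbo',
    @source_object   = N'Orders',
    @column_tracking = N'true';   -- 'false' gives row-level tracking
```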
Read on for a detailed demonstration of the two.
There are occasions when Updates, Inserts, and Deletes on a replicated table do not replicate out to the Subscriber. You’ve verified that the table is listed in the Articles included in the Publication, and that there is at least one Subscription on the Publication.
The strange thing is that there are likely other tables in the same Publication that are properly being replicated to the same Subscriber.
What is happening here? Why is replication ignoring this table?
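This isn't Matt's fix, but one way to start diagnosing is to confirm at the Publisher that the article actually has rows in syssubscriptions, since an article without them won't have its changes picked up (a sketch, run in the published database; the table name is a placeholder):

```sql
-- Run in the published database at the Publisher.
SELECT
    a.name    AS article,
    s.srvname AS subscriber_server,
    sub.dest_db
FROM dbo.sysarticles AS a
LEFT JOIN dbo.syssubscriptions AS sub
    ON sub.artid = a.artid
LEFT JOIN master.dbo.sysservers AS s
    ON s.srvid = sub.srvid
WHERE a.name = N'MyTable';  -- placeholder table name
```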
Read on to see Matt’s explanation and fix.
This has always worked super, and has been a go-to query for me for years. But when looking at a SQL Server 2017 distributor with the distribution database in an Availability Group, that wasn’t working. All the publications had a publisher_id of 1, but server_id 1 was some random linked server, and definitely not the publisher. But replication was working great. Maybe replication was set up on the other AG replica, and server_id 1 came from there.
Nope. On the other replica, it was the same story. server_id 1 was a random linked server, and nothing to do with replication at all, let alone the publisher. But replication was working perfectly. A teammate fooling around with it in dev confirmed that if he updated the publisher_id to match the server_id we thought it should join to, replication stopped working. So, that publisher_id of 1 was correct. Or special. But also definitely different than what I’ve seen in prior versions of SQL Server.
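The kind of query being described, mapping publications in the distribution database to server names, looks roughly like this sketch (these are the standard distribution schema objects; the join is the one that broke in the AG scenario):

```sql
-- Map each publication's publisher_id to a name in sys.servers.
SELECT
    p.publication,
    p.publisher_id,
    s.name AS presumed_publisher
FROM distribution.dbo.MSpublications AS p
LEFT JOIN master.sys.servers AS s
    ON s.server_id = p.publisher_id;
-- On the SQL Server 2017 AG distributor described above, publisher_id = 1
-- pointed at an unrelated linked server, yet replication worked fine.
```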
Read on to see what Andy learned.
Last week I saw a thread on Twitter about how to get replication set up for SQL Server running in a container. Now, I know very little about replication; it’s not an area of SQL that I’ve had a lot of exposure to, but I’m always up for figuring stuff out (especially when it comes to SQL in containers).
So let’s run through how to set it up here.
First, create a dockerfile to build an image from the SQL Server 2019 CTP 2.2 image with the SQL Server Agent enabled:
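A minimal dockerfile along those lines might look like this (the image tag matches the CTP 2.2 release mentioned above; the SA password is a throwaway placeholder, and MSSQL_AGENT_ENABLED is the environment variable that turns on the Agent):

```dockerfile
# SQL Server 2019 CTP 2.2 base image on Ubuntu
FROM mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu

ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=Testing1122          # placeholder password
ENV MSSQL_AGENT_ENABLED=True         # replication agents need SQL Agent
```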
Now that Andrew is a replication expert…