Press "Enter" to skip to content

Category: Replication

When SQL Server Replication Ignores Tables

Matt Slocum takes us through a tricky replication scenario (hint, they all are):

There are occasions when Updates, Inserts, and Deletes on a replicated table do not replicate out to the Subscriber.  You’ve verified that the table is listed in the Articles included in the Publication, and that there is at least one Subscription on the Publication. 

The strange thing is that there are likely other tables in the same Publication that are properly being replicated to the same Subscriber.

What is happening here?  Why is replication ignoring this table?

Read on to see Matt’s explanation and fix.
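One place to start the diagnosis, not necessarily Matt's fix, is the replication metadata in the published database itself: for transactional replication, an article with no matching row in dbo.syssubscriptions has no subscription delivering its commands. A sketch, with placeholder database and table names:

USE PublishedDatabase;  -- hypothetical published database
SELECT a.name AS article,
       s.srvid,    -- subscriber server id
       s.dest_db,  -- subscriber database
       s.status    -- 2 = active
FROM dbo.sysarticles AS a
LEFT JOIN dbo.syssubscriptions AS s
       ON s.artid = a.artid
WHERE a.name = N'MyIgnoredTable';  -- the table that is not replicating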


The Distribution Database and AGs

Andy Mallon looks at a wrinkle Availability Groups adds to replication:

This has always worked super, and has been a go-to query for me for years. But when looking at a SQL Server 2017 distributor with the distribution database in an Availability Group, that wasn’t working. All the publications had a publisher_id of 1, but in sys.servers, server_id 1 was some random linked server, and definitely not the publisher. But replication was working great. Maybe replication was set up on the other AG replica, and server_id 1 came from there.

Nope. On the other replica, it was the same story. Server_id 1 was a random linked server, and nothing to do with replication at all, let alone the publisher. But replication was working perfectly. A teammate fooling around with it in dev confirmed that if he updated the publisher_id to match the server_id we thought it should join to, replication stopped working. So, that publisher_id of 1 was correct. Or special. But also definitely different than what I’ve seen in prior versions of SQL Server.

Read on to see what Andy learned.
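For reference, the sort of query Andy is describing joins the distribution metadata to sys.servers, roughly like this (a sketch, assuming the default distribution database name):

-- Run at the distributor: map each publication's publisher_id to a server name.
-- This is the join that stopped lining up once the distribution database
-- joined an Availability Group.
SELECT p.publication,
       p.publisher_db,
       s.name AS publisher
FROM distribution.dbo.MSpublications AS p
JOIN sys.servers AS s
  ON s.server_id = p.publisher_id;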


Using Replication With SQL Server In Containers

Andrew Pruski shows us how we can build up snapshot replication with SQL Server in containers:

Last week I saw a thread on twitter about how to get replication setup for SQL Server running in a container. Now I know very little about replication, it’s not an area of SQL that I’ve had a lot of exposure to but I’m always up for figuring stuff out (especially when it comes to SQL in containers).
So let’s run through how to set it up here.
First, create a dockerfile to build an image from the SQL Server 2019 CTP 2.2 image with the SQL Server Agent enabled: –
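The dockerfile itself is in Andrew's post; a minimal sketch of the idea looks something like this (the image tag is an assumption based on the CTP named above, and MSSQL_AGENT_ENABLED is the documented switch for turning on the agent in SQL Server Linux containers):

# Hypothetical sketch; the tag may differ from the one Andrew used
FROM mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu

# Replication agents run as SQL Server Agent jobs, so the agent must be on
ENV MSSQL_AGENT_ENABLED=true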

Now that Andrew is a replication expert…


Configuring Snapshot Replication

Nisarg Upadhyay shows us how to configure snapshot replication:

On the next screen, configure the SQL Agent security. To configure the Agent security, click the Security Settings button. The Snapshot Agent Security dialog box opens. In the dialog box, provide the account under which the subscriber connects to the publisher. Moreover, provide the account information under which the SQL Server Agent job will be executed. For this demo, SQL Server jobs are executed under the SQL Server Agent service account, hence select the Run under the SQL Server Agent service account option. Subscribers will be connected to the publisher using SQL login, hence select the Using the following SQL Server login option and provide the SQL login and password. In this demo, connect using the sa login. Click OK to close the dialog box and click Next.

Snapshot replication is the easiest to get right, but most of the setup is the same for transactional or merge replication.
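For those who script rather than click, the same dialog choices map onto parameters of sp_addpublication_snapshot; a minimal sketch, run in the published database, with a hypothetical publication name and placeholder credentials:

-- @job_login/@job_password = NULL runs the Snapshot Agent job under the
-- SQL Server Agent service account; @publisher_security_mode = 0 makes the
-- agent connect to the publisher with a SQL login.
EXEC sp_addpublication_snapshot
    @publication = N'TestPub',
    @job_login = NULL,
    @job_password = NULL,
    @publisher_security_mode = 0,
    @publisher_login = N'sa',
    @publisher_password = N'<password>';  -- placeholder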


Index Maintenance With Replication

Ajay Dwivedi shares his rules of thumb for index maintenance on replicated databases:

Like any other DBA, I fell into the trap of using a straight maintenance solution: the Reorganize operation for indexes with avg fragmentation of 30% or less, and Index Rebuild for avg fragmentation greater than 30%.

The above approach works fine in common scenarios, but it can create problems for servers using transaction-log-based High Availability technologies, such as AlwaysOn Availability Groups, database mirroring, log shipping, and replication. Both index rebuild and reorganize operations introduce heavy transaction log activity and generate a large number of log records. This becomes an issue in the case of node failover, servers with limited storage, database files with restricted growth, wrong file auto-growth settings, or databases with high VLF counts.

The best option for servers with High Availability is to identify the kind of server workload (OLTP/OLAP/mixed), fill factor (based on Page Splits/sec), fragmentation, underlying storage load (random/sequential), Index Scans vs Index Searches, job time frame (low activity outside business hours), etc. After weighing all the above factors, all we need is a robust Index Maintenance solution. This is where I find Ola Hallengren’s SQL Server Maintenance Solution a perfect fit.

Ajay uses Ola Hallengren’s solution and gives us the breakdown percentages he uses.
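As a reference point, the fragmentation thresholds in Ola's IndexOptimize are set like this; the 30/50 split below is illustrative rather than Ajay's exact breakdown:

-- Do nothing below FragmentationLevel1, reorganize between the two
-- levels, rebuild above FragmentationLevel2. Thresholds are placeholders.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 50;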


Replicating Data In HDFS Between Clusters

Murali Ramasami and Niru Anisetti have an article showing how to use the Hortonworks Data Lifecycle Manager to set up replication between two Hadoop clusters:

Data Lifecycle Manager (DLM) delivers on the promise of location-agnostic, secure replication by encapsulating and copying data seamlessly across physical private storage and public cloud environments. This empowers businesses to deliver the right data in the right environment to power the right use cases.

DLM v1.1 provides a complete solution to replicate data, metadata and security policies between on-premises and cloud environments. It also supports data movement for data-at-rest and data-in-motion – whether the data is encrypted using a single key or multiple keys on both source and target clusters. DLM supports HDFS and Apache Hive dataset replication.

With DLM, infrastructure administrators can manage their data, metadata and security on-prem and in-cloud using a single pane of glass that is built on open source technology. Business users can consume their workload outputs in the cloud with data-source abstraction. DLM also enables businesses to reduce their capital expenditures and enjoy the benefits of the flexibility and elasticity that the cloud provides.

Click through for a demo.  May HDFS replication have as long a life as SQL Server replication, and slightly less vitriol.


Problem With Merge Replication And FILESTREAM

Gianluca Sartori walks us through an error when combining merge replication with FILESTREAM:

I published tables with FILESTREAM data before, but it seems like there is a particular planetary alignment that triggers an error during the execution of the snapshot agent.

This unlikely combination consists of a merge article with a FILESTREAM column and two UNIQUE indexes on the ROWGUIDCOL column. Yes, I know that generally it does not make sense to have two indexes on the same column, but this happened to be one of the cases where it did, so we had a CLUSTERED PRIMARY KEY on the uniqueidentifier column decorated with the ROWGUIDCOL attribute and, on top, one more NONCLUSTERED UNIQUE index on the same column, backed by a UNIQUE constraint.

Setting up the publication does not throw any error, but generating the initial snapshot for the publication does:

Cannot create, drop, enable, or disable more than one constraint,
column, index, or trigger named 'ncMSmerge_conflict_TestMergeRep_DataStream'
in this context. Duplicate names are not allowed.

This is a rather specific confluence of events, so it probably won’t affect many people.  Still, it is a bug.
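If you want to picture the shape involved, a hypothetical table matching Gianluca's description looks like this (FILESTREAM must be enabled on the instance, and the database needs a FILESTREAM filegroup):

-- Clustered PK on the ROWGUIDCOL column plus a second, nonclustered
-- UNIQUE constraint on the same column, alongside a FILESTREAM column.
CREATE TABLE dbo.TestMergeRep
(
    RowId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
        CONSTRAINT DF_TestMergeRep_RowId DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_TestMergeRep PRIMARY KEY CLUSTERED,
    DataStream VARBINARY(MAX) FILESTREAM NULL,
    CONSTRAINT UQ_TestMergeRep_RowId UNIQUE NONCLUSTERED (RowId)
);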


Optimizing Replication Agent Parameters

Sourabh Agarwal shows us a couple of levers we have in improving replication performance:

When SQL Server replication is used on environments with high traffic OLTP systems, users often need to adjust the agent profile parameters to increase the throughput of the log reader and distribution agents to keep up with the workload. We recently performed a series of tests to measure the performance of log reader and distribution agents while changing some of the parameters for these agents. This blog summarizes the outcomes and conclusions from this testing.

Read on for the relevant parameters.
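As one example of the mechanics, agent profile parameter values are changed at the distributor with sp_change_agent_profile; the profile id below is a placeholder you would first look up in msdb.dbo.MSagent_profiles, and the parameter name is written as it appears in the agent profile dialog:

-- Bump the Log Reader Agent's ReadBatchSize in a custom profile
-- (profile_id 2 is a placeholder; query msdb.dbo.MSagent_profiles first).
EXEC sp_change_agent_profile
    @profile_id = 2,
    @property = N'ReadBatchSize',
    @value = N'1000';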


Replicating Extra-Long Strings

Monica Rathbun walks us through a replication error:

Ever seen the below error? Until this week I hadn’t. So, I figured I’d take a little time and introduce it to those that had not.

Error Description: Length of LOB data (65754) to be replicated exceeds configured maximum 65536. Use the stored procedure sp_configure to increase the configured maximum value for max text repl size option, which defaults to 65536. A configured value of -1 indicates no limit

We ran into an issue with a customer this week; this error was flooding the error log. After a little digging, I found it had to do with transactional replication (it also applies to Change Data Capture) they had set up, which included LOB data.
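The message itself points at the relevant knob; for reference, the sp_configure change it describes is a one-liner, though -1 (no limit) is worth weighing against your log and memory throughput:

-- Raise or remove the LOB replication size cap (value is in bytes).
EXEC sp_configure 'max text repl size', -1;  -- -1 = no limit
RECONFIGURE;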

Read on to see what you can do to resolve this error.  Also, check out the comments and be glad you’re not in that boat…unless you are, in which case…


Smarter Distribution Database Cleanup

Sourabh Agarwal writes about an improvement to the Distribution Cleanup job:

In case of a highly active environment the cleanup job would need to delete a substantially large set of expired transactions and commands. The cleanup job uses a while loop to delete entries from the MSRepl_Commands and MSRepl_Transactions in batches of 2000 and 5000 rows, respectively. Depending on how large the MSRepl_Commands and MSRepl_Transactions Tables are, the cleanup activity can be cumbersome and time consuming, leading to several performance issues like lock blocking and sometimes replication agent failures. These hard-coded batch sizes work well when the size of the replication tables is small, but do not perform well when these replication tables become large, let’s say like 200-300 million rows.

The new implementation of the procedure allows users to parameterize the batch sizes inside the cleanup stored procedure and introduces a new adaptive approach to determine the batch sizes for each iteration. The procedure will, by default, honor any batch sizes that are mentioned explicitly as parameters. If no parameters are supplied, then the procedure will start with the default batch sizes of 2000 and 5000 rows and increase/decrease the batch sizes based on the performance of the previous iteration of the delete operation. If the time taken by the delete query improves by 50% compared to the previous execution, the batch size value is increased by 20%, up to a max value of 50000 rows per batch, and if the performance of the delete operation decreases by 20% as compared to the previous iteration, the batch size is decreased by 50%, down to a default value of 2000/5000 rows per batch for MSRepl_Commands and MSRepl_Transactions, respectively.
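In rough T-SQL pseudocode, the adaptive sizing described above behaves like this (a sketch of the logic only, not Microsoft's actual procedure; variable names are hypothetical):

DECLARE @batch_size int = 5000,  -- starting default for MSRepl_Transactions
        @prev_ms    int = 800,   -- duration of the previous delete iteration
        @curr_ms    int = 350;   -- duration of the current delete iteration

IF @curr_ms <= @prev_ms / 2         -- 50% faster: grow batch by 20%, cap at 50000
    SET @batch_size = IIF(@batch_size * 1.2 > 50000, 50000, @batch_size * 1.2);
ELSE IF @curr_ms >= @prev_ms * 1.2  -- 20% slower: halve batch, floor at default
    SET @batch_size = IIF(@batch_size / 2 < 5000, 5000, @batch_size / 2);

SELECT @batch_size AS next_batch_size;  -- 6000 with these sample timings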

It’s good to see Microsoft making incremental improvements to replication.
