Press "Enter" to skip to content

Category: Replication

Using Replication With SQL Server In Containers

Andrew Pruski shows us how to set up snapshot replication with SQL Server in containers:

Last week I saw a thread on Twitter about how to get replication set up for SQL Server running in a container. Now, I know very little about replication; it’s not an area of SQL that I’ve had a lot of exposure to, but I’m always up for figuring stuff out (especially when it comes to SQL in containers).
So let’s run through how to set it up here.
First, create a dockerfile to build an image from the SQL Server 2019 CTP 2.2 image with the SQL Server Agent enabled:

Now that Andrew is a replication expert…
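
Andrew’s actual dockerfile is in the post; as a rough sketch (the image tag and agent-enabling environment variable here are assumptions based on Microsoft’s SQL Server 2019 container documentation of the era, not Andrew’s exact file), it would look something like:

# Sketch only: image tag and variable name are assumptions
FROM mcr.microsoft.com/mssql/server:2019-CTP2.2-ubuntu
# Enable the SQL Server Agent inside the container
ENV MSSQL_AGENT_ENABLED=true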


Configuring Snapshot Replication

Nisarg Upadhyay shows us how to configure snapshot replication:

On the next screen, configure the SQL Server Agent security. To do so, click the Security Settings button; the Snapshot Agent Security dialog box opens. In the dialog box, provide the account under which the subscriber connects to the publisher, along with the account under which the SQL Server Agent job will be executed. For this demo, SQL Server jobs run under the SQL Server Agent service account, so select the Run under the SQL Server Agent service account option. Subscribers will connect to the publisher using a SQL login, so select the Using the following SQL Server login option and provide the SQL login and password; in this demo, connect using the sa login. Click OK to close the dialog box, then click Next.

Snapshot replication is the easiest to get right, but most of the setup is the same for transactional or merge replication.
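
If you would rather script those choices than click through the wizard, they map to parameters on sp_addpublication_snapshot; a hedged sketch (the publication name and password are placeholders):

-- Sketch: script the Snapshot Agent security settings described above
EXEC sp_addpublication_snapshot
    @publication = N'SnapshotPub',        -- placeholder publication name
    @job_login = NULL,                    -- NULL = run under the SQL Server Agent service account
    @job_password = NULL,
    @publisher_security_mode = 0,         -- 0 = connect to the publisher with a SQL login
    @publisher_login = N'sa',
    @publisher_password = N'<password>';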


Index Maintenance With Replication

Ajay Dwivedi shares his rules of thumb for index maintenance on replicated databases:

Like any other DBA, I fell into the trap of using a straightforward maintenance solution: a Reorganize operation for indexes with average fragmentation of 30% or less, and an Index Rebuild for average fragmentation greater than 30%.

The above approach works fine in common scenarios, but it can create problems for servers using transaction-log-based high availability technologies such as Always On Availability Groups, database mirroring, log shipping, and replication. Both index rebuilds and reorganizations introduce heavy transaction log activity and generate a large number of log records. This becomes an issue in cases of node failover, servers with limited storage, database files with restricted growth, incorrect file autogrowth settings, or databases with high VLF counts.

The best option for servers with high availability is to identify the kind of server workload (OLTP/OLAP/mixed), the fill factor (based on Page Splits/sec), fragmentation, the underlying storage load (random/sequential), Index Scans vs. Index Searches, the job time frame (low activity outside business hours), etc. After weighing all of the above factors, what we need is a robust index maintenance solution. This is where I find Ola Hallengren’s SQL Server Maintenance Solution a perfect fit.

Ajay uses Ola Hallengren’s solution and gives us the breakdown percentages he uses.
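
For reference, those breakdowns plug into Ola’s IndexOptimize procedure along these lines (a sketch; the thresholds here are the common defaults, not necessarily Ajay’s numbers):

-- Sketch using Ola Hallengren's IndexOptimize; percentages are illustrative
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,                                            -- below level 1: leave alone
    @FragmentationMedium = 'INDEX_REORGANIZE',                           -- between levels 1 and 2: reorganize
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',   -- above level 2: rebuild
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 50,
    @LogToTable = 'Y';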


Replicating Data In HDFS Between Clusters

Murali Ramasami and Niru Anisetti have an article showing how to use the Hortonworks Data Lifecycle Manager to set up replication between two Hadoop clusters:

Data Lifecycle Manager (DLM) delivers on the promise of location-agnostic, secure replication by encapsulating and copying data seamlessly across physical private storage and public cloud environments. This empowers businesses to deliver the right data in the right environment to power the right use cases.

DLM v1.1 provides a complete solution to replicate data, metadata, and security policies between on-premises and cloud environments. It also supports data movement for data-at-rest and data-in-motion, whether the data is encrypted using a single key or multiple keys on the source and target clusters. DLM supports HDFS and Apache Hive dataset replication.

With DLM, infrastructure administrators can manage their data, metadata, and security on-premises and in the cloud using a single pane of glass built on open source technology. Business users can consume their workload outputs in the cloud with data-source abstraction. DLM also enables businesses to reduce their capital expenditures and enjoy the flexibility and elasticity that the cloud provides.

Click through for a demo.  May HDFS replication have as long a life and slightly less vitriol than SQL Server replication.


Problem With Merge Replication And FILESTREAM

Gianluca Sartori walks us through an error when combining merge replication with FILESTREAM:

I published tables with FILESTREAM data before, but it seems like there is a particular planetary alignment that triggers an error during the execution of the snapshot agent.

This unlikely combination consists of a merge article with a FILESTREAM column and two UNIQUE indexes on the ROWGUIDCOL column. Yes, I know that it generally does not make sense to have two indexes on the same column, but this happened to be one of the cases where it did, so we had a CLUSTERED PRIMARY KEY on the uniqueidentifier column decorated with the ROWGUIDCOL attribute and, on top of that, one more NONCLUSTERED UNIQUE index on the same column, backed by a UNIQUE constraint.

Setting up the publication does not throw any error, but generating the initial snapshot for the publication does:

Cannot create, drop, enable, or disable more than one constraint,
column, index, or trigger named 'ncMSmerge_conflict_TestMergeRep_DataStream'
in this context. Duplicate names are not allowed.

This is a rather specific confluence of events, so it probably won’t affect many people.  Still, it is a bug.
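
To picture the combination, the offending table shape looks roughly like this (a repro sketch; the names are guesses based on the conflict table in the error, and the database needs a FILESTREAM filegroup):

-- Repro sketch: a FILESTREAM column plus two UNIQUE indexes on the ROWGUIDCOL column
CREATE TABLE dbo.DataStream
(
    RowId uniqueidentifier ROWGUIDCOL NOT NULL
        CONSTRAINT DF_DataStream_RowId DEFAULT NEWSEQUENTIALID(),
    Payload varbinary(max) FILESTREAM NULL,
    CONSTRAINT PK_DataStream PRIMARY KEY CLUSTERED (RowId),      -- unique index #1
    CONSTRAINT UQ_DataStream_RowId UNIQUE NONCLUSTERED (RowId)   -- unique index #2, same column
);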


Optimizing Replication Agent Parameters

Sourabh Agarwal shows us a couple of levers we have in improving replication performance:

When SQL Server replication is used on environments with high traffic OLTP systems, users often need to adjust the agent profile parameters to increase the throughput of the log reader and distribution agents to keep up with the workload. We recently performed a series of tests to measure the performance of log reader and distribution agents while changing some of the parameters for these agents. This blog summarizes the outcomes and conclusions from this testing.

Read on for the relevant parameters.
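
As a starting point, agent profiles and their parameters live at the Distributor; a sketch of inspecting and changing one (the profile_id and value are placeholders, and only user-defined profiles can be modified):

-- Run at the Distributor. Agent types: 1 = Snapshot, 2 = Log Reader, 3 = Distribution, 4 = Merge
EXEC sp_help_agent_profile @agent_type = 2;   -- list log reader agent profiles

-- Example: raise ReadBatchSize on a user-defined log reader profile
EXEC sp_change_agent_parameter
    @profile_id = 2,                      -- placeholder; use an ID from the call above
    @parameter_name = N'ReadBatchSize',
    @parameter_value = N'1000';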


Replicating Extra-Long Strings

Monica Rathbun walks us through a replication error:

Ever seen the below error? Until this week I hadn’t. So, I figured I’d take a little time and introduce it to those that had not.

Error Description: Length of LOB data (65754) to be replicated exceeds configured maximum 65536. Use the stored procedure sp_configure to increase the configured maximum value for max text repl size option, which defaults to 65536. A configured value of -1 indicates no limit

We ran into an issue with a customer this week: this error was flooding the error log. After a little digging, I found it had to do with transactional replication (it also applies to Change Data Capture) they had set up, which included LOB data.

Read on to see what you can do to resolve this error.  Also, check out the comments and be glad you’re not in that boat…unless you are, in which case…
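
For reference, the immediate fix is the one the message itself suggests; it is a server-wide setting, so change it deliberately:

-- -1 removes the limit; any value up to 2147483647 raises it
EXEC sp_configure 'max text repl size', -1;
RECONFIGURE;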


Smarter Distribution Database Cleanup

Sourabh Agarwal writes about an improvement to the Distribution Cleanup job:

In a highly active environment, the cleanup job needs to delete a substantially large set of expired transactions and commands. The cleanup job uses a while loop to delete entries from MSRepl_Commands and MSRepl_Transactions in batches of 2000 and 5000 rows, respectively. Depending on how large the MSRepl_Commands and MSRepl_Transactions tables are, the cleanup activity can be cumbersome and time-consuming, leading to several performance issues like lock blocking and sometimes replication agent failures. These hard-coded batch sizes work well when the replication tables are small, but they do not perform well when those tables grow large, say to 200-300 million rows.

The new implementation of the procedure allows users to parameterize the batch sizes in the cleanup stored procedure and introduces a new adaptive approach to determine the batch size for each iteration. The procedure will, by default, honor any batch sizes passed in explicitly as parameters. If no parameters are supplied, the procedure starts with the default batch sizes of 2000 and 5000 rows and increases or decreases the batch sizes based on the performance of the previous iteration of the delete operation. If the time taken by the delete query improves by 50% compared to the previous execution, the batch size is increased by 20%, up to a maximum of 50000 rows per batch; if the performance of the delete operation decreases by 20% compared to the previous iteration, the batch size is decreased by 50%, down to the defaults of 2000 and 5000 rows per batch for MSRepl_Commands and MSRepl_Transactions, respectively.

It’s good to see Microsoft making incremental improvements to replication.
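
The adaptive logic is easy to sketch in T-SQL; this is a simplified illustration of the approach described above, not the shipped procedure (the table and predicate are hypothetical stand-ins):

-- Simplified sketch of adaptive batch sizing for a cleanup loop
DECLARE @batch int = 5000,     -- starting batch size (the MSRepl_Transactions default)
        @prev_ms int = NULL,
        @this_ms int,
        @t0 datetime2;

WHILE 1 = 1
BEGIN
    SET @t0 = SYSDATETIME();

    DELETE TOP (@batch)
    FROM dbo.ExpiredCommands           -- hypothetical stand-in for the replication table
    WHERE is_expired = 1;              -- stand-in for the real expiry predicate

    IF @@ROWCOUNT = 0 BREAK;

    SET @this_ms = DATEDIFF(MILLISECOND, @t0, SYSDATETIME());

    IF @prev_ms IS NOT NULL AND @this_ms * 2 <= @prev_ms
        SET @batch = IIF(@batch * 1.2 > 50000, 50000, CAST(@batch * 1.2 AS int));  -- 50% faster: grow 20%, cap at 50000
    ELSE IF @prev_ms IS NOT NULL AND @this_ms >= @prev_ms * 1.2
        SET @batch = IIF(@batch / 2 < 5000, 5000, @batch / 2);                     -- 20% slower: halve, floor at the default

    SET @prev_ms = @this_ms;
END;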


Putting The Distribution Database In An Availability Group

Sourabh Agarwal announces that you will soon be able to put the distribution database into an Availability Group:

Many enterprise customers have asked for the capability to combine SQL Server replication and Always On, placing replication distribution databases within an Always On availability group to achieve high availability for their distribution databases, with the expectation that when an AG failover happens, SQL Server replication will continue functioning seamlessly and correctly. While publication and subscription databases could already be configured to use availability groups, this support was lacking for distribution databases.

The SQL Server engineering team is excited to announce a new enhancement: support for configuring the replication distribution database in an availability group. This enhancement will be available with SQL Server 2017 CU6 and will be ported to SQL Server 2016 in a subsequent CU for SP2.

For those people using merge replication, it won’t be supported in this release.


Why CHECKDB Repair Invalidates Replication

Paul Randal explains why running DBCC CHECKDB on a published article will cause subscriptions to become invalidated:

Whenever I’m teaching about recovering from corruption, I always stress that if the REPAIR_ALLOW_DATA_LOSS option of DBCC CHECKDB (which I’ll just call ‘repair’ from now on) has to be used, you should do the following:

  • Run another DBCC CHECKDB to make sure that repair fixed everything and no new problems were exposed (by the repair fixing something that was blocking DBCC CHECKDB‘s access to a portion of the database)
  • Run DBCC CHECKCONSTRAINTS on any tables or indexes that were repaired and are involved in constraints (stated in Books Online too)
  • Reinitialize any replication subscriptions that are based on tables that were repaired (stated in Books Online too)
  • Reevaluate your HA/DR strategy so you don’t have to use repair again in future

One question I’m often asked is why replication can be broken by running repair.

Read on for the answer.
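
For reference, the first two items on that checklist are one-liners (the names here are placeholders):

DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;   -- confirm the repair fixed everything
DBCC CHECKCONSTRAINTS ('dbo.RepairedTable');       -- re-validate constraints on repaired tables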
