Category: Durability

Delayed Durability in SQL Server

Esat Erkec walks us through Delayed Durability in SQL Server:

In this article, we will learn about the Delayed Durability feature, which helps to improve transaction log file write throughput in SQL Server.

OLTP (Online Transaction Processing) databases should process a huge number of transactions within the shortest time and concurrently. Therefore, the transaction completion time becomes more important for the performance of the OLTP databases. Particularly for SQL Server, the transaction log (T-log) file configuration will play a key role in the performance of the transaction completion times because the write throughput to the log file directly affects the application response times.

This is a feature that might be useful in specific scenarios, but I’m always concerned about the risk of data loss.
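
For reference, the feature has two knobs: a database-level setting (DISABLED, ALLOWED, or FORCED) and, when the database allows it, a per-commit option. Here is a minimal sketch of both, driven from Python via pyodbc; the connection string, database, and table names are placeholders, and this assumes SQL Server 2014 or later.

```python
# Minimal sketch: allow Delayed Durability at the database level, opt in for a
# single commit, then force any buffered log records to disk. The connection
# string, database, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# ALLOWED lets individual transactions opt in; FORCED would apply it to all of them.
cur.execute("ALTER DATABASE SalesDemo SET DELAYED_DURABILITY = ALLOWED;")

# This commit returns before the log block is hardened to disk, so a badly
# timed crash or failover can lose the transaction.
cur.execute("""
    USE SalesDemo;
    BEGIN TRANSACTION;
    INSERT INTO dbo.AuditTrail (EventText) VALUES (N'low-value event');
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
""")

# When you need a hard durability boundary, flush the pending log records.
cur.execute("EXEC sys.sp_flush_log;")
```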

The Transaction Log in Delta Tables

Burak Yavuz, et al., explain how the transaction log works with Delta Tables in Apache Spark:

When a user creates a Delta Lake table, that table’s transaction log is automatically created in the _delta_log subdirectory. As he or she makes changes to that table, those changes are recorded as ordered, atomic commits in the transaction log. Each commit is written out as a JSON file, starting with 000000.json. Additional changes to the table generate subsequent JSON files in ascending numerical order so that the next commit is written out as 000001.json, the following as 000002.json, and so on.

It’s interesting that they chose JSON instead of a binary transaction log like relational databases use.
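
You can see the layout for yourself with a couple of writes. Below is a small sketch using PySpark with the delta-spark package; the output path is just an example.

```python
# Sketch: create a small Delta table, append to it, and list the JSON commits
# that accumulate in the _delta_log subdirectory. Assumes pyspark and the
# delta-spark package are installed.
import os

from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-log-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/events_delta"
spark.range(0, 5).write.format("delta").save(path)                  # commit 0
spark.range(5, 10).write.format("delta").mode("append").save(path)  # commit 1

# Each commit is an ordered, zero-padded JSON file: ...0000.json, ...0001.json, ...
log_dir = os.path.join(path, "_delta_log")
print(sorted(f for f in os.listdir(log_dir) if f.endswith(".json")))
```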

Why Transaction Logs are Zero-Initialized

Paul Randal explains why the transaction log needs to be zero-initialized when it’s created or grown:

It’s all to do with crash recovery. SQL Server knows where crash recovery has to start for a database, but not where it ends – i.e. SQL Server does not persist the ‘most recent LSN’ for a database anywhere. This means it has to work out where the end of the log is (and by end, I mean the most recent log record persisted on disk, not the physical end of the log file).

Read on for the detailed explanation.
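
If you want to watch the zeroing happen, trace flags 3004 and 3605 surface the messages in the error log. A rough sketch via pyodbc follows; the connection string and database name are placeholders.

```python
# Rough sketch: watch SQL Server zero-initialize a new transaction log file.
# Trace flag 3004 emits the zero-init messages and 3605 routes them to the
# error log. The connection string and database name are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("DBCC TRACEON (3004, 3605, -1);")
cur.execute("CREATE DATABASE ZeroInitDemo;")  # the new .ldf is always zeroed

# Look for 'Zeroing ...ZeroInitDemo_log.ldf' entries; the data file only shows
# up here too if instant file initialization is not enabled.
for row in cur.execute("EXEC xp_readerrorlog 0, 1, N'Zeroing';").fetchall():
    print(row)

cur.execute("DBCC TRACEOFF (3004, 3605, -1);")
```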

Jepsen: MongoDB 3.4.0-rc3

Kyle Kingsbury takes a new look at MongoDB:

In April 2015, we discussed stale and dirty reads in MongoDB 2.6.7. However, writes appeared to be safe; update-only workloads with majority write concern were linearizable. This conclusion was not entirely correct. In this Jepsen analysis, we develop new tests which show the MongoDB v0 replication protocol is intrinsically unsafe, allowing the loss of majority-committed documents. In addition, we show that the new v1 replication protocol has multiple bugs, allowing data loss in all versions up to MongoDB 3.2.11 and 3.4.0-rc4. While the v0 protocol remains broken, patches for v1 are available in MongoDB 3.2.12 and 3.4.0, and now pass the expanded Jepsen test suite. This work was funded by MongoDB, and conducted in accordance with the Jepsen ethics policy.

Mongo has grown up when it comes to data integrity, though be sure you’re using the v1 replication protocol.
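
If you want to double-check your own cluster, the relevant knobs are majority write (and read) concern plus the replication protocol version. A small pymongo sketch follows; the host, database, and collection names are placeholders.

```python
# Sketch: write with majority write concern, read with majority read concern,
# and confirm the replica set is on replication protocol v1. The host,
# database, and collection names are placeholders.
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")

accounts = client["app"].get_collection(
    "accounts",
    write_concern=WriteConcern(w="majority", wtimeout=5000),
    read_concern=ReadConcern("majority"),
)
accounts.insert_one({"_id": 1, "balance": 100})
print(accounts.find_one({"_id": 1}))

# replSetGetConfig reports the replication protocol in use; it should be 1.
config = client.admin.command("replSetGetConfig")["config"]
print("protocolVersion:", config.get("protocolVersion"))
```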

Delayed Durability

Chris Taylor gives a use case for Delayed Durability:

During development and initial testing on our own hardware, we had the migration at the time running at ~25 minutes for around 600 packages (i.e. tables) covering (what we termed) RawSource -> Source -> Staging, which was well within the performance requirements for the stage that development was at and for what was initially set out. The rest of this blog post will home in specifically on Source -> Staging only.

However, once we transferred the solution to the client’s development environment, things took a turn for the worse. In our environment we were running VMs with 8 cores, 16 GB RAM and utilising SSDs. The client environment was running SQL Server 2016 Enterprise on VMware vSphere 5.5 with 8 vCPUs and 32 GB RAM (for Integration; Development was half this), but the infrastructure team have done everything in their power to force all VMs onto the lower tier (i.e. slow disks) of their 3-PAR SAN and throttle them in every way possible, just to make things more of a challenge. Even though the VMs themselves were throttled, we were confident that we wouldn’t see too much of a performance impact, especially as this was only a subset of the processing to be done, so we needed it to be quick, and it will only ever get longer and longer.

Chris walks through the hallmarks of when Delayed Durability might work, and the big one for me is the way this data migration works: full reloads. The important thing is to have a durable source and a process to repeat data loads when things get missed; in this case, it’s a full reload, but in other cases it could be watchdog applications which compare data sets on each side.
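
For a load like this, the usual pattern is to force delayed durability only for the duration of the reload, flush, and then switch it back off. Here is a sketch of that pattern via pyodbc; the connection string, databases, and table names are placeholders.

```python
# Sketch of the full-reload pattern: force Delayed Durability for the duration
# of the staging load, flush, then switch it back off. The connection string,
# databases, and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Staging;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

cur.execute("ALTER DATABASE Staging SET DELAYED_DURABILITY = FORCED;")
try:
    # The reload itself: safe to rerun from the durable source if a crash
    # loses the tail of the log.
    cur.execute("TRUNCATE TABLE Staging.dbo.Orders;")
    cur.execute("""
        INSERT INTO Staging.dbo.Orders (OrderID, CustomerID, Amount)
        SELECT OrderID, CustomerID, Amount
        FROM   Source.dbo.Orders;
    """)
finally:
    cur.execute("EXEC sys.sp_flush_log;")  # harden whatever is still buffered
    cur.execute("ALTER DATABASE Staging SET DELAYED_DURABILITY = DISABLED;")
```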

Delayed Durability Deletions

Melissa Connors looks at using Delayed Durability while deleting a large batch of records:

Recently, while considering possible use cases for Delayed Durability, it occurred to me that data loss might be entirely acceptable in cases where the data would not truly be lost. I have worked with a number of applications that have processes that purge old information from the database. If a purge process failed in these applications, data would simply live a little bit longer, and be purged the next time the process was successful – they have a recovery mechanism built in as it is. I decided to test Delayed Durability in a database with a long-running purge to observe the potential performance impact. I chose a process that was clearly contributing to transaction log waits, because that is where the real performance impact comes from when delaying durability. If you do not have notable waits or some level of a bottleneck there, you are not likely to improve anything simply by turning on this feature.

I was not aware that you could set durability at the transaction level; I was under the mistaken impression that once you flipped the switch, all transactions were subject to Delayed Durability. Disk-heavy operations (like large batches of deletions) do seem like a good use case for this.
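
A purge like Melissa describes would opt in one commit at a time, so everything else in the database stays fully durable. Here is a sketch via pyodbc, assuming the database is already set to DELAYED_DURABILITY = ALLOWED; the table and column names are placeholders.

```python
# Sketch of a batched purge that opts in per commit. Assumes the database is
# set to DELAYED_DURABILITY = ALLOWED; table and column names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=AppDB;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

purge_batch = """
SET NOCOUNT ON;
BEGIN TRANSACTION;
DELETE TOP (5000) FROM dbo.EventLog
WHERE  CreatedDate < DATEADD(DAY, -90, SYSDATETIME());
SELECT @@ROWCOUNT AS rows_deleted;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
"""

while True:
    rows_deleted = cur.execute(purge_batch).fetchval()  # rows removed this pass
    if not rows_deleted:
        break

cur.execute("EXEC sys.sp_flush_log;")  # harden the final delayed commits
```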
