Category: Internals

MVCC and Vacuuming in Postgres

Ryan Booz explains one area where Postgres’s implementation differs from that of most other vendors:

All relational databases handle transaction isolation in some way, typically with an implementation of Multi-version Concurrency Control (MVCC). Plain ol’, mainline SQL Server uses a form of MVCC, but all older rows (those currently retained for ongoing transactions) are stored in TempDB. Oracle and MySQL also do something similar, storing (essentially) diffs of the modified data outside of the table, which are merged at runtime for ongoing transactions that still need to see the older data.

Among these databases, PostgreSQL stands alone in the specific way MVCC is implemented. Rather than storing some form of the older data outside of the current table for transactions to query/merge/etc. at runtime, PostgreSQL always creates the newly modified row in-table alongside the existing, older versions that are still needed for running transactions. Yes, every UPDATE creates a new row of data in the table, even if you just change one column.

Read on to understand some of the implications of this and how it affects the way we manage databases.
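
As a quick illustration (my sketch, not from Ryan’s post), you can watch this happen in Postgres: the ctid system column shows a row version’s physical location and xmin shows the transaction that created it, so an UPDATE visibly produces a new row version:

-- Postgres: every UPDATE writes a new row version into the table
CREATE TABLE mvcc_demo (id int PRIMARY KEY, val text);
INSERT INTO mvcc_demo VALUES (1, 'original');
SELECT ctid, xmin, val FROM mvcc_demo;              -- e.g. ctid (0,1)
UPDATE mvcc_demo SET val = 'changed' WHERE id = 1;
SELECT ctid, xmin, val FROM mvcc_demo;              -- now e.g. (0,2): a new version
-- The old version at (0,1) stays in the table until VACUUM reclaims it.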

SQL Agent and Memory Consumption

Sean Gallardy performs some troubleshooting:

I was asked if I knew any reason why SQL Agent would be using “a bunch” of memory and more CPU than normal. You and I, reader, now have the same information to go on. What do you do? Think about it for a minute or two if you want before reading on. I’m not saying the way I did it was the way to do it or the only way, but I gave an action plan for data capture, and once data was in hand, it was solved in a few minutes. There are all sorts of things that can cause this in a program, but Agent typically doesn’t use a whole lot unless it’s executing many concurrent T-SQL jobs (which it really shouldn’t be; get enterprise-level scheduling), and even then, the memory should deallocate.

My first inclination turned out to be right: I figured it had to do with a job running. I wasn’t sure about the specifics, but Sean takes us through the troubleshooting process.
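
If you want to check the most common culprit yourself, a minimal sketch (a starting point, not Sean’s action plan) is to see which Agent jobs are currently executing:

-- T-SQL: jobs that have started but not yet recorded a stop time
SELECT j.name, ja.start_execution_date
FROM msdb.dbo.sysjobactivity AS ja
    INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = ja.job_id
WHERE ja.start_execution_date IS NOT NULL
    AND ja.stop_execution_date IS NULL;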

Troubleshooting Issues with Full-Text Indexing

Jose Manuel Jurado Diaz digs into a customer problem:

Today, we got a new service request where our customer asks about the time spent populating the full-text index. In the following, I would like to share with you some lessons learned during this process, specifically working with Azure SQL Database.

For this example, we are going to use a General Purpose database with 8 vCores. Let’s get started with the creation of the table and full-text index.

Read on for a walkthrough of setting up full-text indexing and figuring out what the indexing engine is doing at any point in time.
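
If you’d like to poke at this yourself, here is a minimal sketch (the table and index names are mine, not from the post) of creating a full-text index and then asking what the population is up to:

-- T-SQL: assumes a table dbo.Documents with a unique key index PK_Documents
CREATE FULLTEXT CATALOG ftDemoCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON dbo.Documents (DocBody)
    KEY INDEX PK_Documents
    WITH CHANGE_TRACKING AUTO;

-- Population progress for full-text indexes in the current database
SELECT table_id, population_type_description, status_description,
       range_count, completed_range_count
FROM sys.dm_fts_index_population;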

Performance Tuning a Dedicated SQL Pool

Sarath Sasidharan has some guidance for us:

Synapse dedicated SQL pools have been battle-tested at enterprise customers across the globe. We deal with data in the magnitude of petabytes. Synapse can provide you with the scale of the cloud and the high performance required for your enterprise-grade requirements. The key to maximizing your performance is to follow best practices; check out best practices for dedicated SQL pools in Azure Synapse Analytics.

Failure to do so causes performance issues. In such scenarios, it is important to understand where the bottlenecks are. This blog focuses on the different steps a query goes through, from the time the query is fired from the client until the results return. A delay in any of these steps impacts the overall run-time of the query and hence indicates degraded performance.

Click through for a walkthrough of each step along the way, potential problems you could run into, and remediations for those problems. Much of the advice is similar to what you’d get with SQL Server, though there are differences interspersed throughout each level.
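
If you want to trace a query through those steps yourself, here is a hedged sketch using the dedicated SQL pool DMVs (the request id below is a placeholder you’d substitute from the first query):

-- Synapse dedicated SQL pool: recent requests, then the per-step breakdown
SELECT request_id, status, submit_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;

SELECT step_index, operation_type, location_type, status,
       total_elapsed_time, row_count
FROM sys.dm_pdw_request_steps
WHERE request_id = 'QID1234'   -- substitute a request_id from the query above
ORDER BY step_index;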

Disabling Inline Database Log Flushes

Bob Dorr talks about a trace flag:

SQL Server optimizes database log file flush requests, performing these flush requests inline on the active worker. Certain patterns of log record activity may encounter increased spinlock contention while performing the log cache block flush activities.

Trace flag -T8904 (a startup-only trace flag) disables inline log flush, limiting the contention possibility from many workers to the subset of background LogWriter workers. When the trace flag is enabled, the worker adding log records marks the log cache block to be flushed, and a background LogWriter worker performs the flush activity.

Read on to learn more, including a call-out to Curated SQL favorite Lonny Niederstadt.
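
One practical note: because -T8904 is startup-only, you enable it via the service’s startup parameters (for example, in SQL Server Configuration Manager) rather than with DBCC TRACEON. A quick sketch to confirm whether it is active:

-- T-SQL: reports whether trace flag 8904 is enabled
DBCC TRACESTATUS(8904);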

Performance-Killing Pre-Emptive Waits

Sean Gallardy finds the real killer:

If you haven’t already read up on cooperative and preemptive scheduling or aren’t sure what those are, please read the intro to that first; otherwise, you’ll be lost.

Much as I’ve discussed before, SQL Server uses a cooperative scheduling model. This doesn’t mean that Windows does, nor does it mean Windows will schedule whatever SQL Server schedules; in fact, much of the time there are many other threads that run before the ones from SQL Server. That’s the job of the operating system to figure out. Because SQL Server uses cooperative scheduling, there needs to be a mechanism such that, when a resource not under SQL Server’s control needs interaction, the scheduler can keep on scheduling and threads can switch in and out (in SQL Server, that is; Windows does what Windows wants). Enter preemptive status and associated waits.

Click through for a deep dive on the topic.
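
If you want a sense of how much preemptive switching your instance does, a minimal sketch against the wait stats DMV (interpretation is up to you; these numbers are cumulative):

-- T-SQL: preemptive waits since the last restart or wait-stats clear
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PREEMPTIVE%'
ORDER BY wait_time_ms DESC;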

Query Cost Normalizing

Tibor Karaszi does the math:

There are plenty of articles out there recommending that you raise the “cost threshold for parallelism” configuration option. Most of you already know this; if not, fire up your favorite search engine and spend a few minutes reading about it.

My aim here is to run a totally non-scientific test of whether I can translate these fictitious seconds into what they correspond to on reasonably modern hardware – which is my laptop from 2020.

This is one interesting approach to back into a cost threshold for parallelism.
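
For reference, here is how you would change the setting once you have backed into a number (the 50 below is purely a placeholder, not a recommendation from the post):

-- T-SQL: raise cost threshold for parallelism from its default of 5
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;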

Find When a Table was Dropped

Andrea Allred does some sleuthing:

Say you have a user come to you saying they dropped a table sometime yesterday, but they don’t remember when, and now they need it back. You could start the restore process and roll through logs until you see the drop and then restore to the hour before, or you could run this super cool query to get the time the table was dropped.

Click through to see when and even which user did it—assuming you don’t have everybody going through a connection-pooled account.
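
Andrea’s query is in the post; as a rough sketch of the general technique, the default trace records object drops (EventClass 47 is Object:Deleted), so something along these lines works while the trace files still cover the window in question:

-- T-SQL: read drop events from the default trace
DECLARE @path nvarchar(260) =
    (SELECT path FROM sys.traces WHERE is_default = 1);

SELECT StartTime, DatabaseName, ObjectName, LoginName, HostName
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 47        -- Object:Deleted
  AND ObjectType = 8277      -- user table
ORDER BY StartTime DESC;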

Lock Escalation outside of Repeatable Read

Paul White continues a series on lock escalation:

When SQL Server reads data under locking read committed isolation (the default for non-cloud offerings) and chooses row-level locking granularity, it normally only locks one row at a time. The engine processes the row through parent operators before releasing the shared lock prior to locking and reading the next row.

When the engine chooses page-level locking granularity, it takes and holds a shared page lock while processing rows on the current page and then releases the lock right before acquiring a lock on the next page.

In general, SQL Server is careful to take and hold locks only when needed and only long enough to guarantee correct results, taking account of the current isolation level. When locking at row granularity, the engine can skip row locks entirely when it’s safe to do so.

Read on to understand when SQL Server either gets it wrong or when exigent factors alter the story.
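
To watch the granularity choice in action, a minimal sketch (the session id is a placeholder): run a long-running read in one window and inspect its locks from another:

-- T-SQL: lock counts by resource type for a given session
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 53   -- substitute the session id of the reading query
GROUP BY resource_type, request_mode, request_status;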

Lock Escalation Thresholds and Parallelism

Paul White complicates matters:

In part one of this series, I explained how the lock manager maintains a count of held locks per transaction, and individual access methods keep a separate lock count per HoBt for each plan operator performing data access.

Parallelism adds an extra layer of complexity because both lock counts are maintained per thread.

There are two underlying reasons for this:

Paul explains the reasons and shows off plenty of tests along the way.
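
As an aside of mine (not from Paul’s post), if you want to experiment along with his tests, each table’s escalation behavior is visible and adjustable; the table name below is hypothetical:

-- T-SQL: view and change a table's lock escalation setting
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = N'SomeTable';

ALTER TABLE dbo.SomeTable
    SET (LOCK_ESCALATION = AUTO);   -- options: TABLE (default), AUTO, DISABLE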
