
Category: Administration

Sockets vs Cores and SQL Server

Vlad Drumea checks server settings:

It’s not uncommon that I run into a VM that’s configured with something like 6 or more cores with each core on one socket.

Here’s an example of how this would show up in Task Manager for a VM with 16 CPU cores configured with 1 core per socket.

Read on to learn why this particular configuration can turn out so poorly with SQL Server, particularly when you use Standard Edition.
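
For reference, Standard Edition is limited to the lesser of 4 sockets or 24 cores, so a layout like 16 sockets with 1 core each leaves most of the hardware unusable by the instance. Here is a minimal sketch (mine, not from the post) for checking how SQL Server sees the topology, assuming SQL Server 2016 SP2 or later, where sys.dm_os_sys_info exposes socket_count and cores_per_socket:

-- How SQL Server sees the CPU layout.
SELECT cpu_count,
       socket_count,
       cores_per_socket,
       numa_node_count
FROM sys.dm_os_sys_info;

-- The error log also notes any processors left unused for licensing reasons.
EXEC sys.xp_readerrorlog 0, 1, N'licensing';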


High Water Mark and PostgreSQL Vacuum Operations

Shane Borden troubleshoots an issue:

The first thing we came to understand is that the pattern of work on the primary is a somewhat frequent large DELETE statement followed by a data refresh accomplished by a COPY from STDIN command against a partitioned table with 16 hash partitions.

The problem being observed was that periodically the SELECTs occurring on the read replica would time out and not meet the SLA. Upon investigation, we found that the “startup” process on the read replica would periodically request an “exclusive lock” on some random partition. This exclusive lock would block the SELECT (which is partition unaware) and then cause the timeout. But what is causing the timeout?

Read on for the answer and tips on how to determine if you have problems with High Water Mark growth in PostgreSQL.
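
The lock in question typically comes from vacuum truncating empty pages off the end of a table to lower the high water mark, which takes an ACCESS EXCLUSIVE lock that then replays on the replica. Not necessarily the fix Shane lands on, but a couple of hedged starting points, assuming PostgreSQL 12 or later and a hypothetical partition name:

-- How much reclaimable space sits in a given partition (needs pgstattuple)?
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('orders_p0');   -- hypothetical partition name

-- Disable end-of-table truncation (and its ACCESS EXCLUSIVE lock) per table:
ALTER TABLE orders_p0 SET (vacuum_truncate = off);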


Setting the Optimal logical_decoding_work_mem in PostgreSQL

Ashutosh Bapat shares a tip with us:

Logical replication is a versatile feature offered in PostgreSQL. I have discussed the theoretical background of this feature in detail in my POSETTE talk. At the end of the talk, I emphasize the need for monitoring a logical replication setup. If you are using logical replication and have set up monitoring, you will be familiar with pg_stat_replication_slots. In some cases this view shows high amounts of spill_txns, spill_count and spill_bytes, which indicates that the WAL sender corresponding to that replication slot is using a large amount of disk space. This increases load on the IO subsystem, affecting performance. It also means there is less disk space available for user data and regular transactions to operate. This is an indication that logical_decoding_work_mem has been configured too low. That’s the subject of this blog: how to decide the right configuration value for logical_decoding_work_mem. Let’s first discuss the purpose of this GUC. That blog might serve as good background before reading further.

Read on to learn a bit more about how this value works and what you can do to set it correctly.
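
A hedged sketch of what that monitoring looks like, assuming PostgreSQL 14 or later (where pg_stat_replication_slots exists); the 256MB figure below is only a placeholder, not a recommendation:

-- Spill activity per replication slot.
SELECT slot_name, spill_txns, spill_count, spill_bytes,
       total_txns, total_bytes
FROM pg_stat_replication_slots;

-- If transactions spill to disk regularly, raise the limit from its 64MB default.
ALTER SYSTEM SET logical_decoding_work_mem = '256MB';
SELECT pg_reload_conf();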


Estimating Query Percentiles in PostgreSQL

Michael Christofides makes an assertion:

I recently saw a feature request for pg_stat_statements to be able to track percentile performance of queries, for example the p95 (95th percentile) or p99 (99th percentile).

That would be fantastic, but isn’t yet possible. In the meantime, there is a statistically-dodgy-but-practically-useful (my speciality) way to approximate them using the mean and standard deviation columns in pg_stat_statements.

Click through for the code. Michael even covers the immediate objection I have (that the data isn’t normally distributed, so you shouldn’t use the same Z score estimators that exist for the normal distribution). That said, if you’re interested in “p99…ish” then this is a clever approach to take.
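
For a flavor of the approach (my own sketch, not Michael’s code), assuming PostgreSQL 13 or later column names in pg_stat_statements:

-- Rough p95/p99 estimates using normal-distribution Z scores (1.645 and 2.326).
SELECT left(query, 60)                            AS query,
       calls,
       mean_exec_time,
       stddev_exec_time,
       mean_exec_time + 1.645 * stddev_exec_time  AS p95_est_ms,
       mean_exec_time + 2.326 * stddev_exec_time  AS p99_est_ms
FROM pg_stat_statements
ORDER BY p99_est_ms DESC
LIMIT 20;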


Pain Points around tempdb

Kevin Hill has a list:

TempDB is the SQL Server equivalent of a junk drawer – everyone uses it, nobody monitors it, and eventually it becomes a bottleneck you can’t ignore.

Whether it’s poorly configured from the start or getting hammered by bad execution plans, TempDB often becomes the silent killer of performance. The good news? A few targeted changes can make a big impact.

Read on for some of tempdb’s greatest hits. Kevin also has a few quick tips for tempdb.
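
One of the usual suspects is file layout: a single data file, or files with mismatched sizes and growth settings. A quick way to check, as a sketch:

-- tempdb file count, sizes, and growth settings.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id = 2;   -- tempdb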


What’s Forthcoming for DBAs in SQL Server 2025

Stephen Planck has a list:

There is a lot to be excited about in SQL Server 2025! When thinking about features that may not get as much attention as others, yet will make a real difference in the lives of DBAs, I have selected my top 5 enhancements for SQL Server 2025. These improvements may not make the headlines, but they address pain points we’ve all experienced as DBAs. Please let me know if I’ve left any of your favorites off the list.

Read on for the list. It’s not a set of fancy new features, though there are some quality of life improvements here.


Session Variables in PostgreSQL

Kaarel Moppel talks session variables:

Animated by some comments / complaints about Postgres’ missing user variables story on a Reddit post about PostgreSQL pain points in the real world – I thought I’d elaborate a bit on session vars – which is indeed a little-known Postgres functionality.

Although this “alley” has existed for ages – and one can also use injected session variables to implement crazy stuff like token based Row Level Security or store huge and complex JSON state, or just keep a bit of DB-side state over essentially stateless statement-level connection pools – should you actually use it? What are the alternatives instead? Read on …

Click through to learn more.
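
The usual building blocks are custom settings read and written with set_config() and current_setting(); in the sketch below, the myapp prefix is arbitrary and the Row Level Security policy is purely illustrative:

-- Set a session-scoped variable (false = not transaction-local).
SELECT set_config('myapp.current_user_id', '42', false);

-- Read it back; the second argument returns NULL instead of erroring if unset.
SELECT current_setting('myapp.current_user_id', true);

-- Such a variable can then drive something like a Row Level Security policy:
-- CREATE POLICY user_rows ON orders
--   USING (user_id = current_setting('myapp.current_user_id', true)::int);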


Performing PostgreSQL Version Upgrades

Warda Bibi upgrades a server:

Upgrading to the latest version ensures long-term stability, access to new features, and better support. Recently, I worked on upgrading a critical production PostgreSQL environment from version 11 to 15. Version 15 was chosen because the client’s application had only been tested up to that release.  The system supported large batch workloads and live applications, so we had to be meticulous. While this article draws from that specific project, the steps are broadly applicable to anyone planning a major PostgreSQL upgrade, especially when crossing several versions.

This guide outlines a generalized, production-ready approach for performing major version upgrades using the pg_dump/pg_restore method.

Click through for two upgrade methods and some of the things to keep in mind during the process.
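
As a small, hedged pre-flight sketch to pair with either method: inventory what the old cluster is running, then remember that a dump-and-restore does not carry planner statistics over:

-- On the old cluster: server version and installed extensions to re-verify later.
SELECT version();
SELECT extname, extversion FROM pg_extension ORDER BY extname;

-- On the new cluster, after the restore completes:
ANALYZE;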


Performance Testing ZSTD Compression for SQL Server Backups

Andy Yun tries out some backup compression:

SQL Server 2025 Public Preview is not even a week old, but I’m impressed with another new capability that was released – a new backup compression algorithm: ZSTD. This one came as a surprise, despite being part of Private Preview, as it was only released with Public Preview.

Click through for Andy’s findings. It’s just one database that is not representative of normal SQL Server databases, but it’s an interesting data point that we can use.
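
If you want to try it yourself, SQL Server 2022 introduced the WITH COMPRESSION (ALGORITHM = ...) syntax for Intel QAT, and the 2025 preview slots ZSTD into the same option; treat the exact syntax below as an assumption to verify against the preview documentation, and the database name is hypothetical:

-- Sketch of a ZSTD-compressed backup on the SQL Server 2025 preview.
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase_zstd.bak'
WITH COMPRESSION (ALGORITHM = ZSTD), STATS = 10;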


Interpreting V$ and GV$ Views in Oracle RAC

Kellyn Gorman continues a series on Oracle Real Application Clusters:

Furthering on our Oracle Real Application Clusters (RAC) knowledge, we’re going to go deeper into what we watch for a RAC database that may be different than a single instance.  RAC is built for scale and instance resilience, distributing workloads across multiple nodes.  At the same time, what gives it strength introduces monitoring complexity, especially when you’re not just watching a single instance but multiple, interconnected ones. To manage performance effectively in RAC, you need to understand the difference between V$ and GV$ views, what they show you, and how to interpret cluster-level wait events.  Along with performance, the overall health of the RAC cluster and interconnect must be known, too.

Click through for Kellyn’s explanation.
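
The short version: V$ views report only the instance you’re connected to, while their GV$ counterparts add an INST_ID column and cover every instance in the cluster. A hedged sketch of the kind of query this enables:

-- Global cache (interconnect) waits broken out by instance.
SELECT inst_id, event, total_waits, time_waited
FROM gv$system_event
WHERE event LIKE 'gc%'
ORDER BY inst_id, time_waited DESC;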
