Press "Enter" to skip to content

Category: Administration

Breaking Changes in SQL Server 2025

Rebecca Lewis goes over the list:

Every new SQL Server release comes with shiny features — but SQL Server 2025 brings more than just enhancements. It’s important to know that there are several breaking changes under the hood that could trip up your upgrade if you’re not paying attention.

On the whole, it’s a pretty small list but there are a few things on here that could affect any given environment.


Linux Huge Pages and PostgreSQL

Umair Shahid explains the value of huge pages when running PostgreSQL:

Huge pages are a Linux kernel feature that allocates larger memory pages (typically 2 MB or 1 GB instead of the normal 4 KB). PostgreSQL’s shared buffer pool and dynamic shared memory segments are often tens of gigabytes, and using huge pages reduces the number of pages the processor must manage. Fewer page‑table entries mean fewer translation‑lookaside‑buffer (TLB) misses and fewer page table walks, which reduces CPU overhead and improves query throughput and parallel query performance. The PostgreSQL documentation notes that huge pages “reduce overhead … resulting in smaller page tables and less CPU time spent on memory management.”

One thing I found interesting here was that the advice for PostgreSQL is to disable Transparent Huge Pages, whereas for SQL Server on Linux, Microsoft’s recommendation is to keep THP enabled.
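If you want to see how an existing instance is configured before changing anything, here is a minimal check from the SQL side (a sketch; the last statement assumes PostgreSQL 15 or later). The explicit huge page pool itself is sized with vm.nr_hugepages on the host, and the THP state lives under /sys/kernel/mm/transparent_hugepage/enabled.

    -- Current huge page settings and the shared memory they need to cover
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('huge_pages', 'huge_page_size', 'shared_buffers');

    -- PostgreSQL 15+: how many huge pages the server's shared memory would require,
    -- which you can compare against vm.nr_hugepages
    SHOW shared_memory_size_in_huge_pages;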


Diagnosing a Partition Job Failure after Migration to an AG

Mike Lynn describes a customer issue:

Quick Summary

A client noticed one of their reporting tables wasn’t logging any new information after the first of the new month.

Context

This environment ran on SQL Server 2019 in an Always On Availability Group configuration hosted on AWS EC2 servers. This was roughly 30-45 days after the servers were migrated from a SQL Server Failover Cluster Instance on EC2 to the new AG setup.

Read on for the problem, the discovery process, and the solution. I like reading this sort of report specifically to focus on the process. One of the best skills you can develop in any technical field is the practice of methodical behavior: review and understand the error message (perhaps with the assistance of a search engine or tool of choice), then work logically through possible issues until you discover the cause. It sounds obvious when I describe it that way, but far too often, people flail about and try a variety of arbitrary things, hoping that one of them will fix whatever problem is happening, because they don’t really understand the issue.
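The post has the specifics, but if you ever need to run a similar diagnosis, a reasonable first check (a sketch with hypothetical object names, assuming a table partitioned on a date column) is whether new partition boundaries are still being created and where rows are landing:

    -- Boundary values for a partition function (pf_ReportingDate is a made-up name)
    SELECT pf.name AS partition_function,
           prv.boundary_id,
           prv.value AS boundary_value
    FROM sys.partition_functions AS pf
    JOIN sys.partition_range_values AS prv
        ON prv.function_id = pf.function_id
    WHERE pf.name = 'pf_ReportingDate'
    ORDER BY prv.boundary_id;

    -- Row counts per partition, to spot a rightmost partition silently catching everything
    SELECT p.partition_number, p.rows
    FROM sys.partitions AS p
    WHERE p.object_id = OBJECT_ID('dbo.Reporting')
      AND p.index_id IN (0, 1)
    ORDER BY p.partition_number;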


Postgres Migration via Logical Replication

Elizabeth Christensen makes a move:

Moving a Postgres database isn’t a small task. Typically for Postgres users, this is one of the biggest projects you’ll undertake. If you’re migrating for a new Postgres major version or moving to an entirely new platform or host, you have a couple of options:

Read on for those three options, when logical replication can work, and the upgrade process itself. It’s definitely a bit more fiddly than the other options, but it’s hard to beat with respect to downtime.
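For a sense of the moving parts, here is a minimal logical replication sketch between two instances (the publication name, subscription name, and connection string are placeholders, and it assumes wal_level = logical on the source and that the schema already exists on the target):

    -- On the source (publisher)
    CREATE PUBLICATION app_tables FOR ALL TABLES;

    -- On the target (subscriber)
    CREATE SUBSCRIPTION app_tables_sub
        CONNECTION 'host=old-server dbname=appdb user=replicator password=REDACTED'
        PUBLICATION app_tables;

    -- On the source: watch until the subscriber has caught up
    SELECT application_name, state, replay_lsn
    FROM pg_stat_replication;

Cutover is then a matter of stopping writes on the source, letting the subscriber drain, and repointing the application.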


Creating Database Snapshots with sp_snapshot

David Fowler announces a tool update:

Presenting you with an updated version of our sp_snapshot procedure, allowing you to easily create database snapshots.

This new version fixes a bug that we’ve found in version 2 where snapshots will fail for databases with multiple data files.

We’ve also added the @STMTOnly parameter, allowing you to generate the scripts for creating the required snapshots without actually doing so.

Click through for more information, as well as where you can go to download the script.
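For context, a native snapshot is created with CREATE DATABASE ... AS SNAPSHOT OF, and every data file in the source database needs its own sparse file, which is exactly where a multi-file bug can bite. A hand-written version for a hypothetical two-file database (names and paths made up) looks like this, and is the sort of statement a helper like sp_snapshot generates for you:

    -- Snapshot of a database with two data files; each file gets a sparse file
    CREATE DATABASE Sales_Snapshot_20250101
    ON
        (NAME = Sales_Data1, FILENAME = 'D:\Snapshots\Sales_Data1.ss'),
        (NAME = Sales_Data2, FILENAME = 'D:\Snapshots\Sales_Data2.ss')
    AS SNAPSHOT OF Sales;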


RCSI Scenarios

Haripriya Naidu digs into a few scenarios:

With RCSI enabled, there are three scenarios I’d like to discuss:

  1. UPDATE is in progress, and SELECT starts to run. Where does SELECT read from?
  2. SELECT runs, and there are no concurrent operations or uncommitted transactions. Where does SELECT read from?
  3. SELECT is running, and now an UPDATE starts to run concurrently. What happens to SELECT that is in progress? What about the UPDATE that started? Does it wait for SELECT to finish?

Click through to see what happens during each of these scenarios.
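If you’d like to reproduce the scenarios yourself, a minimal two-session setup (database and table names are placeholders) looks like this:

    -- Enable read committed snapshot isolation for the database
    ALTER DATABASE TestDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Session 1: start an update and leave the transaction open
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 1;
    -- (no COMMIT yet)

    -- Session 2: under RCSI, this reads the last committed version rather than blocking
    SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

    -- Session 1: finish up
    COMMIT;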


Tracking Memory Consumption in Fabric SQL Database

Lance Wright tracks memory utilization:

SQL Database in Fabric continues its commitment to providing you with robust tools for database management, performance monitoring, and optimization. Earlier this year, we released a performance dashboard to help you monitor and improve the performance of your SQL Database in Fabric. We’ve improved upon those performance monitoring capabilities with the ability to track memory consumption. This new capability delivers real-time, actionable data regarding the memory utilization of all database queries to help you make more informed decisions and manage SQL Database resources more efficiently.

Read on to see what you can do with this.
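The post focuses on the dashboard, but if you prefer T-SQL, the standard memory grant DMV from SQL Server and Azure SQL Database gives a similar live view; I’m assuming here that it surfaces the same way in a Fabric SQL database:

    -- Queries currently holding or waiting on memory grants
    SELECT session_id,
           requested_memory_kb,
           granted_memory_kb,
           used_memory_kb,
           wait_time_ms
    FROM sys.dm_exec_query_memory_grants
    ORDER BY requested_memory_kb DESC;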


Linking Fabric Warehouse SQL Query Results to the Capacity Metrics App

Chris Webb follows up on a previous post:

Following on from my post two weeks ago about how to get the details of Power BI operations seen in the Capacity Metrics App using the OperationId column on the Timepoint Detail page, I thought it was important to point out that you can do the same thing with TSQL queries against a Fabric Warehouse/SQL Endpoint and with Spark jobs. These two areas of Fabric are outside my area of expertise so please excuse any mistakes or simplifications, but I know a lot of you are Fabric capacity admins so I hope you’ll find this useful.

Read on to learn more.
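As a rough sketch of the warehouse side of that linkage: a Fabric Warehouse exposes query history through its queryinsights views, and the statement identifier there is what you would match up against the OperationId in the Capacity Metrics App. Treat the view and column names below as assumptions to verify against the documentation rather than a tested query:

    -- Recent statements and their distributed statement IDs (names assumed, not verified)
    SELECT TOP 20
           distributed_statement_id,
           start_time,
           total_elapsed_time_ms,
           command
    FROM queryinsights.exec_requests_history
    ORDER BY start_time DESC;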


Inspecting the Postgres Write-Ahead Log

Henrietta Dombrovskaya digs into the write-ahead log:

First, when the users fixed one of the primary suspect jobs, the situation with WAL growth didn’t change. Second, the rate of the growth couldn’t be explained by these suboptimal jobs: the data volumes they were removing and reloading were still orders of magnitude smaller than the WAL size we were dealing with. Finally, I decided to do what I should have done from the start – to take a look at what exactly was in these super-fast-growing WALs.

Read on to learn what Henrietta found. Also check out the comments for some additional context.
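If you want to do similar detective work, a reasonable SQL-level starting point (assuming PostgreSQL 13 or later and the pg_stat_statements extension) is to see how much WAL the server and individual statements are generating before digging into the WAL files themselves:

    -- Server-wide WAL generation since the stats were last reset (PostgreSQL 14+)
    SELECT wal_records, wal_fpi, wal_bytes, stats_reset
    FROM pg_stat_wal;

    -- Statements generating the most WAL (requires pg_stat_statements)
    SELECT substring(query, 1, 60) AS query_start,
           calls,
           wal_bytes,
           wal_records,
           wal_fpi
    FROM pg_stat_statements
    ORDER BY wal_bytes DESC
    LIMIT 10;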


Losing Data with PostgreSQL and Jepsen

Jeremy Schneider performs some tests:

This is a follow‑up to the last article: Run Jepsen against CloudNativePG to see sync replication prevent data loss. In that post, we set up a Jepsen lab to make data loss visible when synchronous replication was disabled — and to show that enabling synchronous replication prevents it under crash‑induced failovers.

Since then, I’ve been trying to make data loss happen more reliably in the “async” configuration so students can observe it on their own hardware and in the cloud. Along the way, I learned that losing data on purpose is trickier than I expected.

Click through to learn more. Jepsen has been the gold standard in testing distributed database systems for data loss.
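For background on the sync-versus-async distinction being tested, the relevant Postgres knobs look roughly like this (a generic sketch; it says nothing about how CloudNativePG manages these settings for you):

    -- Require a standby to confirm the flush before COMMIT returns to the client
    ALTER SYSTEM SET synchronous_standby_names = '*';
    ALTER SYSTEM SET synchronous_commit = 'on';
    SELECT pg_reload_conf();

    -- Confirm which standbys are currently treated as synchronous
    SELECT application_name, state, sync_state
    FROM pg_stat_replication;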
