Category: Administration

What’s Forthcoming for DBAs in SQL Server 2025

Stephen Planck has a list:

There is a lot to be excited about in SQL Server 2025! When thinking about features that may not get as much attention as others, yet will make a real difference in the lives of DBAs, I have selected my top 5 enhancements for SQL Server 2025. These improvements may not make the headlines, but they address pain points we’ve all experienced as DBAs. Please let me know if I’ve left any of your favorites off the list.

Read on for the list. It’s not a set of fancy new features, though there are some quality-of-life improvements here.

Session Variables in PostgreSQL

Kaarel Moppel talks session variables:

Prompted by some comments and complaints about Postgres’ missing user-variables story on a Reddit post about PostgreSQL pain points in the real world, I thought I’d elaborate a bit on session vars – which is indeed a little-known piece of Postgres functionality.

Although this “alley” has existed for ages – and one can use injected session variables to implement crazy stuff like token-based Row Level Security, store huge and complex JSON state, or just keep a bit of DB-side state over essentially stateless statement-level connection pools – should you actually use it? What are the alternatives? Read on …

Click through to learn more.
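
To make the mechanism concrete: PostgreSQL treats any two-part parameter name (one containing a dot) as a “customized option,” which you can set and read per session. This is a minimal sketch; the parameter name my.app_user is made up for the example.

    -- Set a session-scoped variable; any name with a dot works as a customized option
    SET my.app_user = 'alice';

    -- Or set it programmatically; the third argument makes it transaction-local when true
    SELECT set_config('my.app_user', 'bob', false);

    -- Read it back; the second argument suppresses the error if the variable was never set
    SELECT current_setting('my.app_user', true);

This is the same trick that underpins patterns like the token-based Row Level Security Kaarel mentions, where a policy calls current_setting() to decide row visibility.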

Performing PostgreSQL Version Upgrades

Warda Bibi upgrades a server:

Upgrading to the latest version ensures long-term stability, access to new features, and better support. Recently, I worked on upgrading a critical production PostgreSQL environment from version 11 to 15. Version 15 was chosen because the client’s application had only been tested up to that release.  The system supported large batch workloads and live applications, so we had to be meticulous. While this article draws from that specific project, the steps are broadly applicable to anyone planning a major PostgreSQL upgrade, especially when crossing several versions.

This guide outlines a generalized, production-ready approach for performing major version upgrades using the pg_dump/pg_restore method.

Click through for two upgrade methods and some of the things to keep in mind during the process.
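
For orientation, a dump-and-restore upgrade of this sort generally boils down to a few commands; the host names, database name, and job count below are placeholders, and you would run the dump with the newer version’s client tools where possible.

    # Dump the v11 database in custom format, which pg_restore can parallelize
    pg_dump -h old-host -U postgres -Fc -f appdb.dump appdb

    # Create an empty target database on the v15 cluster
    createdb -h new-host -U postgres appdb

    # Restore with parallel jobs to speed up data load and index rebuilds
    pg_restore -h new-host -U postgres -d appdb -j 4 appdb.dump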

Performance Testing ZSTD Compression for SQL Server Backups

Andy Yun tries out some backup compression:

SQL Server 2025 Public Preview is not even a week old, but I’m impressed with another new capability that was released – a new backup compression algorithm, ZSTD. This one came as a surprise: despite my being part of the Private Preview, ZSTD was only released with the Public Preview.

Click through for Andy’s findings. It’s just one database that is not representative of normal SQL Server databases, but it’s an interesting data point that we can use.
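
For reference, SQL Server 2025 surfaces the new algorithm through the existing COMPRESSION backup option; this is a sketch based on the public preview syntax, with placeholder database and file names.

    -- Back up using the new ZSTD compression algorithm (SQL Server 2025 preview)
    BACKUP DATABASE [YourDatabase]
    TO DISK = N'D:\Backups\YourDatabase_zstd.bak'
    WITH COMPRESSION (ALGORITHM = ZSTD), STATS = 10;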

Interpreting V$ and GV$ Views in Oracle RAC

Kellyn Gorman continues a series on Oracle Real Application Clusters:

Building on our Oracle Real Application Clusters (RAC) knowledge, we’re going to go deeper into what we watch for in a RAC database that may differ from a single instance. RAC is built for scale and instance resilience, distributing workloads across multiple nodes. At the same time, what gives it strength introduces monitoring complexity, especially when you’re not just watching a single instance but multiple, interconnected ones. To manage performance effectively in RAC, you need to understand the difference between V$ and GV$ views, what they show you, and how to interpret cluster-level wait events. Along with performance, the overall health of the RAC cluster and interconnect must be known, too.

Click through for Kellyn’s explanation.
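
The short version of the V$/GV$ distinction: a V$ view shows only the instance you are connected to, while its GV$ counterpart unions all instances and adds an INST_ID column. A quick hypothetical example:

    -- Sessions on the local instance only
    SELECT COUNT(*) FROM v$session;

    -- Sessions on every instance in the cluster, broken out by node
    SELECT inst_id, COUNT(*) AS session_count
    FROM   gv$session
    GROUP  BY inst_id
    ORDER  BY inst_id;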

Handling Large Delete Operations in TimescaleDB

Semab Tariq deletes a significant amount of data:

In today’s blog, we will discuss another crucial aspect of time-series data management: massive delete operations.

As your data grows over time, older records often lose their relevance but continue to occupy valuable disk space, potentially increasing storage costs and degrading performance if not managed well.

Let’s walk through some strategies to clean up or downsample aged data in TimescaleDB, helping you maintain a lean, efficient, and cost-effective database.

The “or downsample” is huge, by the way: as a simple example, suppose you collect one record every millisecond, or 1000 per second. Say that we have a date+time and a few floating point numbers that add up to 40 bytes per record. If we have a year of data at that grain, we have 40 bytes/record * 1000 records/second * 3600 seconds/hour * 24 hours/day * 365.25 days/year, or 1,262,304,000,000 bytes/year. That’s ~1.15 terabytes of data per year, assuming no compression (which there actually is, but whatever). By contrast, if you keep millisecond-level data for a week, second-level for 3 weeks, and minute-level for the remaining year, you have:

40 bytes/record * 1000 records/second * 3600 seconds/hour * 24 hours/day * 7 days/week * 1 week = 22.53 gigabytes
40 bytes/record * 1 record/second * 3600 seconds/hour * 24 hours/day * 7 days/week * 3 weeks = 69 megabytes
40 bytes/record * 1 record/minute * 60 minutes/hour * 24 hours/day * 337.25 days = 18.5 megabytes

And for most cases, we only need the lowest level of granularity for a relatively short amount of time. After that, we typically care more about how the current data looks versus older data, for the purposes of trending.
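
In TimescaleDB, that tiering typically combines continuous aggregates (for the downsampling) with retention policies or drop_chunks() (for the cleanup). A hedged sketch, assuming a hypothetical conditions hypertable with time and value columns:

    -- Downsample raw data into per-minute averages
    CREATE MATERIALIZED VIEW conditions_1m
    WITH (timescaledb.continuous) AS
    SELECT time_bucket(INTERVAL '1 minute', time) AS bucket,
           avg(value) AS avg_value
    FROM   conditions
    GROUP  BY bucket;

    -- Age out raw chunks automatically instead of running massive DELETEs
    SELECT add_retention_policy('conditions', INTERVAL '7 days');

    -- Or drop old chunks manually
    SELECT drop_chunks('conditions', older_than => INTERVAL '7 days');

Dropping whole chunks avoids the table bloat and vacuum overhead that row-by-row deletes incur, which is the point of these strategies.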

Monitoring Node Health in Oracle RAC

Kellyn Gorman continues a series:

After my last blog post on Oracle Real Application Clusters (RAC), I was asked to talk about both health and how performance impact can affect a RAC database. Its architecture enables failover, workload distribution, and an option to scale performance, but only when all nodes play well together. When one node drags behind or becomes unstable, RAC has no choice but to protect the rest of the cluster – so help me, Oracle Gods. This protection can come in the form of node eviction, which can be both disruptive and, at times, avoidable with proactive monitoring and intervention.

Click through to learn how Oracle monitors node health, the types of issues you might run into, and how to prevent node eviction.
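
As a starting point for the database-side view of node health, the GV$ pattern from the previous post applies here too; a minimal sketch:

    -- Which instances are up, where they run, and since when
    SELECT inst_id, instance_name, host_name, status, startup_time
    FROM   gv$instance
    ORDER  BY inst_id;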

Analyzing Snowflake Costs

Kevin Wilkie watches a moth fly out of his wallet and wonders where all of the money went:

Last time, in Dashboard Dreams and Snowflake Schemes, we talked a little about showing how much Snowflake really costs in a dashboard internal to Snowflake itself, instead of having to push it to Power BI, Tableau, Looker, or a myriad of other tools.

This time, let’s take it a step further: instead of sticking with the basic bar charts or exploding pie charts, we’ll explore how to better highlight usage trends by adding a Rolling 7-Day Average to our visualizations. This helps us more easily spot patterns and anomalies within our warehouses.

Read on for a pair of queries and a neat chart.
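
The core of a rolling 7-day average is just a window function over daily credit totals; this is a sketch against the standard ACCOUNT_USAGE metering view, not necessarily Kevin’s exact query.

    -- Daily credits per warehouse with a rolling 7-day average
    SELECT warehouse_name,
           DATE_TRUNC('day', start_time)  AS usage_day,
           SUM(credits_used)              AS daily_credits,
           AVG(SUM(credits_used)) OVER (
               PARTITION BY warehouse_name
               ORDER BY DATE_TRUNC('day', start_time)
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS rolling_7d_avg
    FROM   snowflake.account_usage.warehouse_metering_history
    GROUP  BY warehouse_name, DATE_TRUNC('day', start_time)
    ORDER  BY warehouse_name, usage_day;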

The Basics of Oracle RAC

Kellyn Gorman gives us a primer:

Oracle Real Application Clusters (RAC) is still one of the most robust instance high-availability and scalability solutions, designed to provide resilience, performance, and continuous service for many Oracle enterprise workloads. Whether deployed on two nodes or a complex multi-node setup, RAC ensures your database infrastructure is both fault-tolerant and responsive under increasing demand. RAC is an essential part of the Maximum Availability Architecture (MAA) recommended practices and (in my experience) is found in about 40% of small-to-medium Oracle environments, 98% of large enterprise environments, and 100% of Exadata engineered systems.

In this post, we’ll walk through the architectural foundation of RAC, configuration essentials, and a real-world transactional scenario that highlights the importance of its shared and synchronized architecture.

Read on to learn more.

Data Archival and Retention in PostgreSQL

Daria Nikolaenko walks through a presentation:

I’ve started talking about something that happens with almost every Postgres database — the slow, steady growth of data. Whether it’s logs, events, or transactions, old rows pile up, performance suffers, and managing it all becomes tricky. My talk focused on practical ways to archive, retain, and clean up data in PostgreSQL without breaking queries or causing downtime.

Read on to learn more.
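
A common pattern in talks like this is time-based partitioning, which turns retention into a cheap partition drop rather than a bloat-inducing DELETE; a minimal sketch with hypothetical table and partition names:

    -- Range-partition by time so old data can be detached or dropped wholesale
    CREATE TABLE events (
        created_at timestamptz NOT NULL,
        payload    jsonb
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE events_2025_01 PARTITION OF events
        FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

    -- Retention: detach the expired partition to archive it, or drop it outright
    ALTER TABLE events DETACH PARTITION events_2025_01;
    DROP TABLE events_2025_01;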
