Press "Enter" to skip to content

Category: Administration

Oracle Errors: Snapshot Too Old and LOB Columns

David Fitzjarrell tackles a pair of errors:

One of the few errors that strikes fear in the heart of a DBA is the dreaded:

ORA-01555 snapshot too old
and
ORA-22924 snapshot too old

Of course there are plenty of blogs instructing the DBA to simply “increase the undo_retention”, and there are cases where this works as expected. However, LOBs can be different, as two different mechanisms exist for undo management. A LOB column can be configured to use retention to manage before images of the data, but that can be confusing as each LOB column MAY have its own retention setting. The DBA_LOBS view reports whether a LOB column uses retention or pctversion to manage undo, along with the associated setting being used. Let’s dig into this a bit deeper.
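A quick way to see which mechanism is in play is to query DBA_LOBS directly. A minimal sketch, with APP_OWNER standing in for your schema name:

    -- Each LOB column is governed by either RETENTION or PCTVERSION;
    -- DBA_LOBS reports the setting in effect for each column.
    SELECT owner,
           table_name,
           column_name,
           securefile,
           retention,
           pctversion
    FROM   dba_lobs
    WHERE  owner = 'APP_OWNER'  -- hypothetical schema name
    ORDER  BY table_name, column_name;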

Read on to learn more about how LOB retention works, the types of issues you can run into with it, and how to correct those issues.

Common SQL Server Mistakes: Default Auto-Growth

Hemantgiri Goswami takes a look at auto-growth:

Auto Growth is a feature that allows database files (primary, secondary, and log) to expand when the database file becomes full – without manual intervention.

The Auto Growth feature is handy when we do not want to increase the size of database files manually. There are two ways you can set auto growth – using SQL Server Management Studio (SSMS hereafter) and T-SQL. Auto Growth can be configured in percent or in megabytes.

Auto-growth isn’t a problem on its own, though default growth sizes, especially in older versions of SQL Server, were far too low for medium- and large-sized databases.

I don’t particularly like the 2.5 MB example Hemantgiri shows. I have a quick rule of thumb: 64MB for small databases, 256-512MB for medium-sized databases, and 1GB for large databases (assuming my underlying disk is fast). This limits the number of auto-growth events and, for log files in particular, keeps virtual log file counts more reasonable.
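Setting a fixed increment is a one-line ALTER per file. A minimal sketch, where MyDatabase and the logical file names are hypothetical:

    -- Grow the data file and log file in fixed 256MB increments.
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = N'MyDatabase', FILEGROWTH = 256MB);
    ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = N'MyDatabase_log', FILEGROWTH = 256MB);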

Vacuuming in PostgreSQL

Muhammad Ali keeps things tidy:

If you’re a PostgreSQL user, you’ve undoubtedly come across the term “vacuum”. This operation plays a pivotal role in maintaining the optimal performance of your database while preventing unnecessary data bloat. In this blog, we’ll understand how vacuum works at a high level, its significance, types, server parameters that influence autovacuum operations, and general FAQs on vacuum.

Read on to learn more about what vacuuming does and why it is important. It also turns out that there are multiple types of vacuuming.
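As a taste of those types, here is a minimal sketch; orders is a hypothetical table name:

    -- Reclaim dead row space for reuse within the table; does not block reads or writes.
    VACUUM orders;

    -- Do the same and refresh planner statistics in one pass.
    VACUUM (ANALYZE) orders;

    -- Rewrite the table and return space to the OS; takes an ACCESS EXCLUSIVE lock.
    VACUUM FULL orders;

    -- Confirm the autovacuum daemon is enabled.
    SHOW autovacuum;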

Debugging an Unresponsive Elasticsearch Cluster

Derric Gilling troubleshoots an Elasticsearch cluster:

Because of this sharding, a read or write request to an Elasticsearch cluster requires coordinating between multiple nodes, as there is no “global view” of your data on a single server. While this makes Elasticsearch highly scalable, it also makes it much more complex to set up and tune than other popular databases like MongoDB or PostgreSQL, which can run on a single server.

When reliability issues come up, firefighting can be stressful if your Elasticsearch setup is buggy or unstable. Your incident could be impacting customers, which could negatively affect revenue and your business reputation. Fast remediation steps are important, yet spending a large amount of time researching solutions online during an incident or outage is not a luxury most engineers have. This guide is intended to be a cheat sheet for common issues that engineers running Elasticsearch can encounter and what to look for.

Read on for several helpful tips.

Safe Mode for Updates in MySQL

Chad Callihan is no dummy:

Did you know MySQL has a flag designed to prevent accidentally changing more data than you intended? If not, I think you’ll find it easy to remember as the flag has a memorable name: ‘i-am-a-dummy.’ If you have this flag set and leave off a WHERE clause when updating or deleting data, MySQL will prevent the statement from executing.

Let’s walk through an example using i-am-a-dummy and its “Safe Updates” Workbench counterpart.

Seems like this should be on by default for most servers.
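As a rough sketch of the behavior, using a hypothetical customers table (safe mode expects the WHERE clause to reference a key column):

    -- Session-level equivalent of starting the client with --i-am-a-dummy
    -- (also known as --safe-updates).
    SET SQL_SAFE_UPDATES = 1;

    -- Fails with error 1175 instead of updating every row.
    UPDATE customers SET status = 'inactive';

    -- Succeeds, assuming customer_id is a key column.
    UPDATE customers SET status = 'inactive' WHERE customer_id = 42;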

Choosing the Correct Enterprise Edition for SQL on Linux

Amit Khandelwal disambiguates two options:

Recently, in one of the meetings with our CSS team (support), we realized that our customers might not be selecting the right Enterprise edition for their SQL Servers deployed on Linux. To learn more about different SQL Server editions, please refer to Editions and supported features of SQL Server 2022 – SQL Server | Microsoft Learn.

Thus, through this blog we would like to provide guidance on how you can identify and update SQL Server to use the right Enterprise edition, which should show up in the error log as “Enterprise Edition: Core-based Licensing (64-bit)” and not “Enterprise Edition (64-bit)”. The latter is only needed if you must follow historical licensing agreements that limit you to a maximum of 20 cores per SQL Server instance.

Read on to learn how you can change the edition if you did indeed goof on the installation.
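Checking where you stand is a single query:

    -- Reports the edition string the instance is currently running under.
    SELECT SERVERPROPERTY('Edition') AS Edition;

On Linux, changing the edition itself is typically done with the mssql-conf utility (sudo /opt/mssql/bin/mssql-conf set-edition).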

Creating a Postgres Cluster on AWS with pg_cirrus

Salman Ahmed builds a cluster:

pg_cirrus is a simple and automated solution to deploy highly available 3-node PostgreSQL clusters with auto failover. It is built using Ansible, and it uses pgpool to perform auto failover and load balancing.

We understand that setting up a 3-node HA cluster using pg_cirrus in a cloud environment isn’t as simple as setting it up on VMs. In this blog, we will guide you through setting up a 3-node HA cluster using pg_cirrus on AWS EC2 instances.

Read on for the step-by-step instructions.

The Risk of Changing MaxDOP

Erik Darling recommends caution:

Like in yesterday’s post about Cost Threshold For Parallelism, changing MAXDOP settings will have a universal effect on the workload.

This is true whether you change it at the server level for all databases, or at the database level using a database scoped configuration for a single database.

It is a guardrail to prevent unwanted conditions as a whole, like excessive concurrent parallel queries causing worker thread starvation (THREADPOOL waits), or just pushing CPU to 100% for extended periods of time.
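For reference, the two scopes Erik mentions look like this; 8 and 4 are arbitrary example values, not recommendations:

    -- Server-level: applies to every database on the instance.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;

    -- Database-level: run in the target database to override the server setting.
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;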

Read on to see what Erik recommends you think about after any MaxDOP change.

Thoughts on Cost Threshold for Parallelism

Erik Darling has some thoughts:

First, I’m not suggesting that anyone should be using the default value for Cost Threshold For Parallelism. It’s old and moldy and not a good fit for most workloads functioning on modern hardware.

My apologies to Azure SQLDB users who can’t change this setting and leave it up to Microsoft to maybe manage it for them based on ???

Some people out there really like fiddling with settings in a usually ill-informed reaction to Some Script They Found On The Internet, without reading the fine print.

Erik’s thoughts are reasonable overall. My recommendation is to use Michael J. Swart’s technique for tuning cost threshold for parallelism as a starting point, as it gives you a basis for understanding the net effect of your changes.
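For reference, the setting itself is a server-level sp_configure change; the 50 below is a common starting point rather than a recommendation:

    -- Raise Cost Threshold for Parallelism from its default of 5.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 50;
    RECONFIGURE;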
