Press "Enter" to skip to content

Category: Administration

Vacuuming in PostgreSQL

Muhammad Ali keeps things tidy:

If you’re a PostgreSQL user, you’ve undoubtedly come across the term “vacuum”. This operation plays a pivotal role in maintaining the optimal performance of your database while preventing unnecessary data bloat. In this blog, we’ll look at how vacuum works at a high level, its significance, its types, the server parameters that influence autovacuum operations, and general FAQs on vacuum.

Read on to learn more about what vacuuming does and why it is important. It also turns out that there are multiple types of vacuuming.
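
If you want to poke at this yourself before reading, here is a minimal sketch (the table name my_table is just a placeholder) of running a manual vacuum and checking the settings and statistics that drive autovacuum:

```sql
-- Manually vacuum and analyze one table, with verbose progress output
VACUUM (VERBOSE, ANALYZE) my_table;

-- Review a few of the settings that control autovacuum behavior
SELECT name, setting
FROM pg_settings
WHERE name IN ('autovacuum',
               'autovacuum_naptime',
               'autovacuum_vacuum_threshold',
               'autovacuum_vacuum_scale_factor');

-- See which tables are carrying dead tuples and when they were last vacuumed
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```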


Debugging an Unresponsive Elasticsearch Cluster

Derric Gilling troubleshoots an Elasticsearch cluster:

Because of this sharding, a read or write request to an Elasticsearch cluster requires coordinating between multiple nodes, as there is no “global view” of your data on a single server. While this makes Elasticsearch highly scalable, it also makes it much more complex to set up and tune than other popular databases like MongoDB or PostgreSQL, which can run on a single server.

When reliability issues come up, firefighting can be stressful if your Elasticsearch setup is buggy or unstable. Your incident could be impacting customers, which could hurt revenue and your business reputation. Fast remediation steps are important, yet spending a large amount of time researching solutions online during an incident or outage is not a luxury most engineers have. This guide is intended to be a cheat sheet for common issues that engineers running Elasticsearch can run into and what to look for.

Read on for several helpful tips.


Safe Mode for Updates in MySQL

Chad Callihan is no dummy:

Did you know MySQL has a flag designed to prevent accidentally changing more data than you intended? If not, I think you’ll find it easy to remember as the flag has a memorable name: ‘i-am-a-dummy.’ If you have this flag set and leave off a WHERE clause when updating or deleting data, MySQL will prevent the statement from executing.

Let’s walk through an example using i-am-a-dummy and its “Safe Updates” Workbench counterpart.

Seems like this should be on by default for most servers.
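
As a quick illustration of the behavior Chad describes (the customers table below is hypothetical), safe-updates mode can also be toggled per session, and an UPDATE without a key-based WHERE clause is then rejected:

```sql
-- Equivalent to starting the client with --safe-updates / --i-am-a-dummy
SET SESSION sql_safe_updates = 1;

-- Fails with error 1175: no WHERE clause on a key column and no LIMIT
UPDATE customers SET status = 'inactive';

-- Runs, because the WHERE clause filters on a key column
UPDATE customers SET status = 'inactive' WHERE customer_id = 42;
```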


Choosing the Correct Enterprise Edition for SQL on Linux

Amit Khandelwal disambiguates two options:

Recently, in one of the meetings with our CSS (support) team, we realized that our customers might not be selecting the right Enterprise edition for their SQL Servers deployed on Linux. To learn more about the different SQL Server editions, please refer to Editions and supported features of SQL Server 2022 – SQL Server | Microsoft Learn.

Thus, through this blog we would like to provide guidance on how you can identify and update SQL Server to use the right Enterprise edition, which should show up in the error log as “Enterprise Edition: Core-based Licensing (64-bit)” rather than “Enterprise Edition (64-bit)”; the latter is only needed if you have to follow historical licensing agreements that limit you to a maximum of 20 cores per SQL Server instance.

Read on to learn how you can change the edition if you did indeed goof on the installation.
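
Before and after any change, a quick way to see what your instance currently reports is SERVERPROPERTY; the edition strings in the comment below are the two Amit calls out:

```sql
-- Returns e.g. 'Enterprise Edition: Core-based Licensing (64-bit)'
-- or 'Enterprise Edition (64-bit)'
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```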


Creating a Postgres Cluster on AWS with pg_cirrus

Salman Ahmed builds a cluster:

pg_cirrus is a simple and automated solution for deploying highly available 3-node PostgreSQL clusters with auto failover. It is built using Ansible, and it uses pgpool for auto failover and load balancing.

We understand that setting up a 3-node HA cluster using pg_cirrus in a cloud environment isn’t as simple as setting it up on VMs. In this blog we will guide you through setting up a 3-node HA cluster using pg_cirrus on AWS EC2 instances.

Read on for the step-by-step instructions.


The Risk of Changing MaxDOP

Erik Darling recommends caution:

Like in yesterday’s post about Cost Threshold For Parallelism, changing MAXDOP settings will have a universal effect on the workload.

This is true whether you change it at the server level for all databases, or at the database level using a database scoped configuration for a single database.

It is a guardrail to prevent unwanted conditions as a whole, like excessive concurrent parallel queries causing worker thread starvation (THREADPOOL waits), or just pushing CPU to 100% for extended periods of time.

Read on to see what Erik recommends you think about after any MaxDOP change.
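
For reference, both scopes Erik mentions take only a couple of lines of T-SQL; the value 8 below is purely illustrative, not a recommendation:

```sql
-- Instance-wide: applies to every database on the server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;

-- Single database: a database scoped configuration overrides the server setting
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;
```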


Thoughts on Cost Threshold for Parallelism

Erik Darling has some thoughts:

First, I’m not suggesting that anyone should be using the default value for Cost Threshold For Parallelism. It’s old and moldy and not a good fit for most workloads functioning on modern hardware.

My apologies to Azure SQLDB users who can’t change this setting and leave it up to Microsoft to maybe manage it for them based on ???

Some people out there really like fiddling with settings in a usually ill-informed reaction to Some Script They Found On The Internet, without reading the fine print.

Erik’s thoughts are reasonable overall. My recommendation is to use Michael J. Swart’s technique for tuning cost threshold for parallelism as a starting point, as it gives you a basis for understanding what the net effect of your changes is.
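
If you do decide to move off the default, the change itself is trivial; the hard part is picking the number, so treat 50 below as a placeholder for whatever your own measurement suggests:

```sql
-- Assumes 'show advanced options' is already enabled (see the MaxDOP example above)
-- The default of 5 dates back to much older hardware
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```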


Port Scanning for SQL Server

David Fowler performs one of the early steps of a penetration test:

Since witnessing a rather nasty cyber attack around a year ago, I’ve been thinking quite a bit about security. Do we really know how secure our SQL Servers are? Penetration testing is a great way to find out where our weaknesses and vulnerabilities are. Ideally you probably want to be getting regular pen tests conducted by external companies (although in my experience, some are better than others. I’ve known some who argue totally pointless issues and miss glaring holes which I know exist, but that’s a whole different story) but wouldn’t it be useful if we could conduct some of these tests ourselves?

In this series of posts, I’m going to try to knock together a little pen testing toolbox so that we can hopefully find some of these vulnerabilities. I’m no pen testing expert and this is never going to replace getting a professional pen tester in to test your setup but it might go some way to helping us understand some of our vulnerabilities and identify them.

Click through to see what David did, as well as an alert which helped pick out this port scanning operation.
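
As a small complementary check (not part of David’s toolbox, just a sanity check from inside the instance), you can confirm which TCP ports SQL Server is actually listening on, so you know what a scan of your own estate ought to find:

```sql
-- Ports this instance is listening on (T-SQL endpoint, mirroring/AG endpoints, etc.)
SELECT listener_id, ip_address, port, type_desc, state_desc
FROM sys.dm_tcp_listener_states;
```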


Upgrading SQL Server Cloud VMs

Brent Ozar recommends you check your cloud provider’s VM listings:

If you’ve been in Azure or Amazon for a few years, you’re probably on old, slow hardware.

In the last 3 weeks, I’ve had two clients who’d both been early cloud adopters. When they’d migrated to the cloud, they both used Azure Ev3 VMs – at the time, a good choice for SQL Server due to its relatively high amount of memory. When the Ev3 VM types were announced in 2017, they were hosted on Intel Broadwell and Haswell processors with 2.3-2.4GHz processing speed.

Also, even if you’re locked into a 1-year or 3-year deal, I know that at least Azure is usually willing to switch your VM class registration if you contact your support person. I’m not positive if AWS does the same but it wouldn’t shock me.
