Press "Enter" to skip to content


VACUUM FULL in PostgreSQL

Umair Shahid goes full vacuum and you never go full vacuum:

If you have worked with PostgreSQL for a while, you have probably come across the command VACUUM FULL. At first glance, it might seem like a silver bullet for reclaiming disk space and optimizing tables. After all, who would not want to tidy things up and make their database more efficient, right?

But here is the thing: while VACUUM FULL can be useful in some situations, it is not the hero it might seem. In fact, it can cause more problems than it solves if you are not careful.

Read on to learn what it does and why it’s not always a good idea.
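
As a quick illustration of the difference (a minimal sketch; the table name is made up): plain VACUUM marks dead space as reusable without blocking normal reads and writes, while VACUUM FULL rewrites the table to hand space back to the operating system and holds an ACCESS EXCLUSIVE lock the entire time.

    -- Plain VACUUM: reclaims dead tuples for reuse; does not block normal reads or writes.
    VACUUM (VERBOSE, ANALYZE) my_table;

    -- VACUUM FULL: rewrites the entire table and returns space to the OS,
    -- but it takes an ACCESS EXCLUSIVE lock, blocking all access for the duration.
    VACUUM FULL my_table;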


Waiting for Locks in Postgres

Hubert Lubaczewski wants to make a change:

I once wrote about this problem, but given that we now have DO blocks and procedures, I can make a nicer, easier-to-use example.

Over the years there have been many improvements to how long ALTER TABLE can take. You can now (in some cases) change a datatype without a rewrite or add a default value.

Regardless of how fast the thing works, it still needs an extremely heavy (though short-lived) lock: Access Exclusive.

Read on to see how you can write a SQL operation that waits for a lock and, if it does not get this lock, retries with backoff.
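
To give a flavor of the technique (a sketch of the general idea rather than Hubert's exact code; the table, column, and timing values are invented), you can pair a short lock_timeout with a retry loop so the ALTER TABLE bails out quickly if it cannot get its ACCESS EXCLUSIVE lock and tries again after a pause:

    DO $$
    DECLARE
        attempt int := 0;
    BEGIN
        -- Fail fast instead of queueing behind long-running transactions
        -- (and blocking everything queued behind us).
        SET LOCAL lock_timeout = '100ms';

        LOOP
            attempt := attempt + 1;
            BEGIN
                ALTER TABLE my_table ADD COLUMN new_col int;
                RAISE NOTICE 'succeeded on attempt %', attempt;
                EXIT;
            EXCEPTION WHEN lock_not_available THEN
                IF attempt >= 10 THEN
                    RAISE;  -- give up after ten tries
                END IF;
                RAISE NOTICE 'lock not acquired, retrying (attempt %)', attempt;
                PERFORM pg_sleep(attempt * 0.5);  -- simple linear backoff
            END;
        END LOOP;
    END
    $$;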


Against Using SQL Server Database Mail

Brent Ozar lays out an argument:

So you’re writing T-SQL code and you’ve decided that you want to send emails to customers, employees, or the general public. Perhaps you need to send order updates or low stock notifications.

Stop right there. You don’t really wanna do that with SQL Server. Let’s talk about why.

Read on for good reasons why. The bit of fuzziness here is “employees,” as Database Mail is quite useful for DBAs or individuals responsible for the upkeep of database-related operations. But the further you move away from “People whose job it is to keep the SQL Server instance up and running,” the less viable Database Mail is as a product, for the exact reasons Brent mentions.
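
For that operational use case, the mechanics are simple enough; a minimal sketch (the profile name, recipient, and message are placeholders, and the profile has to exist already):

    -- Send an operational alert from a job step or monitoring script.
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Alerts',
        @recipients   = N'dba-team@example.com',
        @subject      = N'Nightly ETL failed on SQL01',
        @body         = N'Step 3 (Load Warehouse) failed. See the job history for details.',
        @body_format  = 'TEXT';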

Of course, you could always do what I did at one consulting engagement and set up sending Slack messages via CLR. I’m not saying it was a good idea, but it was pretty cool that it worked as well as it did.


High CPU with SQL Server Database Mail

Vitaly Bruk deals with a CPU issue:

Today, I investigated an interesting issue.

One of my clients called me and complained about high CPU on his server, a server with zero load during that time frame.

Using my favorite “Activity” script and the sys.dm_os_ring_buffers DMV, I clearly saw that the CPU was being used by a non-SQL Server service. Next, I asked to connect to the SQL Server machine and opened Task Manager.

Surprise, surprise! The CPU was being used by the SQL Server process! Well, kind of…

The moral of the story is to keep your systems patched.
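
For reference, the kind of query you would run against that DMV to split CPU between SQL Server and everything else looks roughly like this (a sketch of the common pattern, not Vitaly's script):

    -- Recent CPU snapshots from the scheduler monitor ring buffer.
    -- ProcessUtilization is SQL Server's share; whatever is left after idle belongs to other processes.
    SELECT TOP (30)
        x.rec.value('(./Record/@id)[1]', 'int') AS record_id,
        x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
        x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct,
        100
          - x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')
          - x.rec.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS other_cpu_pct
    FROM sys.dm_os_ring_buffers AS rb
    CROSS APPLY (SELECT CAST(rb.record AS xml) AS rec) AS x
    WHERE rb.ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
    ORDER BY rb.[timestamp] DESC;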


Trace Flag 3625 and Masking Failed Logins

Patrick Keisler notes an interesting data capture result:

Several months ago, I discussed my customer’s intention to enable trace flag 3625. Since that time, we have observed an intriguing phenomenon when encountering a login failure.

To demonstrate this, our initial step is to ensure we are capturing failed login attempts. In Server Properties, select the Security tab, click on Failed Logins Only, and then click OK.

Patrick has a link to a description of trace flag 3625, but the short version is that it masks sensitive information in error messages from non-sysadmins.
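
If you want to try it out, the flag can be flipped on globally for a running instance (a quick sketch; on a production system you would normally set -T3625 as a startup parameter so it survives restarts):

    -- Enable trace flag 3625 globally for the running instance.
    DBCC TRACEON (3625, -1);

    -- Confirm that it is active.
    DBCC TRACESTATUS (3625);

    -- With the flag on, certain error messages have their details masked
    -- for callers who are not members of the sysadmin role.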


A Mechanism for Change Management for DBAs

Terri Hurley shares one path:

Back in the day, when I received an email from a developer who requested a production change, I would copy and paste SQL code from the email into SQL Server to run against a production database. I never bothered to save that code. But later, when I was asked for a history of database changes, it was awkward to locate all the previous email requests asking for production changes. Fast forward to the present day, and businesses of all sizes have embraced Change Management and Separation of Duties.

My preferred path to change management is:

  • All changes go into source control. You can either have the people making changes create the actual scripts to run, or generate them from base changes in source control, dealer’s choice.
  • Each changeset goes to a specific environment when it gets PR’d into the appropriate branch—that is, dev, QA, test, pre-prod, prod, post-prod, next-gen-post-dev-unprod, whatever.
  • DBAs are on pull requests, at least after a certain point. PR to dev? Let the team handle that code review. Once you move nearer to prod, at least one DBA performs a code review.
  • After PR gating happens, your CI/CD system automatically deploys the changes, assuming it passes any continuous integration tests.

This takes time to get right, but it’s very much worth the effort. I worked at a company that built its release management processes on top of Git + Azure DevOps. You can also use tools like Flyway.
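
If you go the Flyway route, the unit of change is just a versioned SQL script checked into the repository; a hypothetical example (the file name, version number, and table are invented) would look like:

    -- V20250115.01__add_status_to_invoices.sql
    -- Versioned migration: lives in source control, gets reviewed via pull request,
    -- and is applied to each environment in order by the deployment pipeline.
    ALTER TABLE dbo.Invoices
        ADD Status tinyint NOT NULL
            CONSTRAINT DF_Invoices_Status DEFAULT (0);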


Roles and Privileges in Oracle versus PostgreSQL

Umair Shahid continues a series on migrating from Oracle to Postgres:

When moving from Oracle to PostgreSQL, one of the key differences lies in how each database handles roles and privileges. Oracle’s privilege model is deeply ingrained in enterprise systems, with fine-grained user controls and a strict distinction between users and roles. PostgreSQL, while just as capable, approaches roles and privileges differently, offering flexibility and simplicity, but it also requires a shift in mindset for Oracle users.

This article provides a practical guide for Oracle experts to understand and implement roles and privileges in PostgreSQL, addressing the structural differences, common challenges, and best practices to make this transition smooth.

Read on for the differences between the two platforms.
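
As a taste of the PostgreSQL side (a minimal sketch; the database, schema, and role names are invented), the key mental shift is that users and roles are the same kind of object, and a “user” is simply a role that is allowed to log in:

    -- A group-style role: no login, just a bundle of privileges.
    CREATE ROLE app_readonly NOLOGIN;
    GRANT CONNECT ON DATABASE appdb TO app_readonly;
    GRANT USAGE ON SCHEMA public TO app_readonly;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;

    -- A "user" is just a role with LOGIN; it picks up the group's privileges through membership.
    CREATE ROLE report_user LOGIN PASSWORD 'changeme';
    GRANT app_readonly TO report_user;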


CPU Monitoring in SQL Server with Datadog

Kendra Little has a recommendation:

What makes me a raving fan is the flexibility of Datadog’s notebooks and dashboards, combined with the ability to create all sorts of custom metrics and monitors. There are always things in SQL Server monitoring packs that I have strong opinions about. Datadog lets me take what I want, build what I need that isn’t contained in that, and ignore the rest. For a team that has the budget to afford Datadog paired with dedicated database staff with the time and resources to do this work, this can be a great fit.

One of the weirdest and worst parts of the Datadog SQL Server monitoring tooling, though, is how it handles wait stats. In my opinion, it’s a case of someone reinventing a wheel that didn’t need to be reinvented, and then not documenting what they did clearly (at least not in a way I can find).

Two of the most confusing Datadog “waits” are labeled “CPU” and “Waiting on CPU”. I opened a support ticket with Datadog a while back to ask what these are, because I couldn’t find any way they correspond to actual wait stats in SQL Server. I learned they aren’t wait stats at all. In fact, I think you should largely ignore them. Here’s why.

Read on for the full story.
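
If you want to sanity-check what Datadog shows against what SQL Server itself reports, the real numbers live in sys.dm_os_wait_stats; a minimal sketch:

    -- Top waits by total wait time, including the share that is signal (runnable, i.e. waiting on CPU) time.
    SELECT TOP (15)
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms,
        CAST(100.0 * signal_wait_time_ms / NULLIF(wait_time_ms, 0) AS decimal(5, 2)) AS signal_pct
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'XE_TIMER_EVENT',
                            N'REQUEST_FOR_DEADLOCK_SEARCH', N'LOGMGR_QUEUE',
                            N'CHECKPOINT_QUEUE', N'BROKER_TO_FLUSH')  -- a few common benign waits
    ORDER BY wait_time_ms DESC;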
