Category: Administration

Using the pg_repack Extension

Muhammad Ali tries out an extension:

Regular updates and deletions within PostgreSQL tables can lead to various issues such as bloat, fragmentation, and a decline in performance over time. These challenges can significantly impact the efficiency and reliability of the database, potentially affecting critical operations.

To address these concerns, the pg_repack extension provides a robust solution for managing table maintenance without disrupting the production environment. By allowing tables to be rebuilt online, pg_repack tackles bloat and fragmentation issues, ensuring that database storage remains optimized and performance is consistently maintained.

Read on to see why vacuuming might not be enough and what pg_repack does.
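
For a sense of the mechanics, here is a minimal sketch, assuming a database named mydb and a bloated table public.orders (both hypothetical):

```sql
-- Install the extension in the target database (superuser required).
CREATE EXTENSION pg_repack;

-- The actual repack is driven by the bundled command-line client,
-- run from a shell against the same database:
--   pg_repack --dbname=mydb --table=public.orders
-- It builds a compacted copy of the table online and swaps it in,
-- holding exclusive locks only briefly at the start and end.
```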


Monitoring and Alerting on Fabric Capacity Metrics

Ron L’Esteve wants to know what’s happening:

With Microsoft Fabric now generally available, organizations are interested in implementing this flagship Unified Data and AI Intelligence Platform for several reasons. Its native integration within the Azure stack provides seamless and secure access to widely used technologies for data integration, business intelligence, and advanced analytics. Microsoft Fabric’s storage and compute capacity is utilized by resources within this unified analytics platform, including storage repositories, such as data warehouses and data lakes, and compute capacity for Power BI, Pipelines, DW processing, and artificial intelligence (AI)/machine learning (ML) workloads.

Fabric capacity can be purchased on Azure with a pay-as-you-go model, and a 60-day free trial (64 CUs) is offered to test the platform. Organizations that have an existing Power BI Premium capacity can easily enable access to Fabric by using the Microsoft Fabric admin switch. Enabling Fabric in Power BI Premium as opposed to the Azure Portal creates a problem: there is no easy way to monitor and set alerts on your Fabric capacity metrics in the Azure Portal.

Click through to learn how to install and use the Microsoft Fabric Capacity Metrics App.


Arbitrary Intervals for Partitioning in Postgres

Keith Fiske does a bit of interval math:

Whether you are managing a large table or setting up automatic archiving, time-based partitioning in Postgres is incredibly powerful. pg_partman’s newest versions support a huge variety of custom time intervals. Marco just published a post on using pg_partman with our new database product for doing analytics with Postgres, Crunchy Bridge for Analytics. So I thought this would be a great time to review the basic and complex options for time-based partitioning.

Read on for a note on how pg_partman works and handles interval management, especially for versions earlier than 5.0.
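
For a flavor of the syntax, here is a minimal sketch using pg_partman 5.x to declare weekly partitions; the events table and its columns are hypothetical:

```sql
-- The parent table must use native declarative partitioning.
CREATE TABLE public.events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- pg_partman conventionally lives in its own schema.
CREATE SCHEMA partman;
CREATE EXTENSION pg_partman SCHEMA partman;

-- Register the table with a one-week interval; pg_partman pre-creates
-- child partitions and maintains them as time moves on.
SELECT partman.create_parent(
    p_parent_table => 'public.events',
    p_control      => 'created_at',
    p_interval     => '1 week'
);
```

In versions before 5.0, create_parent also took a p_type argument ('native' or the trigger-based 'partman'), which is part of why interval handling differs across versions.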


Multi-Master Architecture in PostgreSQL

Semab Tariq describes a scale-out technique for Postgres:

Multi-master architecture has gained significant traction in the world of database management, offering a solution to traditional limitations in scalability, fault tolerance, and high availability. By allowing multiple nodes to operate as masters, this architecture promises a more flexible and robust database system. However, along with these benefits come certain challenges, including data consistency, resource demands, and conflict resolution.

In this blog, we will explore what multi-master architecture is, delve into its key advantages, and discuss the potential drawbacks that come with its implementation. In our upcoming blogs, we will also see how you can set up your first multi-master architecture with a tool called PGD (Postgres Distributed) by EnterpriseDB (EDB).

Read on to learn how it works, as well as some of the pros and cons of using it.
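
To make the conflict-resolution point concrete, here is a sketch of a classic write-write conflict, using a hypothetical accounts table:

```sql
-- Both nodes accept writes, so the same row can change in two places
-- before replication catches up.

-- On node A:
UPDATE accounts SET balance = balance - 100 WHERE id = 42;

-- Concurrently, on node B:
UPDATE accounts SET balance = balance + 50 WHERE id = 42;

-- When the nodes exchange changes, each receives a conflicting version
-- of row 42. The cluster must pick a resolution strategy (for example,
-- last-update-wins or a custom handler), and a naive strategy can
-- silently discard one of the updates.
```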


SQL ConstantCare Population Report for Spring 2024

Brent Ozar has a new chart for us:

The short story: SQL Server 2019 appears poised to swallow the SQL Server market altogether, hahaha.

The long story: ever wonder how fast people are adopting new versions of SQL Server, or what’s “normal” out there for SQL Server adoption rates? Let’s find out in the spring 2024 version of our SQL ConstantCare® population report.

Out of the thousands of monitored SQL Servers, SQL Server 2019 is now at 48% of the market! That’s the highest percentage we’ve seen for any version in the 3 years that we’ve been doing this analysis, up from last quarter’s 44% market share. Here’s how adoption is trending over time, with the most recent data at the right:

Click through for the graph and a bonus excursion on database size and associated hardware. And as always, this is an interesting view of one sample of SQL Server instances, but that sample may not (and probably does not) represent SQL Server usage as a whole.


Missing Log Information after a VM Rollback

Jonathan Kehayias (via Paul Randal) advises caution:

Recently I received an email with a very vague description of a problem that went something along the lines of:

“After some issues with Windows patching and rolling back from snapshots, our cluster resources won’t come online. In the process of troubleshooting the issue and validating the fix, we rolled back a few times. We can’t find any details about the issue in the cluster logs, Windows event logs, or SQL Server error log to investigate further.”

Read on for more information about the immediate problem, the root cause, and the actual issue the customer ran into before compounding the problem.


Central Management Servers and SSMS 20

Greg Low works around an issue:

I’ve recently been doing work with a site that makes extensive use of Central Management Servers. And that’s an issue if you upgrade past v19.3 of SSMS.

Here’s my counter-argument: how common is it to find organizations that have enough SQL Server instances to make a Central Management Server worthwhile and yet do not have any sort of certificate management process?

And more importantly, why don’t they have certificate management processes in place for SQL Server? This isn’t 2008 anymore—everybody (for some slight exaggeration of the term “everybody”) has certificate management in place for websites. It’s incredibly rare to find websites without TLS certificates, so somebody in your organization is managing certificates somehow. Why are these people not also managing certificates for SQL Server? Because once you have proper certificates in place rather than self-signed certs, there is no SSMS problem.

And if money is the issue, money is not the issue. Note that Daniel’s post is over 6 years old (and here’s me self-linking for street cred), meaning any company without the budget for proper certificates could have put this into place anytime over the past 6 years.

Self-signed certificates are okay for debugging purposes on personal machines. But they should not be acceptable for connecting to SQL Server in any environment. Certificate-driven encryption is a critical part of securing data movement over the wire, and a trusted certificate chain is critical for ensuring attackers cannot sit in the middle of that connection and read the data.
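
As a quick server-side check (a generic sketch, unrelated to Greg's specific workaround), you can confirm whether connections are actually encrypted:

```sql
-- One row per connection: encrypt_option shows TRUE when the
-- connection is encrypted; auth_scheme shows how it authenticated.
SELECT session_id, encrypt_option, auth_scheme
FROM sys.dm_exec_connections;
```

Once a CA-signed certificate is bound to the instance, clients can set Encrypt=True and TrustServerCertificate=False in their connection strings and get a validated, encrypted channel.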


Logging and Auditing in PostgreSQL

Muhammad Ali checks the logs:

In PostgreSQL, logs serve as a vital tool for identifying and resolving issues within your application and database. However, navigating through logs can be overwhelming due to the volume of information they contain. To address this, it’s essential to implement a well-defined log management strategy.

Customizing PostgreSQL logs involves adjusting various parameters to suit your specific needs. Each organization may have unique requirements for logging, depending on factors such as the type of data stored and compliance standards.

In this article, we will explain the parameters used to customize logs in PostgreSQL. Furthermore, we describe how to record queries in PostgreSQL and finally recommend a tool for managing PostgreSQL logs at a granular level.

Read on to learn how to enable logs in Postgres, some notes on log management, and even a bit on auditing via pgaudit.
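
As a rough sketch of the kinds of parameters involved (the specific values here are illustrative, not recommendations from the article):

```sql
-- Log any statement running longer than 500 ms, with a timestamped,
-- user-and-database-aware line prefix.
ALTER SYSTEM SET log_min_duration_statement = '500ms';
ALTER SYSTEM SET log_line_prefix = '%m [%p] %u@%d ';
SELECT pg_reload_conf();

-- pgaudit requires shared_preload_libraries = 'pgaudit' and a restart
-- before the extension can be created.
CREATE EXTENSION pgaudit;
ALTER SYSTEM SET pgaudit.log = 'ddl, write';  -- audit DDL and writes
SELECT pg_reload_conf();
```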


Switching All SQL Server Databases to Simple Recovery Model

Vlad Drumea doesn’t need no steenkin’ transaction log backups:

This brief post contains a script that can help switch a whole SQL Server instance, the model database and all user databases, to the SIMPLE recovery model.

The script is useful in case of dev/test/QA/UAT instances that have been left by accident to use the default FULL recovery model, yet do not have or need transaction log backups.

Read on for the script. It also shrinks the transaction log file after the switch-over.
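
For a sense of the approach, here is a minimal sketch of the generated-SQL idea, not the script from the post:

```sql
-- Build ALTER DATABASE statements for model plus every user database
-- still using FULL recovery, then execute them.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER DATABASE ' + QUOTENAME(name)
             + N' SET RECOVERY SIMPLE;' + CHAR(10)
FROM sys.databases
WHERE (name = N'model' OR database_id > 4)  -- model + user databases
  AND recovery_model_desc = N'FULL';

PRINT @sql;  -- review the statements before running them
EXEC sys.sp_executesql @sql;

-- Vlad's script also shrinks each log file afterward (DBCC SHRINKFILE),
-- which this sketch omits.
```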
