Press "Enter" to skip to content

Category: Administration

Testing Postgres Configuration Parameter Performance Changes

Semab Tariq follows up on a prior blog post:

In a previous blog post, we configured an EC2 instance and installed PostgreSQL on it. After the initial setup, we ran pgbench to measure the initial TPS (transactions per second). Then, we tuned PostgreSQL and reran pgbench to see how much we could improve the TPS. To tune PostgreSQL, we adjusted various parameters based on the system’s available resources. In this blog, we will identify which of those parameters contributed the most to the performance improvements.

If you want to read the initial blog post, you can find it here: Key PostgreSQL Configuration Parameters for Enhanced Performance.

We will use the same instance size on AWS EC2 as before, which is t2.large.

I suppose it shouldn’t matter very much if you’re running a performance test over a significant timespan, though it was an interesting choice to use a burstable VM type for the test.

Finding Long-Running Queries in Azure SQL DB

Kunal Rathi has a script for us:

Identifying and optimizing long-running queries is crucial for maintaining the performance and efficiency of your Azure SQL Database. Slow queries can lead to resource bottlenecks, affecting the overall user experience and system stability. In this post, we will see how to find long-running queries in Azure SQL Database. Whether you’re a database administrator or a developer, these insights will enable you to ensure your database runs smoothly and efficiently.

Click through for the script. It’ll work not only for Azure SQL DB but also for SQL Server and Azure SQL Managed Instance.
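
Kunal’s script is the one to use; as a rough idea of the shape such a check takes, a minimal sketch against the execution DMVs might look like this (the 30-second threshold is an arbitrary assumption):

```sql
-- Minimal sketch: currently executing requests running longer than 30 seconds.
SELECT
    r.session_id,
    r.status,
    r.wait_type,
    r.total_elapsed_time / 1000 AS elapsed_seconds,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND r.total_elapsed_time > 30 * 1000   -- threshold in milliseconds; adjust to taste
ORDER BY r.total_elapsed_time DESC;
```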

Calling a REST Endpoint from Azure SQL Database

Meagan Longoria makes a call:

External REST endpoint invocation in Azure SQL DB went GA in August 2023. Whereas before, we might have needed an intermediate technology to make a REST call to an Azure service, we can now use an Azure SQL Database to call a REST endpoint directly.

One use case for this would be to retrieve a file from blob storage. I explain how to set this up below.

Read on to learn more about the process, as well as a few ideas on when you might use it.
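
To give a feel for the call itself, here is a minimal sketch of a GET against a placeholder blob URL. It assumes a database scoped credential for the storage endpoint already exists; see Meagan’s post for the actual setup.

```sql
-- Minimal sketch: call a REST endpoint (here, a hypothetical blob URL) from Azure SQL DB.
-- The URL and credential name are placeholders; the real setup is in the linked post.
DECLARE @response nvarchar(max);

EXEC sp_invoke_external_rest_endpoint
    @url        = N'https://mystorageaccount.blob.core.windows.net/mycontainer/myfile.csv',
    @method     = 'GET',
    @credential = [https://mystorageaccount.blob.core.windows.net],
    @response   = @response OUTPUT;

SELECT @response AS rest_response;
```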

Calculating the Size of Dataflow Gen2 Staging Lakehouses

Sandeep Pawar busts out the calculator:

My friend Alex Powers (PM, Fabric CAT) wrote a blog post about cleaning the staging lakehouses generated by Dataflow Gen2. Before reading this post, go read his blog first on the mechanics of it and the whys. Note that these are system-generated lakehouses, so at some point in the future they will be automatically purged, but until then users will be paying the storage cost of these lakehouses. If you want to read more about how Dataflow Gen2 works and whether you should stage or not, read this and this blog.

Read on for a Python script using the SemPy library.

Creating Elastic Jobs in Terraform

Josephine Bush builds a script:

If you aren’t familiar with Terraform and its requisite bits and pieces, I have a post that walks you through them. Let’s hit the ground running. To create Elastic Jobs with Terraform, you need to use the azapi_resource, but before we go down that path, you will use the azurerm resource to create some supporting objects.

Read on for a guide covering requirements and setup.

Basic Troubleshooting in SQL Server on Linux

Infant Arockiaraj gives people an idea of where to look:

Based on our extensive experience with customers using SQL on Linux, we have compiled a guide outlining fundamental troubleshooting steps and available tools to assist in resolving SQL on Linux issues. This guide aims to make it easier for SQL DBAs who have primarily worked on Windows operating systems over the years.

In this article, we will normalize three checks in Linux that we typically perform in Windows to start troubleshooting issues with SQL Server: 

  • System logs
  • SQL Server logs
  • Task Manager

Note: The screenshots in the below examples are taken from RHEL and Ubuntu machines, and while the Linux flavors are different, the commands are similar in both environments. 

Click through to learn more about these steps and where you can find the files.
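
The linked article covers the Linux-side tooling for each of these checks. As a complementary option for the second one, remember that the error log can also be read from inside the engine, regardless of operating system; a quick sketch:

```sql
-- Read the current SQL Server error log from T-SQL; works the same on Windows and Linux,
-- where the underlying files live under /var/opt/mssql/log.
-- Arguments: log number (0 = current), log type (1 = SQL Server, 2 = Agent), search string.
EXEC sp_readerrorlog 0, 1, 'error';
```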

Using PowerShell to Browse Azure Capabilities

Kay Sauter does some digging:

In a nutshell, Azure follows a structure called the Azure Resource Manager (ARM). Unfortunately, ARM is not human-readable, so you need a tool to be efficient in using it. There are quite a few tools available, and from Microsoft there are Azure Bicep, the Azure CLI, and Azure PowerShell. Since I have done some DBA stuff in the past, and I am a fan of dbatools (created by the team here), I find PowerShell the most convenient tool to use and suspect that many will see it the same way if they are using dbatools.

Click through for the post. As of right now, the images are broken, but if you open them in new tabs and remove the extraneous “browsing-azure-with-powershell/” from the URL, you’ll be able to see those images.

sp_delete_backuphistory Removes Restore History Too

Steve Jones susses out a problem:

I had a customer that was looking to document a restore that had occurred on one of their systems and didn’t see it. They had concerns about SQL Server accurately tracking history across time and noted they hadn’t cleaned any history.

We dug through some of their instance jobs and found one that ran sp_delete_backuphistory. The person didn’t realize this removes restore history as well. This post talks a bit about how this works.

Read on for the full story.
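
A quick way to see the behavior for yourself is to compare msdb’s restore history before and after the cleanup call. This is a hedged sketch; the 90-day cutoff is an arbitrary choice, not a recommendation.

```sql
-- sp_delete_backuphistory trims msdb history older than the cutoff date,
-- and that includes restorehistory, not just backupset.
USE msdb;

-- Restore history before cleanup.
SELECT COUNT(*) AS restore_rows_before FROM dbo.restorehistory;

-- Illustrative cutoff: everything older than 90 days (pick your own retention).
DECLARE @cutoff datetime = DATEADD(DAY, -90, GETDATE());
EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;

-- Restore history after cleanup: rows older than the cutoff are gone too.
SELECT COUNT(*) AS restore_rows_after FROM dbo.restorehistory;
```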

Have a Recovery Strategy

Aaron Bertrand has a public service announcement:

I’ve talked about it before; you shouldn’t have a backup strategy, you should have a recovery strategy. I can’t possibly care if my backups succeed if I’m not bothering to test that they can be restored. And if they can’t be restored then, both technically and practically, I don’t have backups.

In one of the systems I manage, they built a very simple “test restore” process long before I became involved. Every night, it would pull the full backup for each database, restore it on a test system, and run DBCC CHECKDB against it. It would alert on any failure, of course, but the primary purpose was to always be confident that the backups could, in fact, be restored.

Aaron now has a much more robust version of this in place, which you can see in the article.
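
Aaron’s setup is worth reading in full. As a bare-bones illustration of the idea only (not his implementation; the paths, logical file names, and database names here are made up), the nightly loop boils down to something like:

```sql
-- Bare-bones test restore: restore last night's full backup under a throwaway name,
-- run DBCC CHECKDB against it, then drop it. Paths and names are placeholders.
RESTORE DATABASE RestoreTest_Sales
FROM DISK = N'/backups/Sales_full.bak'
WITH MOVE N'Sales'     TO N'/data/RestoreTest_Sales.mdf',
     MOVE N'Sales_log' TO N'/data/RestoreTest_Sales_log.ldf',
     REPLACE, STATS = 10;

-- If the restore succeeded, verify logical and physical integrity.
DBCC CHECKDB (RestoreTest_Sales) WITH NO_INFOMSGS, ALL_ERRORMSGS;

DROP DATABASE RestoreTest_Sales;
```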
