
Data Compression and CPU Utilization

Kendra Little shares some advice:

Every time I share a recommendation to use data compression in SQL Server to reduce physical IO and keep frequently accessed data pages in memory, I hear the same concern from multiple people: won’t this increase CPU usage for inserts, updates, and deletes?

DBAs have been trained to ask this question by many trainings and a lot of online content – I used to mention this as a tradeoff to think about, myself – but I’ve found this is simply the wrong question to ask.

In this post I’ll share the two questions that are valuable to ask for your workload.

Kendra’s advice is very good, and to add my own two cents to the mix: the last place I was at did, in fact, see a pretty reasonable reduction in CPU utilization by performing page-level compression on any index where it made sense—and this was a very busy OLTP environment. The exceptions would be indexes making prominent use of things like GUIDs or chunks of binary, which don’t compress very well at all. In all my FTE and consulting years, I’ve never run into a circumstance in which compression caused a significant increase in CPU utilization.
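
If you want to gauge the tradeoff on your own workload before committing, SQL Server can estimate the savings up front. A minimal sketch, using a hypothetical dbo.Orders table and index name:

-- Estimate how much space PAGE compression would save (hypothetical object names)
EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'Orders',
    @index_id = NULL,             -- NULL evaluates every index on the table
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- If the estimate looks good, rebuild the index with compression enabled
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);  -- ONLINE requires Enterprise Edition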


Shrinking Large Tables in Postgres

Andrew Atkinson shrinks a table:

In this post, you’ll learn a recipe that you can use to “shrink” a large table. This is a good fit when only a portion of the data is accessed, the big table has become unwieldy, and you don’t want a heavier solution like table partitioning.

This recipe has been used on tables with billions of rows, and without taking Postgres offline. How does it work?

Click through to find out.
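
Andrew’s post has the full, battle-tested recipe; the general shape of the pattern (copy the rows you still need into a new table, then swap names) looks roughly like this, with hypothetical table and column names:

-- Clone the structure, including indexes, defaults, and check constraints
CREATE TABLE events_new (LIKE events INCLUDING ALL);

-- Copy only the portion of the data that is still accessed
-- (the post covers doing this in batches to stay online)
INSERT INTO events_new
SELECT * FROM events
WHERE created_at >= now() - interval '90 days';

-- Swap the tables in a single transaction
BEGIN;
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_new RENAME TO events;
COMMIT;

-- Reclaim the space once you are confident nothing needs the old rows
DROP TABLE events_old;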


SQL Server Msg 3023 during DBCC SHRINKFILE

Tom Collins gets an error:

Question: Executing the following database shrinkfile activity and getting the error message

USE myDatabase
DBCC SHRINKFILE (N'myDatabase_log', 0, TRUNCATEONLY)

Msg 3023, Level 16, State 2, Line 4
Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.

Read on for the answer.
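
The message means another backup or file-manipulation operation was already running against the database; SQL Server serializes these operations. One way to confirm that (my quick check, not necessarily Tom’s exact answer) is to look for an in-flight backup:

-- Look for a concurrent backup that serializes with SHRINKFILE
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';

Let it finish and reissue the DBCC SHRINKFILE command.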


Vacuum, MERGE, and ON CONFLICT Directives

Shane Borden looks back at a directive:

I previously blogged about ensuring that the “ON CONFLICT” directive is used in order to keep vacuum from having to do additional work. You can read the original blog here: Reduce Vacuum by Using “ON CONFLICT” Directive

Now that Postgres has incorporated the “MERGE” functionality into Postgres 15 and above, I wanted to ensure that there was no “strange” behavior as it relates to vacuum when using merge.

Read on to see how things look now.
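
For reference, the two upsert forms under comparison look like this in Postgres 15 and above, using a hypothetical accounts table:

-- Upsert via ON CONFLICT: a would-be duplicate insert becomes an update
-- instead of an aborted insert (which would leave a dead tuple for vacuum)
INSERT INTO accounts (id, balance)
VALUES (1, 100)
ON CONFLICT (id) DO UPDATE
SET balance = EXCLUDED.balance;

-- The equivalent upsert via MERGE, available since Postgres 15
MERGE INTO accounts a
USING (VALUES (1, 100)) AS v(id, balance)
ON a.id = v.id
WHEN MATCHED THEN
    UPDATE SET balance = v.balance
WHEN NOT MATCHED THEN
    INSERT (id, balance) VALUES (v.id, v.balance);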


T-SQL Snapshot Point-in-Time Recovery to Azure VM

Anthony Nocentino continues a series on T-SQL snapshot backups:

In this post, the third in our series on using T-SQL Snapshot Backup, I will guide you through using the new T-SQL Snapshot Backup feature in SQL Server 2022 to take a snapshot backup and then perform point-in-time database restores using that snapshot backup as the base, but this time using an Azure Virtual Machine. We will explore how to manage Azure storage-level operations, such as taking snapshots, cloning snapshots, and executing an instantaneous point-in-time database restore from the snapshot with minimal impact on your infrastructure. Additionally, I will demonstrate a PowerShell script that utilizes dbatools and Azure Az modules to automate the process.

Read on for the script and plenty of details.
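
The T-SQL side of that restore is worth seeing on its own. Once the cloned snapshot disks are attached to the VM, the sequence looks roughly like this (paths and names are hypothetical; Anthony’s script handles the Azure storage operations):

-- The snapshot backup's metadata file serves as the base of the restore
RESTORE DATABASE MyDatabase
FROM DISK = 'D:\backups\MyDatabase.bkm'
WITH METADATA_ONLY, NORECOVERY;

-- Roll forward with an ordinary log backup to the desired point in time
RESTORE LOG MyDatabase
FROM DISK = 'D:\backups\MyDatabase_log.trn'
WITH STOPAT = '2025-01-15T10:30:00', RECOVERY;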


Thoughts on T-SQL Snapshot Backups

Anthony Nocentino has a two-parter for us. First is the idea of how T-SQL snapshot backups work:

Traditional SQL Server backups can struggle with large databases, resulting in longer backup times and resource contention. T-SQL Snapshot Backup, a new feature in SQL Server 2022, addresses these challenges by allowing storage-based snapshots to be coordinated through T-SQL. This feature delivers faster, more efficient backups, especially for large-scale environments with the most aggressive of recovery objectives.

The follow-up is for point-in-time recovery from a T-SQL snapshot backup:

In this post, the second in our series, I will guide you through using the new T-SQL Snapshot Backup feature in SQL Server 2022 to take a snapshot backup and perform point-in-time database restores using a snapshot backup as the base of the restore. We will explore how to manage storage-level operations, such as cloning snapshots and executing an instantaneous point-in-time restore of a database from the snapshot with minimal impact on your infrastructure. Additionally, I will demonstrate a PowerShell script that utilizes dbatools and the PureStoragePowerShellSDK2 modules to automate the process.

Check out both posts and be on the lookout for subsequent entries in the series.
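
The core mechanic behind the feature is compact enough to sketch here, with a hypothetical database name and path:

-- 1. Suspend write I/O so the storage snapshot is transactionally consistent
ALTER DATABASE MyDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

-- 2. Take the snapshot at the storage layer (array, cloud disk, etc.)

-- 3. Record the backup metadata; this also resumes the database
BACKUP DATABASE MyDatabase
TO DISK = 'D:\backups\MyDatabase.bkm'
WITH METADATA_ONLY, FORMAT;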


Tips for Bringing a Streamlit App into Production

I have wrapped up another series:

In this video, I discuss some of the things you should consider as you transition a Streamlit application from development to production. We will cover four methods of bringing a Streamlit app to production and some thoughts on performance optimization.

This one doesn’t have much in the way of demos, but I do spend a lot of time at the virtual whiteboard, so it’s got that going for it.


vCore-Based Subscription Limits for Azure SQL DB and Synapse Dedicated SQL Pools

Raj Tiwari announces a change in subscription limits:

New vCore based limits: The new limits will be based on vCores per Subscription per Region, which will be directly equivalent to DTU and DWU.

Default logical servers limit: The previous limits on Logical Server DTUs have been discontinued. All new and existing subscriptions will now have a default limit of 250 logical servers. 

Configurable vCore limits: Subscription vCore limits can now be easily managed through the support section on the Azure Portal, with approvals typically processed within minutes.

Read on to learn more about these limits and how to extend them.
