Press "Enter" to skip to content

Category: Administration

Thoughts on T-SQL Snapshot Backups

Anthony Nocentino has a two-parter for us. First is the idea of how T-SQL snapshot backups work:

Traditional SQL Server backups can struggle with large databases, resulting in longer backup times and resource contention. T-SQL Snapshot Backup, a new feature in SQL Server 2022, addresses these challenges by allowing storage-based snapshots to be coordinated through T-SQL. This feature delivers faster, more efficient backups, especially for large-scale environments with the most aggressive of recovery objectives.
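
To give a sense of the mechanics, the freeze-and-snapshot flow in SQL Server 2022 looks roughly like the sketch below. The database name and backup path are placeholders, and the storage-level snapshot itself happens outside of SQL Server, between the two statements.

-- Suspend write I/O so the storage snapshot is application-consistent
ALTER DATABASE YourDatabase SET SUSPEND_FOR_SNAPSHOT_BACKUP = ON;

-- Take the array-level snapshot here, outside of SQL Server

-- Record the backup metadata (a small .bkm file) and release the suspension
BACKUP DATABASE YourDatabase
TO DISK = 'D:\Backups\YourDatabase.bkm'
WITH METADATA_ONLY;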

The follow-up is for point-in-time recovery from a T-SQL snapshot backup:

In this post, the second in our series, I will guide you through using the new T-SQL Snapshot Backup feature in SQL Server 2022 to take a snapshot backup and perform point-in-time database restores using a snapshot backup as the base of the restore. We will explore how to manage storage-level operations, such as cloning snapshots and executing an instantaneous point-in-time restore of a database from the snapshot with minimal impact on your infrastructure. Additionally, I will demonstrate a PowerShell script that utilizes dbatools and the PureStoragePowerShellSDK2 modules to automate the process.
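
At a high level, the restore side pairs the snapshot with ordinary log backups. A minimal sketch, assuming the storage volumes have already been reverted or cloned from the snapshot, and with placeholder names, paths, and timestamp:

-- Register the snapshot backup as the base of the restore
RESTORE DATABASE YourDatabase
FROM DISK = 'D:\Backups\YourDatabase.bkm'
WITH METADATA_ONLY, NORECOVERY;

-- Roll forward to the desired point in time from a log backup
RESTORE LOG YourDatabase
FROM DISK = 'D:\Backups\YourDatabase_log.trn'
WITH STOPAT = '2025-01-01 12:00:00', RECOVERY;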

Check out both posts and be on the lookout for subsequent entries in the series.


Tips for Bringing a Streamlit App into Production

I have wrapped up another series:

In this video, I discuss some of the things you should consider as you transition a Streamlit application from development to production. We will cover four methods of bringing a Streamlit app to production and some thoughts on performance optimization.

This one doesn’t have much in the way of demos, but I do spend a lot of time at the virtual whiteboard, so it’s got that going for it.


vCore-Based Subscription Limits for Azure SQL DB and Synapse Dedicated SQL Pools

Raj Tiwari announces a change in subscription limits:

New vCore-based limits: The new limits will be based on vCores per Subscription per Region, which will be directly equivalent to DTU and DWU.

Default logical servers limit: The previous limits on Logical Server DTUs have been discontinued. All new and existing subscriptions will now have a default limit of 250 logical servers. 

Configurable vCore limits: Subscription vCore limits can now be easily managed through the support section on the Azure Portal, with approvals typically processed within minutes.

Read on to learn more about these limits and how you could extend them.


Migrating SQL Server Database Files between Storage Subsystems

Andy Yun does a bit of shuffling:

In my role at Pure Storage, I often engage with customers who wish to migrate their SQL Server databases off of their prior storage onto our hardware. And after some digging around for previously published material, I was surprised to find that there really wasn’t much that was comprehensive. After all, one doesn’t change SANs too often. But when it does happen, it is nice to have some reference material from others who have. So I decided to try to give a good overview of how I’d approach the challenge.

This is meant to be a “food for thought” kind of post. I’m going to keep things somewhat high level, but will provide links to other blogs and material that can help you continue down whatever path you choose. And for simplicity, I’m going to limit this scope to a single SQL Server.

Read on for a few questions you should answer, followed by some notes and preferences. Andy’s filegroups tip is also a really good one.
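
As an illustration of one route in this space, here is a minimal sketch of the offline file-move approach; the database, logical file name, and path are placeholders, and other options (backup and restore, log shipping, availability groups) trade downtime for complexity differently.

-- Point the catalog at the new file location (takes effect when the database next comes online)
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, FILENAME = 'N:\NewStorage\YourDatabase.mdf');

-- Take the database offline, then copy the physical files to the new storage
ALTER DATABASE YourDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- (copy the .mdf/.ndf/.ldf files at the operating system level here)

-- Bring the database back online; SQL Server opens the files from the new path
ALTER DATABASE YourDatabase SET ONLINE;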


Compressing Indexes and Shrinking Azure SQL MI Databases

Kendra Little has a good reason for an often-bad act:

Shrinking databases in SQL Server isn’t fun – it’s slow, it causes blocking if you forget to use the WAIT_AT_LOW_PRIORITY option, and sometimes it persistently fails and refuses to budge until you restart the instance. You only want to shrink a SQL Server database when you’ve got a good reason and a lot of patience.

If you’re using Azure SQL Managed Instance and you haven’t already used data compression on your indexes and shrunk your databases, you probably have two good reasons to do both of those things: performance and cost reduction.

Compressing indexes is very often (almost always?) a good thing. Shrinking databases is very often (again, almost always?) a bad thing. This is like a buddy cop movie for your database.

Kendra gives some good advice but also lays out a warning if you’re on General Purpose V1, so read the whole thing.
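
For a sense of the shape of the work, a minimal sketch follows; the table name, database name, and shrink target are placeholders, and WAIT_AT_LOW_PRIORITY is the newer shrink option Kendra calls out.

-- Rebuild indexes with page compression to reclaim space inside the files
ALTER INDEX ALL ON dbo.YourTable
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);

-- Then shrink (targeting roughly 10 percent free space), yielding to other workloads instead of blocking them
DBCC SHRINKDATABASE (N'YourDatabase', 10)
WITH (WAIT_AT_LOW_PRIORITY (ABORT_AFTER_WAIT = SELF));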


Reviewing the SQL Server Error Log

Jim Evans digs into the logs:

In SQL Server there are two primary sets of error logs: one for the database engine and a second for SQL Server Agent. Reviewing these logs is routine for Database Administrators and sometimes Developers when troubleshooting issues. What are the different ways to view these error logs? Are there different scenarios when you would use one view over another? Do any other error logs exist that SQL Server Professionals should review?

Read on for three ways to do this, including one outside of SQL Server itself.
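
For reference, the T-SQL route looks roughly like this; the filter string is a placeholder, and the second parameter switches between the engine log (1) and the Agent log (2).

-- Read the current SQL Server error log (file 0), filtering on a search string
EXEC sp_readerrorlog 0, 1, N'Login failed';

-- Read the current SQL Server Agent error log instead
EXEC sp_readerrorlog 0, 2;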


Cost Threshold for Parallelism and Missing Indexes

Jared Westover explains a phenomenon:

Did you know that the Cost Threshold for Parallelism (CTFP) affects SQL Server’s choice of a trivial execution plan? One area that can suffer from this setting is the optimizer’s ability to suggest index recommendations. When SQL Server picks a trivial plan, it skips suggesting any missing indexes. So, if you’ve set a high CTFP and run simple queries, you might never get those handy index recommendations.

Click through to learn more.
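
To see where you stand, you can check the configured value and look for cached plans the optimizer treated as trivial; a quick sketch against the standard system views:

-- Check the current cost threshold for parallelism setting
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'cost threshold for parallelism';

-- Find cached plans optimized at the TRIVIAL level (these carry no missing index suggestions)
SELECT TOP (20) qs.total_worker_time, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan"; //p:StmtSimple[@StatementOptmLevel="TRIVIAL"]') = 1
ORDER BY qs.total_worker_time DESC;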


Authentication Monitoring in Postgres

Rafia Sabih announces a new extension:

How about a situation where you want to log all the login attempts made to your PostgreSQL server? Yes, one way would be to read your log files, find all the information related to login attempts, and then transfer it to some other file for further use. Now, this is one way of doing it, but it might pose some challenges, like the time and memory consumed in reading those long files; additionally, it incurs a lot of I/O, which might be a serious issue when you are on the cloud. During my time working for Zalando, we came across this problem and decided to write a Postgres extension to handle it efficiently.

Click through to learn more about the pg_auth_mon extension and how it works, including a link to the GitHub repository.
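
Usage-wise, the extension hooks into authentication, so it has to be loaded via shared_preload_libraries before it can be created in a database; after that, it is a simple view query. A minimal sketch (the exact columns returned depend on the extension version):

-- postgresql.conf (requires a server restart):
-- shared_preload_libraries = 'pg_auth_mon'

-- Then, in the database:
CREATE EXTENSION pg_auth_mon;

-- Per-role counts of successful and failed login attempts
SELECT * FROM pg_auth_mon;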
