Category: Administration

Copying Azure SQL Databases between Subscriptions

Kenneth Fisher is back in the fight:

I recently had to copy an Azure SQL database (SQL db) from one subscription to an Azure SQL Server instance in another subscription. All of the help I found suggested going to the database and hitting the COPY option. Unfortunately, when I did, I ran into a problem.

Read on for the issue, as well as one way to fix it. The route Kenneth landed on was the same one I ended up going with when I had a similar problem and very limited access to SQL DB on both subscriptions.
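
For reference, the cross-server copy can also be kicked off with T-SQL from the destination logical server, which is handy when portal access is limited. This is a sketch, not Kenneth's exact steps: server and database names are placeholders, and the login running it needs appropriate access on both servers.

-- Run in the master database of the destination logical server
CREATE DATABASE [TargetDb] AS COPY OF [source-server].[SourceDb];

-- Check progress of the copy (also from master)
SELECT percent_complete, start_date
FROM sys.dm_database_copies;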

Efficiency of Sparse Hash Tables in PostgreSQL

Ashutosh Bapat runs some tests:

The hash_create() API in PostgreSQL takes an initial size as an argument. It allocates memory for that many hash entries upfront. If more entries are added, it will expand that memory later. The point of argument was what the initial size of the hash table containing the derived clauses, introduced by that patch, should be. During the discussion, David hypothesised that the size of the hash table affects the efficiency of the hash table operations, depending upon whether the hash table fits in a cache line. While I thought it was reasonable to assume so, I believed the practical impact wouldn’t be noticeable: beyond saving a few bytes, choosing the right hash table size wasn’t going to have any noticeable effect. If a derived clause lookup or insert became a bit slower, nobody would even notice it. It was practically easy to address David’s concern by using the number of derived clauses at the time of creating the hash table to decide its initial size. The patch was committed.

Read on to see how things didn’t quite turn out this way, and what the results of testing look like.

Extending Ola’s Backup Scripts for Scale

Aaron Bertrand looks at some extensions to a venerable solution:

Where I work, we have hundreds of mission-critical databases. We use Ola Hallengren’s SQL Server backup solution, but, out of necessity, have wrapped it with several enhancements to help us accomplish goals around retention, SQL Server backup frequency, and recovery point objectives. I touched on some of this last year when I talked about automating some of our restore testing, in It’s a recovery strategy, not a backup strategy. Today, we do a lot more than that, and we have to be more flexible about it.

Click through to see what Aaron’s environment looks like and some of the things they’ve done to make recovery work better.
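
If you haven't seen Ola's solution before, the heart of it is the dbo.DatabaseBackup stored procedure. A typical full backup call looks something like the following; the directory and retention values here are illustrative, not Aaron's settings.

EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = N'X:\Backup',
    @BackupType = 'FULL',
    @Compress = 'Y',
    @CheckSum = 'Y',
    @Verify = 'Y',
    @CleanupTime = 168; -- remove backup files older than 168 hours (7 days)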

Goodbye Default Contributor Role in Fabric Workspace Identities

Varun Jain makes a security announcement:

Fabric workspace identity is an automatically managed service principal that can be associated with a Fabric workspace. Fabric workspaces with a workspace identity can securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts through trusted workspace access for OneLake shortcuts. Fabric items can use the identity when connecting to resources that support Microsoft Entra authentication. Fabric uses workspace identities to obtain Microsoft Entra tokens without the customer having to manage any credentials. 

Previously, a workspace identity was automatically assigned the workspace contributor role and had access to workspace items.  

Read on to see what’s changing, why, and what you can do instead.

Random Page Cost and PostgreSQL Query Plans

Tomas Vondra takes us through a setting:

Last week I posted about how we often don’t pick the optimal plan. I got asked about difficulties when trying to reproduce my results, so I’ll address that first (I forgot to mention a couple of details). I also got questions about how to best spot this issue, and ways to mitigate it. I’ll discuss that too; although I don’t have any great solutions, I’ll briefly discuss a couple of possible planner/executor improvements that might allow handling this better.

Tomas’s points around the random_page_cost setting sound a lot like an inverse of SQL Server’s cost threshold for parallelism: a setting whose default made sense in a world of spinning disks at 7200 RPM, but not really in a solid-state world.
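
As a quick sketch of how you might experiment with the setting in PostgreSQL: the 1.1 value is a common recommendation for SSD-backed storage rather than a universal answer, and the table and predicate are placeholders.

SHOW random_page_cost;      -- the default is 4.0
SET random_page_cost = 1.1; -- session-level change, good for testing
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
-- If plans improve across your workload, persist it server-wide:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();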

Trying out Fabric Unified Admin Monitoring

Reitse Eskens tries out a tool for monitoring Microsoft Fabric installations:

Let me set the scene quickly for you. You’re working for an organisation where Fabric is in the process of being adopted or it’s already fully in use. Regardless of the number of capacities, workspaces, etc., you’re interested in what’s going on in your Fabric environment. You have questions like “Who is using the reports?”, “Who is changing settings in the Admin panel?” and “How is my capacity being used?”.

Read on for a single tool that can answer these sorts of questions.

SQL Server Error Log Tips

Kevin Hill notes that the error log isn’t just for errors:

SQL Server’s Error Logs often get overlooked, until something breaks. But if you know how to use them, they can tell you what went wrong before you’re knee-deep in outage mode.

Whether you’re a sysadmin who inherited SQL or an MSP juggling multiple environments, understanding the ErrorLog is low effort, high return.

Let’s decode the basics.

Click through for examples of information you can find in the error log. Kevin mentions cycling the error log weekly. Back in my DBA days, I’d cycle them daily because I didn’t want the files to grow too large and become unwieldy. Some of this, of course, depends upon how hard people are pushing that box and how much you need to log.
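
For the curious, cycling and searching the log are both one-liners. A sketch follows; xp_readerrorlog is undocumented but widely used, and the search string is just an example.

EXEC sp_cycle_errorlog; -- close the current error log and start a new one

-- Search the current SQL Server error log (file 0, log type 1) for a string
EXEC xp_readerrorlog 0, 1, N'memory', NULL;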

Troubleshooting SQL Server Memory Pressure

Tom Collins has a troubleshooting tip for us:

Receiving a regular “There is insufficient system memory in resource pool ‘default’ to run this query” message in the SQL Server error logs, leading to unavailability of sufficient resources to connect.

This query supplied by Jonathan Kehayias is a great way to start assessing whether there is SQL Server memory pressure.

Click through for the query and some additional explanation.
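
Jonathan's query digs into the resource monitor ring buffer; as a simpler first look (not the linked query), the memory clerks DMV shows which components are holding memory.

-- Top ten memory consumers by clerk type, in MB
SELECT TOP (10)
    type,
    SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;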

Performance Testing the pg_tde Extension

Transparent data encryption is now available for PostgreSQL via Percona’s pg_tde extension, and Andreas Scherbaum has some performance measurements:

The performance impact of pg_tde by Percona for encrypted data pages is measurable across all tests, but not very large. The performance impact of encrypting WAL pages is about 20% for write-heavy tests. The tests were run with an RC (Release Candidate) of the extension; however, the WAL encryption feature is still in the Beta stage.

Andreas also has a post on the testing specifics:

This test was run on a dedicated physical server, to avoid external influences and fluctuations from virtualization.

The server has an Intel(R) Xeon(R) Gold 5412U CPU with 48 cores, 256 GB RAM, and a 2 TB SAMSUNG MZQL21T9HCJR NVMe disk dedicated for the tests (the OS was running on a different disk).
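
For context on what was being measured, enabling pg_tde looks roughly like this. This is a sketch based on the extension's documentation around the release candidate; the key-provider function names have shifted between releases, so check the docs for the version you install, and the key and path names here are examples.

-- postgresql.conf: shared_preload_libraries = 'pg_tde'
CREATE EXTENSION pg_tde;
-- Register a keyring file and set the principal key
SELECT pg_tde_add_key_provider_file('local-keyring', '/var/lib/pgsql/keyring.file');
SELECT pg_tde_set_principal_key('tde-key', 'local-keyring');
-- Encrypted tables use the tde_heap access method
CREATE TABLE accounts (id bigint, balance numeric) USING tde_heap;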
