

The First 30 Days as a DBA

Tracy Boggiano has some experience with new jobs:

Over the last four years, ok it seems longer than that, I’ve started four jobs. A couple just weren’t good fits. One I was at for three years. I currently just finished my first 30 days at my fourth one. Having done the first 30 days several times over the last few years, I’ve searched each time what you would do when you start that new position to take over the environment. What would you evaluate, where to start with everything, what to do first? With no luck mind you. So, I’m going to blog about my journey through this as I’ve done it several times over my career and believe it can help others as they start new positions know where to begin. Coincidentally, Aaron Bertrand (t) just blogged about his first month at Stack Overflow as a DBRE.

I can’t believe it’s been four years since Tracy and I worked together. Somebody’s been messing with my time machine, right?


Stopping Azure Kubernetes Service Nodes

Andrew Pruski wants to shut the whole thing down:

A while back I wrote a post on Adjusting Pod Eviction Timings in Kubernetes. To test the changes made in that post I had to shut down nodes in an Azure Kubernetes Service cluster.

This can be done easily in the Azure portal.

However I did a presentation recently and didn’t want to have to keep jumping into the portal from VS Code…so I wanted to be able to shut down the nodes in code.

So here’s how to use the azure-cli to shut down a node in an Azure Kubernetes Service cluster.

Read on to see how, but also read Andrew's warning/disclaimer so you don't mess anything up in a production environment.


Distributed Replay Deprecated in SQL Server 2022

Brent Ozar starts the wake:

For SQL Server 2022, Microsoft deprecated Distributed Replay.

The idea behind the feature was that you’d capture a trace against your production environment, set up another environment for load testing or QA testing, and then replay that exact same workload against it. You’d be able to measure which queries got better or worse, and how.

The reality was a complete mess. It was a giant pain in the rear to set up and use, to the point where I got frustrated with it within a few hours and asked my peers about their experiences with it. I got back a string of four-letter words – everybody really struggled to get it across the finish line. Over subsequent versions, Microsoft made token efforts to improve it, but never really gave it the love it required.

Yep, I can concur. What we wanted was a simple button-click (or easy-to-navigate UI) that let you capture “What does a real production workload look like?” and then the ability to re-run it elsewhere, like on new hardware. What we got was indeed a mess.

I don't fully agree with Brent's argument that the right answer is to build app-level testing. If everything were architected and developed for this, then yeah, that might be a better answer. But unless you've built all relevant applications around APIs (so they can be programmatically invoked rather than trying to do everything via Selenium) and have put in the legwork necessary to track and re-run calls, I think you end up with an even bigger mess, especially if there are multiple applications working with the same database. I do agree that this is a hard problem regardless of the path you choose.


DOP Feedback in SQL Server 2022

Erik Darling talks about a potentially exciting feature:

I’m not going to demo DOP feedback in this post, I’m just going to show you the situation that it hopes to improve upon.

To do that, I’m going to run a simple aggregation query at different degrees of parallelism, and show you the changes in query timing.

Figuring out where that elbow is (in other words, where you move from approximately linear gains to sub-linear gains) can be extremely helpful. Of course, this is like solving a partial equilibrium problem: it's part of the problem, but there's a whole separate general equilibrium problem beyond it, namely, what's the best number of cores for this query given the constraint that I have all of these other queries running on a busy server? But before I make it seem like I'm minimizing the value of this, the partial answer will, in many circumstances, be good enough.
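To make that sort of test concrete, here is a minimal sketch of the experiment Erik describes: the same aggregation run at a few fixed degrees of parallelism via a MAXDOP hint, with timing statistics on. The table and column names are placeholders, not Erik's demo objects.

    SET STATISTICS TIME ON;

    -- Placeholder aggregation; point this at a table large enough to get a parallel plan.
    SELECT COUNT_BIG(*) AS row_count, SUM(Amount) AS total_amount
    FROM dbo.SalesDetail
    OPTION (MAXDOP 1);

    SELECT COUNT_BIG(*) AS row_count, SUM(Amount) AS total_amount
    FROM dbo.SalesDetail
    OPTION (MAXDOP 2);

    SELECT COUNT_BIG(*) AS row_count, SUM(Amount) AS total_amount
    FROM dbo.SalesDetail
    OPTION (MAXDOP 4);

    SELECT COUNT_BIG(*) AS row_count, SUM(Amount) AS total_amount
    FROM dbo.SalesDetail
    OPTION (MAXDOP 8);

    SET STATISTICS TIME OFF;

Compare the CPU time and elapsed time messages at each setting and the elbow usually shows up quickly.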


Sending E-mails on Database Blocking

Thomas Williams combines a few tools:

If your SQL Servers are under pressure, you want to know. Blocking and blocked processes impact end-users, and if not addressed can slow or even stop a database. In this post, I’ll outline a method I use to get timely notifications of blocking processes that you can use too.

I adapted my approach from Tom Collins’s excellent – and still relevant – 2017 article “How to monitor Blocked Processes with SQL Alert and email sp_whoisActive report” at https://www.sqlserver-dba.com/2017/01/how-to-monitor-blocked-processes-with-sql-alert-and-email-sp_whoisactive-report.html. You could implement exactly what Tom covers in his post and come out on top. I’ve gone one small step further to send a formatted HTML e-mail with a table of blocking & blocked processes; like Tom, I generate the table using Adam Machanic’s fantastic sp_WhoIsActive stored procedure, which I’ve assumed is present in the master system database. The complete solution is the sp_WhoIsActive stored procedure (which you’ll need to download and create, see http://whoisactive.com/downloads/), a SQL Agent job with a job step that runs sp_WhoIsActive and sends the e-mail, and an alert that calls the SQL Agent job when there’s blocked processes.

Read on for the script, as well as some important notes.
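For a rough idea of the moving parts (a minimal sketch, not Thomas's script; his version formats the results as an HTML table), the pieces are a job step that e-mails sp_WhoIsActive output via Database Mail and an Agent alert on the Processes blocked counter that runs the job. The profile, recipient, and job names below are placeholders.

    -- Job step body: e-mail sp_WhoIsActive output (assumes sp_WhoIsActive lives in master
    -- and Database Mail is configured).
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Mail',               -- placeholder Database Mail profile
        @recipients   = 'dba-team@example.com',   -- placeholder recipient
        @subject      = 'Blocking detected',
        @body         = 'Blocked processes were detected; sp_WhoIsActive output is attached.',
        @query        = 'EXEC master.dbo.sp_WhoIsActive @find_block_leaders = 1;',
        @attach_query_result_as_file = 1;

    -- Alert that runs the job above whenever the blocked-process counter rises over zero.
    -- (On a named instance the object name starts with MSSQL$InstanceName instead.)
    EXEC msdb.dbo.sp_add_alert
        @name = N'Blocked processes detected',
        @performance_condition = N'SQLServer:General Statistics|Processes blocked||>|0',
        @job_name = N'DBA - Notify on blocking';  -- placeholder job containing the step above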


Refreshing SQL Managed Instances which Use TDE

Bradley Ball keeps the dev environment up to date:

Hello Dear Reader! I was working with some friends lately and we needed to set up a process to refresh their Development Environment databases from Production. Additionally, the databases are encrypted using Transparent Data Encryption, the SQL MI instances are in different regions, and the SQL MI instances are in different subscriptions. To duplicate the environment, in order to match our friends, we did the following setup.

Click through for a high-level overview, step-by-step guidance, and a whole lot of detail.
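Bradley's post covers the cross-subscription and TDE-specific pieces in detail; purely as a sketch of the underlying mechanic most managed instance refreshes build on (a copy-only backup to a storage account, then a native restore from URL on the target instance; this is not Bradley's exact process), it looks something like the following. The storage URL, SAS token, and database name are placeholders, and with TDE the target instance also needs access to the encryption key material, which is the part his setup handles.

    -- On both instances: a credential for the storage container (placeholder SAS token).
    CREATE CREDENTIAL [https://refreshstorage.blob.core.windows.net/mi-backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<sas-token-without-leading-question-mark>';

    -- On the production instance: copy-only backup to the storage account.
    BACKUP DATABASE [AppDb]
    TO URL = 'https://refreshstorage.blob.core.windows.net/mi-backups/AppDb.bak'
    WITH COPY_ONLY, COMPRESSION, STATS = 10;

    -- On the development instance: restore the backup from the same URL.
    RESTORE DATABASE [AppDb]
    FROM URL = 'https://refreshstorage.blob.core.windows.net/mi-backups/AppDb.bak';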


Reading the SQL Server Error Log

Lee Markum has two ways to read the SQL Server error log:

Reading the SQL Server Error Log is important when troubleshooting many issues for SQL Server. Some example issues would be errors related to Always On Availability Groups, long IO occurrence messages, and login failures.

Unfortunately, the SQL Server Error Log can be very full of information, making specific things hard to find, especially if you’re just visually scrolling the Error Log. Even if you’re recycling the Error Log each day and keeping 30 or more days of error log, on a busy system, the error log can still be quite full, even for a single day.

Click through for those techniques.
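I won't spoil which two techniques Lee covers, but one common T-SQL approach is xp_readerrorlog, which accepts search strings and a date range so you can filter instead of scrolling (sp_readerrorlog is a thin wrapper around it). The search terms and dates here are just examples.

    -- Parameters: log number (0 = current), log type (1 = SQL Server, 2 = Agent),
    -- two search strings (both must match), start time, end time, sort order.
    EXEC xp_readerrorlog
        0,                -- current error log
        1,                -- SQL Server log rather than Agent log
        N'login failed',  -- first search string
        N'sa',            -- second search string
        '20220601',       -- start time
        '20220630',       -- end time
        N'desc';          -- newest entries first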


Tracking Table Updates via SQL Audit

Tracy Boggiano wants to figure out who keeps taking her lunch out of the company refrigerator:

I had a problem at work recently where a record was getting updated, and no one knew where or what was updating the record. Our team discussed the best way to figure out what was happening. The situation was that a record would be updated to active and then, within a ten-minute window, set back to inactive. The system allows ad-hoc statements to run against it, and since the issue was limited to a certain table, I suggested we set up a SQL Audit to track UPDATEs to the table. The code for this is fairly simple, but since most of my colleagues don't have exposure to SQL Audit, I figured a blog post would benefit others.

So, in this case, we are creating a server-level SQL Audit that will write to D:\SQL Audit, so make sure that path exists, and then a Database Audit Specification to track any UPDATEs that happen to the table. Now, keep in mind, I chose this method over running a server-side Trace or Extended Events because I knew it would capture everything without me having to worry about setting up anything else but these commands. An important part of this is where I specify "public". That tells the audit to capture anybody who is updating the table. If you want to look for a certain user, or maybe even someone who's part of a role, you could specify that instead.

Click through for the auditing script. I wish this type of information were a lot easier to get, especially for longer-term audits. I end up creating metadata columns (created/modified user, created/modified date), but that gives limited information and requires all calling code to play along.
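In the same spirit as Tracy's post (her script will differ in the specifics), a minimal server audit plus database audit specification looks like this; the folder must already exist, and the database and table names are placeholders.

    USE master;
    GO
    -- Server audit writing to a folder that must already exist.
    CREATE SERVER AUDIT [Audit_TableUpdates]
    TO FILE (FILEPATH = N'D:\SQL Audit\')
    WITH (ON_FAILURE = CONTINUE);
    GO
    ALTER SERVER AUDIT [Audit_TableUpdates] WITH (STATE = ON);
    GO

    USE YourDatabase;   -- placeholder database
    GO
    -- Capture UPDATEs against the table by anyone; public covers every principal.
    CREATE DATABASE AUDIT SPECIFICATION [AuditSpec_TableUpdates]
    FOR SERVER AUDIT [Audit_TableUpdates]
    ADD (UPDATE ON dbo.YourTable BY [public])   -- placeholder table
    WITH (STATE = ON);
    GO

    -- Read what the audit has captured so far.
    SELECT event_time, server_principal_name, database_name, object_name, statement
    FROM sys.fn_get_audit_file(N'D:\SQL Audit\*.sqlaudit', DEFAULT, DEFAULT);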


Working with Azure VM Scale Sets

Arun Sirpal explains the benefit behind scale sets in Azure:

I really like scale sets. They let you create and manage up to 1,000 load-balanced VMs per availability zone using Windows or Linux images. (We can have flexible or uniform modes for orchestration, which dictates whether you go down the homogeneous VM route or a mix, where a mix is the flexible option.)

There are many other benefits apart from scaling, such as built-in load balancing options and increased resiliency via three Availability Zones. From a cost perspective, you can couple scale sets with Azure Hybrid Benefit or even use reserved instances; cost is important in the cloud!

Read the whole thing.


Monitoring Open Connections in the Serverless SQL Pool

Liliam Leme has a pair of queries for us:

Consider a scenario where you are trying to monitor the connections from other applications to serverless SQL. I hit this need while trying to understand how many open connections I had coming from an application. Had I hit some kind of limitation on serverless SQL or not? Spoiler: there is no connection limit on Synapse serverless SQL, as you would find with a dedicated SQL pool (formerly SQL DW).

Alternatively, the limit to the number of serverless SQL pool connections is how much cash you have in your bank account…though given that it’s $5 per TB processed, if you’re writing good queries, that’s a lot of queries and connections.
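I don't know which pair of queries Liliam uses, but the general shape of this kind of check is counting sessions and connections per application from the familiar DMVs, something like the sketch below (I'm assuming sys.dm_exec_sessions and sys.dm_exec_connections are available on your serverless endpoint; they behave like their SQL Server counterparts).

    -- Count open connections per application and login against the serverless endpoint.
    SELECT
        s.program_name,
        s.login_name,
        COUNT(c.connection_id) AS open_connections
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_connections AS c
        ON c.session_id = s.session_id
    WHERE s.is_user_process = 1
    GROUP BY s.program_name, s.login_name
    ORDER BY open_connections DESC;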
