
Category: Administration

Who and What is Using Your tempdb Space?

Guy Glantser helps us troubleshoot who is using tempdb space and for what purpose:

In the past two weeks, I encountered three different scenarios in which tempdb grew very large, so I decided to write about troubleshooting such scenarios.

Before I describe the methods for troubleshooting tempdb space usage, let’s begin with an overview of the types of objects that consume space in tempdb.

The answer, of course, was Professor Plum in the Aviary with the eager spool.
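If you want a quick answer before diving into Guy's post, the standard tempdb DMVs cover both the "what" and the "who." Here's a minimal sketch (these views report 8 KB pages, hence the conversion to MB):

-- What is using the space: user objects (temp tables, table variables),
-- internal objects (sorts, spools, hash tables), or the version store?
SELECT
    SUM(user_object_reserved_page_count) * 8 / 1024.0     AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
    SUM(version_store_reserved_page_count) * 8 / 1024.0   AS version_store_mb,
    SUM(unallocated_extent_page_count) * 8 / 1024.0       AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Who is using it: tempdb allocations attributed to individual sessions.
SELECT TOP (10)
    session_id,
    SUM(user_objects_alloc_page_count
      + internal_objects_alloc_page_count) * 8 / 1024.0   AS allocated_mb
FROM tempdb.sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY allocated_mb DESC;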


Thread Pool Exhaustion and Availability Groups

Sean Gallardy lays down the gauntlet:

You’re probably wondering why you couldn’t spawn a new thread, why this error happened, why you shouldn’t just up the max worker threads, and probably also thinking this must be some kind of “bug” in SQL Server. So here’s where our awkward conversation starts… It’s you. Contrary to every relationship breakup you’ve ever had, it’s definitely you. I’m not saying this to be mean but to really drive the point home here. The major reasons for this occurring are large blocking chains, too much workload for the server size (databases, users, etc.), and/or your virtual infrastructure sucks. There aren’t too many reasons for getting yourself into this situation, and while what I’ll be putting forth here isn’t exhaustive of all edge cases and scenarios, these are by far the majority of all the items in the wild that I’ve either worked on or have been involved in at some level.

Side Note: If you’ve read this far, are shaking your head, calling me names that an irate sailor might utter, and telling yourself that upping the max worker threads as the product error suggests and Microsoft should fix their bugs, then you can stop reading here, as you’re probably not open to learning why you have issues in your environments.

One more scenario I’ve seen is mirroring thousands of databases on a single instance. That scenario fit none of Sean’s criteria: there was very little blocking, most of the databases were small and infrequently used, and the infrastructure was the right size. It was just a huge number of databases, each requiring a minimum of X worker threads. Mind you, it was still a bad idea…
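If you want to see how close an instance is to the wall, a quick sketch along these lines compares the worker cap against live workers and surfaces any blocking chains (the most common culprit on Sean's list):

-- The worker cap SQL Server derived from CPU count and architecture:
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- Workers currently alive on the instance:
SELECT COUNT(*) AS current_workers FROM sys.dm_os_workers;

-- Requests blocked by another session; long chains eat workers fast:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;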


Capturing CHECKDB Output

Erin Stellato shows how you can track the results of those automated CHECKDB runs you’re doing:

First, you need to be running CHECKDB on a regular basis. If you’re going to ask what I mean by regular basis, I’ll direct you to a post of Paul’s that is still relevant: Importance of running regular consistency checks. There are a few factors to consider, but in general I recommend at least once a week. If you’re a 9 to 5 shop and have the maintenance window to run CHECKDB daily, feel free to do that.

Erin walks us through it and also recommends checking out Ola’s scripts for integrity checks. I’d add to that Minion CheckDB.
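For reference, the basic pattern Erin builds on looks something like this; YourDatabase is a placeholder, and LastGoodCheckDbTime requires SQL Server 2016 SP2 or later:

-- Run the check and report only errors:
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- When did CHECKDB last complete without finding corruption?
SELECT DATABASEPROPERTYEX(N'YourDatabase', 'LastGoodCheckDbTime') AS last_good_checkdb;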


Understanding Memory Grants

Taiob Ali walks us through the concept of memory grants:

DesiredMemory: Memory estimated to fit intermediate results in KB for the chosen degree of parallelism. If the query runs in serial mode, this is the same as SerialDesiredMemory.

(Amount needed to store all temporary rows in memory. This depends on the cardinality estimate, expected number of rows, and average size of row. This is called additional because a query can survive lack of such memory by storing part of temporary rows on hard disk. A query is not guaranteed to have the full amount if the total exceeds the preset limit.)

Read on for explanations of each of the elements in MemoryGrantInfo.
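If you'd rather watch these numbers on a live system than read them out of a plan, sys.dm_exec_query_memory_grants exposes the same concepts for currently executing queries:

-- Requested vs. granted shows queries trimmed by the grant cap;
-- a NULL grant_time means the query is still waiting for memory.
SELECT session_id, dop, request_time, grant_time,
       requested_memory_kb, granted_memory_kb, ideal_memory_kb,
       used_memory_kb, max_used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;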


Certificate Management with SQL Server 2019

Niko Neugebauer walks us through improvements in certificate management with SQL Server 2019:

If you have ever used them for connection encryption (TLS 1.2), you might have had some battles with the certificates, having to go into the registry to edit the thumbprint, and if you are doing a Failover Cluster or Availability Group installation, you would have to enjoy this operation on the multiple nodes. Not-so-very-user-friendly, to say the least!
So many times, it would scare off a non-Server/Database Administrator or a junior Server/Database Administrator from trying those features.

Niko also mentions something very interesting about SQL Server Configuration Manager compatibility at the end of the post.


Monitoring Azure Synapse Analytics SQL Pools with Power BI

Brett Powell has a pair of Power BI templates for monitoring Azure Synapse Analytics:

Upon clicking ‘Load’ you’ll either need to provide your credentials for this source (if you don’t have this data source saved from previous use) or the queries will execute and the following report pages will be available:

– Executions
– Waits
– Sessions
– Waits Detail
– Execution Detail
– Memory
– ExecutionDrillThrough (hidden)

Click through to see what the templates look like and how to obtain them.
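I haven't dug into the templates' queries myself, but pages like these are presumably built on the dedicated SQL pool DMVs; something along these lines would feed an Executions page:

-- Recent requests on a dedicated SQL pool, with status and duration:
SELECT TOP (100)
    request_id, session_id, status, submit_time, end_time,
    total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;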


Change Tracking and Internal Tables

Tim Weigel continues a series on change tracking:

In my last post, I showed you how to configure change tracking at the table level and how to get configuration information about change tracking from the database engine. We looked at sys.change_tracking_databases and sys.change_tracking_tables, and looked at some sample scripts that present the information in a more readable format.

Before moving on to working with change tracking, I’d like to show you a little bit about how SQL Server handles change tracking data under the hood. Let’s take a few minutes to talk about sys.internal_tables, sys.dm_tran_commit_table, and sys.syscommittab. These aren’t objects that most DBAs interact with on a routine basis, but they’re useful for understanding how change tracking does what it does.

Click through to learn more about these internal tables.
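As a quick taste of what Tim covers, you can peek at these objects yourself:

-- Each tracked table gets a hidden side table in sys.internal_tables:
SELECT it.name AS internal_table,
       OBJECT_NAME(it.parent_object_id) AS tracked_table
FROM sys.internal_tables AS it
WHERE it.internal_type_desc = N'CHANGE_TRACKING';

-- The commit history change tracking relies on (backed by sys.syscommittab):
SELECT TOP (10) commit_ts, xdes_id, commit_time
FROM sys.dm_tran_commit_table
ORDER BY commit_ts DESC;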


The Pains of Database Restoration

Stuart Moore covers some of the pains of database restoration in two posts. First, why dbatools’ Restore-DbaDatabase is as complicated as it is:

At first glance Restore-DbaDatabase looks like a slow, lumbering, complex beast. In reality it’s not that bad.

It’s the result of design decisions I took in wanting a solid versatile command that could cope with everything that people would want from it.

In this post, we’ll go through the main decisions/points of contention one by one.

Stuart then covers the limitations of Restore-DbaDatabase:

Like all tools, Restore-DbaDatabase isn’t able to do everything that everyone wants it to. Certainly, at the moment I’d like it to write its own blog posts and fetch me a cold beer, but that doesn’t happen.

A lot of the below isn’t complaining about people asking for features. If we can do it, we will, and we’re keen to make this work for as many people in as many situations as possible.

But quite a few requests over the years have been non-starters for a number of reasons.

Read them both; they’re part of Stuart’s 31 Days of Backup and Restore with dbatools series.


Don’t Use sys.dm_hadr_cluster_members for Quorum Info

Sean Gallardy explains a limitation of sys.dm_hadr_cluster_members:

I’ve now run across a few different instances where the monitoring for quorum was done via this DMV. On the surface, it seems like nothing would be wrong with using the “number_of_quorum_votes” column to check on the members of the cluster and see their voting status. However, this isn’t quite the case… you see there are various mechanisms that influence whether or not a member (or witness) has a vote and these continue to be expanded in each version of WSFC.

Click through for a short history lesson as well as some good advice on how to get this information accurately.
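For context, here's everything the DMV exposes; the catch Sean describes is that number_of_quorum_votes won't necessarily reflect dynamic quorum adjustments the cluster has made:

-- A point-in-time view of cluster membership as SQL Server sees it:
SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;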


The Value of Negative Identity Values

Randolph West explains why you might start at MIN(INT) for an identity integer column:

A quick(er) post this week, in response to Greg Low’s blog post from a few weeks ago titled “Don’t start identity columns or sequences with large negative values.”

Greg writes that you shouldn’t use large negative values in a table, because… it’s hard to read them, I guess? And also they don’t compress well.

I disagree … to a degree. Dang, words are hard. Anyway, when I design a table I create what’s called a surrogate key as my primary key, which is a value that is intended for the table to uniquely identify a row so that it participates in relational activities like joins and foreign keys in an efficient way. In other words, the identity column is not for me, it’s for the database engine. I don’t need to worry about what value a row has. I choose the data type for that identity column based on the estimated number of rows, not whether I can memorize that a [StatusID] of 5 means something. Magic numbers are bad, mmmkay?

I don’t mind using negative values, especially for things like queue tables where the rows are ephemeral. The identity values may be harder to read, but as Randolph points out, in those types of cases, you aren’t really reading the values anyhow.
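For illustration, a minimal sketch of the idea (the table and column names are made up): seeding the identity at the bottom of the INT range roughly doubles the usable key space before you'd have to move to BIGINT.

CREATE TABLE dbo.WorkQueue
(
    -- Start at the minimum INT value instead of 1:
    WorkQueueID int IDENTITY(-2147483648, 1) NOT NULL
        CONSTRAINT PK_WorkQueue PRIMARY KEY CLUSTERED,
    Payload nvarchar(max) NOT NULL
);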
