Press "Enter" to skip to content

Category: Administration

Cores Visible Offline

John Morehouse enlists some assistance to figure out why his SQL Server instance is ignoring 24 cores:

Hidden schedulers are used to process requests that are internal to the engine itself.  Visible schedulers are used to handle end-user requests.  When you run the casual SELECT * query, it will utilize a visible scheduler to process the query.  With this information, if I have a 64 core server and all is well, I should have 64 visible schedulers online to process requests.

However, I discovered that some of the schedulers were set to “VISIBLE OFFLINE”.  This essentially means that those particular schedulers are unavailable to SQL Server for some reason.  How many offline schedulers do I have? A quick query resulted in 24 schedulers currently offline.  24 logical cores means that 12 physical cores are offline.
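
The quick query itself isn't in the excerpt, but sys.dm_os_schedulers is where this information lives.  Something along these lines (my sketch, not necessarily John's exact query) will bucket schedulers by status:

```sql
-- Count schedulers by status; VISIBLE OFFLINE schedulers cannot serve user requests.
SELECT [status], COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
GROUP BY [status];
```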

But why would a scheduler be set to “VISIBLE OFFLINE”?

Read on for the answer, and check the comments for a helpful plug for sp_Blitz.

Cost Threshold For Parallelism Testing

Tim Peters shows that the Dynamics people are probably right with their assertion regarding Cost Threshold for Parallelism:

These plans are from a SQL Server 2008 R2 SP3 instance that runs a version of Microsoft Dynamics. The MS Dynamics team knows their product issues a bunch of really small queries and recommend a Max Degree of Parallelism of 1. In my case there is one query plan with a cost of 34,402 query bucks that a MAXDOP = 1 can’t afford. Increasing the MAXDOP from 1 while keeping the Cost Threshold at 5 will keep all of my little queries running in their single threaded paradise while allowing some infrequent biguns to spread their lovely wings across a few more threads. My Cost Threshold For Parallelism is set to 5 and I hope to never have to defend my settings in a dark alley with Erik Darling.

Different systems behave in different ways, so it makes sense that they might require different settings.  In the case of Dynamics, it appears that almost all of the queries are extremely low-cost, so relatively few plans would go parallel, and the ones that do are probably ones you want going parallel.
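
Both knobs live in sp_configure.  As a rough sketch of Tim's end state (the MAXDOP value here is illustrative; he doesn't say what he raised it to):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Let the occasional expensive plan spread its wings...
EXEC sp_configure 'max degree of parallelism', 4;    -- up from the Dynamics-recommended 1
RECONFIGURE;
-- ...while the little queries stay below the (unchanged) threshold and remain serial.
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;
```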

Disk Space Shenanigans

Meagan Longoria writes about an outage due to improper file layout:

One day, a manager asked me if I could help on an urgent matter: the application suddenly could no longer execute transactions on the production database and the database connection was intermittently failing. The system admin was busy with other duties, so I was the closest thing they had to a DBA.  All they could tell me was the production database had crashed and they got an error message about insufficient disk space.

Click through for the rest of the story.

Violating Swart’s Law

Kenneth Fisher notes that maxima are not aspirations:

It could be databases on an instance, indexes on a table, columns in a table, etc. etc. And in case you were wondering, you can get the answers here.

You see this come up every once in a while. A forum question, a question on #sqlhelp, even in *shudder* your own systems. And the answer always comes back the same: Limits are not goals! Usually accompanied by a few jokes.

For more on this, Michael Swart coined Swart’s Ten Percent Rule.

Thinking About Parallelism

Grant Fritchey continues his thoughts on parallelism:

Microsoft set the default value for the Cost Threshold for Parallelism back in the 1990s. They were developing SQL Server 2000. That means this value was determined as a good starting point for query plan costs over 17 years ago. In case you’re not aware, technology, T-SQL, SQL Server, and all the databases and database objects within them shifted, just a little, in the intervening 17 years. We can argue whether or not this value made sense as a starting point (and remember, the default settings are meant to be starting points covering a majority of cases, not a final immutable value) for determining your Cost Threshold for Parallelism 17 years ago. I think we can agree that it’s no longer even a good starting point.
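
If you want to see where your own workload's costs actually fall before picking a value, the plan cache is one place to look.  A sketch (not Grant's query) that pulls estimated subtree costs from cached plans:

```sql
-- Highest estimated plan costs currently in cache; compare these against
-- candidate Cost Threshold for Parallelism values.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS est_subtree_cost,
    qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY est_subtree_cost DESC;
```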

For more thoughts, check out a prior post on figuring out the cost threshold.

Helpful Scripts

Rolf Tesmer has published a set of helpful scripts to his GitHub repo:

The below is a link to my GitHub repo of my personal collection of scripts.  BTW this isn’t all of the scripts, but is probably 50% of the ones I have lurking around.

These are either scripts I have developed in my own time over the years, or adapted from various other websites which I found handy at the time for whatever reason.

Where I have remembered that web site (like most people, which is unfortunately almost never) it was typically from sites like stack overflow etc, and if so I have tried to cite it as such to provide credits.

Click through for the link to Rolf’s repo.

Handling Overly Large Log Files

Kevin Hill shows how to recover from a scenario with an unexpectedly large SQL Server transaction log file:

Step 2: Verify if the log is full or “empty”

Verify if the log file is actually full or not.  If you are backing up and the file still grew to ridiculous size…it may have just been a one time thing and you can deal with that easily.  Right-click the database, go to reports, standard reports, disk usage.  This will give you 2 pie charts.  Left is the data file, right is the log.  If the log shows almost or completely full AND the huge size, you need to backup.  If the log file is huge and mostly empty, you simply need to shrink to an acceptable size.
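
The same check and fix work from T-SQL if you'd rather skip the GUI; here's a sketch (database and logical file names are placeholders):

```sql
-- How full is each database's log, really?
DBCC SQLPERF (LOGSPACE);

-- Huge but mostly empty? Shrink it back to a sane size (target is in MB).
USE YourDatabase;
DBCC SHRINKFILE (YourDatabase_log, 4096);
```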

Great read for a junior-level DBA.

Interrogating A Stack Dump

Kendra Little looks at a SQL Server stack dump:

In the video, I show an example of a stack dump caused by running DBCC PAGE with format style 3 against a table with a filtered index in SQL Server 2014.

It looks like this bug is fixed in SQL Server 2016, at least by SP1.

Sample code to reproduce this against the AdventureWorks2012 database (which I had restored to SQL Server 2014) is in my gist here.
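
For reference, this is the shape of the DBCC PAGE call involved; it's undocumented, and the page number here is just a placeholder:

```sql
DBCC TRACEON (3604);                           -- route DBCC output to the client
DBCC PAGE ('AdventureWorks2012', 1, 1000, 3);  -- database, file_id, page_id, print option 3 (per-row detail)
```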

Click through to watch the video.

Google Compute Engine Whitepapers

Brent Ozar Unlimited has a couple of whitepapers out about working with SQL Server in Google Compute Engine.  First, Brent and Tara Kizer create an Availability Group:

In this white paper we built with Google, we’ll show you:

  • How to build your first Availability Group in Google Compute Engine

  • How to test your work with four failure simulations

  • How to tell whether your databases will work well in GCE

Erik Darling also has a whitepaper on performance tuning:

Relax. Have a drink. In this white paper we built with Google, we’ll show you:

  • How to measure your current SQL Server using data you’ve already got

  • How to size a SQL Server in Google Compute Engine to perform similarly

  • After migration to GCE, how to measure your server’s bottleneck

  • How to tweak your SQL Server based on the performance metrics you’re seeing

If you’re looking at GCE as a potential migratory spot, you’ve got some extra reading material.

Finding Physical Row Location

Wayne Sheffield shows how to find the physical location of a row in SQL Server:

Acquiring the physical location of a row

SQL Server 2008 introduced a new virtual system column: “%%physloc%%”. “%%physloc%%” returns the file_id, page_id and slot_id information for the current row, in a binary format. Thankfully, SQL Server also includes a couple of functions to split this binary data into a more useful format. Unfortunately, Microsoft has not documented either the column or the functions.
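
The pair of undocumented helpers Wayne is referring to are, presumably, sys.fn_PhysLocFormatter and sys.fn_PhysLocCracker; a quick sketch of each (the table name is a placeholder):

```sql
-- Scalar formatter: returns the location as a '(file:page:slot)' string.
SELECT t.*, sys.fn_PhysLocFormatter(%%physloc%%) AS phys_loc
FROM dbo.SomeTable AS t;

-- Table-valued cracker: splits the location into separate columns.
SELECT t.*, plc.file_id, plc.page_id, plc.slot_id
FROM dbo.SomeTable AS t
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%) AS plc;
```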

Read on for two functions you can use to format this data more nicely, as well as a short rewrite Wayne did to improve the performance of one of them.
