Press "Enter" to skip to content

Category: Administration

Database Connection Leaks

Michael J. Swart explains how to find database connection leaks:

So, if your application experiences connection timeouts because of a database connection leak, the stack traces may not help you. Just like an out-of-memory exception due to a memory leak, the stack trace has information about the victim, but not the root cause. So where can you go to find the leak?

Even though database connection leaks are a client problem, you can find help from the database server. On the database server, look at connections per process per database to get a rough estimate of the size of each pool:
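A query along these lines gives that estimate (a sketch against sys.dm_exec_sessions, not necessarily the query from Michael’s post):

    -- Each client process typically gets one pool per connection string,
    -- so grouping sessions by host, program, process ID, and database
    -- approximates the size of each pool.
    SELECT s.host_name,
           s.program_name,
           s.host_process_id,
           DB_NAME(s.database_id) AS database_name,
           COUNT(*) AS session_count
    FROM sys.dm_exec_sessions AS s
    WHERE s.is_user_process = 1
    GROUP BY s.host_name, s.program_name, s.host_process_id, s.database_id
    ORDER BY session_count DESC;

A leak shows up as a count that climbs steadily toward the pool’s maximum (100 by default in ADO.NET) instead of stabilizing.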

This is a good thing to remember, particularly if you have a busy system.


Checking For Instant File Initialization

Klaas Vandenberghe shows how to use PowerShell to determine whether Instant File Initialization is turned on:

Sometimes we want to apply a filter to an array or other collection of objects, but keep both the items that pass the filter and those that fail it. Instead of cycling twice through the collection, there’s a one-step method.

Instant File Initialization is a privilege assigned in the local security policy. Here’s some explanation by the MSSQL Tiger Team.
There’s a lot to tell about it, but I’m not going to do that here. Let’s just assume it’s a good thing to assign that privilege to the account under which the SQL Server service runs.

Klaas explains how to use PowerShell filtering with Where-Object and the Where method for people new to PowerShell, and then uses this to figure out whether IFI is enabled.
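For comparison, newer builds also expose this through T-SQL: sys.dm_server_services gained an instant_file_initialization_enabled column (SQL Server 2012 SP4 and 2016 SP1 onward), so a quick server-side check looks like:

    -- 'Y' means the engine service account holds the Perform Volume
    -- Maintenance Tasks privilege, i.e. IFI is in effect.
    SELECT servicename,
           service_account,
           instant_file_initialization_enabled
    FROM sys.dm_server_services;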


Monitoring On Linux

Steven Schneider shows how Microsoft’s SQLCAT monitors SQL Server on Linux:

The following solutions were tested:

  • Graphing with Grafana and Graphite
  • Collection with collectd and Telegraf
  • Storage with Graphite/Whisper and InfluxDB

We landed on a solution which uses InfluxDB, collectd and Grafana. InfluxDB gave us the performance and flexibility we needed, collectd is a lightweight tool to collect system performance information, and Grafana is a rich and interactive tool for visualizing the data.
In the sections below, we will provide you with all the steps necessary to set up this same solution in your environment quickly and easily. Details include step-by-step setup and configuration instructions, along with a pointer to the complete GitHub project.

I’ve been a big fan of Grafana since Hortonworks introduced it as the primary monitoring tool in HDP 2.5. We use Grafana extensively for monitoring SQL on Windows and SQL on Linux.


Fixing Power Settings With T-SQL

Randolph West shows how to use T-SQL and xp_cmdshell to switch a server’s power settings from Balanced to High Performance:

Windows has the same setting. It’s in the Power Options under Control Panel, and for all servers, no matter what, it should be set to High Performance.

Here’s a free T-SQL script I wrote that will check what the power settings are for you. We don’t always have desktop access to a server when we are checking diagnostics, but it’s good to know if performance problems can be addressed by a really simple fix that doesn’t require the vehicle to be at rest.

(The script also respects your settings, so if you had xp_cmdshell disabled, it’ll turn it off again when it’s done.)

Click through for the script.
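The shape of the check is roughly this (a minimal sketch of the idea, not Randolph’s script, which restores the configuration more carefully):

    -- Remember whether xp_cmdshell was already enabled so we can
    -- restore that state when finished.
    DECLARE @was_enabled int;
    SELECT @was_enabled = CONVERT(int, value_in_use)
    FROM sys.configurations
    WHERE name = N'xp_cmdshell';

    IF @was_enabled = 0
    BEGIN
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;
    END;

    -- Ask Windows for the active power plan; on a server you want
    -- to see "High performance" here.
    DECLARE @plan TABLE (line nvarchar(512));
    INSERT @plan EXEC master.dbo.xp_cmdshell N'powercfg /getactivescheme';
    SELECT line FROM @plan WHERE line IS NOT NULL;

    -- Put xp_cmdshell back the way we found it (the sketch assumes
    -- 'show advanced options' also started disabled).
    IF @was_enabled = 0
    BEGIN
        EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE;
        EXEC sp_configure 'show advanced options', 0; RECONFIGURE;
    END;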


Finding Last DBCC Command Runs

Andrew Kelly has a script to find the last time somebody ran a DBCC command like DBCC FREEPROCCACHE:

Let me explain a few things about the script. I am getting the path of the current trace file and placing it into a variable. The current file name will almost certainly have a suffix of _nn just before the .trc extension. If I were to run the script as is, I would only be reading the current log file and not the other 4 that preceded it. If all you care about is the current log file, then fine, but most will want to search all the existing log files. One way to do this is to simply replace the current file name with just log.trc and use default as the 2nd parameter, as I did above in the fn_trace_gettable function. The default parameter value tells the function to read all files from that one onward. Even though log.trc doesn’t actually exist, it knows how to handle it and reads all of the existing trace files in order.

So if the string that we search on (here we use ‘dbcc free%’) is in any of the files, it will return the matching rows. You may have to adjust the wildcards and such, but I think you get the idea. Again, remember that the data is transient, so always look at the StartTime column in the logs to ensure you know which date and time range you are looking at. You can do something like this, but I will leave that up to you.

SELECT MIN(StartTime) AS [Begin], MAX(StartTime) AS [End] FROM ::fn_trace_gettable(@Path, default)

A word of caution: I never bothered to see just how resource-intensive this function is. While I don’t expect any issues with normal use, it is not something you want to be searching on every second. Be sensible and you should have no problems.

Click through for more details, including the script Andrew uses to do this search.
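Pieced together, the pattern looks something like this (a sketch, not Andrew’s full script):

    -- Take the current default trace file and swap its rolling file
    -- name (log_nn.trc) for log.trc, so fn_trace_gettable reads every
    -- existing rollover file in order.
    DECLARE @Path nvarchar(260);
    SELECT @Path = LEFT(path, LEN(path) - CHARINDEX(N'\', REVERSE(path)) + 1) + N'log.trc'
    FROM sys.traces
    WHERE is_default = 1;

    SELECT StartTime, LoginName, HostName, TextData
    FROM sys.fn_trace_gettable(@Path, default)
    WHERE TextData LIKE N'dbcc free%'
    ORDER BY StartTime;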


Using Event Notifications To E-Mail Deadlock Graphs

Dave Mason captures details whenever a deadlock occurs and uses Event Notifications to e-mail them to himself:

As noted, there are other ways to handle deadlocks in SQL Server. The approach presented here may have some drawbacks compared to others. There is an authorization issue for msdb.dbo.sp_send_dbmail that will need to be addressed for logins without elevated permissions. Additionally, you might get hit with an unexpected deluge of emails. (The first time I got deadlock alerts, there were more than 500 of them waiting for me in my Inbox.) Lastly, there’s the XML issue: it’s not everyone’s cup of tea.

On the plus side, I really like the proactive nature: an event occurs, I get an email. I think most would agree it’s better to know something (bad) happened before the customers start calling. The automated generation of Deadlock Graph (*.xdl) files is convenient. And event notifications have been available since SQL Server 2005. As far as I know, the feature is available in all editions, including Express Edition.

Click through for all of the code Dave used to set this up.
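The heart of the technique is a Service Broker queue and service plus the notification itself; here is a minimal sketch with placeholder names (Dave’s code adds the activation procedure that formats and mails the graph):

    -- Queue and service to receive the notification messages.
    CREATE QUEUE dbo.DeadlockQueue;

    CREATE SERVICE DeadlockService
        ON QUEUE dbo.DeadlockQueue
        ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

    -- Fire a message into the service whenever a deadlock occurs.
    CREATE EVENT NOTIFICATION DeadlockNotification
        ON SERVER
        FOR DEADLOCK_GRAPH
        TO SERVICE 'DeadlockService', 'current database';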


Winnowing Down WhoIsActive Data

Kendra Little shows how to use temporary objects to pare down the results of sp_whoisactive for later storage:

I used the @schema parameter to have sp_WhoIsActive generate the schema for the table itself. Full instructions from Adam on doing this are here.

Since I care about tempdb in the case of this example, I used @output_column_list to specify that those columns should come first, followed by the rest of the columns.

I also elected to set @get_plans to 1 to get query execution plans if they’re available. That’s not free, and they can take up a lot of room, but they can contain a lot of helpful info.

This is a very useful guide. Also read the linked documentation for sp_whoisactive; there’s a huge amount of goodness in that one procedure.
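Put together, the pattern looks roughly like this (a sketch using sp_WhoIsActive’s documented parameters; the destination table name is a placeholder):

    -- 1) Have sp_WhoIsActive generate a CREATE TABLE statement that
    --    matches these options, then create the table from it.
    DECLARE @schema varchar(max);
    EXEC sp_WhoIsActive
        @get_plans = 1,
        @output_column_list = '[tempdb%][%]',  -- tempdb columns first
        @return_schema = 1,
        @schema = @schema OUTPUT;

    SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveResults');
    EXEC (@schema);

    -- 2) Collect a snapshot into the table with the same options.
    EXEC sp_WhoIsActive
        @get_plans = 1,
        @output_column_list = '[tempdb%][%]',
        @destination_table = 'dbo.WhoIsActiveResults';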


Indirect Checkpoint And Non-Yielding Scheduler Problems

Parikshit Savjani has a post describing an issue you might experience with indirect checkpoints in SQL Server 2012 and later:

One of the scenarios where skewed distribution of dirty pages in the DPList is common is tempdb. Starting with SQL Server 2016, indirect checkpoint is turned ON by default with target_recovery_time set to 60 for the model database. Since the tempdb database is derived from model during startup, it inherits this property from the model database and has indirect checkpoint enabled by default. As a result of the skewed DPList distribution in tempdb, depending on the workload, you may experience excessive spinlock contention and exponential backoffs on the DPList on tempdb. In scenarios when the DPList has grown very long, the recovery writer may produce a non-yielding scheduler dump as it iterates through the long list (20k-30k pages) and tries to acquire the spinlock, waiting with exponential backoff if the spinlock is taken by multiple IOC routines for removal of pages.

This is worth taking a close read.
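To see where indirect checkpoint is in effect, check for a nonzero target; reverting tempdb to automatic checkpoints is a possible mitigation if you hit this contention (an assumption to verify against the post’s guidance, not a blanket recommendation):

    -- Indirect checkpoint is enabled wherever the target is nonzero.
    SELECT name, target_recovery_time_in_seconds
    FROM sys.databases;

    -- Possible mitigation: revert tempdb to automatic checkpoints.
    ALTER DATABASE tempdb SET TARGET_RECOVERY_TIME = 0 SECONDS;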


Bacpacing In Azure

Derik Hammer shows how to use a bacpac file to deploy an existing database to Azure SQL Database:

The recommended method for working with Azure is always PowerShell. The Azure portal and SSMS are tools there for your convenience but they do not scale well. If you have multiple databases to migrate, potentially from multiple servers, using PowerShell will be much more efficient. Scripting your Azure work makes it repeatable and works towards the Infrastructure as Code concept.

In this demonstration, the steps below will be used.

  1. Export the bacpac file to a local directory with sqlpackage.exe.

  2. Copy the bacpac to Azure Blob Storage with AzCopy.exe.

  3. Use the PowerShell AzureRM module and cmdlets to create an Azure SQL Database from the bacpac file.

Derik shows the point-and-click way as well as the PowerShell way.


Self-Analysis Of SQL Server Dump Files

Arun Sirpal walks through the SQL Server Diagnostics preview:

Notice the region to upload to – if you are using a work machine, I would suggest getting authorisation. The great thing here is that this is GDPR compliant.

Once ready, hit the upload button. It goes through 3 phases: upload, analysis, and a recommendation.

It sends your dump files to an external service, which is important enough to point out. If you want more details on the product, Rony Chatterjee has a FAQ.
