Category: Administration

Alerting On tempdb Growth

Lori Brown shows how to use a SQL Agent alert to warn you if tempdb grows beyond a certain size:

Lastly, create a SQL Alert to notify you as soon as tempdb grows past the threshold you stipulate. Using the GUI to create the alert, you need to fill out every field on the General page and make sure the Enabled checkbox is marked. Create a Name for the alert, then specify the Type as SQL Server performance condition alert. The Object should be Databases, the Counter is Data File(s) Size (KB), and the Instance will be tempdb. The alert will trigger if the counter rises above the value. The Value will depend upon the cumulative size of your tempdb files. In this case each tempdb file is 12GB (or 12,288,000 KB), so the total size is 98,304,000 KB.

I liked the approach of firing the SQL Agent job only after the alert condition was met, rather than running a scheduled job which queries and then creates an e-mail afterward.
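
For reference, the same alert can be scripted with msdb's sp_add_alert instead of the GUI. Here is a minimal sketch using the counter and threshold from the quote; the alert name and the commented-out operator hookup are placeholders of my own, and depending on version/instance the object may need the perfmon prefix (e.g. N'SQLServer:Databases|...'):

USE msdb;
GO

-- Performance condition alert: fires when tempdb data file size exceeds 98,304,000 KB.
-- The performance_condition string follows the 'object|counter|instance|comparator|value' pattern.
EXEC dbo.sp_add_alert
    @name = N'tempdb growth past threshold',           -- placeholder name
    @performance_condition = N'Databases|Data File(s) Size (KB)|tempdb|>|98304000',
    @enabled = 1,
    @delay_between_responses = 900;                     -- wait 15 minutes before re-firing

-- Optionally notify an operator (placeholder operator name):
-- EXEC dbo.sp_add_notification @alert_name = N'tempdb growth past threshold',
--      @operator_name = N'DBA Team', @notification_method = 1;  -- 1 = e-mail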

Comments closed

Automatically Updating dbatools

Garry Bargsley gives us two ways to update dbatools on a schedule:

I have been using dbatools heavily since I was introduced to it.  I have automated processes and created new processes with it.  There are new commands that come out almost daily that fill in certain gaps or enhance current commands.  One way to stay current with these updates is to update your dbatools install frequently.

What better way to do this than to have an auto-update process that will run daily and get the latest dbatools version for you…

I have put together two ways of doing this based on your preferred method.  One is via a SQL Agent Job and the other is using a Windows Task Scheduler job.

Read on for examples of both techniques.
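
As a sketch of the SQL Agent flavor (not Garry's exact job): a job with a single PowerShell step that pulls the latest module, assuming dbatools was installed from the PowerShell Gallery and the Agent service account can reach it. Job and schedule names are placeholders.

USE msdb;
GO

-- Create the job and a PowerShell step that updates dbatools from the Gallery.
EXEC dbo.sp_add_job @job_name = N'Update dbatools';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Update dbatools',
    @step_name = N'Run Update-Module',
    @subsystem = N'PowerShell',
    @command   = N'Update-Module dbatools -Force';   -- assumes an Install-Module installation

-- Run it once a day at 06:00.
EXEC dbo.sp_add_schedule
    @schedule_name     = N'Daily 6 AM',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,
    @active_start_time = 60000;    -- HHMMSS

EXEC dbo.sp_attach_schedule @job_name = N'Update dbatools', @schedule_name = N'Daily 6 AM';
EXEC dbo.sp_add_jobserver   @job_name = N'Update dbatools'; -- target the local server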

Comments closed

Findings From The Field: Reviewing Monitored Servers

Brent Ozar shares some findings from his SQL ConstantCare service:

Let’s look at a sample day from last week when 562 servers sent in data (out of the users who agreed to share their data to improve recommendations):

  • No one was using the -x startup flag

  • No one had known-dangerous third party modules installed

  • No one was running a 32-bit SQL Server

  • No one had high memory usage for in-memory OLTP (Hekaton)

  • No one’s plan cache had been erased recently (aside from server restarts)

  • No one needed to restart to fix a TempDB file configuration mismatch in the DMVs

  • Everyone on 2016 Standard Edition had patched to SP1 to get the free features

Read on to see the rules that customers are struggling with the most, as well as some thoughts on a scenario where a broken rule might not be that important.

Comments closed

Monitoring InfluxDB Using Telegraf

Tracy Boggiano continues her performance metrics series by showing how to monitor the monitoring solution:

We need to generate a special config file for the Linux machine to capture the CPU and disk metrics. This config file will be in your /etc/telegraf folder.

After you run this you will need to open it with your favorite editor; mine is vim:

Once the file is open you can type the letter i to be able to edit the file. Then type /influxdb to find the outputs.influxdb section of the file and edit the same sections we edited for Windows. Type /inputs.influxdb to jump down and edit the InfluxDB input section. After you have finished editing the file in vim you can type Esc, then :wq!, and then hit Enter to save the config file.

Click through to check out her solution.

Comments closed

Notes On Automating Automatic Indexing

Grant Fritchey shares with us some of his findings with automatic indexing on Azure SQL Database:

What you’ll notice is that several of the queries are filtering on the FirstName column. There’s no good index there. If you look at the execution plans for those queries you’ll also note the Missing Index suggestion. That suggestion is a necessary part of the automatic indexing. Yeah, missing indexes. I know. They’re not always accurate. It’s just a suggestion. Blah, blah, blah. I hear you.

The magic is not supplied by missing indexes. The magic is supplied by lots of data. Microsoft can take advantage of three things. Yes, missing index suggestions are first. Then, they can use the query metrics gathered in Query Store to see the behavior of your queries over time. Finally, they can use machine learning algorithms to determine if indexes will be helpful and measure how helpful they’ve been if one gets added. It’s great stuff. Go and read up on it.

Click through for more notes, as well as a PowerShell script you can use to replicate his findings.
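
If you want to poke at the feature yourself, automatic index management is surfaced through the automatic tuning options and the tuning recommendations DMV. A minimal sketch, run against your own Azure SQL Database:

-- Turn on automatic index creation and removal for the current database.
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON, DROP_INDEX = ON);

-- Review what the service has recommended or applied, including estimated impact.
SELECT name,
       type,
       reason,
       state,      -- JSON: current state of the recommendation
       details     -- JSON: index name, key columns, measured impact, etc.
FROM sys.dm_db_tuning_recommendations;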

Comments closed

Tracking Down Long-Running xp_cmdshell Processes

Thomas Rushton investigates what’s taking so long with an xp_cmdshell call:

I wanted to know what he was up to, but the sql_text field only gives “xp_cmdshell”, not anything useful that might help to identify what went wrong.

So we have to go to Task Manager on the server. On the “Process Details” page, you can select which detail columns you want to see. We want to see the Command Line, as that’ll tell us if it’s some manually-launched batch job that’s failed or something else going wrong.

An alternative to using the Task Manager is to open ProcMon, part of the Sysinternals toolset.  It takes a bit of getting used to, but is quite powerful once you know its ins and outs.
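
As a starting point before you even remote onto the server, a quick DMV query will at least show who is running xp_cmdshell and for how long; a minimal sketch, and note it still will not show the command line, which is why Thomas reaches for Task Manager and ProcMon:

-- Sessions currently executing xp_cmdshell, with runtime and calling login.
SELECT r.session_id,
       s.login_name,
       s.host_name,
       r.start_time,
       DATEDIFF(SECOND, r.start_time, SYSDATETIME()) AS running_seconds,
       t.text AS sql_text                -- typically just shows 'xp_cmdshell'
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%xp_cmdshell%';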

Comments closed

Running The Azure DTU Calculator On An Older Server

Jim Donahoe shows us how to get the Azure DTU calculator running on an older server without PowerShell:

I recently had to do an analysis of a client’s database workload using the Azure DTU Calculator (DTU Calculator) and thought it might be interesting to share just how I did that.  I have run this tool numerous times on other clients via the PowerShell method and the Command Line method; however, this client’s environment was Windows Server 2008 R2 and SQL Server 2008 R2 SP3, so it had to be done differently.

Now, from the DTU Calculator page itself, it tells you how the process works.  It essentially runs a perfmon trace for an hour with the following counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

My client did not have PowerShell accessible for me to use, unfortunately.  I normally prefer the PowerShell script; however, in this case I had to use the Command Line Interface. They both return the same results.

Click through to see how Jim did it.
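
As an aside, not part of the DTU Calculator workflow itself: while the perfmon trace runs, the log-throughput portion can be spot-checked from inside SQL Server, since the same counter is exposed in the performance counter DMV. A sketch of my own:

-- Log Bytes Flushed/sec is cumulative here; sample it twice and take the delta over the interval.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Log Bytes Flushed/sec'
  AND instance_name = N'_Total';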

Comments closed

Finding Database Files In The Wrong Folder

Matthew McGiffen has a script which shows which database log and data files are outside of the default folder for that instance:

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore so they get recreated in the data\log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.

Click through for the script.
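
The core idea can be sketched by comparing each file's physical path to the instance defaults. A minimal sketch of my own, assuming SQL Server 2012 or later for the SERVERPROPERTY default-path values; Matthew's script is more complete:

-- Data and log files whose folder differs from the instance's default data/log paths.
DECLARE @DefaultData nvarchar(512) = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS nvarchar(512));
DECLARE @DefaultLog  nvarchar(512) = CAST(SERVERPROPERTY('InstanceDefaultLogPath')  AS nvarchar(512));

SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_name,
       mf.type_desc,
       mf.physical_name
FROM sys.master_files AS mf
WHERE (mf.type_desc = 'ROWS' AND mf.physical_name NOT LIKE @DefaultData + N'%')
   OR (mf.type_desc = 'LOG'  AND mf.physical_name NOT LIKE @DefaultLog  + N'%');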

Comments closed

Copying Azure SQL Databases

Arun Sirpal noticed a problem when he tried to copy an Azure SQL Database:

Now, I was looking at the following code.

CREATE DATABASE CodeDBP1 AS COPY OF CodeDB ( SERVICE_OBJECTIVE = 'P1' ) ;

You would think this is okay? I did, especially with the fact that it parsed and was executing. I was thinking a copy of the CodeDB database would be created as a premium P1 database regardless of what the source database service tier was. This source database is 0.5GB in size under the basic tier and 40 minutes later the copy was still executing. It just didn’t seem right.

Click through for the solution.  If this is going to be normal behavior, I’d really like to see an error message.
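
While a copy like this is in flight, you can watch its progress from the logical server. A minimal sketch, assuming you are connected to the master database:

-- Percentage complete for in-progress database copies.
SELECT database_id, start_date, modify_date, percent_complete
FROM sys.dm_database_copies;

-- Broader view of management operations (create/copy/scale), including error details.
SELECT major_resource_id, operation, state_desc, percent_complete, start_time, error_desc
FROM sys.dm_operation_status
ORDER BY start_time DESC;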

Comments closed

Reading SQL Server Error Logs

Jana Sattainathan has a procedure which makes it easier to read the error logs in SQL Server:

xp_ReadErrorLog has some limitations

  • Reads only the specified error log whose ArchiveNumber is specified
  • Shows only the rows with matching string (not adjacent context info rows)

The first bullet is obvious in that we cannot read ALL the logs to look for something more holistic and meaningful with all the information we have. This MSSQLTips post does describe a method to loop through.

Click through for Jana’s stored procedure code.
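
For context, the underlying call looks like this. The parameters are undocumented, so treat the positions below as commonly observed behavior rather than a contract; the search strings and dates are illustrative:

-- Read archive #0 (the current log) of the SQL Server error log,
-- filtering on two search strings (combined with AND), within a date range, sorted ascending.
EXEC sys.xp_readerrorlog
     0,                           -- archive number
     1,                           -- 1 = SQL Server error log, 2 = SQL Agent log
     N'Error',                    -- search string 1 (must be Unicode)
     N'Severity',                 -- search string 2
     '2018-07-01', '2018-07-31',  -- start and end time
     N'ASC';                      -- sort order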

Comments closed