Press "Enter" to skip to content

Category: Administration

Update Statistics After An Upgrade

Erin Stellato gives us some good life advice:

There are a variety of methods we use for helping customers upgrade to a new SQL Server version, and one question we often get asked is whether or not statistics need to be updated as part of the upgrade process.

tl;dr

Yes.  Update statistics after an upgrade.  Further, if you’re upgrading to 2012 or higher from an earlier version, you should rebuild your indexes (which will update index statistics, so then you just need to update column statistics).

Make this one of your “Not too long; did read” posts of the day.
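If you want something concrete to start from, here is a minimal sketch of that advice for a single table; the database and table names are placeholders, and in practice you would loop over your tables or lean on your preferred maintenance solution.

  -- Rebuild indexes (which refreshes index statistics with a full scan),
  -- then update only the column statistics. Names below are placeholders.
  USE YourDatabase;
  GO
  ALTER INDEX ALL ON dbo.YourTable REBUILD;
  GO
  -- Index statistics were already refreshed by the rebuild above,
  -- so only the column statistics still need attention.
  UPDATE STATISTICS dbo.YourTable WITH FULLSCAN, COLUMNS;
  GO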


An RDS Postmortem

Andy Isaacson covers a performance issue Honeycomb experienced with RDS:

In retrospect, the failure chain had just 4 links:

  1. The RDS MySQL database instance backing our production environment experienced a sudden and massive performance degradation; P95(query_time) went from 11 milliseconds to >1000 milliseconds, while write-operation throughput dropped from 780/second to 5/second, in just 20 seconds.

  2. RDS did not identify a failure, so the Multi-AZ feature did not activate to fail over to the hot spare.

  3. As a result of the delays due to increased query_time, the Go MySQL client’s connection pool filled up with connections waiting for slow query results to come back and opened more connections to compensate.

  4. This exceeded the max_connections setting on the MySQL server, leading to cron jobs and newly-started daemons being unable to connect to the database and triggering many “Error 1040: Too many connections” log messages.

This was very interesting to read, and I applaud companies that make these kinds of post-mortems public, especially because the idea of publicizing the reasons for failures is so scary.


Determining Compute Resources From Managed Instances

Dimitri Furman explains Azure SQL Database Managed Instance resource allocations:

In the current Azure SQL Database Managed Instance (MI) preview, when customers create a new instance, they can allocate a certain number of CPU vCores and a certain amount of disk storage space for the instance. However, there is no explicit configuration option for the amount of memory allocated to the instance, because on MI, memory allocation is proportional to the number of vCores used.

How can a customer determine the actual amount of memory their MI instance can use, in GB? The answer is less obvious than it may seem. Using the traditional SQL Server methods will not provide the right answer on MI. In this article, we will go over the technical details of CPU and memory allocation on MI, and describe the correct way to answer this question.

The information and behavior described in this article are as of the time of writing (April 2018). Some aspects of MI behavior, including the visibility of certain compute resource allocations, may be temporary and will likely change as MI progresses from the current preview to general availability and beyond. Nevertheless, customers using MI in preview will find that this article answers some of the common questions about MI resource allocation.

What you see in SQL Server Management Studio is true, but it is also not the whole picture.
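For reference, these are the traditional box-product DMVs you would normally reach for; as Dimitri explains, on Managed Instance the numbers they return do not necessarily reflect the memory the instance is actually allowed to use, which is exactly why the article is worth the read.

  -- The traditional way of checking memory on SQL Server. On Managed
  -- Instance these figures may not match the real memory limit.
  SELECT physical_memory_kb / 1024.0 / 1024.0  AS physical_memory_gb,
         committed_target_kb / 1024.0 / 1024.0 AS committed_target_gb
  FROM sys.dm_os_sys_info;

  SELECT physical_memory_in_use_kb / 1024.0 / 1024.0 AS process_memory_in_use_gb
  FROM sys.dm_os_process_memory;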


Alerting On tempdb Growth

Lori Brown shows how to use a SQL Agent alert to warn you if tempdb grows beyond a certain size:

Lastly, create a SQL Alert to notify you as soon as tempdb grows past the threshold you stipulate. Using the GUI to create the alert, you need to fill out every field on the General page and make sure the Enabled checkbox is marked. Create a name for the alert, then specify the Type as SQL Server performance condition alert. The Object should be Databases, the Counter is Data File(s) Size (KB), and the Instance will be tempdb. The alert will trigger if the counter rises above the value. The Value will depend upon the cumulative size of your tempdb files. In this case each of the eight tempdb files is 12 GB (12,288,000 KB), so the total size is 98,304,000 KB.

I liked the approach of firing the SQL Agent job only once the alert condition is met, rather than running a scheduled job which queries the size and then sends an e-mail.
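If you prefer scripting the alert rather than clicking through the GUI, here is a rough T-SQL sketch of the same idea. The alert and operator names are placeholders, the threshold is the 98,304,000 KB from Lori's example, and on a named instance the performance object may need the MSSQL$InstanceName: prefix instead of SQLServer:.

  USE msdb;
  GO
  -- Performance condition format: object|counter|instance|comparator|value.
  EXEC dbo.sp_add_alert
      @name = N'tempdb growth past threshold',
      @enabled = 1,
      @delay_between_responses = 300,
      @performance_condition = N'SQLServer:Databases|Data File(s) Size (KB)|tempdb|>|98304000';
  GO
  -- Assumes an operator named "DBA Team" already exists.
  EXEC dbo.sp_add_notification
      @alert_name = N'tempdb growth past threshold',
      @operator_name = N'DBA Team',
      @notification_method = 1;  -- 1 = e-mail
  GO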


Automatically Updating dbatools

Garry Bargsley gives us two ways to update dbatools on a schedule:

I have been using dbatools heavily since I was introduced to it.  I have automated processes and created new processes with it.  There are new commands that come out almost daily that fill in certain gaps or enhance current commands.  One way to stay current with these updates is to update your dbatools install frequently.

What better way to do this than an auto-update process that runs daily and gets the latest dbatools version for you…

I have put together two ways of doing this based on your preferred method.  One is via a SQL Agent Job and the other is using a Windows Task Scheduler job.

Read on for examples of both techniques.
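Purely as an illustrative sketch (not Garry's exact script), the SQL Agent flavor could be as small as a daily job with one PowerShell step. The job name and schedule are placeholders, and it assumes dbatools was installed from the PowerShell Gallery and that the Agent service account can reach it.

  USE msdb;
  GO
  EXEC dbo.sp_add_job @job_name = N'Update dbatools';
  EXEC dbo.sp_add_jobstep
      @job_name  = N'Update dbatools',
      @step_name = N'Update-Module dbatools',
      @subsystem = N'PowerShell',
      @command   = N'Update-Module dbatools -Force';
  EXEC dbo.sp_add_jobschedule
      @job_name = N'Update dbatools',
      @name = N'Daily at 3 AM',
      @freq_type = 4,            -- daily
      @freq_interval = 1,
      @active_start_time = 030000;
  EXEC dbo.sp_add_jobserver @job_name = N'Update dbatools', @server_name = N'(LOCAL)';
  GO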


Findings From The Field: Reviewing Monitored Servers

Brent Ozar shares some findings from his SQL ConstantCare service:

Let’s look at a sample day from last week when 562 servers sent in data (out of the users who agreed to share their data to improve recommendations):

  • No one was using the -x startup flag

  • No one had known-dangerous third party modules installed

  • No one was running a 32-bit SQL Server

  • No one had high memory usage for in-memory OLTP (Hekaton)

  • No one’s plan cache had been erased recently (aside from server restarts)

  • No one needed to restart to fix a TempDB file configuration mismatch in the DMVs

  • Everyone on 2016 Standard Edition had patched to SP1 to get the free features

Read on to see the rules that customers are struggling with the most, as well as some thoughts on a scenario where a broken rule might not be that important.
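A few of these findings map to quick checks you can run yourself. The queries below are rough, home-grown approximations, not the actual SQL ConstantCare rules.

  -- Bitness and patch level (e.g. Standard Edition (64-bit), SP1):
  SELECT SERVERPROPERTY('Edition')        AS edition,
         SERVERPROPERTY('ProductLevel')   AS product_level,
         SERVERPROPERTY('ProductVersion') AS product_version;

  -- tempdb data files whose current size no longer matches the configured
  -- size recorded in master (one possible symptom behind the
  -- "restart to fix tempdb" finding):
  SELECT mf.name,
         mf.size AS configured_size_pages,
         df.size AS current_size_pages
  FROM sys.master_files AS mf
  JOIN tempdb.sys.database_files AS df ON df.file_id = mf.file_id
  WHERE mf.database_id = DB_ID('tempdb')
    AND mf.type_desc = 'ROWS'
    AND mf.size <> df.size;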


Monitoring InfluxDB Using Telegraf

Tracy Boggiano continues her performance metrics series by showing how to monitor the monitoring solution:

We need to generate a special config file for the Linux machine to capture the CPU and disk metrics. This config file will be in your /etc/telegraf folder.

After you run this, you will need to open it with your favorite editor; mine is vim:

Once the file is open, you can type the letter i to edit it. Then type /influxdb to find the outputs.influxdb section of the file and edit the same sections we edited for Windows. Type /inputs.influxdb to jump down and edit the InfluxDB input section. After you have finished editing the file in vim, type Esc, then :wq!, and hit Enter to save the config file.

Click through to check out her solution.


Notes On Automating Automatic Indexing

Grant Fritchey shares with us some of his findings with automatic indexing on Azure SQL Database:

What you’ll notice is that several of the queries are filtering on the FirstName column. There’s no good index there. If you look at the execution plans for those queries you’ll also note the Missing Index suggestion. That suggestion is a necessary part of the automatic indexing. Yeah, missing indexes. I know. They’re not always accurate. It’s just a suggestion. Blah, blah, blah. I hear you.

The magic is not supplied by missing indexes. The magic is supplied by lots of data. Microsoft can take advantage of three things. Yes, missing index suggestions are first. Then, they can use the query metrics gathered in Query Store to see the behavior of your queries over time. Finally, they can use machine learning algorithms to determine if indexes will be helpful and measure how helpful they’ve been if one gets added. It’s great stuff. Go read up on it.

Click through for more notes, as well as a PowerShell script you can use to replicate his findings.
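If you just want to watch the feature think, Azure SQL Database (and SQL Server 2017 and later) exposes its recommendations through a DMV. This is only a way to observe what Grant describes, not his repro script, and the JSON path shown is the one used in the documentation examples; it may be NULL for some recommendation types.

  -- Automatic index creation is enabled per database, e.g.:
  -- ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (CREATE_INDEX = ON);
  SELECT name,
         type,
         reason,
         score,
         JSON_VALUE(details, '$.implementationDetails.script') AS implementation_script
  FROM sys.dm_db_tuning_recommendations;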


Tracking Down Long-Running xp_cmdshell Processes

Thomas Rushton investigates what’s taking so long with an xp_cmdshell call:

I wanted to know what he was up to, but the sql_text field only gives “xp_cmdshell”, not anything useful that might help to identify what went wrong.

So we have to go to Task Manager on the server. On the “Process Details” page, you can select which detail columns you want to see. We want to see the Command Line, as that’ll tell us if it’s some manually-launched batch job that’s failed or something else going wrong.

An alternative to using the Task Manager is to open ProcMon, part of the Sysinternals toolset.  It takes a bit of getting used to, but is quite powerful once you know its ins and outs.
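From the SQL side, you can at least find who is running xp_cmdshell and for how long, even though (as Thomas notes) the actual command line is only visible on the server itself. A quick DMV query along these lines:

  SELECT r.session_id,
         r.start_time,
         s.login_name,
         s.host_name,
         t.text AS sql_text   -- will usually just show "xp_cmdshell"
  FROM sys.dm_exec_requests AS r
  JOIN sys.dm_exec_sessions AS s ON s.session_id = r.session_id
  CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
  WHERE t.text LIKE '%xp_cmdshell%'
    AND r.session_id <> @@SPID;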


Running The Azure DTU Calculator On An Older Server

Jim Donahoe shows us how to get the Azure DTU calculator running on an older server without PowerShell:

I recently had to do an analysis of a client’s database workload using the Azure DTU Calculator (DTU Calculator) and thought it might be interesting to share just how I did that.  I have run this tool numerous times on other clients via the PowerShell method and the Command Line method; however, this client’s environment was Windows Server 2008 R2 with SQL Server 2008 R2 SP3, so it had to be done differently.

Now, the DTU Calculator page itself tells you how the process works.  It essentially runs a perfmon trace for an hour with the following counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

Unfortunately, my client did not have PowerShell accessible for me to use.  I normally prefer the PowerShell script; however, in this case I had to use the Command Line Interface. They both return the same results.

Click through to see how Jim did it.
