
Category: Administration

Automatically Updating dbatools

Garry Bargsley gives us two ways to update dbatools on a schedule:

I have been using dbatools heavily since I was introduced to it.  I have automated processes and created new processes with it.  There are new commands that come out almost daily that fill in certain gaps or enhance current commands.  One way to stay current with these updates is to update your dbatools install frequently.

What better way to do this than to have an auto-update process that runs daily and gets the latest dbatools version for you…

I have put together two ways of doing this based on your preferred method.  One is via a SQL Agent Job and the other is using a Windows Task Scheduler job.

Read on for examples of both techniques.
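
Below is a minimal sketch of the Task Scheduler route (not Garry's exact job): it registers a daily task that runs Update-Module, assuming dbatools was installed from the PowerShell Gallery and that this runs from an elevated session. The task name and run time are placeholders.

# Placeholder task name and schedule; requires dbatools installed via Install-Module.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
  -Argument '-NoProfile -Command "Update-Module dbatools -Force"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'Update dbatools' -Action $action -Trigger $trigger

The SQL Agent flavor would amount to much the same one-liner wrapped in a PowerShell job step.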

Findings From The Field: Reviewing Monitored Servers

Brent Ozar shares some findings from his SQL ConstantCare service:

Let’s look at a sample day from last week when 562 servers sent in data (out of the users who agreed to share their data to improve recommendations):

  • No one was using the -x startup flag

  • No one had known-dangerous third party modules installed

  • No one was running a 32-bit SQL Server

  • No one had high memory usage for in-memory OLTP (Hekaton)

  • No one’s plan cache had been erased recently (aside from server restarts)

  • No one needed to restart to fix a TempDB file configuration mismatch in the DMVs

  • Everyone on 2016 Standard Edition had patched to SP1 to get the free features

Read on to see the rules that customers are struggling with the most, as well as some thoughts on a scenario where a broken rule might not be that important.

Monitoring InfluxDB Using Telegraf

Tracy Boggiano continues her performance metrics series by showing how to monitor the monitoring solution:

We need to generate a special config file for the Linux machine to capture the CPU and disk metrics. This config file will be in your /etc/telegraf folder.

After you run this, you will need to open it with your favorite editor; mine is vim:

Once the file is open, you can type the letter i to be able to edit the file. Then type /influxdb to find the outputs.influxdb section of the file and edit the same sections we edited for Windows. Type /inputs.influxdb to jump down and edit the InfluxDB input section. After you have finished editing the file in vim, you can press Esc, then type :wq! and hit Enter to save the config file.

Click through to check out her solution.
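
For reference, the two sections being edited look roughly like this once trimmed down (a sketch: the URL and database name are placeholders, and the generated file carries many more commented-out settings):

[[outputs.influxdb]]
  urls = ["http://localhost:8086"]  # where Telegraf writes its metrics
  database = "telegraf"

[[inputs.influxdb]]
  urls = ["http://localhost:8086/debug/vars"]  # InfluxDB's self-monitoring endpoint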

Notes On Automating Automatic Indexing

Grant Fritchey shares with us some of his findings with automatic indexing on Azure SQL Database:

What you’ll notice is that several of the queries are filtering on the FirstName column. There’s no good index there. If you look at the execution plans for those queries you’ll also note the Missing Index suggestion. That suggestion is a necessary part of the automatic indexing. Yeah, missing indexes. I know. They’re not always accurate. It’s just a suggestion. Blah, blah, blah. I hear you.

The magic is not supplied by missing indexes. The magic is supplied by lots of data. Microsoft can take advantage of three things. Yes, missing index suggestions come first. Then, they can use the query metrics gathered in Query Store to see the behavior of your queries over time. Finally, they can use machine learning algorithms to determine whether indexes will be helpful, and to measure how helpful they’ve been once one gets added. It’s great stuff. Go and read up on it.

Click through for more notes, as well as a Powershell script you can use to replicate his findings.
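
Grant drives his demo with his own script, but if you simply want the feature switched on, the Az.Sql module exposes it through the CreateIndex advisor. A sketch with placeholder resource names (this assumes an authenticated session via Connect-AzAccount, and is not Grant's code):

# Turn on automatic index creation for one database.
Set-AzSqlDatabaseAdvisorAutoExecuteStatus -ResourceGroupName 'MyRg' `
  -ServerName 'myazureserver' -DatabaseName 'MyDb' `
  -AdvisorName 'CreateIndex' -AutoExecuteStatus Enabled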

Tracking Down Long-Running xp_cmdshell Processes

Thomas Rushton investigates what’s taking so long with an xp_cmdshell call:

I wanted to know what he was up to, but the sql_text field only gives “xp_cmdshell”, not anything useful that might help to identify what went wrong.

So we have to go to Task Manager on the server. On the “Process Details” page, you can select which detail columns you want to see. We want to see the Command Line, as that’ll tell us if it’s some manually-launched batch job that’s failed or something else going wrong.

An alternative to using the Task Manager is to open ProcMon, part of the Sysinternals toolset.  It takes a bit of getting used to, but is quite powerful once you know its ins and outs.
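
If you would rather not RDP to the server at all, the same command-line details are reachable over CIM. A sketch, with a placeholder server name, assuming you have admin rights on the box: xp_cmdshell children are cmd.exe processes spawned by sqlservr.exe, so filter on the parent process ID.

# Find the SQL Server process(es), then list cmd.exe children with their command lines.
$sql = Get-CimInstance Win32_Process -ComputerName 'MyServer' -Filter "Name = 'sqlservr.exe'"
Get-CimInstance Win32_Process -ComputerName 'MyServer' -Filter "Name = 'cmd.exe'" |
  Where-Object ParentProcessId -In $sql.ProcessId |
  Select-Object ProcessId, CreationDate, CommandLine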

Running The Azure DTU Calculator On An Older Server

Jim Donahoe shows us how to get the Azure DTU calculator running on an older server without Powershell:

I recently had to do an analysis of a client’s database workload using the Azure DTU Calculator and thought it might be interesting to share just how I did that.  I have run this tool numerous times for other clients via the PowerShell method and the Command Line method; however, this client’s environment was Windows Server 2008 R2 and SQL Server 2008 R2 SP3, so it had to be done differently.

Now, the DTU Calculator page itself tells you how the process works.  It essentially runs a perfmon trace for an hour with the following counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

Unfortunately, my client did not have PowerShell accessible for me to use.  I normally prefer the PowerShell script; however, in this case I had to use the Command Line Interface.  They both return the same results.

Click through to see how Jim did it.
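
The calculator ships its own command-line collector, but if you ever need to gather those four counters by hand on a box without PowerShell, logman can do it. A rough sketch for a default instance (a named instance changes the SQLServer counter prefix, and the output path is a placeholder):

rem Sample every second for one hour, writing CSV output.
logman create counter DtuPerfTrace -f csv -o C:\Temp\DtuPerfTrace -si 00:00:01 -rf 01:00:00 ^
  -c "\Processor(_Total)\% Processor Time" ^
     "\LogicalDisk(_Total)\Disk Reads/sec" ^
     "\LogicalDisk(_Total)\Disk Writes/sec" ^
     "\SQLServer:Databases(_Total)\Log Bytes Flushed/sec"
logman start DtuPerfTrace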

Finding Database Files In The Wrong Folder

Matthew McGiffen has a script that shows which database log and data files are outside the default folders for that instance:

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore, so they get recreated in the data and log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.

Click through for the script.
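
Matthew’s script is T-SQL; as a rough dbatools-flavored equivalent (a sketch with a placeholder instance name, and a naive string compare that ignores trailing-slash differences), compare each file’s folder against the instance defaults:

$defaults = Get-DbaDefaultPath -SqlInstance 'MyServer'
Get-DbaDbFile -SqlInstance 'MyServer' |
  Where-Object {
    ($_.TypeDescription -eq 'ROWS' -and (Split-Path $_.PhysicalName) -ne $defaults.Data) -or
    ($_.TypeDescription -eq 'LOG'  -and (Split-Path $_.PhysicalName) -ne $defaults.Log)
  } |
  Select-Object Database, LogicalName, PhysicalName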

Copying Azure SQL Databases

Arun Sirpal noticed a problem when he tried to copy an Azure SQL Database:

Now, I was looking at the following code.

CREATE DATABASE CodeDBP1
  AS COPY OF CodeDB (SERVICE_OBJECTIVE = 'P1');

You would think this is okay? I did, especially with the fact that it parsed and was executing. I was thinking a copy of the CodeDB database would be created as a premium P1 database regardless of what the source database’s service tier was. This source database is 0.5 GB in size under the Basic tier, and 40 minutes later the copy was still executing. It just didn’t seem right.

Click through for the solution.  If this is going to be normal behavior, I’d really like to see an error message.
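
As an aside, the same copy can be kicked off from Powershell with the Az.Sql module, and the copy’s progress is visible in the sys.dm_database_copies DMV on the target server while it runs. A sketch with placeholder resource names (not Arun’s code):

# Copy CodeDB to CodeDBP1 with an explicit P1 service objective.
New-AzSqlDatabaseCopy -ResourceGroupName 'MyRg' -ServerName 'myazureserver' `
  -DatabaseName 'CodeDB' -CopyDatabaseName 'CodeDBP1' `
  -ServiceObjectiveName 'P1'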

Reading SQL Server Error Logs

Jana Sattainathan has a procedure which makes it easier to read the error logs in SQL Server:

xp_ReadErrorLog has some limitations

  • Reads only the single error log whose ArchiveNumber is specified
  • Shows only the rows with a matching string (not the adjacent rows that provide context)

The first limitation is obvious: we cannot read ALL the logs to look for something more holistic and meaningful using all the information we have. This MSSQLTips post does describe a method to loop through them.

Click through for Jana’s stored procedure code.
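
Jana’s procedure wraps xp_ReadErrorLog itself; as a quick alternative sketch, dbatools can loop through the archives for you (instance name, archive count, and search string are placeholders):

# Read the current log plus the first six archives, then filter on a pattern.
0..6 | ForEach-Object { Get-DbaErrorLog -SqlInstance 'MyServer' -LogNumber $_ } |
  Where-Object Text -Match 'I/O requests taking longer' |
  Select-Object LogDate, Source, Text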

Creating Azure VMs Using Powershell: Laying The Groundwork

Robert Cain has part one of a two-part series on creating VMs in Azure using Powershell:

Creating a virtual machine is great, but it won’t be of much use unless it can communicate outside of itself. That’s where virtual networking comes in. To set up a virtual network, often abbreviated vnet, you need to accomplish three things. First is the creation of the virtual network itself. After the network is created, you need to define a security group for it. In essence, the security group defines a firewall. In the process of creating it, the PSAzure module automatically creates firewall rules that allow HTTP and RDP (Remote Desktop Protocol) traffic through the firewall. There are functions in PSAzure to create security groups at a lower level, allowing one to create alternate rules. This example will demonstrate the most common options.

The final step is to create a virtual NIC, or Network Interface Card. The NIC will form the bridge between the virtual network and the virtual machine, much like a physical network card allows a physical computer to connect to a real network. First off, a few variables are assigned. These will hold the names for the security group, network, and subnet. The network addresses for the main network and subnet are also placed into variables. Finally, a name is assigned to the NIC.

Check it out, especially if you build a lot of VMs in Azure.
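
For comparison with the PSAzure wrapper functions, here is roughly the same groundwork in plain Az cmdlets: a sketch in which the names, location, and address ranges are placeholders, and the security group only opens RDP (PSAzure also opens HTTP).

# Virtual network with one subnet.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '192.168.1.0/24'
$vnet = New-AzVirtualNetwork -Name 'demo-vnet' -ResourceGroupName 'MyRg' -Location 'EastUS' `
  -AddressPrefix '192.168.0.0/16' -Subnet $subnet
# Security group acting as the firewall, with a single inbound RDP rule.
$rdp = New-AzNetworkSecurityRuleConfig -Name 'AllowRDP' -Protocol Tcp -Direction Inbound `
  -Priority 1000 -SourceAddressPrefix '*' -SourcePortRange '*' `
  -DestinationAddressPrefix '*' -DestinationPortRange 3389 -Access Allow
$nsg = New-AzNetworkSecurityGroup -Name 'demo-nsg' -ResourceGroupName 'MyRg' -Location 'EastUS' `
  -SecurityRules $rdp
# The NIC bridges the subnet and the eventual VM.
$nic = New-AzNetworkInterface -Name 'demo-nic' -ResourceGroupName 'MyRg' -Location 'EastUS' `
  -SubnetId $vnet.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id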
