Press "Enter" to skip to content

Category: Administration

Tracking Down Long-Running xp_cmdshell Processes

Thomas Rushton investigates what’s taking so long with an xp_cmdshell call:

I wanted to know what he was up to, but the sql_text field only gives “xp_cmdshell”, not anything useful that might help to identify what went wrong.

So we have to go to Task Manager on the server. On the “Process Details” page, you can select which detail columns you want to see. We want to see the Command Line, as that’ll tell us if it’s some manually-launched batch job that’s failed or something else going wrong.

An alternative to using the Task Manager is to open ProcMon, part of the Sysinternals toolset.  It takes a bit of getting used to, but is quite powerful once you know its ins and outs.
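
If you’d rather script it than click around Task Manager, WMI exposes the same command-line detail. A quick sketch, assuming a single SQL Server instance on the box:

# List the command lines of processes spawned by sqlservr.exe -- the same
# "Command line" column the Task Manager approach surfaces.
$sqlPids = (Get-CimInstance Win32_Process -Filter "Name = 'sqlservr.exe'").ProcessId
Get-CimInstance Win32_Process |
    Where-Object { $_.ParentProcessId -in $sqlPids } |
    Select-Object ProcessId, Name, CommandLine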

Running The Azure DTU Calculator On An Older Server

Jim Donahoe shows us how to get the Azure DTU calculator running on an older server without PowerShell:

I recently had to do an analysis of a client’s database workload using the Azure DTU Calculator and thought it might be interesting to share just how I did that. I have run this tool numerous times on other clients via the PowerShell method and the Command Line method; however, this client’s environment was Windows Server 2008 R2 with SQL Server 2008 R2 SP3, so it had to be done differently.

Now, from the DTU Calculator page itself, it tells you how the process works.  It essentially runs a perfmon trace for an hour with the following counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

My client did not have PowerShell accessible for me to use, unfortunately. I normally prefer the PowerShell script, but in this case I had to use the Command Line Interface; both return the same results.

Click through to see how Jim did it.
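
If you just want to gather those counters yourself, Windows PowerShell can do it in a couple of lines. A sketch, assuming a default instance (a named instance uses MSSQL$InstanceName in place of SQLServer):

# Sample the DTU calculator's four counters once a second for an hour
# and save the results to CSV.
$counters = '\Processor(_Total)\% Processor Time',
            '\LogicalDisk(_Total)\Disk Reads/sec',
            '\LogicalDisk(_Total)\Disk Writes/sec',
            '\SQLServer:Databases(_Total)\Log Bytes Flushed/sec'
Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 3600 |
    Export-Counter -Path 'C:\Temp\dtu-counters.csv' -FileFormat CSV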

Finding Database Files In The Wrong Folder

Matthew McGiffen has a script which shows which database log and data files are outside of the default folder for that instance:

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore so they get recreated in the data\log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.

Click through for the script.
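
Matthew’s script is the one to grab; purely as a rough sketch of the idea, you can compare sys.master_files against the instance defaults. Note that SERVERPROPERTY('InstanceDefaultDataPath') and ('InstanceDefaultLogPath') need SQL Server 2012 or later, and the instance name here is a placeholder:

# Flag data and log files living outside the instance's default folders.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
DECLARE @data nvarchar(260) = CONVERT(nvarchar(260), SERVERPROPERTY('InstanceDefaultDataPath'));
DECLARE @log  nvarchar(260) = CONVERT(nvarchar(260), SERVERPROPERTY('InstanceDefaultLogPath'));
SELECT DB_NAME(database_id) AS [database], type_desc, physical_name
FROM sys.master_files
WHERE (type_desc = 'ROWS' AND physical_name NOT LIKE @data + N'%')
   OR (type_desc = 'LOG'  AND physical_name NOT LIKE @log  + N'%');
"@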

Copying Azure SQL Databases

Arun Sirpal noticed a problem when he tried to copy an Azure SQL Database:

Now, I was looking at the following code.

CREATE DATABASE CodeDBP1
  AS COPY OF CodeDB (SERVICE_OBJECTIVE = 'P1');

You would think this is okay? I did, especially with the fact that it parsed and was executing. I was thinking a copy of the CodeDB database would be created as a premium P1 database regardless of what the source database’s service tier was. This source database is 0.5GB in size under the basic tier, and 40 minutes later the copy was still executing. It just didn’t seem right.

Click through for the solution.  If this is going to be normal behavior, I’d really like to see an error message.
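
In the meantime, you can at least watch a copy’s progress from the logical server’s master database. A sketch (server name is a placeholder, authentication omitted, and it requires the SqlServer module):

# Check the status of in-flight database copies on an Azure SQL logical server.
Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' -Database master -Query @"
SELECT database_id, start_date, percent_complete, error_desc
FROM sys.dm_database_copies;
"@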

Reading SQL Server Error Logs

Jana Sattainathan has a procedure which makes it easier to read the error logs in SQL Server:

xp_ReadErrorLog has some limitations:

  • Reads only the specified error log whose ArchiveNumber is specified
  • Shows only the rows with matching string (not adjacent context info rows)

The first bullet is obvious in that we cannot read ALL the logs to look for something more holistic and meaningful with all the information we have. This MSSQLTips post does describe a method to loop through them.

Click through for Jana’s stored procedure code.
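
Jana’s procedure handles both limitations; if dbatools is in your toolbox, a sketch like this at least addresses the first one from PowerShell (instance name and search string are placeholders):

# Read the current error log plus the six most recent archives, then filter.
0..6 | ForEach-Object { Get-DbaErrorLog -SqlInstance 'SQL01' -LogNumber $_ } |
    Where-Object Text -match 'login failed' |
    Select-Object LogDate, Text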

Creating Azure VMs Using PowerShell: Laying The Groundwork

Robert Cain has part one of a two-part series on creating VMs in Azure using PowerShell:

Creating a virtual machine is great, but it won’t be of much use unless it can communicate outside of itself. That’s where virtual networking comes in. To set up a virtual network, often abbreviated vnet, you need to accomplish three things. First is the creation of the virtual network itself. After the network is created, you need to define a security group for it. In essence, the security group defines a firewall. In the process of creating it, the PSAzure module automatically creates firewall rules that allow HTTP and RDP (Remote Desktop Protocol) traffic through the firewall. There are functions in PSAzure to create security groups at a lower level, allowing one to create alternate rules. This example will demonstrate the most common options.

The final step is to create a virtual NIC, or Network Interface Card. The NIC will form the bridge between the virtual network and the virtual machine, much like a physical network card allows a physical computer to connect to a real network. First off, a few variables are assigned. These will hold the names for the security group, network, and subnet. The network addresses for the main network and subnet are also placed into variables. Finally, a name is assigned to the NIC.

Check it out, especially if you build a lot of VMs in Azure.
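
Robert uses his PSAzure module for this; purely as a hedged sketch of what those three steps look like with the stock Az cmdlets (every name and address range below is a placeholder):

# 1. Virtual network with one subnet.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '192.168.1.0/24'
$vnet   = New-AzVirtualNetwork -Name 'DemoVnet' -ResourceGroupName 'DemoRG' `
              -Location 'eastus' -AddressPrefix '192.168.0.0/16' -Subnet $subnet

# 2. Security group with an inbound RDP rule (an HTTP rule looks much the same).
$rdp = New-AzNetworkSecurityRuleConfig -Name 'AllowRDP' -Protocol Tcp -Direction Inbound `
           -Priority 1000 -SourceAddressPrefix '*' -SourcePortRange '*' `
           -DestinationAddressPrefix '*' -DestinationPortRange 3389 -Access Allow
$nsg = New-AzNetworkSecurityGroup -Name 'DemoNsg' -ResourceGroupName 'DemoRG' `
           -Location 'eastus' -SecurityRules $rdp

# 3. NIC bridging the subnet and the future VM, with the security group attached.
$nic = New-AzNetworkInterface -Name 'DemoNic' -ResourceGroupName 'DemoRG' -Location 'eastus' `
           -SubnetId $vnet.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id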

Running The SQL Server Features Discovery Report

Dave Mason shows us how to run the SQL Server features discovery report via command prompt and PowerShell:

I don’t need to validate SQL Server installations on a regular basis. When the need arises, my preference is to run the SQL Server features discovery report. Further, I prefer to run it from the command line. After looking up command line parameters one too many times, I decided to script it out.

It turns out the script commands are a little more complicated than I realized: there is a different setup.exe file for each version of SQL Server installed. I ended up making two script versions: a DOS batch file with hard-coded paths, and a PowerShell script that’s more robust. Check them out and let me know what you think. (Keep scrolling down for a report sample image.)

I’m not sure I’ve ever run that report, but now I know how to do it from PowerShell.
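
The core of the command-line method is a sketch like this one; finding the right setup.exe for each installed version is the part Dave’s scripts handle properly:

# Find each version's setup.exe under Setup Bootstrap, take the newest,
# and run the discovery action; the HTML report lands under the matching
# Setup Bootstrap\Log folder.
$setup = Get-ChildItem "$env:ProgramFiles\Microsoft SQL Server" -Filter setup.exe -Recurse |
    Where-Object FullName -match 'Setup Bootstrap' |
    Sort-Object FullName -Descending |
    Select-Object -First 1
& $setup.FullName /q /Action=RunDiscovery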

Configuring Telegraf To Monitor Windows Servers

Tracy Boggiano continues her series on configuring Telegraf:

The solution for this is fairly simple now that you have set up Part 1 of this series. You can download the Windows conf file for Telegraf from my presentation. Below are the important pieces of the file. The main job of the OUTPUT PLUGINS section is to place the data in the InfluxDB database. The data will be housed in the same database as our SQL performance metrics. Next, you can collect any Windows Performance Counters you want and group them into a “Measurement”. I’m using the dashboard that is on the Grafana website along with the performance metrics they have set up to be collected.

Click through for Tracy’s setup script to get an idea of which Perfmon counters she’s tracking.
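
Once you have the conf file in place, hooking Telegraf up as a Windows service is short work. A sketch, assuming the binaries and conf file live in C:\telegraf:

# Register Telegraf as a service pointed at the conf file, then start it.
& 'C:\telegraf\telegraf.exe' --service install --config 'C:\telegraf\telegraf.conf'
Start-Service telegraf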

Synchronizing User Logins Across SQL Server Instances

Hamish Watson shows how easy it is to synchronize SQL authenticated logins using dbatools:

When building new servers, the most important thing after restoring and securing the database is syncing up the users. This is especially important for Availability Groups, as SQL-authenticated users require the SIDs to be the same.

In the past I had some very long-winded code that would do the sync – it was a mixture of T-SQL and PowerShell. It worked, but you know – it was cumbersome.

Read on to see how life gets easier with dbatools.
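
The dbatools piece boils down to a single cmdlet. A minimal sketch (instance and login names are placeholders):

# Copy a SQL authenticated login between instances; dbatools carries the
# password hash and SID across, which is exactly what availability groups need.
Copy-DbaLogin -Source 'SQL01' -Destination 'SQL02' -Login 'AppLogin'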

Auto Soft-NUMA And Scheduler Waits

Joe Obbish walks us through a scenario with automatic soft-NUMA leading to poor performance:

Consider a server with soft-NUMA nodes of 8 schedulers with MAXDOP 8. The first parallel query will be sent to NUMA node 0. The number of active workers matches the number of schedulers exactly, so each active worker is assigned to a different scheduler in the NUMA node. The second parallel query will be sent to NUMA node 1. The third parallel query will be sent to NUMA node 2, and so on. Execution of serial queries or creation of sessions does not matter. That advances a counter that’s separate from the “global enumerator” used for parallel query scheduler placement. As far as I can tell, the scheduler assigned to execution context 0 does not affect the scheduling of the parallel worker threads, although it can certainly affect parallel query performance.

The scenario described above doesn’t sound so bad. It can work well if the parallel queries take roughly the same amount of time to complete and query MAXDOP matches the number of schedulers per soft-NUMA node. Problems can emerge when at least one of those is not true. With the spread selection type, it’s possible that the amount of work already assigned to schedulers has no effect on parallel query scheduler placement. Let that sink in. You could have 100 serial queries all assigned to schedulers in NUMA node 0, but SQL Server may send a parallel query to that NUMA node. It depends on the position of the “global enumerator” as opposed to current work on the server.

Joe offers up some alternatives if you find yourself dealing with this issue.  Definitely a must-read.
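
If you want to check whether your schedulers are lopsided before digging in, a quick sketch against sys.dm_os_schedulers (instance name is a placeholder; requires the SqlServer module):

# Summarize task counts per NUMA node to spot uneven scheduler placement.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
SELECT parent_node_id,
       COUNT(*)                  AS schedulers,
       SUM(current_tasks_count)  AS current_tasks,
       SUM(runnable_tasks_count) AS runnable_tasks
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;
"@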
