Press "Enter" to skip to content


Creating Azure VMs Using PowerShell: Laying The Groundwork

Robert Cain has part one of a two-part series on creating VMs in Azure using PowerShell:

Creating a virtual machine is great, but it won’t be of much use unless it can communicate outside of itself. That’s where virtual networking comes in. To set up a virtual network, often abbreviated vnet, you need to accomplish three things. First is the creation of the virtual network itself. After the network is created, you need to define a security group for it. In essence, the security group defines a firewall. In the process of creating it, the PSAzure module automatically creates firewall rules that allow HTTP and RDP (Remote Desktop Protocol) traffic through the firewall. There are functions in PSAzure to create security groups at a lower level, allowing one to create alternate rules. This example will demonstrate the most common options.

The final step is to create a virtual NIC, or Network Interface Card. The NIC will form the bridge between the virtual network and the virtual machine, much like a physical network card allows a physical computer to connect to a real network. First off, a few variables are assigned. These will hold names for the security group, network and subnet names. The network addresses for the main network and subnet are also placed into variables. Finally, a name is assigned to the NIC.

Check it out, especially if you build a lot of VMs in Azure.
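
Robert’s examples use his PSAzure wrapper module, so the exact function names are his. As a rough sketch of what those wrappers do underneath, the stock AzureRM cmdlets look something like this (resource group, location, and all names here are hypothetical placeholders):

# Sketch using stock AzureRM cmdlets; PSAzure wraps steps like these.
$rg = 'myresourcegroup'; $loc = 'eastus'
# 1. The virtual network itself, with a single subnet.
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '192.168.1.0/24'
$vnet = New-AzureRmVirtualNetwork -Name 'myvnet' -ResourceGroupName $rg -Location $loc -AddressPrefix '192.168.0.0/16' -Subnet $subnet
# 2. The security group, with rules allowing RDP and HTTP traffic in.
$rdp = New-AzureRmNetworkSecurityRuleConfig -Name 'AllowRDP' -Protocol Tcp -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 3389 -Access Allow
$http = New-AzureRmNetworkSecurityRuleConfig -Name 'AllowHTTP' -Protocol Tcp -Direction Inbound -Priority 1010 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange 80 -Access Allow
$nsg = New-AzureRmNetworkSecurityGroup -Name 'mynsg' -ResourceGroupName $rg -Location $loc -SecurityRules $rdp, $http
# 3. The NIC, bridging the subnet and (eventually) the VM.
$nic = New-AzureRmNetworkInterface -Name 'mynic' -ResourceGroupName $rg -Location $loc -SubnetId $vnet.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id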


Running The SQL Server Features Discovery Report

Dave Mason shows us how to run the SQL Server features discovery report via command prompt and PowerShell:

I don’t need to validate SQL Server installations on a regular basis. When the need arises, my preference is to run the SQL Server features discovery report. Further, I prefer to run it from the command line. After looking up command line parameters one too many times, I decided to script it out.

It turns out the script commands are a little more complicated than I realized: there is a different setup.exe file for each version of SQL Server installed. I ended up making two script versions: a DOS batch file with hard-coded paths, and a PowerShell script that’s more robust. Check them out and let me know what you think. (Keep scrolling down for a report sample image.)

I’m not sure I’ve ever run that report, but now I know how to do it from PowerShell.
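
For reference, the documented setup action is /Action=RunDiscovery, and the report lands under the version-numbered Setup Bootstrap\Log folder. A minimal sketch along the lines of Dave’s approach (not his exact script; paths assume a default installation):

# Find the newest setup.exe under a Setup Bootstrap folder, run the
# discovery report quietly, then open the freshest SqlDiscoveryReport.htm.
$base = 'C:\Program Files\Microsoft SQL Server'
$setup = Get-ChildItem -Path $base -Recurse -Filter setup.exe -ErrorAction SilentlyContinue |
    Where-Object FullName -like '*Setup Bootstrap*' |
    Sort-Object FullName -Descending | Select-Object -First 1
& $setup.FullName /Action=RunDiscovery /q
$report = Get-ChildItem -Path $base -Recurse -Filter SqlDiscoveryReport.htm -ErrorAction SilentlyContinue |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
Invoke-Item $report.FullName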


Configuring Telegraf To Monitor Windows Servers

Tracy Boggiano continues her series on configuring Telegraf:

The solution for this is fairly simple now that you have set up Part 1 of this series. You can download the Windows conf file for Telegraf from my presentation. Below are the important pieces of the file. The main part of the OUTPUT PLUGINS section is to place the data in the InfluxDB database; the data will be housed in the same database as our SQL performance metrics. Next, you can collect any Windows Performance Counters you want and group them into a “Measurement”. I’m using the dashboard that is on the Grafana website along with the performance metrics they have set up to be collected.

Click through for Tracy’s setup script to get an idea of which Perfmon counters she’s tracking.
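
The two pieces Tracy describes map to a telegraf.conf roughly like this (the database name and counter choices are illustrative, not her exact file):

# OUTPUT PLUGINS: write to the same InfluxDB database as the SQL metrics.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"

# INPUT PLUGINS: pick Windows perf counters and group them into a Measurement.
[[inputs.win_perf_counters]]
  [[inputs.win_perf_counters.object]]
    ObjectName = "Processor"
    Counters = ["% Processor Time", "% Privileged Time"]
    Instances = ["*"]
    Measurement = "win_cpu"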


Synchronizing User Logins Across SQL Server Instances

Hamish Watson shows how easy it is to synchronize SQL authenticated logins using dbatools:

When building new servers, the most important thing after restoring and securing the database is syncing up the users. This is especially important for Availability Groups, as SQL authenticated logins require the SIDs to be the same.

In the past I had some very long-winded code that would do the sync – it was a mixture of T-SQL and PowerShell. It worked, but you know – it was cumbersome.

Read on to see how life gets easier with dbatools.
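
With dbatools, that long-winded code becomes essentially a one-liner. A minimal sketch, assuming dbatools is installed and with placeholder server names:

# Copy-DbaLogin recreates SQL authenticated logins with their original
# SID and password hash, which is exactly what Availability Groups need.
Copy-DbaLogin -Source 'SQL01' -Destination 'SQL02' -ExcludeSystemLogins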


Auto Soft-NUMA And Scheduler Waits

Joe Obbish walks us through a scenario with automatic soft-NUMA leading to poor performance:

Consider a server with soft-NUMA nodes of 8 schedulers with MAXDOP 8. The first parallel query will be sent to NUMA node 0. The number of active workers matches the number of schedulers exactly, so each active worker is assigned to a different scheduler in the NUMA node. The second parallel query will be sent to NUMA node 1. The third parallel query will be sent to NUMA node 2, and so on. Execution of serial queries or creation of sessions does not matter; that advances a counter that’s separate from the “global enumerator” used for parallel query scheduler placement. As far as I can tell the scheduler assigned to execution context 0 does not affect the scheduling of the parallel worker threads, although it can certainly affect parallel query performance.

The scenario described above doesn’t sound so bad. It can work well if the parallel queries take roughly the same amount of time to complete and query MAXDOP matches the number of schedulers per soft-NUMA node. Problems can emerge when at least one of those is not true. With the spread selection type it’s possible that the amount of work already assigned to schedulers has no effect on parallel query scheduler placement. Let that sink in. You could have 100 serial queries all assigned to schedulers in NUMA node 0 but SQL Server may send a parallel query to that NUMA node. It depends on the position of the “global enumerator” as opposed to current work on the server.

Joe offers up some alternatives if you find yourself dealing with this issue.  Definitely a must-read.
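
If you want to see how this applies to your own instance, SQL Server 2016 and later expose the soft-NUMA configuration and the scheduler-to-node layout through DMVs, something like:

-- Is soft-NUMA off, automatic, or manually configured?
SELECT softnuma_configuration_desc FROM sys.dm_os_sys_info;

-- How many schedulers sit in each node?
SELECT parent_node_id, COUNT(*) AS schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id;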


Deleting Lots Of Data

Kenneth Fisher wants to delete a lot of rows:

I recently had the task of deleting a bit over a billion rows from a table. Now I could have done just this:

DELETE FROM tablename WHERE createdate >= '1/1/2017'

But I have a few problems here. The table has no index on createdate, potentially causing problems with tempdb (the sort on createdate), although in this case tempdb is pretty large because of some large batch work done at various times. I’m also going to be deleting over a billion rows of the table’s ~6 billion, which is probably going to fill up the log of the database (which fortunately isn’t in use yet) and end up rolling back my delete anyway. Even if I don’t fill up the log, I’m still going to bloat it pretty badly (autogrowth). And last, and anything but least, this is on a production server. Even if this database was on its own drive (meaning growth of the log can’t cause a problem with any other databases) that tempdb thing (let alone other resource usage) is going to be an issue.

Read on to see how to delete in batches.  My pattern is to have an explicit transaction within the WHILE loop, opening and closing for each deletion operation.  That has worked pretty well in the past when deleting large numbers of rows from a table.  It might also make sense to put a temporary filtered index on the table, dropping it afterward.
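
A minimal sketch of that pattern, using Kenneth’s table and date (written unambiguously as 20170101; the batch size is an arbitrary starting point to tune, and in full recovery you’d take log backups between batches to keep the log in check):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;
    -- Each batch commits on its own, so log space can be reused as you go.
    DELETE TOP (100000) FROM dbo.tablename
    WHERE createdate >= '20170101';
    SET @rows = @@ROWCOUNT;
    COMMIT TRANSACTION;
END;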


Essential SQL Server Tools

Tracy Boggiano has a top 5 list of tools she uses on a day-to-day basis:

This T-SQL Tuesday is brought to us by Jens Vestergaard (b | t), and we are asked to share our favorite SQL Server tools. Hint: Profiler will not be on the list. But where do you start? There are so many tools out there. In alphabetical order, here are my top 5 tools, because I can’t pick which one is better than the others.

Click through to see Tracy’s top 5 list.


Collecting Login Details With dbatools

Chrissy LeMaire shows us several ways to track who has connected to your SQL Server instance:

Using the default trace is pretty lightweight and backwards compatible. While I generally try to avoid traces, I like this method because it doesn’t require remote access, it works on older SQL instances, it’s accurate and reading from the trace isn’t as CPU-intensive as it would be with an Extended Event.

Click through for details on this as well as three other methods, along with the dbatools glue to make it work.
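
Chrissy wires this up through dbatools, but the default-trace method boils down to T-SQL along these lines (EventClass 14 is Audit Login):

DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

-- Who has logged in, from where, and with what application.
SELECT LoginName, HostName, ApplicationName, MAX(StartTime) AS LastSeen
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 14
GROUP BY LoginName, HostName, ApplicationName;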


Things To Think About Before Detaching And Attaching A Database

Jana Sattainathan has a few considerations before running a detach-attach operation:

On the detached database files, the file permissions change when detaching: the AD account performing the detach operation (your AD account) becomes the owner and the only person with permissions to the file. It does not inherit the permissions from the folder. Oddly, if you just detach and reattach, it attaches fine even though the SQL Server service account does not have any explicit permissions on the files. However, others cannot attach a file that you detached, even if they are in the Administrators group, until they are explicitly granted permissions on the files themselves. This is well explained in this MSSQLTips article. Quoting the article, if you want to retain file permissions on detach, set trace flag 1802.

“SQL Server 2005 introduced trace flag 1802 which retains the database files permission after the detach operation. The trace flag is tested and still applicable with SQL Server 2016.”

Click through for several tips of similar ilk.
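
Per the MSSQLTips article Jana quotes, enabling that trace flag globally is a one-liner (add -T1802 as a startup parameter if it needs to survive a service restart):

DBCC TRACEON (1802, -1);
DBCC TRACESTATUS (1802);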


Configuring Azure SQL Analytics

Esat Erkec has a guide showing how to configure and use Azure SQL Analytics on Azure SQL Database:

The most important and challenging responsibility of a database administrator is monitoring performance metrics, because monitoring performance and troubleshooting performance issues are difficult. For this reason, we need diagnostic and monitoring tools to measure performance counters and metrics. For Azure SQL there is a tool named SQL Analytics. With this tool, we can measure and monitor Azure SQL databases and elastic pools; at the same time, we can create alerts for notifications. SQL Analytics offers performance metrics in graphical form. In this article, we will learn how to enable Azure SQL Analytics.

This is a long and screenshot-filled post, which is helpful if you’re getting started.
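
Esat’s walkthrough uses the portal; if you’d rather script it, the underlying step is enabling diagnostic settings on the database, pointed at the Log Analytics workspace that SQL Analytics reads from. A rough AzureRM-era sketch, with every resource name hypothetical:

# Point the database's diagnostic settings at the Log Analytics workspace.
$db = Get-AzureRmResource -ResourceGroupName 'rg-sql' -ResourceType 'Microsoft.Sql/servers/databases' -ResourceName 'sqlserver01/mydb'
$ws = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName 'rg-oms' -Name 'my-workspace'
Set-AzureRmDiagnosticSetting -ResourceId $db.ResourceId -WorkspaceId $ws.ResourceId -Enabled $true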
