
Category: Administration

DNS Aliases

Drew Furgiuele shows us how to use CNAME records to give us easy aliases for servers hosting SQL Server:

When you connect to a SQL Server instance, you’re most likely connecting directly to the host name of the server running that instance. So for example, if the host name of my instance is SQLSERVER-A, then in my SSMS connection (or application) I probably type in a host name or fully qualified domain name (FQDN) of a server. If I want to move a database from one server to another, or stand up a new server and move everything over to it, from now on I’ll need to make sure I type in the new host name. And for a DBA, this is fine. We mostly identify our servers by the hosts they run on.

Developers and users, on the other hand, don’t always think like that. They each probably only care about one or two databases on a given instance, and depending on their release cycle, even something as simple as changing a connection string might need to be a scheduled change. So when databases move or you build a new server, you need to ensure not only as little downtime as possible from a system perspective, but also as little impact as possible on users’ systems. And you can do that with the help of your network team and your local domain name system: DNS.

I’ve had great experiences with CNAME records masking actual server names.  This was most relevant in an environment in which devs just couldn’t remember which X-Men character was the production SQL Server and which was QA.
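If you go this route, it’s handy to be able to confirm which host an aliased connection actually landed on. A minimal T-SQL sketch (the alias itself lives in DNS; nothing here is specific to Drew’s setup):

    -- Run this while connected through the CNAME alias to see which
    -- instance and host actually answered, e.g. after a migration.
    SELECT @@SERVERNAME AS instance_name,
           SERVERPROPERTY('MachineName') AS host_name,
           SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS physical_node; -- differs from MachineName on a cluster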


Shredding Event Data

Jason Brimhall has a script to shred extended events:

In the following script, I have tried to accomplish just that – a single script to create the entire XML parser for me, for you, and for anybody wishing to use it. I don’t want to have to remember the subtle nuances of how to parse each of the events each time I need to parse the session data. I want something that is quick, easy, and repeatable.

With all of that said, here is the script that I now use to parse my session data. You should notice that it has been simplified and is more extensive now.
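The core shredding pattern, for reference, looks something like this minimal sketch (it assumes a file target; the file path is hypothetical, and Jason’s script generalizes the per-event columns far beyond this):

    -- Read raw event data from an extended events file target and
    -- shred a few common fields out of the XML.
    WITH raw_events AS
    (
        SELECT CAST(event_data AS xml) AS event_xml
        FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, NULL, NULL)
    )
    SELECT event_xml.value('(event/@name)[1]', 'nvarchar(128)') AS event_name,
           event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
           event_xml.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration -- only populated for events carrying a duration field
    FROM raw_events;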

Jason also has sample usage.  Check this out for sure.


Azure Data Lake ACLs

Saveen Reddy introduces file and folder level Access Control Lists for Azure Data Lake storage:

We’ve emphasized that Azure Data Lake Store is compatible with WebHDFS. Now that ACLs are fully available, it’s important to understand the ACL model in WebHDFS/HDFS because they are POSIX-style ACLs and not Windows-style ACLs.  Before we dive deep into the details on the ACL model, here are key points to remember.

  • POSIX-STYLE ACLs DO NOT ALLOW INHERITANCE. For those of you familiar with POSIX ACLs, this is not a surprise. For those coming from a Windows background this is very important to keep in mind. For example, if Alice can read files in folder /foo, it does not mean that she can read files in /foo/bar. She must be granted explicit permission to /foo/bar. The POSIX ACL model is different in some other interesting ways, but this lack of inheritance is the most important thing to keep in mind.

  • ADDING A NEW USER TO DATA LAKE ANALYTICS REQUIRES A FEW NEW STEPS. Fortunately, a portal wizard automates the most difficult steps for you.

This is an interesting development.


Automating Patching?

Kendra Little takes on the question of whether patching should be automated on SQL Server instances:

I used to spend a lot of time doing patching, and I had plenty of times when:

  • Servers wouldn’t come back up after a reboot. Someone had to go into the iLO/RIB card and give them a firm shove

  • Shutdown took forever. SQL Server can be super slow to shut down! I understand this better after reading a recent post on the “SQL Server According to Bob” blog. Bob Dorr explains that when SQL Server shuts down, it waits for all administrator (sa) level commands to complete. So, if you’ve got any servers where jobs or applications are running as sa, well….  hope they finish up fast.

  • Patching accidentally interrupted something important. Some process was running from an app server, etc, that failed because patching rebooted the server, and it fired off alarms that had to be cleaned up.

  • Something failed during startup after reboot. A service started and failed, or a database wasn’t online.  (Figuring out “was that database offline before we started?” was the first step. Ugh.)

  • Miscommunication caused a problem on a cluster.  Whoops, you were working on node2 while I was working on node1? BAD TIMES.
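Kendra’s slow-shutdown point is easy to check for before you pull the trigger. A minimal sketch of a pre-reboot sanity check (generic DMV queries, not something from Kendra’s post):

    -- List user requests still executing before a patching reboot, since
    -- SQL Server's shutdown waits on administrator-level commands to finish.
    SELECT s.session_id,
           s.login_name,
           r.command,
           r.status,
           r.total_elapsed_time / 1000 AS elapsed_seconds
    FROM sys.dm_exec_requests AS r
    INNER JOIN sys.dm_exec_sessions AS s
        ON s.session_id = r.session_id
    WHERE s.is_user_process = 1;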

This is a really good post.  Kendra’s done a lot more patching than I have, and she’s definitely thought about it in more detail.  Me, I’m waiting for the day—which is very close for some companies—in which you don’t patch servers.  Instead, you spin up and down virtual apps and virtual servers which are fully patched.  It’s a lot harder to do with databases compared to app servers, but if you separate data from compute, your compute centers are interchangeable.  When a new OS patch comes out, you spin up new machines which have this patch installed, they take over for the old ones, and after a safe period, you can delete the old versions forever.  If there’s a failure, shut down the new version, spin back up the old version, and you’re back alive.


Logging WhoIsActive Output

Tara Kizer has a primer on storing WhoIsActive outputs for subsequent analysis:

Create a new job and plop the below code into the job step, modifying the first 3 variables as needed. The code will create the logging table if it doesn’t exist, the clustered index if it doesn’t exist, log current activity and purge older data based on the @retention variable.

How often should you collect activity? I think collecting sp_WhoIsActive data every 30-60 seconds is a good balance between logging enough activity to troubleshoot production problems and the storage needed to keep the data in a very busy environment.
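The heart of the pattern looks something like this minimal sketch, assuming Adam Machanic’s sp_WhoIsActive is installed; Tara’s full script adds the clustered index and the retention-based purge:

    -- Build the logging table from sp_WhoIsActive's own schema output,
    -- then capture a snapshot of current activity into it.
    DECLARE @destination_table sysname = N'dbo.WhoIsActiveLog';
    DECLARE @schema nvarchar(max);

    EXEC sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
    SET @schema = REPLACE(@schema, '<table_name>', @destination_table);

    IF OBJECT_ID(@destination_table) IS NULL
        EXEC (@schema);

    EXEC sp_WhoIsActive @destination_table = @destination_table;

Schedule that last call as the job step, and each run appends one snapshot of activity to the table.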

I like having something like this in place because oftentimes, when you need these results, it’s already too late to start collecting them.


Upgrades And Regressions

Kendra Little explains when upgrades can cause performance to suffer:

The cluster’s servers and SQL Server configurations were built to be as close to identical as possible to the previous instance (memory, cores, disk, maxdop, CTP, etc).

After the migration, I noticed that CPU utilization jumped from the normal 25% to a consistent 75%.

I did several other migrations with similar server loads with no issues, so I’m a bit puzzled as to what might be going on here. Could the upgrade from SQL Server 2008 R2 to SQL Server 2012 simply be exposing bad queries that 2008 was handling differently?

Kendra goes through a number of reasons, building a troubleshooting guide in the process.  This is a great read.
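If you’re staring at the same symptom, a couple of generic first checks are worth running before working through Kendra’s list (these are common post-upgrade steps, not necessarily her diagnosis; [YourDatabase] is a placeholder):

    -- Confirm the compatibility level actually came over as intended.
    SELECT name, compatibility_level
    FROM sys.databases;

    -- 110 = SQL Server 2012. Then refresh statistics, since stale or
    -- differently-sampled stats after a migration can push the optimizer
    -- toward worse plans.
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;
    USE [YourDatabase];
    EXEC sp_updatestats;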


SQL Server’s Basic Installer

Derik Hammer walks through the new SQL Server Basic Installer:

When using the Basic Installer only the database engine and SQL Client Connectivity SDK are installed. I find this better than using the advanced installer’s Install with all defaults option. Typically any local user will not need Reporting Services, Integration Services, Analysis Services, or any other feature available.

I was a bit concerned about this when it was first announced—the default installer already allows you to make too many poor decisions—but I think it works within the use case Derik describes:  developers installing a local edition on their dev boxes.


Stretch Database

SQL Padre checks out Stretch database:

As you can see, we now have a new path in our query plan with an operator called “Remote Query”.  Basically, the local server runs the remote query and then, using the local primary key, concatenates the results back together to produce the desired result. So can we update the data?

Nope, sure can’t.  Once the data lives in Azure, the data is READ ONLY.
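For context, stretching a table looks roughly like this sketch (it assumes the database has already been enabled for Stretch against an Azure server, which the wizard handles; dbo.ColdOrders is a hypothetical table):

    -- Instance-level switch for Stretch Database (SQL Server 2016).
    EXEC sp_configure 'remote data archive', 1;
    RECONFIGURE;

    -- Stretch a single table; once rows migrate to Azure they become
    -- read-only, which is the behavior SQL Padre describes above.
    ALTER TABLE dbo.ColdOrders
        SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));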

Check it out.  He’s a bit more sanguine about Stretch than I am, so maybe it will fit your use cases.


Auditing Power Plan Settings

Drew Furgiuele writes a bit of Powershell to control power plan settings and expands upon this one-liner:

It’s a classic one-liner, but if you’re not used to reading it I’ll break it down for you. First, we use Get-ChildItem to return a list of registered servers in our central management server (named PRO-CM-SQL in my example). This returns a series of objects that lists all the registered names on the central management server in each directory, so we need to filter out the directory names with a Where-Object (objects that don’t have a “mode” value of “d” for directory). Once we have our list, we just select the names (Select-Object). Then we pipe the list of names over to a ForEach-Object and execute the script each time. Finally, we tack on an Export-CSV cmdlet to output the results to an easy-to-read file we can open in Excel (or notepad, or whatever).

Our script also doesn’t control output, so you leave that up to the user. They can put it on the screen or pipe it to a file. And that’s an important style point: never put your users on rails. You may like output one way, but someone else may not. Just think of the next person, because some day you might be that next person.

This is a good post if you need to figure out how to find your servers’ current power settings, but a great post if you want to think about how to write helpful Powershell scripts.
