Press "Enter" to skip to content


Alerting On Missed Log Backups

Jamie Wick shows us how to build out an alert if we’ve exceeded a threshold for time since the last log backup:

The primary reason most transaction logs start to (unexpectedly) grow is that the transaction log hasn’t been backed up. This is usually caused by one of two situations: the log backup job failed, or the log backup job didn’t start. Most options for backing up SQL transaction logs (3rd party software, SQL Agent job, or maintenance plan) will include an alert/notification mechanism for when the job fails. Where some backup options are lacking is in notifying that a (log) backup that should have been taken, wasn’t. If the backup (or management) software doesn’t include an alert for missing backups, SQL Server (2008 & newer) has a solution. Policy Based Management (PBM) includes a policy (Last Log Backup Date) that will send an email notification when the last backup date/time exceeds a specified threshold.

A good use for a woefully underappreciated tool.
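
If you want a quick manual check alongside (or before setting up) the PBM policy, a query against msdb will surface databases whose last log backup is missing or stale. This is only a rough sketch; the 60-minute threshold and the exclusion of model are illustrative choices, not recommendations.

-- Minimal sketch: flag FULL/BULK_LOGGED databases whose most recent log backup
-- is missing or older than a threshold. 60 minutes is an arbitrary example value.
DECLARE @ThresholdMinutes int = 60;

SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
   AND b.type = 'L'                                   -- log backups only
WHERE d.recovery_model_desc IN ('FULL', 'BULK_LOGGED')
  AND d.name <> 'model'                               -- FULL by default but rarely log-backed-up
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(MINUTE, -@ThresholdMinutes, SYSDATETIME());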


The Corruption Scenario That Wasn’t

Andy Galbraith walks us through a scary-looking error message that turned out to be not quite accurate:

There were a stack of errors overnight in the DB123 database on SQL01, including one horror show error:

Log Name:      Application
Source:        MSSQLSERVER
Date:          5/24/2018 12:49:39 AM
Event ID:      3314
Task Category: Server
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      SQL01.client99.com
Description:
During undoing of a logged operation in database ‘DB123’, an error occurred at log record ID (195237:96178:4). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

The catch is that when I checked the server, the database was online.

Say what?

Read on for the root cause analysis and the fix. This is a good example of how jumping to conclusions can lead you down the wrong path and cause you to miss the root cause entirely.
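
As a quick sanity check before assuming the worst from an error like that, it's worth confirming the database state and running an integrity check yourself. This isn't Andy's troubleshooting script, just a minimal sketch using the (anonymized) database name from the excerpt.

-- Illustrative sanity check: is the database actually online, and is it clean?
SELECT name, state_desc
FROM sys.databases
WHERE name = N'DB123';

DBCC CHECKDB (N'DB123') WITH NO_INFOMSGS;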


Configuring An Azure Runbook For Index Maintenance

Jim Donahoe explains how to perform index and statistics maintenance for Azure SQL Database, where you don’t have SQL Agent available:

I had a lot of issues when I created my first one, and after discussing it with some folks, I found they had the same issues.  I searched for the best blog posts that I could find on the subject, and the one I LOVED the most was here: Arctic DBA.  He broke it down so simply that I finally created my own pseudo-installer, and I wanted to share it with all of you.  Please bear in mind that these code snippets may fail at any time due to changes in Azure.

**IMPORTANT**

These next steps assume the following:

You have created/configured your Azure Automation Account and credential to use to execute this runbook.

Read on for a reasonably short Powershell script and a modified version of Ola Hallengren’s index maintenance procedures.
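
The heart of such a runbook is just a call into Ola Hallengren's IndexOptimize procedure inside the target Azure SQL Database. The sketch below is not Jim's modified version; it uses the standard, documented parameters, and the database name and fragmentation thresholds are placeholder values.

-- Sketch of the T-SQL the runbook would execute against the user database.
-- Parameter values are illustrative, not recommendations.
EXECUTE dbo.IndexOptimize
    @Databases = 'MyUserDb',                               -- hypothetical database name
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL';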


Auditing xp_cmdshell Usage

Patrick Keisler shows how to create a SQL Server audit file for xp_cmdshell usage:

This article assumes you already have a basic understanding of SQL Server Audit, but if not, use this link to catch up on all the details.

Are you required to have xp_cmdshell enabled on one of your servers? If so, then set up a SQL Audit now to track its use. Never mind the implications of enabling xp_cmdshell: as a DBA, you are responsible for what happens on your servers, and tracking the use of xp_cmdshell should be a priority.

Some smart people will tell you to disable xp_cmdshell altogether, but I don’t like that advice at all.  Auditing usage can give you more peace of mind while not limiting your ability to use a valuable tool.
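
For a rough idea of what that setup looks like (not necessarily Patrick's exact implementation, and with an assumed file path and assumed object names), a server audit plus a database audit specification in master on the xp_cmdshell object does the job:

-- Minimal sketch of auditing xp_cmdshell execution; adjust names and the file path.
USE master;
GO
CREATE SERVER AUDIT Audit_xp_cmdshell
TO FILE (FILEPATH = N'D:\SQLAudit\')        -- assumed path
WITH (ON_FAILURE = CONTINUE);
GO
ALTER SERVER AUDIT Audit_xp_cmdshell WITH (STATE = ON);
GO
-- xp_cmdshell lives in master, so the database audit specification goes there too.
CREATE DATABASE AUDIT SPECIFICATION AuditSpec_xp_cmdshell
FOR SERVER AUDIT Audit_xp_cmdshell
ADD (EXECUTE ON OBJECT::sys.xp_cmdshell BY public)
WITH (STATE = ON);
GO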


Breaking A Database File Into Multiple Files

Lori Brown shows us how to take a database with one database file and add new database files to it:

I occasionally come across some pretty good sized databases that are set up with a single data file.  We recently have been working with a client to break up their single data file into multiple data files so that we can spread them over several different LUNs and so that they can take advantage of the improved performance of using the files in parallel.  The concept is much like setting up tempdb with 1 file (up to 8) per core.

Since most people don’t think about using multiple files for databases until they have grown large enough to be a problem, I think most don’t realize that breaking up a database can be done at any time; you just need to have enough space for the new files.  Here is a bit of a demo on how to do this.
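
The statement at the core of the technique looks something like this (database name, file name, size, and path are all illustrative); redistributing the existing data across the new files afterwards is the part Lori's demo walks through.

-- Illustrative: add an additional data file to the PRIMARY filegroup on a new LUN.
-- Repeat for however many files you want, then rebalance data per the demo.
ALTER DATABASE BigDB
ADD FILE
(
    NAME = N'BigDB_Data2',
    FILENAME = N'E:\SQLData\BigDB_Data2.ndf',
    SIZE = 50GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP [PRIMARY];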

Do read Lori’s warning at the end, however, should you decide to do this in production.


Understanding sp_reset_connection

Greg Low explains what sp_reset_connection does and why it’s often a good thing:

Anyone who’s ever traced activity against a SQL Server will have no doubt seen a large number of commands where the procedure sp_reset_connection has been executed. Yet, this command won’t appear anywhere in the source code of the applications that are running.

As an example of why this occurs, one of the most common data access technologies that is used to connect applications to SQL Server is ADO.NET. It has a SqlConnection object that represents a connection that can be opened to a SQL Server instance. In the design of the SqlConnection class, the architects of it were grappling with two big issues:

  • They knew that opening and closing connections to SQL Server was a relatively expensive process.
  • They also knew that on a busy website, they didn’t want to use that many resources (and might not even have had them) to open up a connection for each concurrent session on the website.

So they decided to make the connections to SQL Server able to be shared.

Read the whole thing.
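
If you're curious how often pooled connections are being reset on an instance, something like the Extended Events session below (session and target names are made up) will show you; sp_reset_connection surfaces as an RPC, which is also why it is so prominent in traces.

-- Sketch: capture pooled-connection resets via the rpc_completed event.
CREATE EVENT SESSION TrackConnectionResets ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.client_app_name, sqlserver.username)
    WHERE object_name = N'sp_reset_connection'
)
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION TrackConnectionResets ON SERVER STATE = START;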


The Halloween Problem And Deleting From Self-Referencing Tables

Louis Davidson walks us through a case where deleting from a table can be extremely slow:

I have a great interest in hierarchy solutions, as I plan to write a book on implementing hierarchies in SQL Server in the next year or so (after I see what happens in SQL Server vNext after 2017). Something I didn’t include in my design testing when I built my presentation on hierarchies (available from a link here) a few years back was the cost to remove nodes from a tree. I learned a big lesson today about the cost of deleting from a hierarchy (the hard way). It is definitely something I will be digging deeper into at some point in a lab environment, comparing multiple methods of deleting from a hierarchy, but I wanted to write down the issue with self-referencing tables for future use (and hopefully for your use, too).

We have a table with hundreds of millions of rows, and deletes from this table were taking forever. There were 17 foreign key constraints to this table, but each of the references was showing as only 1% of the plan. The physical DELETE from the clustered index operator was ~80%, but in the query plan, an operator showed up that looked ominous.
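
To make the shape of the problem concrete, here's a tiny, made-up version of such a table (not Louis's actual schema). Every row you delete forces SQL Server to verify that no remaining row still references it through the self-referencing foreign key, and at hundreds of millions of rows that validation, plus the Halloween protection wrapped around it, is where plans get ugly.

-- Illustrative schema only: a self-referencing hierarchy.
CREATE TABLE dbo.TreeNode
(
    TreeNodeId   int          NOT NULL CONSTRAINT PK_TreeNode PRIMARY KEY,
    ParentNodeId int          NULL,
    NodeName     varchar(100) NOT NULL,
    CONSTRAINT FK_TreeNode_Parent
        FOREIGN KEY (ParentNodeId) REFERENCES dbo.TreeNode (TreeNodeId)
);

-- Even deleting a single leaf node has to check the self-reference for children.
DELETE FROM dbo.TreeNode
WHERE TreeNodeId = 42;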

Definitely worth a read.


Tracking Latency To Azure With PsPing

Arun Sirpal shows us how to use PsPing (part of the Sysinternals tool set) to determine latency between your computer and a VM in an Azure data center:

This is the tool of choice when wanting to find out latency to your Azure SQL Server. In addition to standard ICMP ping functionality, it can report the latency of connecting to TCP ports and the latency of TCP round-trip communication.

I use this to find the latency from my location to various Azure SQL Servers which are in different Azure regions. I am based in the heart of England so let’s look and compare a couple of locations (just out of curiosity). Once you have downloaded the tool you will need to CD to the directory and call the following command.

Read on to see how to use PsPing.


Don’t Set Max Size For Containers In In-Memory OLTP

Ned Otter recommends you not mess with the maximum container size when creating a memory-optimized filegroup:

I recently saw a thread on twitter, where the OP talked about setting the max size for an In-Memory OLTP container. I responded as I always do: it’s not possible to set a limit on anything having to do with storage for In-Memory OLTP.

Unfortunately, that’s not correct: through SSMS or TSQL, you can in fact set a max size for a container.

But you should not ever do that…

Why?

Because if you do, and your checkpoint files exceed the max size of the container, your database can go into the In Recovery, Suspect, or OFFLINE state.

Read on for a repro that you should not try in production.  Or anywhere, really.
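
For reference, this is roughly what adding a container without any cap looks like (database name, filegroup, file name, and path are made up); the point is simply to leave MAXSIZE alone so it stays at its unlimited default.

-- Illustrative: create the memory-optimized filegroup and add a container
-- with no MAXSIZE specified.
ALTER DATABASE InMemDemo
ADD FILEGROUP InMemDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE InMemDemo
ADD FILE
(
    NAME = N'InMemDemo_mod1',
    FILENAME = N'E:\SQLData\InMemDemo_mod1'   -- a container is a folder path, not a file
)
TO FILEGROUP InMemDemo_mod;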


Managing Central Management Server

Chrissy LeMaire shows how you can use dbatools to manage Central Management Server and registered servers:

It’s a super useful feature that not all DBAs know about. Since CMS data is stored in msdb and accessible via SMO, you can access it from SQL Server Management Studio or PowerShell modules like dbatools.

Central Management Server’s essential functionality includes:

Actions that are taken by using a central management server group act on all servers in the server group. This includes connecting to servers by using Object Explorer and executing Transact-SQL statements and Policy-Based Management policies on multiple servers at the same time.

I mostly use it as a visual repository of my SQL Servers. Prior to using dbatools and Invoke-DbaSqlQuery, however, I did use CMS to easily execute code against a number of different servers.

CMS is a great feature and a critical tool for scaling out a SQL Server infrastructure.
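
Since the registered-server data really does live in msdb, you can also peek at it directly. The view and column names below are as I recall them from msdb's sysmanagement objects, so verify them against your own instance before relying on this.

-- Rough sketch: list CMS groups and registered servers straight from msdb
-- on the instance hosting the Central Management Server.
SELECT g.name AS server_group,
       s.server_name,
       s.name  AS display_name,
       s.description
FROM msdb.dbo.sysmanagement_shared_registered_servers AS s
JOIN msdb.dbo.sysmanagement_shared_server_groups AS g
    ON g.server_group_id = s.server_group_id
ORDER BY g.name, s.server_name;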
