Press "Enter" to skip to content

Category: Administration

Building A Windows Failover Cluster

David Fowler continues his series on building a test lab:

Now I don’t want to get into the details of Quorum, there are plenty of great posts out there that explain it far better than I can, but in a nutshell, each node in the cluster has a vote and we really want the total number of votes to be an odd number.  But we’ve only got two servers; does that mean that we need to create another server to make an odd number?  Well, no we don’t.  What we can use is what’s known as a file share witness, and that’s simply a file share that each of the nodes in the cluster can access.  That file share will effectively act as our third vote.

So the first thing that you’re going to need to do is create a file share somewhere; the best place for that in our setup would be on the domain controller, or somewhere that we know is always likely to be available.  So go and do that now, call it what you like, but make sure that the servers are going to have full rights to it.  As this is just our own personal little test lab and we’re not too worried about best practices, you could possibly open it up to EVERYONE (probably not a great idea in a production environment, but not the end of the world if we want to be lazy in our own little playpen).
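
Once the cluster is built, you can sanity-check the vote math from inside SQL Server. This is a sketch rather than something from David's post: the sys.dm_hadr_cluster_members DMV only returns rows once the instance participates in the cluster (for example, after Availability Groups are enabled), and in a two-node-plus-witness setup you'd expect three rows with one vote each.

-- Check quorum membership and votes from within SQL Server.
SELECT
    member_name,
    member_type_desc,        -- cluster node vs. file share witness
    member_state_desc,
    number_of_quorum_votes
FROM sys.dm_hadr_cluster_members;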

David also shows how to set up an Availability Group.

Cycling The SQL Server Error Log

Kenneth Fisher recommends cycling the SQL Server error log regularly:

You’ll note I have it set to keep 32 files. This way I have at least a month’s worth of information. It’s probably more than I need, but for my personal box, that’s what I chose. For your company, it’s going to depend on your needs. You can keep up to 99 files, but that seems a bit excessive to me. Of course, these files aren’t all that large (usually), so it’s up to you.

I’d go with a bit more than 32: the log also cycles whenever the service restarts, so if you have three service restarts in March, you won’t have a full month’s worth of information.  Regardless of the exact number you choose, it’s a smart idea to cycle this regularly.
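
A minimal sketch of both moving parts follows. sp_cycle_errorlog does the cycling (schedule it in a nightly Agent job), and the retained-file count is the registry value SSMS sets when you configure the error log in the GUI; the xp_instance_regwrite call below is the commonly scripted way to change it.

-- Keep 32 error log files before recycling (the setting behind
-- SSMS's "Configure SQL Server Error Logs" dialog).
USE master;
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    32;

-- Cycle the log now; run this from a nightly SQL Agent job so each
-- file holds roughly one day of history.
EXEC sp_cycle_errorlog;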

Cloning A SQL Server Installation

Jana Sattainathan shows how you can find the configuration options used when installing SQL Server:

Let us say that you want to install SQL Server on another host, say NewHost, but you want it to have the same settings as an existing host/instance, say ModelHost. In fact, let us say that these are almost identical hosts with similar drive locations and such, and you are positive that you want the exact same setup on both.

The hard way is to look at what features ModelHost has and try to click through the installation wizard with the correct options selected/input.

Fortunately, there is an easier way to clone an installation. Even if you installed manually using the installation wizard, SQL Server still generates a configuration file with all the settings used (except passwords) for the installation. We can simply use that file from ModelHost to drive the installation on NewHost.
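
On NewHost you then point setup at that file (setup.exe accepts a /ConfigurationFile parameter for unattended installs). Afterward, a quick comparison of a few SERVERPROPERTY values on both instances confirms the clone took; this is a sampling I chose for illustration, not an exhaustive checklist.

-- Run on both ModelHost and NewHost and compare the output.
SELECT
    SERVERPROPERTY('ProductVersion')           AS ProductVersion,
    SERVERPROPERTY('Edition')                  AS Edition,
    SERVERPROPERTY('Collation')                AS ServerCollation,
    SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly,
    SERVERPROPERTY('InstanceDefaultDataPath')  AS DefaultDataPath,
    SERVERPROPERTY('InstanceDefaultLogPath')   AS DefaultLogPath;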

Click through to see where that configuration file is and how you can use it.

Automatic Tuning In SQL Server 2017

Arun Sirpal shows off one of the more interesting features in SQL Server 2017:

Before we begin any further, let’s do a little recap. Automatic tuning in SQL Server 2017 notifies you whenever a potential performance issue is detected and lets you apply corrective actions, or lets the Database Engine automatically fix performance problems. This is also available in Azure SQL Database.

There are two parts to it: automatic plan correction and automatic index management. In SQL Server 2017, automatic index management is NOT part of the product; it is only available in Azure SQL Database.

To switch automatic plan correction on you will need to run the following code against your database.
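
The code itself didn’t make it into the excerpt; the documented statement, along with a query to confirm the option’s state, is:

ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Verify desired vs. actual state of the automatic tuning options.
SELECT name, desired_state_desc, actual_state_desc
FROM sys.database_automatic_tuning_options;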

I’m looking forward to seeing this expand much further.

The SQLUndercover Inspector

Adrian Buckman announces the SQLUndercover Inspector:

In a Nutshell:

The SQLUndercover Inspector is a configurable daily report, written in SQL, that will send you an email (or log the report to a SQL table) showing you information about specific parts of SQL Server in HTML format, including highlighted warnings/advisory conditions. The report has configurable thresholds and settings, leaving you in control.

Click through to see what they track.

Expanding LVM Drives

David Klee shows how to expand an LVM drive on Linux:

Next in our SQL Server on Linux series is one important question. On Windows, if you’re about to run out of space, you get your VM admin / storage admin to expand one or more of your drives, and you go to Disk Management and expand the drive with no downtime. How do we accomplish this same task on Linux?

First, SSH into your VM. Get your appropriate system engineer to expand the drive that needs to be expanded. You won’t be able to see it at first in Linux because, just like in Windows, the OS needs to rescan the storage to ‘see’ the extra space. Sometimes Windows does this automatically, and sometimes you have to initiate it manually. Linux only rescans automatically at system startup.

Let’s grow our data drive from 250GB to 300GB first.

Click through to see how to do that.

Filtering Alert Noise With A Leaky Bucket Algorithm

I have a post implementing a leaky bucket algorithm in T-SQL:

Now that we have a table, we want to do something with it.  The most naive solution would be to fire off an alert every time a row gets added to this table.  The problem with this solution is that it can flood our inbox with useless information.  Suppose the developers push out an API change that breaks everything.  The first 500 response will be important information.  The second one might be important because it’s confirmatory.  The six hundredth alert will not help.  And heaven help you if you’ve got this alert tied to your PagerDuty account…

So let’s only alert if we get to a certain threshold—we’ll call it 5 messages.  Fewer than 5 and we can assume it’s a blip or just somebody doing something weird.  The next part of the problem is, 5 messages per what?  We don’t want to alert for every fifth error message.  Let’s say we get one error message a year, so there was one in 2014, one in 2015, one in 2016, one in 2017, and now one in 2018.  If we simply set our threshold to report on every fifth error, we’ll get an alert in 2018, but most likely, the prior years’ errors are long gone by now, so that’s not helpful.

Read on for the solution.  I’ve been quite happy with it in practice, as it has cut down the number of spurious alert e-mails to practically nil.
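
For flavor, here is a minimal sliding-window version of that threshold check. The table, column, and Database Mail names are hypothetical, and the actual post builds a proper leaky bucket rather than this simple count:

DECLARE @ErrorCount int =
(
    SELECT COUNT(*)
    FROM dbo.ApiErrorLog              -- hypothetical error table
    WHERE LogDate >= DATEADD(MINUTE, -30, SYSUTCDATETIME())
);

IF @ErrorCount >= 5                   -- the threshold discussed above
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'DBA Mail',         -- assumed mail profile
        @recipients   = N'dba@example.com',  -- assumed recipient
        @subject      = N'API error threshold crossed',
        @body         = N'At least 5 errors in the last 30 minutes.';
END;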

Deleting From Large Tables

Andy Galbraith tells a tale of a failed query and a lot of log file growths:

The App01 database was in SIMPLE recovery (when I find this in the wild, I always question it – isn’t point-in-time recovery important? – but that is another discussion).  The relevance here is that in SIMPLE recovery, LOG backups are irrelevant (and actually *can’t* be run) – once a transaction or batch completes, the LDF/LOG file space is marked available for re-use, meaning that *usually* the LDF file doesn’t grow very large.

A database in SIMPLE recovery growing large LDF/LOG files almost always means a long-running unit of work or accidentally open transactions (a BEGIN TRAN with no subsequent COMMIT TRAN).  Looking at the errors in the SQL Error Log the previous night, the 9002 log-file-full errors stopped at 1:45:51am server time, which means the offending unit of work ended then, one way or another (crash or success).

Sure enough, when I filtered the XEvents event file to things with a duration > 100 microseconds and then scanned down to 1:45:00, I quickly saw the row shown above.  Note this doesn’t mean the unit of work was excessively large in CPU/RAM/IO/etc. (and in fact the query errored out due to lack of LOG space), but the excessive duration made all of the tiny units of work over the previous 105 minutes have to persist in the transaction LDF/LOG file until this unit of work completed, preventing all of the LOG from that time from being marked for re-use until this statement ended.

It turns out that deleting millions of records in a single transaction is an expensive operation.
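
The usual fix is to chunk the delete so each batch commits on its own and, in SIMPLE recovery, frees its log space for reuse between batches. A minimal sketch, with a hypothetical table and filter:

DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    -- Each iteration is its own transaction, so the log clears as we
    -- go instead of holding the entire delete until the end.
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE CreatedDate < '20170101';

    IF @@ROWCOUNT = 0
        BREAK;  -- nothing left to delete
END;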

Migrating A Memory-Optimized Database

Michael Bourgon notes that there’s an extra step when migrating a database with a memory-optimized filegroup from one server to another:

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it’d be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including “5(Access is denied.)”.
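
For context, attaching a database like this means re-pointing the regular files plus the memory-optimized container, which is a folder rather than a file; the access-denied errors generally trace back to the service account’s rights on that folder. A sketch with illustrative names and paths:

CREATE DATABASE ImoltpDb
ON PRIMARY
    (NAME = N'ImoltpDb_data', FILENAME = N'D:\Data\ImoltpDb.mdf'),
FILEGROUP ImoltpFG CONTAINS MEMORY_OPTIMIZED_DATA
    -- The container is a directory; the SQL Server service account
    -- needs full control on it for the attach to succeed.
    (NAME = N'ImoltpContainer', FILENAME = N'D:\Data\ImoltpContainer')
LOG ON
    (NAME = N'ImoltpDb_log', FILENAME = N'L:\Log\ImoltpDb_log.ldf')
FOR ATTACH;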

Click through for the steps involved.

Database-Scoped Optimize For Ad Hoc Workloads

Joe Sack introduces a new database-scoped configuration option:

SQL Server provides the “optimize for ad hoc workloads” server-scoped option that is used to reduce the memory footprint of single use ad hoc batches and associated plans.  When enabled at the SQL Server instance scope, the “optimize for ad hoc workloads” option stores a reduced-memory compiled plan stub on the first execution of an ad hoc batch for any database on the instance. This server option has been available in SQL Server for several years now, but until recently there hasn’t been a way to enable this option in Azure SQL Database for individual databases.

We are now introducing a new database scoped configuration called OPTIMIZE_FOR_AD_HOC_WORKLOADS which enables this behavior at the database scope in Azure SQL Database.
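
The syntax mirrors the other database-scoped configurations:

ALTER DATABASE SCOPED CONFIGURATION
SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;

-- Confirm the current value.
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = N'OPTIMIZE_FOR_AD_HOC_WORKLOADS';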

I’m not sure if this will make it to the on-prem product, and if it does, I’m not sure how useful it would be in practice.  But it is good that we can use it in Azure SQL Database.
