Press "Enter" to skip to content

Category: Administration

Managing Central Management Server

Warren Estes explains how he keeps track of his servers using a Central Management Server:

We use a CMS server for each domain and I can’t imagine life without it. Kind of like when I discovered Amazon prime, or bought my first memory foam mattress.

The real magic of a CMS comes from being able to push jobs to, or evaluate policies against, any servers (targets) you want.

You can also execute T-SQL against all servers, or a subset of them, with either registered servers or CMS.

There are some caveats to look out for, like collation differences and version-specific DMVs, when running queries across instances. Also, security needs to be addressed. However, that is outside the scope of this post. You can find that information in the links in the first section.

I liked CMS when I had to deal with a dozen instances.  With hundreds of instances, I wouldn’t want to administer anything without one.
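
If you want to see what is registered on a CMS, the shared registered server metadata lives in msdb on the CMS instance. Here is a minimal sketch (the view and column names are from memory, so verify them against your own msdb):

  -- Run on the CMS instance: list registered servers by group
  SELECT g.name        AS server_group,
         s.server_name AS registered_server,
         s.description
  FROM msdb.dbo.sysmanagement_shared_server_groups AS g
  JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
      ON s.server_group_id = g.server_group_id
  ORDER BY g.name, s.server_name;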


Early Stats From SQLConstantCare Customers

Brent Ozar has some interesting and mostly disheartening statistics:

One of my favorite things about my job is being able to take the pulse of how real companies are managing their databases. I don’t wanna be locked in an ivory tower, preaching down to you, dear reader, about how you should be doing five million things a day – when in reality, you’re struggling to get an hour of real work done because you’ve got so many meetings.

But sometimes I wonder – am I out of touch? When I was a DBA, I remember struggling with backups and corruption checking – has that gotten easier? Have today’s DBAs started using more automation and tools to protect their estate? Is Transparent Data Encryption catching on? Did DBAs start using Extended Events for monitoring when I wasn’t looking?

And it’s important because I wanna build the right training material and scripts for our customers. When I see a problem trending, I want to be able to give people the right information to fix the problem, fast.

Standard disclaimers about potential bias in samples apply, but it’s an interesting slice of the population.


Stored Procedure To Build Filegroups And Files

Tracy Boggiano has a lengthy script which dynamically builds out files and filegroups for you:

Here are the six different ways you can run this procedure to create files and filegroups for your database:

  1. Set up partitions with a set size for each file

  2. Set up partitions based on the size of a table

  3. By monthly date range with a set size for each file

  4. By monthly date range with sized based on the size of a table

  5. One file with a set size

  6. One file based on the size of a table

She then has examples for each call, so check it out.
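
Under the hood, a procedure like this is generating plain ALTER DATABASE statements. As a rough sketch of what the dynamic SQL produces (the database, filegroup, file name, path, and sizes here are all made up):

  -- One filegroup plus one file of a set size (hypothetical names and sizes)
  ALTER DATABASE SalesDB ADD FILEGROUP FG_Sales_201805;

  ALTER DATABASE SalesDB
  ADD FILE (
      NAME = N'Sales_201805',
      FILENAME = N'D:\Data\SalesDB_Sales_201805.ndf',
      SIZE = 512MB,
      FILEGROWTH = 256MB
  ) TO FILEGROUP FG_Sales_201805;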


Using DM_DB_LOG_STATS

Kellyn Pot’vin-Gorman explains the DM_DB_LOG_STATS dynamic management function in SQL Server 2017:

This is an addition to SQL Server 2017 and available in Azure Database/Azure Data Warehouse to simplify and synchronize how we query information about log statistics, not just backups.  The added benefit of this is that it only requires the VIEW DATABASE STATE privilege to utilize it, which eases the demand on anyone who’s tried to grant rights to view backup information to non-DBA personnel in previous releases.  With this DMV, you can monitor, alert and kick off jobs to manage the transaction log backups.

It does require the database_id to be passed to it to provide results, but who keeps that around?  Much easier to just do a Cross Apply to sys.databases and make your life simple.
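
That CROSS APPLY pattern looks something like this (a minimal sketch; the column list is from memory, so check the documentation for the full set):

  -- Log statistics for every database, without hard-coding a database_id
  SELECT d.name,
         ls.total_log_size_mb,
         ls.active_log_size_mb,
         ls.log_truncation_holdup_reason,
         ls.log_backup_time
  FROM sys.databases AS d
  CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls;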

Click through to see what’s included in this useful DMV.


Understanding Recovery Intervals

Paul Randal explains what the recovery interval option on SQL Server really means:

One of the concepts I find people misunderstand frequently is the recovery interval, either for the server as a whole or the per-database setting that was introduced in SQL Server 2012 for indirect checkpoints.

There are two misconceptions here:

  1. The recovery interval equals how often a checkpoint will occur

  2. SQL Server guarantees the recovery interval (i.e. crash recovery for the database will only take the amount of time specified in the recovery interval)

It’s good to keep this in mind.
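
For reference, these are the two settings Paul is talking about. The values below are arbitrary examples (and SalesDB is a placeholder database name), not recommendations:

  -- Server-wide recovery interval, in minutes (0 = let SQL Server manage it)
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'recovery interval (min)', 0;
  RECONFIGURE;

  -- Per-database indirect checkpoint target, in seconds (SQL Server 2012+)
  ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;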


Remote DAC And Vulnerability Assessments

Max Vernon points out a SQL Server Management Studio Vulnerability Assessment check which seems somewhat incomplete:

Certainly, you’d want to ensure the port for the DAC is not available to the Internet, but hopefully if you’re reading this blog you already know how silly it would be to open SQL Server to the Internet.

Assuming you don’t have the port open to the Internet, it’s very likely the DAC will not be of any use at all if you disable Remote Admin Connections as advised in the Vulnerability Assessment. My advice is to ignore this warning completely and configure the DAC to allow remote connections. Microsoft Technet has documentation about using the DAC, and says to configure it for remote connections by logging onto the server locally first, then configuring SQL Server to allow remote DAC connections, which seems a bit like putting the cart before the horse.

Read the whole thing.  I agree with Max’s assessment that if there are some basic controls around your instance (like not letting SQL Server be Internet-accessible, putting SQL Server instances in a protected subnet, etc.), remote DAC is definitely useful enough to keep running.
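
For reference, the setting in question is the remote admin connections option; enabling it looks like this:

  -- Allow the Dedicated Admin Connection from a remote machine
  EXEC sp_configure 'remote admin connections', 1;
  RECONFIGURE;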


Database Migration With dbatools

Jess Pomfret shows how easy it is to migrate databases from one SQL Server instance to another using dbatools:

Now that there are no connections, we can move the database. Depending on the situation, it might be worth setting the database to read only or single user mode first. In my case, I had the application taken down, so I felt confident no connections would be coming in.

With one line of code we can select the source and destination servers, the database name, specify that we want to use the backup and restore method, and then provide the path to a file share that both instance service accounts have access to:

The whole process is just five lines of code, so it could hardly be easier.


Working With Azure SQL Managed Instances

Jovan Popovic has a couple of posts covering configuration for Azure SQL Managed Instances.  First, he looks at how to configure tempdb:

One limitation in the current public preview is that tempdb doesn’t preserve custom settings after a failover happens. If you add new files to tempdb or change file sizes, these settings will not be preserved after a failover, and the original tempdb will be re-created on the new instance. This is a temporary limitation and it will be fixed during the public preview.

However, since Managed Instance supports SQL Agent, and SQL Agent can be configured to execute a script when it starts, you can work around this issue and create a SQL Agent job that will pre-configure your tempdb.

SQL Agent will start whenever the Managed Instance fails over, and the job that contains that script can increase tempdb size before you start running your workload on the new instance.
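
The script in that job would just be ordinary tempdb sizing statements, along these lines (the logical file names and sizes here are placeholders):

  -- Re-apply tempdb sizing after a failover re-creates tempdb with default settings
  ALTER DATABASE tempdb
  MODIFY FILE (NAME = N'tempdev', SIZE = 8GB, FILEGROWTH = 1GB);

  ALTER DATABASE tempdb
  MODIFY FILE (NAME = N'templog', SIZE = 2GB, FILEGROWTH = 512MB);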

Then, he covers network configuration:

Managed Instance is your dedicated resource that is placed in an Azure Virtual Network with an assigned private IP address. Before you create a Managed Instance, you need to create an Azure Virtual Network using the Azure portal, PowerShell, or the Azure CLI.

If you are using the Azure portal, make sure that you use Resource Manager, and make sure that the Service Endpoints option is Disabled in the Create Virtual Network blade (this is the default option, so don’t change it).

If you want to have only one subnet in your Virtual Network (the Virtual Network blade will enable you to define a first subnet called default), you need to know that the Managed Instance subnet can have between 16 and 256 addresses. Therefore, use subnet masks /28 to /24 when defining the IP range for the default subnet. If you know how many instances you will have, make sure that you have at least 2 addresses per instance plus 5 system addresses in the default subnet.

Both posts are useful if you’re interested in getting started with a managed instance.


Automatically Restarting Telegraf On Windows

Tracy Boggiano has a quick PowerShell script to try starting Telegraf until it succeeds:

I’ve noticed on demo machines that sometimes Telegraf doesn’t start on the first try. This doesn’t seem to happen on most of my production servers, but they have a lot more memory and CPU power. So I figured I would write a quick blog post and provide a way to get the service to start when the machine is rebooted. This is a known issue for which a user has offered a bounty to get it fixed, so if you know some Go and have time, please check out the issue on GitHub.

Click through for the script.
