Press "Enter" to skip to content

Category: Administration

Initial Thoughts On dbachecks

Jess Pomfret has an initial use case for dbachecks:

Each check has one unique tag, which basically names the check, and then a number of other tags that can also be used to call a collection of checks.

For this example we are going to use several checks to ensure that we meet the following requirements:

  • Full backup once a week – using LastFullBackup
  • Differential backup once a day – using LastDiffBackup
  • Log backup every hour – using LastLogBackup

Since each of the three checks we want to run also has the LastBackup tag, we can use that to call the collection of checks at once.
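A minimal sketch of what that looks like in practice, assuming dbachecks is installed and that SQL01 and SQL02 are hypothetical instance names (the -Check parameter takes any of the tags above):

# Run all three backup-recency checks at once via the shared LastBackup tag.
Invoke-DbcCheck -SqlInstance 'SQL01', 'SQL02' -Check LastBackup

# Or call a single check by its unique tag.
Invoke-DbcCheck -SqlInstance 'SQL01' -Check LastFullBackup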

Jason Squires looks at this for enterprise reporting:

This module was developed and designed to ensure you can see whether you have the best settings/configurations set up on your SQL systems using PowerShell. There are three prerequisites required to load the module: Pester 4.3.1, PS Framework 0.9.10.23, and, as of this post, dbatools 0.9.207. However, the dbachecks team kindly built in a notification for you if those modules and versions are not installed or need to be updated.

What I really love about this module is how you can utilize SQL CMS and view the results at an enterprise reporting level.
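As a rough sketch of that enterprise angle, assuming dbatools is installed and CMS01 is a hypothetical Central Management Server, you can feed every registered server into a single check run:

# Get-DbaRegServer is the current cmdlet name; older dbatools builds call it Get-DbaRegisteredServer.
$servers = Get-DbaRegServer -SqlInstance 'CMS01' | Select-Object -ExpandProperty ServerName

# One pass of the backup checks across the whole estate.
Invoke-DbcCheck -SqlInstance $servers -Check LastBackup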

Shane O’Neill has a bit more:

Straight away, dbachecks gives you the option to include or exclude checks that you feel aren’t for you. If you only want to run a subset of the checks, then you can specify that.

The real beauty that I think dbachecks provides is that you are getting a wealth of checks for things that you may never have thought of checking or known how to check, while being able to add any personal tests as well.
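For example (a small sketch; the tag names here are illustrative and the -ExcludeCheck parameter is assumed to be available in your dbachecks version):

# Run the database-level checks, but skip a couple you have decided do not apply to you.
Invoke-DbcCheck -SqlInstance 'SQL01' -Check Database -ExcludeCheck AutoShrink, AutoClose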

Sounds like something for DBAs to check out.


Introducing dbachecks

Chrissy LeMaire announces that the dbatools team is onto something big:

dbachecks is a framework created by and for SQL Server pros who need to validate their environments. Basically, we all share similar checklists and mostly just the server names and RPO/RTO/etc change.

This module allows us to crowdsource our checklists using Pester tests. Such checks include:

  • Backups are being performed
  • Identity columns are not about to max out
  • Servers have access to backup paths
  • Database integrity checks are being performed and corruption does not exist
  • Disk space is not about to run out
  • All enabled jobs have succeeded
  • Network latency does not exceed a specified threshold

We currently provide over 80 checks.

Chrissy also shows you how to install dbachecks and explains the commands.
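Installation itself is just a module pull from the PowerShell Gallery; a quick sketch, assuming current-user scope is acceptable:

# Install dbachecks (dependencies such as Pester, PSFramework and dbatools should be pulled in automatically).
Install-Module -Name dbachecks -Scope CurrentUser

# List every available check with its tags and description.
Get-DbcCheck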

This is the biggest community-driven news since, well, dbatools…


Data Discovery And Classification In SQL Server

Gilad Mittelman explains how the SQL Information Protection (aka Data Discovery and Classification) process works in SQL Server and Azure SQL Database:

SQL Information Protection (SQL IP) introduces a set of advanced services and new SQL capabilities, forming a new information protection paradigm in SQL aimed at protecting the data, not just the database:

  • Discovery & recommendations – The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you an easy way to review and apply the appropriate classification recommendations via the Azure portal.

  • Labeling – Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL Engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.

  • Monitoring/Auditing – Sensitivity of the query result set is calculated in real time and used for auditing access to sensitive data (currently in Azure SQL DB only).

  • Visibility – The database classification state can be viewed in a detailed dashboard in the portal. Additionally, you can download a report (in Excel format) to be used for compliance & auditing purposes, as well as other needs.
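The labeling and visibility pieces can also be queried directly. Here is a hedged sketch, assuming an Azure SQL Database (or a later SQL Server build) that exposes the sys.sensitivity_classifications catalog view, run through dbatools' Invoke-DbaQuery with hypothetical server and database names:

# List every classified column along with its information type and sensitivity label.
$query = @"
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS table_name,
       c.name                   AS column_name,
       sc.information_type,
       sc.label
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o
    ON o.object_id = sc.major_id
JOIN sys.columns AS c
    ON c.object_id = sc.major_id
   AND c.column_id = sc.minor_id;
"@

Invoke-DbaQuery -SqlInstance 'myserver.database.windows.net' -SqlCredential (Get-Credential) -Database 'AppDb' -Query $query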

Check it out, especially with GDPR breathing down our necks.


Changing Int To Bigint

Danny Kruge shows one way to change a table’s identity value from integer to bigint:

The table was around 500GB with over 900 million rows. Based on the average number of inserts a day on that table, I estimated that we had eight months before inserts on that table would grind to a halt. This was an order entry table, subject to round-the-clock inserts due to customer activity. Any downtime to make the conversion to BIGINT was going to have to be minimal.

This article describes how I planned and executed a change from an INT to a BIGINT data type, replicating the process I used in a step by step guide for the AdventureWorks database. The technique creates a new copy of the table, with a BIGINT datatype, on a separate SQL Server instance, then uses object level recovery to move it into the production database.

There’s a way to do this without any downtime, though the trigger logic gets a little more complex and it does take longer.
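To make that zero-downtime variant concrete, here is a very compressed sketch (not Danny's exact process) using a hypothetical dbo.Orders table: create a BIGINT copy, keep it in sync with a trigger while the backfill runs, then swap names during a short window. Instance and database names are assumptions, and a production version also has to handle updates and deletes.

# New table with the widened key; remaining columns would mirror the original.
$newTable = @"
CREATE TABLE dbo.Orders_New
(
    OrderID   bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OrderDate datetime2 NOT NULL
    -- ...the remaining columns mirror dbo.Orders
);
"@

# Keep new inserts flowing into the copy while the backfill catches up.
$syncTrigger = @"
CREATE TRIGGER dbo.trg_Orders_Sync ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET IDENTITY_INSERT dbo.Orders_New ON;
    INSERT dbo.Orders_New (OrderID, OrderDate)
    SELECT OrderID, OrderDate FROM inserted;
    SET IDENTITY_INSERT dbo.Orders_New OFF;
END;
"@

Invoke-DbaQuery -SqlInstance 'SQL01' -Database 'AppDb' -Query $newTable
Invoke-DbaQuery -SqlInstance 'SQL01' -Database 'AppDb' -Query $syncTrigger
# Backfill dbo.Orders_New in batches, then swap the tables with sp_rename during a brief outage.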


Changing The Default Filegroup

Kenneth Fisher shows how you can change the default filegroup:

You know you can have multiple filegroups right? You might have a separate filegroup for the data (the clustered index & heaps) and another for the indexes (non-clustered indexes). Or maybe you want to separate your data tables from the system tables. There are any number of reasons why you might want to have multiple filegroups, however, there will always be a primary filegroup and it will always be the default if you don’t specify otherwise. Right? Wrong.

I’ve never seen a way to remove primary or to move the system objects in it. However, you can change the primary filegroup.

Having a separate filegroup for your tables and another for indexes (or splitting things up some other way) can help get a database back online faster, as you can restore the system tables first and then restore filegroups as needed.
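As a quick sketch of the command itself, assuming a hypothetical database AppDb that already has a filegroup named Indexes (the instance name is also an assumption):

$query = @"
-- See which filegroup is currently the default.
SELECT name, is_default FROM sys.filegroups;

-- Make the Indexes filegroup the default for newly created objects.
ALTER DATABASE [AppDb] MODIFY FILEGROUP [Indexes] DEFAULT;
"@

Invoke-DbaQuery -SqlInstance 'SQL01' -Database 'AppDb' -Query $query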


Backup And Restore With Move

John Morehouse has a script for database migration onto a differently configured server:

Now, not every environment is the same.  Instances get configured differently or things change just due to the nature of the business.  In a previous life I would routinely have to back up a database and restore it to another server.  However, the server I was using to restore to had a different drive configuration.  It happens.  Anyway, I wanted a script that would give me

  • A backup statement
  • A restore statement with the appropriate MOVE switch

This would allow me to easily execute the backup statement and then on the target server execute the restore statement.  Since the restore statement already had the appropriate MOVE switch provided, I didn’t have to manually compose the statement.

Click through for the script.
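John generates the T-SQL for you; here is a loose sketch of the same idea (not his script) that builds the MOVE clauses from sys.master_files. Instance names, target paths, and the simplistic data/log extension logic are all assumptions.

$db     = 'AdventureWorks'
$backup = 'C:\Backups\AdventureWorks.bak'

# Pull the logical file names for the database from the source instance.
$files = Invoke-DbaQuery -SqlInstance 'SourceSQL' -Database master -Query "
    SELECT name, type_desc
    FROM sys.master_files
    WHERE database_id = DB_ID(N'$db');"

$moves = foreach ($f in $files) {
    # Send log files to the log drive, everything else to the data drive (secondary files simplified).
    if ($f.type_desc -eq 'LOG') { "MOVE N'$($f.name)' TO N'L:\Log\$($f.name).ldf'" }
    else                        { "MOVE N'$($f.name)' TO N'D:\Data\$($f.name).mdf'" }
}

# Emit the restore statement ready to run on the target server.
"RESTORE DATABASE [$db] FROM DISK = N'$backup' WITH $($moves -join ', '), REPLACE;"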


Finding Long-Running Transactions

David Fowler has a dream:

It was 3am in the morning and I was asleep and enjoying a delightful dream (I knew it was a dream because I was surrounded by drifting clouds, singing angels and hundreds of softly humming SQL Servers where the hardware had been sensibly provisioned and all code carefully optimised) when I was rudely awoken by a Service Desk call informing me that the systems were unresponsive.  A quick check and I could see that everything was being blocked by a particular transaction.  My suspicion was that someone had run a script which had opened a transaction and then toddled off home without checking that either the script had finished or closed the transaction that it had opened.

My guess was right and killing the transaction got the cogs turning again.

Click through for the script.
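If you just want something to reach for at 3am, here is a hedged sketch along the same lines (not David's script) using the transaction DMVs; the one-hour threshold and instance name are assumptions:

$query = @"
SELECT s.session_id,
       s.login_name,
       s.host_name,
       at.transaction_begin_time,
       DATEDIFF(MINUTE, at.transaction_begin_time, GETDATE()) AS open_minutes
FROM sys.dm_tran_active_transactions AS at
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = at.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id
WHERE at.transaction_begin_time < DATEADD(HOUR, -1, GETDATE())
ORDER BY at.transaction_begin_time;
"@

# List sessions that have had a transaction open for more than an hour.
Invoke-DbaQuery -SqlInstance 'SQL01' -Database master -Query $query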


Maintain MSDB

Lori Brown points out that there are some SQL Server service tables which can bloat your msdb database:

I recently received a panicked call from a client who had a SQL instance go down because the server’s C drive was full. As the guy looked he found that the msdb database file was 31 GB and was consuming all of the free space on the OS drive causing SQL to shut down. He cleaned up some other old files so that SQL would work again but did not know what to do about msdb.

As we looked at it together I found that the sysmaintplan_logdetail table was taking all the space in the database. The SQL Agent had been set to only keep about 10000 rows of history but for some unknown reason the table never removed history. After consulting MSDN I found this code did the trick for truncating this table.

Lori’s focus here is on SQL Agent history, but don’t forget about things like backup history as well.
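A hedged sketch of the routine cleanup, assuming a 30-day retention, a hypothetical instance name, and a build of msdb that lets you call sp_maintplan_delete_log with only its @oldest_time parameter:

$query = @"
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());

-- Trim maintenance plan history (sysmaintplan_log / sysmaintplan_logdetail).
EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @cutoff;

-- Trim backup and restore history as well, since it bloats msdb just as easily.
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
"@

Invoke-DbaQuery -SqlInstance 'SQL01' -Database msdb -Query $query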


Mapping Server Audit Action IDs

Solomon Rutzky tries to reverse engineer the action_id values used in SQL Server audits:

This post is, for the most part, a continuation of Server Audit Mystery 1: Filtering class_type gets Error Msg 25713. In that post I was trying to filter on the class_type field / predicate source (i.e. the object, or whatever, that the event is on). The tricky part was that class_type is supposed to be filterable on string values such as U for “User Table”, P for “Stored Procedure”, etc. But, the reality is that it has to be filtered using numbers, and that list does not seem to be documented anywhere. Fortunately, I was able to find the list of possible numbers, so all is well with regards to class_type.

When it comes to the action_id field, however, we are not so lucky. It, too, should be filterable on the string codes that are returned by sys.database_audit_specification_details and sys.fn_get_audit_file. But, just like class_type, only numbers can be used for filtering. And, like class_type, that list of numbers does not seem to be documented anywhere. But, unlike class_type, I cannot find any list of numbers (in SQL Server) that corresponds to the available actions. I did find a list of statements by looking at Extended Events info (since Audits use Extended Events):

SELECT *
FROM   sys.dm_xe_map_values
WHERE  [name] = N'statement_type';

but it wasn’t the correct list. So, this means that I have to go hunting for these values the even-harder way…
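For reference, the string codes themselves (though not the numbers Solomon needs for filtering) are also visible in sys.dm_audit_actions; a quick sketch, with the instance name assumed:

$query = @"
SELECT action_id, name, class_desc, parent_class_desc
FROM sys.dm_audit_actions
ORDER BY action_id;
"@

# Returns the character action_id codes (e.g. 'SL', 'IN') with their descriptions.
Invoke-DbaQuery -SqlInstance 'SQL01' -Database master -Query $query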

Read on for some fun with binary tree search.


Connection Pooling And Slow Leaks

Warren Estes explains how connection pools work and troubleshoots a connection pooling issue:

When an application connects to a database it takes resources to establish that connection. So rather than doing this over and over again a connection pool is established to handle this functionality and cache connections. There are several issues that can arise if either the pool is not created with the same connection string (fragmentation), or if the connections are simply not closed/disposed of properly.

In the case of fragmentation, each connection string associated with a connection is considered part of one connection pool. If you create two connection strings with different database names, max pool values, timeouts, or security, then you will in effect create different connection pools. This is much like how query plans get stored in the plan cache: different white space or capital letters create different plans.

You can get the .NET pool counts from Performance Monitor > .NET data provider for SQL Server > NumberOfActiveConnectionPools.
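To watch fragmentation happen, here is a minimal sketch using two connection strings that differ only in their Max Pool Size setting; the server and database names are made up:

# Two connection strings that point at the same database but are not identical text,
# so ADO.NET builds a separate pool for each.
$cs1 = 'Server=SQL01;Database=AppDb;Integrated Security=True;Max Pool Size=100;'
$cs2 = 'Server=SQL01;Database=AppDb;Integrated Security=True;Max Pool Size=50;'

foreach ($cs in $cs1, $cs2) {
    $conn = New-Object System.Data.SqlClient.SqlConnection($cs)
    $conn.Open()      # opening creates (or reuses) the pool keyed by this exact string
    $conn.Close()     # Close/Dispose returns the connection to its pool
}

# NumberOfActiveConnectionPools for this process should now show two pools.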

Click through for more information.
