Press "Enter" to skip to content

Category: Administration

Anticipating Disk Growth

Adrian Buckman has a script which gives you an idea of what would happen if your databases all grew by some factor overnight:

The other day I got thinking about what would happen if all databases on a single instance grew out, every single one of them! And not just once: what if they all grew out three, four or five times overnight – what would things look like?

Well, I know the likelihood may be slim, but wouldn’t it be nice just to see how many times things could grow before it all runs out of space?

I decided that for a bit of fun I would write a query to see what the drive space would look like: it would simulate database growth and then show what drive space would be left after the total growths specified.

It’s a good idea to anticipate this kind of activity, though based on the companies I’ve worked for in the past, the answer would be “run out of disk really fast.”
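If you want a feel for the shape of such a query before clicking through, here is a minimal sketch (not Adrian’s script) comparing free space per volume against a few simulated growth events per file; the growth count is a hypothetical parameter, and percent-based growth is approximated linearly rather than compounded:

  -- Rough sketch: free space per volume versus the space @Growths more
  -- autogrowth events per file would consume.  Percent-based growth actually
  -- compounds; this treats it linearly to keep the query simple.
  DECLARE @Growths int = 4;  -- hypothetical: simulate four growth events

  SELECT vs.volume_mount_point,
         vs.available_bytes / 1048576 AS free_mb,
         SUM(CASE WHEN mf.is_percent_growth = 1
                  THEN (mf.size * 8.0 / 1024) * (mf.growth / 100.0) * @Growths
                  ELSE (mf.growth * 8.0 / 1024) * @Growths
             END) AS simulated_growth_mb
  FROM sys.master_files AS mf
  CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
  WHERE mf.growth > 0  -- skip files with autogrowth disabled
  GROUP BY vs.volume_mount_point, vs.available_bytes;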


Migrating Database Files

Jeff Mlakar gives us three methods for migrating database files from one location to another:

The database will be unavailable during this operation so we need to notify our end users. Consider the ramifications if an application is using the database – we might want to stop application services or take some other custom action during the move.

Plan ahead before starting the job. Know what you are going to do before doing it. If you can test your method against a lab or development database that will help too.

Sound advice and technique.  Click through to see those three methods.
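For reference, one widely used approach (not necessarily one of Jeff’s three) is the offline ALTER DATABASE … MODIFY FILE route; a sketch with hypothetical database and file names:

  -- Take the database offline, repoint the file, move it, bring it back.
  ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;

  ALTER DATABASE SalesDB
      MODIFY FILE (NAME = SalesDB_Data, FILENAME = N'E:\SQLData\SalesDB.mdf');

  -- Physically copy/move SalesDB.mdf to E:\SQLData\ at the OS level, then:
  ALTER DATABASE SalesDB SET ONLINE;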


Initial Thoughts On dbachecks

Jess Pomfret has an initial use case for dbachecks:

Each check has one unique tag, which basically names the check, and then a number of other tags that can also be used to call a collection of checks.

For this example we are going to use several checks to ensure that we meet the following requirements:

  • Full backup once a week – using LastFullBackup
  • Differential backup once a day – using LastDiffBackup
  • Log backup every hour – using LastLogBackup

Since each of the three checks we want to run also has the LastBackup tag, we can use that to call the collection of checks at once.

Jason Squires looks at this for enterprise reporting:

This module was developed and designed to ensure you can see if you have the best settings/configurations set up on your SQL systems using PowerShell. There are three prerequisites required to load the module: Pester 4.3.1, PS Framework 0.9.10.23, and, as of this post, dbatools 0.9.207. However, the dbachecks team kindly built in a notification for you should those modules not be installed or need updating.

What I really love about this module is how you can utilize SQL CMS and view the results at an enterprise reporting level.

Shane O’Neill has a bit more:

Straight away, dbachecks gives you the option to include or exclude checks that you feel aren’t for you. If you only want to run a subset of the checks, then you can specify that.

The real beauty that I think dbachecks provides is that you are getting a wealth of checks for things that you may never have thought of checking or known how to check, while being able to add any personal tests as well.

Sounds like something for DBAs to check out.


Introducing dbachecks

Chrissy LeMaire announces that the dbatools team is onto something big:

dbachecks is a framework created by and for SQL Server pros who need to validate their environments. Basically, we all share similar checklists and mostly just the server names and RPO/RTO/etc change.

This module allows us to crowdsource our checklists using Pester tests. Such checks include:

  • Backups are being performed
  • Identity columns are not about to max out
  • Servers have access to backup paths
  • Database integrity checks are being performed and corruption does not exist
  • Disk space is not about to run out
  • All enabled jobs have succeeded
  • Network latency does not exceed a specified threshold

We currently provide over 80 checks.

Chrissy also shows you how to install dbachecks and explains the commands.

This is the biggest community-driven news since, well, dbatools…


Data Discovery And Classification In SQL Server

Gilad Mittelman explains how the SQL Information Protection (aka Data Discovery and Classification) process works in SQL Server and Azure SQL Database:

SQL Information Protection (SQL IP) introduces a set of advanced services and new SQL capabilities, forming a new information protection paradigm in SQL aimed at protecting the data, not just the database:

  • Discovery & recommendations – The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you an easy way to review and apply the appropriate classification recommendations via the Azure portal.

  • Labeling – Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL Engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.

  • Monitoring/Auditing – Sensitivity of the query result set is calculated in real time and used for auditing access to sensitive data (currently in Azure SQL DB only).

  • Visibility – The database classification state can be viewed in a detailed dashboard in the portal. Additionally, you can download a report (in Excel format) to be used for compliance & auditing purposes, as well as other needs.
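The labeling piece has a T-SQL surface as well: in Azure SQL Database (and later in SQL Server 2019) the classification metadata can be written and read directly. A minimal sketch with a hypothetical table and column (note that the SSMS wizard of that era persisted labels as extended properties instead):

  -- Hypothetical table/column: attach a label, then inspect what is stored.
  ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
      WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

  SELECT * FROM sys.sensitivity_classifications;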

Check it out, especially with GDPR breathing down our necks.


Changing Int To Bigint

Danny Kruge shows one way to change a table’s identity value from integer to bigint:

The table was around 500GB with over 900 million rows. Based on the average number of inserts a day on that table, I estimated that we had eight months before inserts on that table would grind to a halt. This was an order entry table, subject to round-the-clock inserts due to customer activity. Any downtime to make the conversion to BIGINT was going to have to be minimal.

This article describes how I planned and executed a change from an INT to a BIGINT data type, replicating the process I used in a step-by-step guide for the AdventureWorks database. The technique creates a new copy of the table, with a BIGINT data type, on a separate SQL Server instance, then uses object-level recovery to move it into the production database.

There’s a way to do this without any downtime, though the trigger logic gets a little more complex and it does take longer.
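Whatever the variant, the core is a parallel table with the widened key, backfilled in batches. A stripped-down sketch with hypothetical names (the article’s actual technique additionally involves a second instance and object-level recovery; assume dbo.Orders_New was created with OrderID as bigint IDENTITY):

  -- Copy rows in batches so the transaction log doesn't balloon.
  SET IDENTITY_INSERT dbo.Orders_New ON;

  DECLARE @Watermark bigint = 0;
  WHILE 1 = 1
  BEGIN
      INSERT dbo.Orders_New (OrderID, OrderDate, CustomerID)
      SELECT TOP (50000) OrderID, OrderDate, CustomerID
      FROM dbo.Orders
      WHERE OrderID > @Watermark
      ORDER BY OrderID;

      IF @@ROWCOUNT = 0 BREAK;
      SELECT @Watermark = MAX(OrderID) FROM dbo.Orders_New;
  END;

  SET IDENTITY_INSERT dbo.Orders_New OFF;
  -- Cutover: catch up rows inserted during the copy, then swap the tables
  -- (e.g., via sp_rename) inside a short maintenance window.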


Changing The Default Filegroup

Kenneth Fisher shows how you can change the default filegroup:

You know you can have multiple filegroups right? You might have a separate filegroup for the data (the clustered index & heaps) and another for the indexes (non-clustered indexes). Or maybe you want to separate your data tables from the system tables. There are any number of reasons why you might want to have multiple filegroups, however, there will always be a primary filegroup and it will always be the default if you don’t specify otherwise. Right? Wrong.

I’ve never seen a way to remove primary or to move the system objects in it. However, you can change the primary filegroup.

Having a separate filegroup for your tables and another for indexes (or splitting things up some other way) can help get a database back online faster, as you can restore the system tables first and then restore filegroups as needed.
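The mechanics are a one-liner once the filegroup exists; a sketch with hypothetical names:

  ALTER DATABASE SalesDB ADD FILEGROUP UserData;

  ALTER DATABASE SalesDB ADD FILE
      (NAME = SalesDB_UserData1, FILENAME = N'E:\SQLData\SalesDB_UserData1.ndf')
  TO FILEGROUP UserData;

  -- New tables and indexes now land here unless a filegroup is specified.
  ALTER DATABASE SalesDB MODIFY FILEGROUP UserData DEFAULT;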


Backup And Restore With Move

John Morehouse has a script for database migration onto a differently configured server:

Now, not every environment is the same.  Instances get configured differently or things change just due to the nature of the business.  In a previous life I would routinely have to back up a database and restore it to another server.  However, the server I was restoring to had a different drive configuration.  It happens.  Anyway, I wanted a script that would give me

  • A backup statement
  • A restore statement with the appropriate MOVE switch

This would allow me to easily execute the backup statement and then on the target server execute the restore statement.  Since the restore statement already had the appropriate MOVE switch provided, I didn’t have to manually compose the statement.

Click through for the script.
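The pair of statements it generates is shaped roughly like this (hypothetical paths; if you don’t know the logical file names, RESTORE FILELISTONLY will list them):

  BACKUP DATABASE SalesDB
      TO DISK = N'\\backupshare\SalesDB.bak'
      WITH COMPRESSION, CHECKSUM;

  -- On the target server, with its different drive layout:
  RESTORE DATABASE SalesDB
      FROM DISK = N'\\backupshare\SalesDB.bak'
      WITH MOVE N'SalesDB_Data' TO N'E:\SQLData\SalesDB.mdf',
           MOVE N'SalesDB_Log'  TO N'F:\SQLLogs\SalesDB_log.ldf',
           CHECKSUM;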


Finding Long-Running Transactions

David Fowler has a dream:

It was 3am and I was asleep, enjoying a delightful dream (I knew it was a dream because I was surrounded by drifting clouds, singing angels and hundreds of softly humming SQL Servers where the hardware had been sensibly provisioned and all code carefully optimised), when I was rudely awoken by a Service Desk call informing me that the systems were unresponsive.  A quick check and I could see that everything was being blocked by a particular transaction.  My suspicion was that someone had run a script which had opened a transaction and then toddled off home without checking that the script had either finished or closed the transaction it had opened.

My guess was right and killing the transaction got the cogs turning again.

Click through for the script.
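While you do, here is a rough sketch of the DMV query family this kind of script belongs to (not David’s actual script):

  -- Sessions with an open transaction, oldest first.
  SELECT s.session_id,
         s.login_name,
         s.host_name,
         at.transaction_begin_time,
         DATEDIFF(MINUTE, at.transaction_begin_time, SYSDATETIME()) AS open_minutes
  FROM sys.dm_tran_session_transactions AS st
  JOIN sys.dm_tran_active_transactions AS at
      ON at.transaction_id = st.transaction_id
  JOIN sys.dm_exec_sessions AS s
      ON s.session_id = st.session_id
  ORDER BY at.transaction_begin_time;

  -- Once you've confirmed the offender: KILL <session_id>;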


Maintain MSDB

Lori Brown points out that there are some SQL Server service tables which can bloat your msdb database:

I recently received a panicked call from a client who had a SQL instance go down because the server’s C drive was full. As the guy looked, he found that the msdb database file was 31 GB and was consuming all of the free space on the OS drive, causing SQL to shut down. He cleaned up some other old files so that SQL would work again but did not know what to do about msdb.

As we looked at it together I found that the sysmaintplan_logdetail table was taking all the space in the database. The SQL Agent had been set to only keep about 10000 rows of history but for some unknown reason the table never removed history. After consulting MSDN I found this code did the trick for truncating this table.

Lori’s focus here is on SQL Agent history, but don’t forget about things like backup history as well.
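The standard history-pruning procedures in msdb cover most of the usual suspects; a sketch (sp_delete_backuphistory and sp_purge_jobhistory are documented system procedures, while the sp_maintplan_delete_log parameter name here is from memory, so verify it before relying on it):

  USE msdb;

  DECLARE @Cutoff datetime = DATEADD(DAY, -30, GETDATE());

  EXEC dbo.sp_delete_backuphistory @oldest_date = @Cutoff;  -- backup/restore history
  EXEC dbo.sp_purge_jobhistory @oldest_date = @Cutoff;      -- SQL Agent job history
  EXEC dbo.sp_maintplan_delete_log @oldest_time = @Cutoff;  -- maintenance plan logs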
