Press "Enter" to skip to content

Author: Kevin Feasel

Automating Stats Maintenance With Azure SQL DW

Grant Fritchey shows how to create automated statistics maintenance for an Azure SQL Data Warehouse database:

NOTE: The most important habit you can start with in Azure is putting everything into discrete, planned Resource Groups. These make management so much easier.

Once the account is set, the first thing you need is to create a Runbook. There is a collection of them for your use within Azure. None of them are immediately applicable for what I need. I’m just writing a really simple PowerShell script to do what I want:

Runbooks are an important part of Azure maintenance, and this is a gentle introduction to them.
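
To give a sense of the shape of such a runbook, here’s a minimal sketch which refreshes statistics table-by-table over ADO.NET. The server, database, and credential asset names are placeholders, and Grant’s actual script will differ in its details:

    param (
        [string] $ServerName   = 'myserver.database.windows.net',  # placeholder
        [string] $DatabaseName = 'MyDataWarehouse'                 # placeholder
    )

    # Pull a stored credential from the Automation account's assets
    # (the asset name here is a placeholder)
    $cred = Get-AutomationPSCredential -Name 'SqlDwCredential'

    $conn = New-Object System.Data.SqlClient.SqlConnection
    $conn.ConnectionString = "Server=$ServerName;Database=$DatabaseName;" +
        "User ID=$($cred.UserName);Password=$($cred.GetNetworkCredential().Password);Encrypt=True;"
    $conn.Open()

    # Build the list of user tables
    $cmd = $conn.CreateCommand()
    $cmd.CommandTimeout = 0
    $cmd.CommandText = "SELECT QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
        FROM sys.tables t JOIN sys.schemas s ON t.schema_id = s.schema_id;"

    $tables = @()
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { $tables += $reader.GetString(0) }
    $reader.Close()

    # Refresh statistics on each table in turn
    foreach ($table in $tables) {
        $cmd.CommandText = "UPDATE STATISTICS $table;"
        [void] $cmd.ExecuteNonQuery()
    }

    $conn.Close()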


Right-Sizing Elastic Pools

Arun Sirpal points out a nice potential money-saving feature with Azure:

The recommendation is to set up a standard 50 eDTU pool. I am convinced that this pool is a new pricing tier. Even though the cost saving is small, it is still clever that it suggests this. I assume the analysis done in the background really does understand my utilization patterns; as we know, those patterns are absolutely crucial when using elastic pools, so it is definitely something to consider.

With the click of a button, the portal will create it for you.

It’s interesting that the feature can actually save you money rather than just telling you that you need to buy more expensive services.
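
If you’d rather script it than click the button, the AzureRM PowerShell module can create the same pool. This is just a sketch; the resource group, server, and pool names are placeholders:

    # Create a Standard 50 eDTU elastic pool; all names here are placeholders.
    Login-AzureRmAccount

    New-AzureRmSqlElasticPool `
        -ResourceGroupName 'MyResourceGroup' `
        -ServerName 'myazuresqlserver' `
        -ElasticPoolName 'RecommendedPool' `
        -Edition 'Standard' `
        -Dtu 50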


The Importance Of PowerShell

Kevin Hill explains why he’s advocating that DBAs learn PowerShell:

Two very solid reasons (there are others) that every DBA should be learning and using PowerShell:

1 – It’s very useful for admin at the O/S level.

At my current client I am team lead of System and SQL Admins, along with doing any of the work that comes our way. This means we need to be able to manage the modest server farm we have. It’s big enough that we can’t log onto every server every day, but small enough that nobody wants to buy a proper monitoring toolset. So…PS to the rescue!

Read on for the other reason. I think the relatively poor PowerShell tooling for SQL Server (compared to other products like Exchange) limited general acceptance, but they’ve made some big improvements over the past year, and there are some sharp minds in the community working to make PowerShell even more important for DBAs.
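
As a quick sketch of that first reason, here’s the sort of farm-wide check PowerShell makes easy without logging onto each box; the server names are placeholders:

    # Check free disk space across a list of servers from one console.
    $servers = 'SQL01', 'SQL02', 'SQL03'   # placeholder names

    foreach ($server in $servers) {
        Get-WmiObject -Class Win32_LogicalDisk -ComputerName $server -Filter 'DriveType = 3' |
            Select-Object @{ Name = 'Server'; Expression = { $server } },
                          DeviceID,
                          @{ Name = 'FreeGB'; Expression = { [math]::Round($_.FreeSpace / 1GB, 1) } }
    }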


ApplicationName On Invoke-SqlCmd2

Andy Levy improves the Invoke-SqlCmd2 cmdlet:

I decided to change this around so that it no longer uses string formatting, but instead a SqlConnectionStringBuilder. I had a couple reasons for this:

  • It will eliminate redundant code. There are several common elements in each of the ConnectionStrings above. If more complex logic is needed, there are potentially more copies of this ConnectionString kicking around.

  • It’s prone to copy/paste and other editing errors. If there’s a change that affects both versions of the ConnectionString and the developer just copies the line from one branch of the if statement to the other, code will be lost or invalid values will be substituted because of positioning.

This is something I’d like to see make it into the main cmdlet.
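
To give a flavor of the builder-based approach (a simplified sketch, not Andy’s exact code; the variable names are placeholders):

    # One builder; only the authentication pieces change between branches.
    $builder = New-Object System.Data.SqlClient.SqlConnectionStringBuilder
    $builder.DataSource      = $SQLInstance   # placeholder variables
    $builder.InitialCatalog  = $Database
    $builder.ApplicationName = 'Invoke-SqlCmd2'

    if ($Credential) {
        $builder.UserID   = $Credential.UserName
        $builder.Password = $Credential.GetNetworkCredential().Password
    }
    else {
        $builder.IntegratedSecurity = $true
    }

    $connectionString = $builder.ConnectionString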


Switching Instead Of Renaming Tables

Kendra Little has an interesting solution for when you need to swap out an old table for a new version:

This pattern works in SQL Server 2014 and higher. And it even works in Standard Edition of 2014.

Some folks will see the word ‘Switch’ in this pattern and assume the pattern that I’m suggesting is Enterprise Edition only for versions before SQL Server 2016 SP1.

However, oddly enough, you can use partition switching even in Standard Edition, as long as the tables only have one partition.

And all rowstore tables have at least one partition! That happens automagically when you create a table.

Read the whole thing.
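
Here’s a bare-bones sketch of the switch at the heart of the pattern; the table, server, and database names are placeholders, and Kendra’s post covers the full pattern, including the matching-schema requirements:

    # Swap dbo.Products for a freshly loaded dbo.Products_New. Both tables (and
    # the empty holding table dbo.Products_Old) must have identical schemas.
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDatabase' -Query '
        BEGIN TRAN;

        -- Move the current rows into the empty holding table
        ALTER TABLE dbo.Products SWITCH TO dbo.Products_Old;

        -- dbo.Products is now empty, so the new rows can switch in
        ALTER TABLE dbo.Products_New SWITCH TO dbo.Products;

        COMMIT;'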


Data Professional Survey Results

Brent Ozar has a roundup of blog posts concerning the data professional survey for 2017:

We asked to see your papers, and 2,898 people from 66 countries answered.

Download the raw data in Excel, and you can slice and dice by country, years of experience, whether you manage staff or not, education, and more.

Community bloggers have already started to analyze the results:

There were several entrants and some good posts, so check it out.


Finding Suspect Objects

Wayne Sheffield shows how to find which specific objects are suspect without running a full CHECKDB:

You’d really like to know what tables are affected without having to wait. Luckily(?), this corruption was recorded in msdb.dbo.suspect_pages, and having just recently read Paul Randal’s post here, we know we can use DBCC PAGE to determine this information. And, after having read my prior blog post, you know that we can automate DBCC PAGE, so we can use our new friend “WITH TABLERESULTS” to find out what objects have been corrupted.

The suspect_pages table, documented here, has three particular columns of interest: database_id, file_id and page_id. These correspond nicely to the first three parameters needed for DBCC PAGE. To automate this, we need to know what information we need to return off of the page – and from Paul’s post, we know that this is the field “METADATA: ObjectId”. For this code example, let’s assume that this corruption is on page 11 of the master database (just change “master” to the name of your 2TB database).

Read on for the script, including a version which checks all such suspect pages, as well as a possibly-better solution available as of SQL Server 2012.
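
Here’s a rough sketch of the technique; the instance name is a placeholder, and the page coordinates match the page-11-of-master example from the quote:

    # Find which object owns a suspect page via DBCC PAGE WITH TABLERESULTS.
    $page = Invoke-Sqlcmd -ServerInstance 'MyServer' -Query `
        'DBCC PAGE (master, 1, 11, 0) WITH TABLERESULTS;'

    # The page header rows include one whose Field is "Metadata: ObjectId"
    $objectId = ($page | Where-Object { $_.Field -eq 'Metadata: ObjectId' }).VALUE

    # Resolve the ID to a name, running inside the affected database
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'master' -Query `
        "SELECT OBJECT_SCHEMA_NAME($objectId) AS SchemaName, OBJECT_NAME($objectId) AS ObjectName;"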


Finding Transactions After A Crash

Paul Randal has a procedure which will find rolled-back transactions after a crash:

Then we can search in the transaction log, using the fn_dblog function, for LOP_BEGIN_XACT log records from before the crash point that have a matching LOP_ABORT_XACT log record after the crash point, and with the same transaction ID. This is easy because for LOP_BEGIN_XACT log records, there’s a Begin Time column, and for LOP_ABORT_XACT log records (and, incidentally, for LOP_COMMIT_XACT log records), there’s an End Time column in the TVF output.

And there’s a trick you need to use: to get the fn_dblog function to read log records from before the log clears (by the checkpoints that crash recovery does, in the simple recovery model, or by log backups, in other recovery models), you need to enable trace flag 2537. Now, if you do all this too long after crash recovery runs, the log may have overwritten itself and so you won’t be able to get the info you need, but if you’re taking log backups, you could restore a copy of the database to the point just after crash recovery has finished, and then do the investigation.

Read on for the code, as well as a test.
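
The idea condenses to something like this sketch; the instance, database, and crash time are placeholders, and I’m assuming fn_dblog’s time columns come back as strings in yyyy/mm/dd hh:mm:ss:fff format:

    # Find transactions which began before the crash point and rolled back after it.
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDatabase' -Query "
        DBCC TRACEON (2537);  -- see log records from before the log cleared

        -- Placeholder crash time, kept in fn_dblog's string format so the
        -- fixed-width comparison sorts chronologically
        DECLARE @CrashTime VARCHAR(30) = '2017/01/05 10:00:00:000';

        SELECT b.[Transaction ID], b.[Begin Time], a.[End Time]
        FROM fn_dblog(NULL, NULL) AS b
        JOIN fn_dblog(NULL, NULL) AS a
            ON a.[Transaction ID] = b.[Transaction ID]
        WHERE b.Operation = 'LOP_BEGIN_XACT'
          AND a.Operation = 'LOP_ABORT_XACT'
          AND b.[Begin Time] < @CrashTime
          AND a.[End Time]   > @CrashTime;"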


Lead Blockers

Kenneth Fisher talks about fullbacks:

Blocking is just part of life I’m afraid. Because we have locks (and yes, we have to have them, and no, NOLOCK doesn’t avoid them) we will have blocking. Typically it’s going to be very brief and you won’t even notice it. But sometimes you get a query or two blocked for long enough to cause a problem. Even more rarely you end up with a long chain of blocked sessions. Sessions 100, 101, and 102 are blocked by 67, which is blocked by 82, which is blocked by … Well, you get the idea. It can be very difficult to scan through all of those blocked sessions to find the root cause: that one or two session(s) actually causing the problem.

So to that end I’ve written the following query. Among other things it will return any lead blockers, how many sessions are actually being blocked by each, and the total amount of time those sessions have been waiting. It will also give you the last piece of code run by that particular session, although be aware that won’t always tell you exactly what code caused the blocking.

Click through for the script.
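
Kenneth’s script returns considerably more detail, but the heart of the lead-blocker search looks something like this sketch; the instance name is a placeholder:

    # A lead blocker is a session which blocks others but is not itself blocked.
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Query "
        SELECT r.blocking_session_id   AS lead_blocker,
               COUNT(*)                AS directly_blocked_sessions,
               SUM(r.wait_time) / 1000 AS total_wait_seconds
        FROM sys.dm_exec_requests AS r
        WHERE r.blocking_session_id <> 0
          AND r.blocking_session_id NOT IN
              (SELECT r2.session_id
               FROM sys.dm_exec_requests AS r2
               WHERE r2.blocking_session_id <> 0)
        GROUP BY r.blocking_session_id;"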
