Press "Enter" to skip to content

Category: Administration

SQL Server 2008 (And R2) Nearing End Of Support

The SQL Server team reminds us that those SQL Server 2008 and 2008 R2 instances are nearing end of support:

SQL Server 2008 and 2008 R2 have had a tremendous run. But all good things come to an end, right? On July 9, 2019, Microsoft will end Extended Support, which means no more updates or support of any kind, potentially leaving you vulnerable to security and compliance issues.

The good news is, you still have plenty of time and options to avoid any heartburn caused by the technology “circle of life.” And we’ll lay out all of those options for you in a webinar on July 12.

Non-contrarian opinion:  get those old SQL Server instances updated.  Life is so much better on the 2012 and later branch.

Contrarian opinion:  core-based licensing had such a major impact on businesses that five years from now, we’ll still see more SQL Server 2008 and 2008 R2 in the wild than we see 2005 today.

The Hidden Performance Costs Of Collation Mismatch

Nate Johnson explains why you want collation consistency when joining tables together on varchar/nvarchar columns:

There’s a subtle difference here, vs. those many community blog posts, which I’ll repeat.  The columns are of the same type.  Just different collations.

And when the collation on the join predicates is different, bad things happen. Let’s take CustomerNumber for example. On the ERP side, it’s an nvarchar(20) collate Latin1_General_100_CI_AS. On the internal & web apps side, it’s a varchar(20) collate SQL_Latin1_General_CP1_CI_AS. As you might imagine, this is a prime field for joining because it’s the main customer identifier throughout all the systems.

Let’s be clear here. This is a numeric value only. Did it need to support Unicode? Absolutely not. Should it have been an int or bigint? Probably. But did the ERP designers choose to make it a Unicode string anyway? Yep.

Read on to see how Nate tries to dig himself out of this hole.
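As a sketch of the shape of the problem (the table and column definitions below are invented for illustration, not pulled from Nate's post), the mismatch looks something like this, along with one way to make the collation handling explicit:

-- Hypothetical tables illustrating the mismatch.
CREATE TABLE dbo.ErpCustomer
(
    CustomerNumber nvarchar(20) COLLATE Latin1_General_100_CI_AS NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.AppCustomer
(
    CustomerNumber varchar(20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL PRIMARY KEY
);

-- Joining the two forces SQL Server to reconcile both the data types and the
-- collations. One option is to state the collation explicitly on one side:
SELECT e.CustomerNumber
FROM dbo.ErpCustomer AS e
    INNER JOIN dbo.AppCustomer AS a
        ON e.CustomerNumber = a.CustomerNumber COLLATE Latin1_General_100_CI_AS;

-- Even then, the conversion applied to a.CustomerNumber can keep the optimizer
-- from seeking on AppCustomer's index, which is the hidden cost in question.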

How Perfmon Memory Counters Fit Together

Lonny Niederstadt takes us through a tour of how various Perfmon memory counters relate:

Wading through all of the SQL Server memory-related perfmon counters to understand how they related to each other took me a really long time.  Time-series graphs that show the relationship help me tremendously, and when I started trying to account for SQL Server memory years ago I couldn’t find any.  So I started to blog some time-series graphs, under the theory that either my understanding was correct and my graphs would be helpful to someone… or they’d be wrong and someone would correct me.
Well… it’s been about 5 years and my graphs haven’t generated too much discussion, but they’ve really helped me 😀😀😀

Perfmon: SQL Server Database pages + Stolen pages + Free pages = Total pages
http://sql-sasquatch.blogspot.com/2013/09/perfmon-database-pages-stolen-pages.html

Working with SQL Server 2016 and some demanding ColumnStore batch mode workloads, I began to see suspicious numbers, and graphs that didn’t make sense to me.  Today I got pretty close to figuring it out so I wanted to share what I’ve learned.

The following graphs are from a 4×10 physical server running Windows and SQL Server.  Four sockets, 4 NUMA nodes.

For bonus points, Lonny traces down a problem where expectations aren’t meeting reality.
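If you prefer to pull the same counters from inside SQL Server rather than from Perfmon, they are exposed through sys.dm_os_performance_counters. Note that the counter names shifted after 2008 R2 (the page-based Buffer Manager counters gave way to KB-based Memory Manager counters), so treat the name list below as a sketch:

-- Buffer Manager / Memory Manager counters as seen from inside SQL Server.
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN
      (N'Database pages',             -- Buffer Manager
       N'Stolen pages',               -- Buffer Manager, 2008 R2 and earlier
       N'Free pages',                 -- Buffer Manager, 2008 R2 and earlier
       N'Total pages',                -- Buffer Manager, 2008 R2 and earlier
       N'Stolen Server Memory (KB)',  -- Memory Manager, 2012 and later
       N'Free Memory (KB)',           -- Memory Manager, 2012 and later
       N'Total Server Memory (KB)');  -- Memory Manager, 2012 and later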

Reading The Transaction Log

Nesha Maric shows us a couple of methods and a third-party tool for reading the SQL Server transaction log:

Update operations in SQL Server are not fully logged in the transaction log. Full before-and-after values, unfortunately, don’t exist; only the delta of the change for that record is recorded. For example, SQL Server may show a change from "H" to "M" when the record actually changed from "House" to "Mouse". To piece together the full picture, a process must be devised to manually reconstruct the history of changes, including the state of the record prior to the update. This requires painstakingly reconstructing every record from the original insert to the final update, and everything in between.

BLOBs are another challenge when trying to use fn_dblog to read transaction history. BLOBs, when deleted, are never inserted into the transaction log. So examining the transaction log won’t provide information about its existence unless the original insert can be located. But only by combining these two pieces of data will you be able to recover a deleted BLOB file. This obviously requires that the original insert exists in the active/online portion of the transaction log, the only part accessible to fn_dblog. This may be problematic if the original insert was done some weeks, months or years earlier and the transaction log has been subsequently backed up or truncated.

I’ve tried to avoid messing directly with the transaction log whenever possible, but there are scenarios where it’s the only place that has the information you need.
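If you do need to poke at the active log, fn_dblog is the usual entry point. It's undocumented and unsupported, so treat the following as a minimal sketch for test systems:

-- fn_dblog reads the active portion of the current database's transaction log.
SELECT [Current LSN],
       Operation,            -- e.g. LOP_INSERT_ROWS, LOP_MODIFY_ROW, LOP_DELETE_ROWS
       Context,
       [Transaction ID],
       AllocUnitName         -- the object the change touched, when resolvable
FROM fn_dblog(NULL, NULL)    -- NULL, NULL = no starting/ending LSN filter
WHERE Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS');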

Testing Network Speed Between Azure Regions

Dave Bermingham has some quick inter-region tests for Azure network performance:

This is the question I asked myself today and of course I couldn’t find this documented anywhere. I’m assuming there is no guarantee and it probably depends on current utilization, etc. If I’m wrong, someone please point me to the documentation that states the available speed. I primarily looked here and here.

So I set up two Windows 2016 D4s v3 instances, one in Central US and one in East US 2, which are paired regions.

If you don’t know what peering is, it essentially lets you easily connect two different Azure virtual networks. Peering is very easy to set up; just make sure you configure it from both Virtual Networks (I made that mistake at first). Once it is configured properly, it will look something like this.

Read on for Dave’s results.

Finding The Max Number Of Sequences You Can Have In SQL Server

Jon Shaulis looks into how many sequences you can have on a SQL Server instance:

I thought this was an interesting question, but it makes sense to have some concern about it. If you are using Sequences over identity inserts, this typically means you need more control over how those numbers are handled from creation to archive. So what if you need a lot of different Sequences stored on your server to handle your needs? Will you run out of Sequence objects that you can create?

This information is not intuitively simple to find; this is one of those times where you need to read various articles and connect the dots. Let’s start simple.

This is one of those cases where Swart’s 10% Rule comes into play.
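The short version, if I'm reading the capacity documentation correctly, is that a sequence is just another schema-scoped object and so counts toward the (enormous) per-database object limit. The sketch below simply shows where sequences live in the catalog:

-- A sequence is a regular schema-scoped object, so it shows up in sys.objects
-- (type SO = SEQUENCE_OBJECT) alongside tables, views, and procedures.
CREATE SEQUENCE dbo.DemoSequence AS bigint START WITH 1 INCREMENT BY 1;

SELECT o.name, o.type, o.type_desc, o.create_date
FROM sys.objects AS o
WHERE o.type = 'SO';

-- sys.sequences adds the sequence-specific metadata.
SELECT s.name, s.start_value, s.increment, s.current_value
FROM sys.sequences AS s;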

Updating SQL Agent Job Owners With dbatools

Stuart Moore gives us two methods of updating SQL Agent job owners, one using T-SQL and the other with dbatools:

Now we all know that having SQL Server Agent jobs owned by ‘Real’ users isn’t a good idea. But I don’t keep that close an eye on some of our test instances, so wasn’t surprised when I spotted this showing up in the monitoring:

The job failed. Unable to determine if the owner (OldDeveloper) of job important_server_job has server access (reason: Could not obtain information about Windows NT group/user 'OldDeveloper', error code 0x534. [SQLSTATE 42000] (Error 15404)).

Wanting to fix this as quickly and simply as possible I just wanted to bulk move them to our job owning account (let’s use the imaginative name of ‘JobOwner’).

Click through for both scripts.
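For a rough idea of what the T-SQL route looks like (this is my sketch, not Stuart's script), you can generate sp_update_job calls for every job owned by the offending login:

-- Build (and review before running) sp_update_job calls for every job
-- currently owned by OldDeveloper, reassigning them to JobOwner.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'EXEC msdb.dbo.sp_update_job @job_name = N'''
             + REPLACE(j.name, N'''', N'''''')
             + N''', @owner_login_name = N''JobOwner'';' + NCHAR(13) + NCHAR(10)
FROM msdb.dbo.sysjobs AS j
WHERE SUSER_SNAME(j.owner_sid) = N'OldDeveloper';   -- may need the DOMAIN\ prefix

PRINT @sql;                       -- eyeball the generated commands first
-- EXEC sys.sp_executesql @sql;   -- then run them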

Just Update Those Servers Already

Randolph West wants none of your excuses:

Folks, we all like to make sure we’re doing our level best to make things work smoothly.

So why am I staring at someone’s server that has never been updated since it was first set up almost three years ago?

Do better, so that I don’t have to yell at you. Seriously.

When we ignore updates, we are ignoring preventable catastrophic problems; we are ignoring fixes to security bugs, performance bugs, and data corruption bugs. Each one of these things could give you a really bad day. In two out of three cases it might even be a career-limiting move.

There are risks to patching servers, but the downside risk of not patching tends to be much larger, and administrators can mitigate update risks through redundancy, automation, and a rollback plan. It’s more work than not patching, but the outcome is much better.
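If you are not sure where an instance sits on the patch curve, SERVERPROPERTY will tell you. The update-level properties only exist on newer builds and return NULL elsewhere:

-- Quick patch-level check.
SELECT SERVERPROPERTY('ProductVersion')         AS product_version,   -- e.g. 13.0.5026.0
       SERVERPROPERTY('ProductLevel')           AS product_level,     -- RTM, SP1, SP2, ...
       SERVERPROPERTY('ProductUpdateLevel')     AS update_level,      -- CU designation, newer builds only
       SERVERPROPERTY('ProductUpdateReference') AS update_kb,         -- KB of the latest update, newer builds only
       SERVERPROPERTY('Edition')                AS edition;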

Missing KB2919355 When Installing SQL Server

Ryan Allport explains how to install SQL Server 2016 on Windows Server 2012 R2 when you get the 'Rule "KB2919355 Installation" failed' error message:

As you can see, the upgrade feature rules check failed around the KB2919355 installation. At this point, reading the error message, I assumed (I know, I know, it’s something we should never do as a DBA!) that the patch had been downloaded and applied during the latest round of Windows patching, and all that was required was a server reboot. I was wrong.

Upon running the upgrade again, I got the same error message. Hmm, annoying. So, after some Googling I was confident I knew what to do to resolve this: download and install the KB2919355 patch. So, I downloaded the patch from the official Microsoft website (KB2919355) and kicked off the installation.

There’s a bit more to it than “install the patch.”

Scripting Maintenance Mode Tasks

Jamie Wick shares some hard-earned knowledge regarding scripting out maintenance tasks using PowerShell:

Given that we have several hundred servers (and growing), this process is taking an increasing amount of time each month. Over the years we’ve implemented various automated patching systems (WSUS, IBM BigFix, etc.) and they’ve worked reasonably well for managing the Download & Install step. The pain point lately has become the first two steps (snapshots and maintenance mode). Both processes are simple to complete using the VCenter web-based user interface and SCOM console. The problem is the volume of button clicks it takes to complete the process for ALL of the servers. Using the standard (web) user interfaces, over an hour of the monthly maintenance window can be lost to just getting the snapshots and maintenance mode tasks completed. Extrapolate that out over a year and we’re looking at over 1.5 DAYS of work-time lost to getting the servers ready to START applying updates. That’s not a statistic we want to publish to senior management. So, how to fix (or minimize) the problem? The answer to which is: Script It.

Let’s take a look at how to use PowerShell to automate the snapshot and maintenance mode tasks.

Read on for sample scripts.
