Press "Enter" to skip to content

Category: Administration

The SQLUndercover Inspector

Adrian Buckman announces the SQLUndercover Inspector:

In a Nutshell:

The SQLUndercover Inspector is a configurable daily report written in SQL that will send you an email (or log the report to a SQL table) showing you information about specific parts of SQL Server in HTML format, including highlighted warnings/advisory conditions. The report has configurable thresholds and settings, leaving you in control.

Click through to see what they track.

Expanding LVM Drives

David Klee shows how to expand an LVM drive on Linux:

Next in our SQL Server on Linux series is one important question. On Windows, if you’re about to run out of space, you get your VM admin / storage admin to expand one or more of your drives, and you go to Disk Management and expand the drive with no downtime. How do we accomplish this same task on Linux?

First, SSH into your VM. Get your appropriate system engineer to expand the drive that needs to be expanded. You won’t be able to see it at first in Linux because, just like in Windows, it’ll need to rescan the storage to ‘see’ the extra space. Sometimes Windows does it automatically, and sometimes you have to initiate it manually. In Linux it only does this on system startup.

Let’s grow our data drive from 250GB to 300GB first.
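
David walks through the exact commands; as a rough sketch of the usual flow (the device name, volume group, logical volume, and mount point below are placeholders, and this assumes the physical volume sits directly on the whole disk rather than on a partition):

    # Tell Linux to rescan the disk so it sees the expanded capacity (no reboot needed)
    echo 1 > /sys/class/block/sdb/device/rescan

    # Grow the LVM physical volume into the newly visible space
    pvresize /dev/sdb

    # Extend the logical volume across all free extents in the volume group
    lvextend -l +100%FREE /dev/vg_data/lv_data

    # Grow the filesystem online: xfs_growfs for XFS, resize2fs for ext4
    xfs_growfs /mnt/data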

Click through to see how to do that.

Filtering Alert Noise With A Leaky Bucket Algorithm

I have a post implementing a leaky bucket algorithm in T-SQL:

Now that we have a table, we want to do something with it.  The most naive solution would be to fire off an alert every time a row gets added to this table.  The problem with this solution is that it can flood our inbox with useless information.  Suppose the developers push out an API change that breaks everything.  The first 500 response will be important information.  The second one might be important because it’s confirmatory.  The six hundredth alert will not help.  And heaven help you if you’ve got this alert tied to your PagerDuty account…

So let’s only alert if we get to a certain threshold—we’ll call it 5 messages.  Fewer than 5 and we can assume it’s a blip or just somebody doing something weird.  The next part of the problem is, 5 messages per what?  We don’t want to alert for every fifth error message.  Let’s say we get one error message a year, so there was one in 2014, one in 2015, one in 2016, one in 2017, and now one in 2018.  If we simply set our threshold to report on every fifth error, we’ll get an alert in 2018, but most likely, the prior years’ errors are long gone by now, so that’s not helpful.
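
As a simplified illustration of the windowing idea (the table and column names here are invented, and the post itself builds a proper leaky bucket rather than this fixed window):

    -- Alert only once at least @Threshold errors have arrived within the window.
    -- dbo.ApiErrorLog and its columns are hypothetical.
    DECLARE @Threshold INT = 5;

    IF (SELECT COUNT(*)
        FROM dbo.ApiErrorLog
        WHERE LogDateUTC >= DATEADD(MINUTE, -30, SYSUTCDATETIME())) >= @Threshold
    BEGIN
        -- Send the alert here, e.g. via sp_send_dbmail.
        PRINT 'Alert: error threshold exceeded in the last 30 minutes.';
    END;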

Read on for the solution.  I’ve been quite happy with the solution in practice, as it has cut down the number of spurious alert e-mails to practically nil.

Deleting From Large Tables

Andy Galbraith tells a tale of a failed query and a lot of log file growths:

The App01 database was in SIMPLE recovery (when I find this in the wild, I always question it – isn’t point-in-time recovery important? – but that is another discussion).  The relevance here is that in SIMPLE recovery, LOG backups are irrelevant (and actually *can’t* be run) – once a transaction or batch completes, the LDF/LOG file space is marked available for re-use, meaning that *usually* the LDF file doesn’t grow very large.

A database in SIMPLE recovery growing large LDF/LOG files almost always means a long-running unit of work or an accidentally open transaction (a BEGIN TRAN with no subsequent COMMIT TRAN). Looking at the errors in the SQL Error Log from the previous night, the 9002 log-file-full errors stopped at 1:45:51am server time, which means the offending unit of work ended then, one way or another (crash or success).

Sure enough, when I filtered the XEvents event file to things with a duration > 100 microseconds and then scanned down to 1:45:00, I quickly saw the row shown above. Note this doesn’t mean the unit of work was excessively large in CPU/RAM/IO/etc. (and in fact the query errored out due to lack of LOG space), but the excessive duration forced all of the tiny units of work over the previous 105 minutes to persist in the transaction LDF/LOG file until this unit of work completed, preventing all of the LOG from that time from being marked for re-use until this statement ended.

It turns out that deleting millions of records in a single transaction is an expensive operation.
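
The usual mitigation is to delete in smaller batches, so each batch commits on its own and, in SIMPLE recovery, its log space becomes reusable between batches. A sketch with assumed table and column names:

    DECLARE @BatchSize INT = 10000;

    WHILE 1 = 1
    BEGIN
        -- dbo.BigTable and the date filter are placeholders.
        DELETE TOP (@BatchSize)
        FROM dbo.BigTable
        WHERE CreatedDate < '20170101';

        IF @@ROWCOUNT < @BatchSize BREAK;  -- final partial batch; we're done

        CHECKPOINT;  -- in SIMPLE recovery, encourages log truncation between batches
    END;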

Migrating A Memory-Optimized Database

Michael Bourgon notes that there’s an extra step when migrating a database with a memory-optimized filegroup from one server to another:

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it’d be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including “5(Access is denied.)”.
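
For context, attaching a database with a memory-optimized filegroup looks roughly like this (logical names and paths are placeholders). The memory-optimized container is a folder rather than a file, and the SQL Server service account needs filesystem permissions on it, which is a common source of the "Access is denied" error:

    CREATE DATABASE ImoltpDemo ON
        (NAME = ImoltpDemo_Data, FILENAME = 'S:\Data\ImoltpDemo.mdf'),
        (NAME = ImoltpDemo_Mod,  FILENAME = 'S:\Data\ImoltpDemo_Mod'),  -- container: a folder, not a file
        (NAME = ImoltpDemo_Log,  FILENAME = 'L:\Logs\ImoltpDemo.ldf')
    FOR ATTACH;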

Click through for the steps involved.

Database-Scoped Optimize For Ad Hoc Workloads

Joe Sack introduces a new database-scoped configuration option:

SQL Server provides the “optimize for ad hoc workloads” server-scoped option that is used to reduce the memory footprint of single use ad hoc batches and associated plans.  When enabled at the SQL Server instance scope, the “optimize for ad hoc workloads” option stores a reduced-memory compiled plan stub on the first execution of an ad hoc batch for any database on the instance. This server option has been available in SQL Server for several years now, but until recently there hasn’t been a way to enable this option in Azure SQL Database for individual databases.

We are now introducing a new database scoped configuration called OPTIMIZE_FOR_AD_HOC_WORKLOADS which enables this behavior at the database scope in Azure SQL Database.
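
For reference, the two scopes side by side (the database-scoped form is the one the post announces):

    -- Instance scope: the long-standing sp_configure option
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Database scope: the new Azure SQL Database configuration
    ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;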

I’m not sure if this will make it to the on-prem product, and if it does, I’m not sure how useful it would be in practice.  But it is good that we can use it in Azure SQL Database.

Azure SQL Analytics

Arun Sirpal gives an introduction to Azure SQL Analytics:

Please see the prerequisites section within this document; you must complete these steps, or you will not be able to use this feature: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-azure-sql#prerequisites

Once set up, it should take approximately 15 minutes to start capturing and rendering back some data. Don’t be surprised if it takes a little longer, as it did for me.

My biggest complaint is about the visuals; otherwise, this looks like the beginning of a solid monitoring solution within Azure SQL Database.

Fixing Orphaned Users In SQL Server

Eitan Blumin shares a couple of methods to fix orphaned users in SQL Server:

The most correct solution for this problem is to have consistent SIDs for your logins across all your SQL Servers, so that even when a database is moved to a different server, it can still use the same SID it was originally created with. Likewise, when you recreate a previously deleted login, you’d need to create it with the same SID it originally had.

This is, obviously, not a trivial matter, and not always possible.

But if this is a direction that interests you, then you will find the following very useful:

Read on for the best solution, as well as the second-best solution using sp_change_users_login.
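
For quick reference, the modern pattern looks something like this (user and login names are placeholders):

    -- Find SQL-authenticated database users whose SIDs match no server login:
    SELECT dp.name, dp.sid
    FROM sys.database_principals AS dp
    LEFT JOIN sys.server_principals AS sp
        ON dp.sid = sp.sid
    WHERE sp.sid IS NULL
        AND dp.type = 'S'                  -- SQL users
        AND dp.authentication_type = 1;    -- instance-authenticated (excludes users without login)

    -- Re-map an orphaned user to an existing login:
    ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];

    -- Or recreate the login with the SID the database user expects:
    -- CREATE LOGIN [SomeLogin] WITH PASSWORD = N'...', SID = 0x...;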

Fun With Meltdown And Spectre

Brent Ozar looks at some of the consequences of Meltdown and Spectre for SQL Server DBAs:

That’s because some test results have found big slowdowns when the operating system is patched for Meltdown and/or Spectre. These are big vulnerabilities in the processors themselves, and OS vendors are having to make big changes that aren’t tuned for performance yet. Early benchmarks yesterday were showing 30% drops in PostgreSQL performance, but thankfully newer benchmarks have been showing smaller drops. Red Hat’s benchmarks show 3-7% slower analytics workloads, and 8-12% slower OLTP.

Joey D’Antoni has more:

Will This Impact My Performance?

Probably, especially if you are running on virtual hardware. For workloads on bare metal, the security risk is much lower, so Microsoft is offering a registry option to not include the microcode fixes. Longer term, especially if you are audited or allow application code to run on your database servers, you will need to enable the microcode options.

This will likely get better over time as software patches that are better optimized to make fewer calls are released. Ultimately, this will need to be fixed on the hardware side, and we will need a new generation of hardware to completely solve the security issue with minimal impact.

Allan Hirt has even more:

There are two bugs which are known as Meltdown and Spectre. The Register has a great summarized writeup here – no need for me to regurgitate. This is a hardware issue – nothing short of new chips will eradicate it. That said, pretty much everyone who has written an OS, hypervisor, or software has (or will have) patches to hopefully eliminate this flaw. This blog post covers physical, virtualized, and cloud-based deployments of Windows, Linux, and SQL Server.

The fact every vendor is dealing with this swiftly is a good thing. The problem? Performance will most likely be impacted. No one knows the extent, especially with SQL Server workloads. You’re going to have to test and reset any expectations/performance SLAs. You’ll need new baselines and benchmarks. There is some irony here that it seems virtualized workloads will most likely take the biggest hit versus ones on physical deployments. Time will tell – no one knows yet.

This will have long-term ramifications.  We’ll deal with them like we’ve dealt with other issues in the past, but it does seem that, at least for now, there will be some performance hit from this.

Generating Passwords In T-SQL And PowerShell

Dave Mason shares a couple methods for generating good passwords:

There’s really nothing special there. On line 7 I specify how many characters long I want the password to be. I can run the code as many times as needed, or put it in a UDF or a loop if I want to get fancy.

Recently, though, I had the need to generate passwords outside of a T-SQL environment. I immediately went looking in the .NET Framework and, not at all to my surprise, I found something: the function System.Web.Security.Membership.GeneratePassword(). I did some initial testing in C#, then decided to proceed with a PowerShell script.

Click through for the scripts.
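
The .NET call Dave names is easy to try from PowerShell; a minimal sketch (the length and complexity arguments here are arbitrary):

    # Load System.Web (Windows PowerShell / .NET Framework; not available in .NET Core)
    Add-Type -AssemblyName System.Web

    # 16 characters total, at least 4 of them non-alphanumeric
    [System.Web.Security.Membership]::GeneratePassword(16, 4)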
