Category: Administration

Deleting From Large Tables

Andy Galbraith tells a tale of a failed query and a lot of log file growths:

The App01 database was in SIMPLE recovery (when I find this in the wild, I always question it – isn’t point-in-time recovery important? – but that is another discussion).  The relevance here is that in SIMPLE recovery, LOG backups are irrelevant (and actually *can’t* be run) – once a transaction or batch completes, the LDF/LOG file space is marked available for re-use, meaning that *usually* the LDF file doesn’t grow very large.

A database in SIMPLE recovery growing large LDF/LOG files almost always means a long-running unit of work or accidentally open transactions (a BEGIN TRAN with no subsequent COMMIT TRAN) – looking at the errors in the SQL Error Log the previous night, the 9002 log file full errors stopped at 1:45:51am server time, which means the offending unit of work ended then one way or another (crash or success).

Sure enough, when I filtered the XEvents event file to things with a duration > 100 microseconds and then scanned down to 1:45:00, I quickly saw the row shown above. Note this doesn’t mean the unit of work was excessively large in CPU/RAM/IO/etc. (and in fact the query errored out due to lack of LOG space), but the excessive duration made all of the tiny units of work over the previous 105 minutes have to persist in the transaction LDF/LOG file until this unit of work completed, preventing all of the LOG from that time from being marked for re-use until this statement ended.

It turns out that deleting millions of records in a single transaction is an expensive operation.
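For reference, the usual workaround is to delete in smaller batches so each batch commits on its own and, in SIMPLE recovery, its log space can be reused before the next batch starts. A minimal sketch – the table name, column, and retention filter below are placeholders, not anything from Andy's post:

```sql
-- Sketch: batched delete so the log never has to hold one giant transaction.
DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable                                       -- placeholder table
    WHERE CreatedDate < DATEADD(YEAR, -2, SYSDATETIME());   -- placeholder filter

    IF @@ROWCOUNT < @BatchSize
        BREAK;      -- last (partial or empty) batch is done

    CHECKPOINT;     -- in SIMPLE recovery, helps the log truncate between batches
END;
```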

Migrating A Memory-Optimized Database

Michael Bourgon notes that there’s an extra step when migrating a database with a memory-optimized filegroup from one server to another:

So, I was trying to get an in-memory database moved from one server to another (long story, involving IMOLTP melting down on me and resulting in a 2 terabyte log file).

I thought it’d be simple: copy the files over, along with the Filestream folders, and attach.  Nope!  Various errors, including “5(Access is denied.)”.

Click through for the steps involved.
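For context, attaching a database with a memory-optimized filegroup means listing the filestream container (a folder, not a file) in the CREATE DATABASE … FOR ATTACH statement, and a “5(Access is denied.)” error during attach usually points to the SQL Server service account lacking permissions on the copied files and folders. A rough sketch with placeholder names and paths, not Michael’s actual steps:

```sql
-- Sketch: attach a database that has a memory-optimized filegroup.
-- All names and paths are placeholders.
CREATE DATABASE [Imoltp] ON
    (NAME = N'Imoltp_data', FILENAME = N'D:\Data\Imoltp.mdf'),
    FILEGROUP [Imoltp_mod] CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = N'Imoltp_mod', FILENAME = N'D:\Data\Imoltp_mod')   -- a folder, not a file
LOG ON
    (NAME = N'Imoltp_log', FILENAME = N'L:\Log\Imoltp_log.ldf')
FOR ATTACH;
-- If this still fails with "Access is denied", grant the SQL Server service
-- account full control on the data/log files and the Imoltp_mod folder tree.
```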

Database-Scoped Optimize For Ad Hoc Workloads

Joe Sack introduces a new database-scoped configuration option:

SQL Server provides the “optimize for ad hoc workloads” server-scoped option that is used to reduce the memory footprint of single use ad hoc batches and associated plans.  When enabled at the SQL Server instance scope, the “optimize for ad hoc workloads” option stores a reduced-memory compiled plan stub on the first execution of an ad hoc batch for any database on the instance. This server option has been available in SQL Server for several years now, but until recently there hasn’t been a way to enable this option in Azure SQL Database for individual databases.

We are now introducing a new database scoped configuration called OPTIMIZE_FOR_AD_HOC_WORKLOADS which enables this behavior at the database scope in Azure SQL Database.

I’m not sure if this will make it to the on-prem product, and if it does, I’m not sure how useful it would be in practice.  But it is good that we can use it in Azure SQL Database.
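For comparison, here is the long-standing server-scoped option next to the database-scoped configuration Joe describes (run the latter in the target Azure SQL Database itself):

```sql
-- Server-scoped option (on-prem SQL Server; affects every database on the instance):
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
RECONFIGURE;

-- Database-scoped equivalent described in the post (Azure SQL Database):
ALTER DATABASE SCOPED CONFIGURATION SET OPTIMIZE_FOR_AD_HOC_WORKLOADS = ON;
```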

Azure SQL Analytics

Arun Sirpal gives an introduction to Azure SQL Analytics:

Please see the prerequisites section within this document – YOU MUST do this else you will not be able to use this feature. https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-azure-sql#prerequisites

Once set up, it should take approximately 15 minutes to start capturing and rendering back some data. Don’t be surprised if it does take a little longer, as was the case for me.

My biggest complaint is about the visuals; otherwise, this looks like the beginning of a solid monitoring solution within Azure SQL Database.

Fixing Orphaned Users In SQL Server

Eitan Blumin shares a couple of methods to fix orphaned users in SQL Server:

The most correct solution for this problem is to have consistent SIDs for your Logins across all your SQL Servers, so that even when a database is moved to a different server, it could still use the same SID that it was originally created for. And also, when you recreate a previously deleted Login, you’d need to create it with the same SID that it originally had.

This is, obviously, not a trivial matter, and not always possible.

But if this is a direction that interests you, then you will find the following very useful:

Read on for the best solution, as well as the second-best solution using sp_change_users_login.
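For a rough idea of what’s involved, here is a minimal sketch: detect users whose SID has no matching login, remap an orphan to an existing login, or recreate a login with its original SID. The user name, login name, password, and SID below are all placeholders:

```sql
-- 1. Find database users whose SID has no matching server login:
SELECT dp.name AS user_name, dp.sid
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp
    ON sp.sid = dp.sid
WHERE dp.type IN ('S', 'U', 'G')        -- SQL, Windows, and Windows-group users
  AND dp.sid IS NOT NULL
  AND dp.name NOT IN (N'guest', N'dbo', N'sys', N'INFORMATION_SCHEMA')
  AND sp.sid IS NULL;

-- 2. Remap an orphaned user to an existing login (the modern replacement
--    for sp_change_users_login):
ALTER USER [SomeUser] WITH LOGIN = [SomeLogin];

-- 3. Or recreate a deleted login with its original SID so the mapping holds
--    across servers:
CREATE LOGIN [SomeLogin]
    WITH PASSWORD = N'<strong password here>',
    SID = 0x0123456789ABCDEF0123456789ABCDEF;   -- placeholder for the original SID
```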

Fun With Meltdown And Spectre

Brent Ozar looks at some of the consequences of Meltdown and Spectre for SQL Server DBAs:

That’s because some test results have found big slowdowns when the operating system is patched for Meltdown and/or Spectre. These are big vulnerabilities in the processors themselves, and OS vendors are having to make big changes that aren’t tuned for performance yet. Early benchmarks yesterday were showing 30% drops in PostgreSQL performance, but thankfully newer benchmarks have been showing smaller drops. Red Hat’s benchmarks show 3-7% slower analytics workloads, and 8-12% slower OLTP.

Joey D’Antoni has more:

Will This Impact My Performance?

Probably – especially if you are running on virtual hardware. For workloads on bare metal, the security risk is much lower, so Microsoft is offering a registry option to not include the microcode fixes. Longer term, especially if you are audited or allow application code to run on your database servers, you will need to enable the microcode options.

This will likely get better over time as software patches are released that are better optimized to make fewer calls. Ultimately, this will need to be fixed on the hardware side, and we will need a new generation of hardware to completely solve the security issue with a minimum impact.

Allan Hirt has even more:

There are two bugs which are known as Meltdown and Spectre. The Register has a great summarized writeup here – no need for me to regurgitate. This is a hardware issue – nothing short of new chips will eradicate it. That said, pretty much everyone who has written an OS, hypervisor, or software has (or will have) patches to hopefully eliminate this flaw. This blog post covers physical, virtualized, and cloud-based deployments of Windows, Linux, and SQL Server.

The fact every vendor is dealing with this swiftly is a good thing. The problem? Performance will most likely be impacted. No one knows the extent, especially with SQL Server workloads. You’re going to have to test and reset any expectations/performance SLAs. You’ll need new baselines and benchmarks. There is some irony here that it seems virtualized workloads will most likely take the biggest hit versus ones on physical deployments. Time will tell – no one knows yet.

This will have long-term ramifications.  We’ll deal with them like we’ve dealt with other issues in the past, but it does seem that, at least for now, there will be some performance hit from this.

Generating Passwords In T-SQL And Powershell

Dave Mason shares a couple of methods for generating good passwords:

There’s really nothing special there. On line 7 I specify how many characters long I want the password to be. I can run the code as many times as needed, or put it in a UDF or a loop if I want to get fancy.

Recently, though, I had the need to generate passwords outside of a T-SQL environment. I immediately went looking in the .NET Framework, and none to my surprise, I found something: the function System.Web.Security.Membership.GeneratePassword(). I did some initial testing in C#, then decided to proceed with a PowerShell script.

Click through for the scripts.
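As a rough illustration of the T-SQL side (this is a minimal sketch, not Dave’s script), CRYPT_GEN_RANDOM can drive a simple character picker:

```sql
-- Sketch: build a password by picking random characters from a pool.
DECLARE @Length int = 16;       -- desired password length
DECLARE @Pool varchar(70) =
    'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*';
DECLARE @Password varchar(128) = '';

WHILE LEN(@Password) < @Length
BEGIN
    -- Two cryptographically random bytes become an index into the pool.
    SET @Password += SUBSTRING(@Pool,
        (CAST(CRYPT_GEN_RANDOM(2) AS int) % LEN(@Pool)) + 1, 1);
END;

SELECT @Password AS GeneratedPassword;
```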

DBA Morning Health Checks

Patrick Keisler has a process which walks through some morning health checks for SQL Server instances:

A few weeks ago, a customer asked me to help develop a way to improve their morning routine of checking the health of each SQL Server. This daily task would typically take about an hour for the DBA to complete. The solution I provided him reduced that task to under one minute.

The DBA needed me to answer these questions for each SQL Server:

1. What is the uptime of each SQL Server?
2. What is the status of each database?
3. What is the status of each Availability Group?
4. What is the backup status of each database?
5. What is the available disk space?
6. Are there any SQL Agent failed jobs in the last 24 hours?
7. What errors appeared in the SQL errorlog in the last 24 hours?

In addition, the customer asked to display the results using the typical stop light colors (red, yellow, and green) to represent the condition of each server.

Click through for more details, as well as a link to Patrick’s GitHub repo which hosts the script.
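For a rough sense of the kind of queries involved (not Patrick’s script), a few of those checks map directly onto simple DMV and msdb queries:

```sql
-- Instance uptime:
SELECT sqlserver_start_time,
       DATEDIFF(HOUR, sqlserver_start_time, SYSDATETIME()) AS uptime_hours
FROM sys.dm_os_sys_info;

-- Database status:
SELECT name, state_desc, recovery_model_desc
FROM sys.databases;

-- SQL Agent jobs that failed in the last 24 hours:
SELECT j.name AS job_name, h.run_date, h.run_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
    ON h.job_id = j.job_id
WHERE h.step_id = 0        -- job-outcome rows only
  AND h.run_status = 0     -- 0 = failed
  AND msdb.dbo.agent_datetime(h.run_date, h.run_time) >= DATEADD(HOUR, -24, SYSDATETIME());
```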

Finding Open Transactions

Andy Mallon has a stored procedure to find open transactions:

I’m fussy. Paul’s example script is awesome, but I want to tweak it to meet my needs. I also want to wrap it into a stored procedure so that I can put it in my DBA database, and have it installed on every server I manage. In addition to the columns Paul pulls back, I want to find out:

  • More session details: host name, application name, etc.
  • More transaction info: transaction state, duration, etc.
  • Filter out very short transactions: I want to filter out the noise for regular activity

There’s also a transaction state that I exclude. Transaction State 3 means “The transaction has been initialized but has not generated any log records.” A transaction in this state won’t affect log reuse. However, it could affect the size of the version store in TempDb if you’re using Read Committed Snapshot Isolation. I haven’t run into this as a problem, so I filter it out.

I used dbo.Check_Blocking as a base to create another check script for open transactions.

Click through to see what Andy’s procedure offers and also for a link to get the script.
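For a rough idea of the underlying DMVs (a minimal sketch, not Andy’s procedure), the core join looks like this:

```sql
-- Sessions with an open transaction, how long it has been open, and its state:
SELECT s.session_id,
       s.host_name,
       s.program_name,
       tx.transaction_begin_time,
       DATEDIFF(SECOND, tx.transaction_begin_time, SYSDATETIME()) AS open_seconds,
       tx.transaction_state            -- 3 = initialized but no log records yet
FROM sys.dm_tran_session_transactions AS st
JOIN sys.dm_tran_active_transactions AS tx
    ON tx.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = st.session_id
WHERE tx.transaction_state <> 3                                         -- ignore no-log transactions
  AND tx.transaction_begin_time < DATEADD(SECOND, -10, SYSDATETIME());  -- skip very short ones
```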

Simulating Network Latency

John Paul Cook shows how to use WANem to simulate network latency in a Hyper-V environment:

Access WANem from either SQL Server virtual machine using a case-sensitive URL that includes WANem’s IP address. In this example, the URL is http://99.99.99.99/WANem. Inside the SQL Server virtual machines, I set the browser’s start page to the WANem home page. Create a delay of 1000 msec and retest SQL Server to SQL Server connectivity.

It looks like a good way of proving out whether your setup can handle extreme latency before you build it for real.
