Understanding Lock Escalation

Kendra Little explains some of the rules behind lock escalation, including which locks don’t cause escalation:

Books Online has a good article about this, which explains a lot of the details about how many locks you need to take out to trigger lock escalation. Here are the (simplified) basics:

  • The ‘magic’ number to trigger escalation for the first time is 5,000 locks on a single table reference

  • Locks do NOT escalate from row level to page level. Row locks escalate to table. Page level locks also escalate to table level. In other words, forcing row level locking will not make it less likely to escalate locks to the table level, but rather it will do the opposite.

    • Note: for partitioned tables, you have the option to enable partition level escalation

  • If you’re modifying data, the escalated table lock will be exclusive. That means nobody else can party with the table if lock escalation succeeds while you’re doing your work.

Not all locks count, though, and Kendra has provided a test to show this.
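
Since SQL Server 2008, escalation behavior can be adjusted per table, including the partition-level option noted above. A minimal sketch, using a hypothetical dbo.Orders table:

    -- Default behavior: escalate to a full table lock
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = TABLE);

    -- On a partitioned table: allow escalation to the partition (HoBT) level instead
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

    -- Suppress escalation on this table in most circumstances
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);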

Deadlock Priority

Kenneth Fisher tells a story of a deadlock:

Why does it matter that they were system sessions? The important thing to remember here is that these sessions cannot be KILLed. So because they were holding locks on the database (and somehow, even though it was in single user mode, there were multiple sessions holding locks in the database; don’t ask me how), I wasn’t able to get that exclusive access I needed.

Interestingly, when I tried to do the ALTER, instead of just hanging I immediately got a deadlock error. I spent a little while trying various things and searching through forums before I went for help on Twitter using the #SQLHELP hashtag.

Read on for the answer, including how deadlock priorities saved the day.

Blocked Process Report: monitorLoop

Michael Swart explains what the “monitorLoop” attribute is on the blocked process report:

You won’t find too much explanation about that field in the official documentation, but I believe I know what it means.

The blocked process report is closely tied to deadlock detection, and it’s generated by the same process as the deadlock monitor. If you remember, the deadlock monitor runs frequently, looking for deadlocks (which are just blocking chains in a circle). It runs every couple of seconds when there are no deadlocks, and if it detects any, it runs a bit more frequently. Each run is called a monitor loop. The monitorLoop is just a number that starts at zero when the server restarts and increments by one each time the monitor runs.
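
As background, the report only fires once you tell SQL Server how long blocking must last before it’s worth reporting. A minimal setup sketch (the threshold is in seconds; the report itself is typically captured with an Extended Events session on the blocked_process_report event, or a trace):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Emit a blocked process report for any block lasting five seconds or more
    EXEC sp_configure 'blocked process threshold (s)', 5;
    RECONFIGURE;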

Read on for more details.

Understanding Deadlock Priority

Kenneth Fisher explains deadlock priority:

Everyone deals with deadlocks from time to time, but sometimes we need to control who’s the deadlock victim and who isn’t. For example, say I’m doing a big delete on a table in a 24×7 environment. I can’t afford downtime to do it, so I’m doing my delete in small chunks to reduce transaction size and blocking time. My delete needs to happen, but I’m in no hurry, and I really can’t afford to deadlock some other transaction. So how do I make sure?

Or on the other hand, I’m running an update that absolutely has to happen right now. It’s going to take a bit and I can’t afford the time for it to be started over. A deadlock would be a disaster. What do I do?

That’s where deadlock priority comes into play.
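
A sketch of both scenarios, using hypothetical table names. LOW, NORMAL, and HIGH map to -5, 0, and 5, any integer from -10 to 10 is valid, and when a deadlock occurs, the session with the lower priority is chosen as the victim:

    -- The chunked delete: volunteer to be the deadlock victim
    SET DEADLOCK_PRIORITY LOW;
    DELETE TOP (1000) FROM dbo.BigTable WHERE CreatedDate < '20160101';

    -- The critical update: prefer that the other session lose instead
    SET DEADLOCK_PRIORITY HIGH;
    UPDATE dbo.CriticalTable SET IsProcessed = 1 WHERE BatchId = 42;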

Click through for the explanation.

Lead Blockers

Kenneth Fisher talks about fullbacks:

Blocking is just part of life, I’m afraid. Because we have locks (and yes, we have to have them, and no, NOLOCK doesn’t avoid them), we will have blocking. Typically it’s going to be very brief and you won’t even notice it. But sometimes you get a query or two blocked for long enough to cause a problem. Even more rarely, you end up with a long chain of blocked sessions. Sessions 100, 101, and 102 are blocked by 67, which is blocked by 82, which is blocked by … Well, you get the idea. It can be very difficult to scan through all of those blocked sessions to find the root cause: the one or two sessions that are actually causing the problem. So to that end I’ve written the following query. Among other things, it will return any lead blockers, how many sessions are being blocked by each, and the total amount of time those sessions have been waiting. It will also give you the last piece of code run by that particular session, although be aware that won’t always tell you exactly what code caused the blocking.
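
This is not Kenneth’s script (click through for that), but a bare-bones sketch of the idea: find sessions that block others while not being blocked themselves, and count their direct victims:

    -- Lead blockers: sessions blocking others that are not themselves blocked
    SELECT r.blocking_session_id     AS lead_blocker,
           COUNT(*)                  AS sessions_directly_blocked,
           SUM(r.wait_time) / 1000.0 AS total_wait_seconds
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0
      AND r.blocking_session_id NOT IN
          (SELECT session_id
           FROM sys.dm_exec_requests
           WHERE blocking_session_id <> 0)
    GROUP BY r.blocking_session_id;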

Click through for the script.

Getting A Handle On Isolation Levels

Robert Bishop explains some of the basics behind transaction isolation levels:

Lost Updates

It is possible for two or more transactions to modify the same row. While a subsequent transaction is reading the data, the first transaction commits its changes. Then the subsequent transaction makes its own changes and commits, possibly losing the changes committed by the first transaction. In this scenario, the last transaction always wins.

Dirty Read

A dirty read occurs when a SELECT statement reads from a table that is currently being modified by a transaction that has not yet committed. By default, SQL Server protects against this at every isolation level except Read Uncommitted (or when you use NOLOCK in your queries).
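
A sketch of a dirty read in action, assuming a hypothetical dbo.Accounts table and two separate sessions:

    -- Session 1: modify a row but leave the transaction open
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;

    -- Session 2: read the uncommitted change (a dirty read)
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

    -- Session 1: roll back; session 2 has now read a balance that never existed
    ROLLBACK;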

This is the first in a series.

Solving Blocking Without Sysadmin

Michael Swart has a story on blocking due to an edge case scenario:

SQL Server was struggling to compile the procedure in time and the application wouldn’t let it catch its breath. The query optimizer was attempting to create statistics automatically that it needed for optimizing the query, but after thirty seconds, the application got impatient and cancelled the query.

So the compilation of the procedure was cancelled and this caused two things to happen. First, the creation of the statistics was cancelled. Second, the next session in line was allowed to run. But the problem was that the next session had already spent 28 seconds blocked by the first session and only had two seconds to try to compile a query before getting cancelled itself.

The frequent calls to the procedure meant that nobody had time to compile this query. And we were stuck in an endless cycle of sessions that wanted to compile a procedure, but could never get enough time to do it.

There are two important lessons here: how Michael solved the problem, and a reminder that plan cache entries depend upon specific application settings.
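
One common mitigation, though not necessarily the fix Michael used (click through for his approach), is to create the statistics yourself during a quiet period so that compilation never has to wait on auto-create:

    -- Hypothetical example: pre-create the statistics the optimizer wants,
    -- so compiling the procedure does not stall on auto-created statistics
    CREATE STATISTICS st_Orders_OrderDate ON dbo.Orders (OrderDate) WITH FULLSCAN;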

Optimistic Locking Via HTTP ETags

Kevin Sookocheff diagrams how to implement optimistic concurrency for a server that uses HTTP requests to handle resources like files:

A conditional request is a request that may be executed differently depending on the value of specific HTTP headers. These headers define the precondition that must be true before the server should execute the request. With respect to entity tags, we have two options for making requests conditional.

  1. If-Match: The request will succeed if the ETag of the remote resource is equal to the one listed in this header.
  2. If-None-Match: The request will succeed if the ETag of the remote resource is different from every ETag listed in this header.

By specifying the appropriate ETag and condition header, you can perform optimistic locking for concurrent operations on a resource. Let’s walk through an example of how this works in practice.
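
For instance, a conditional update might look like the following, with a hypothetical resource path and ETag value. If another client has changed the resource since you fetched it, the stored ETag no longer matches and the server refuses the write with 412 Precondition Failed, so the update never silently overwrites someone else’s change:

    PUT /documents/42 HTTP/1.1
    Host: example.com
    If-Match: "v2"
    Content-Type: application/json

    {"title": "Updated title"}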

Read on for more details.

NOLOCK, No Problem?

Arun Sirpal explains that NOLOCK not only takes locks, but also lets you read invalid data:

A Sch-S (schema stability) lock is taken. This is a lightweight lock; the only lock that can conflict with it is a Sch-M (schema modification) lock. This means that a NOLOCK query can actually block against, for example, an ALTER TABLE command.

I would lean heavily toward turning on Read Committed Snapshot Isolation instead of using NOLOCK in most environments. It’s something you’d need to test, but it does come with fewer bad ramifications.
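
Turning it on is a single database-level setting; a sketch, assuming a database named YourDatabase (the WITH ROLLBACK IMMEDIATE clause kicks out in-flight transactions so the ALTER can get the exclusive database access it needs):

    ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;

Readers then see the last committed version of each row instead of blocking or reading dirty data, at the cost of version store activity in tempdb.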

Locks In sp_configure

Kendra Little discusses the locks entry in sp_configure:

Each lock uses 96 bytes of memory. On the instance in question, 25,000 locks = 2,400,000 bytes.

That’s only 2.3 MB of memory devoted to locks. Even though 25K sounds like a lot, the memory footprint for that is pretty darn small.

I checked back with our questioner, and their instance has 32GB of memory. That’s a pretty small amount in the grand scheme of things (as of SQL Server 2014, Standard Edition can use up to 128GB of memory for the Buffer Pool), but 2.3 MB isn’t anything to worry about, percentage wise.
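
To run the same arithmetic against your own instance, something along these lines works:

    -- Count current locks and estimate their memory at roughly 96 bytes apiece
    SELECT COUNT(*)                  AS current_locks,
           COUNT(*) * 96 / 1048576.0 AS approx_memory_mb
    FROM sys.dm_tran_locks;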

Read on for advice if you’re seeing your SQL Server instance take a very large number of locks.
