Many of us have not made the effort to switch from Profiler to Extended Events. It’s 2018; if you haven’t found a few hours to learn about this incredibly powerful tool, I urge you to do so now.
I’m going to provide a quick means of tracking queries with Extended Events. This isn’t a comprehensive example, but I hope that it at least spurs some interest.
One of the main reasons we use Profiler is to quickly capture some real-time data. I’m going to not only show you how to do that with Extended Events, but this same session can also serve as a historical view, as it’s so easy to sift and filter through the data. (No, you don’t have to create a table for the result sets à la Profiler.)
Click through for step-by-step instructions.
When Microsoft introduced Extended Events (XE) in 2008, they also gave us a built-in XE session called system_health.
This is a great little tool. I mainly use it for troubleshooting deadlocks as it logs all the information for any deadlocks that occur. No more having to mess about making sure specific trace flags are enabled to ensure deadlock information is captured in the error log.
It also captures the SQL text and Session Id (along with other relevant data) in a number of other scenarios you may need to troubleshoot:
Where an error over severity 20 is encountered
Where a session has waited on a latch for over 15 seconds
Where a session has waited on a lock for over 30 seconds
Sessions that have encountered other long waits (the threshold varies by wait type)
Simply knowing what this session includes can give you a leg up on troubleshooting, especially when it’s a machine you haven’t seen before.
On your servers, any of them that are SQL Server 2008 or newer, right now, unless you’ve performed actions to prevent this, you’re running the Extended Events system_health session. It’s just happening, currently, on all your servers. Nothing you need to do about it at all. I’ll bet a lot of you never even knew it was there.
If you follow the link you can see all the various types of information being gathered by the Extended Events system_health session. I won’t detail all of it here. Let me just provide a little context around how the session works. First and foremost, similar to the error log, this session consists of four files, each 5 MB in size, rolling over as they get filled. For systems with a very high degree of activity, that means the information here may only be hours old. However, for most of us, this provides days’, if not weeks’, worth of information about the behavior of your system.
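You can read those rollover files directly with T-SQL. A minimal sketch, assuming the default file naming so the wildcard path resolves on your instance:

```sql
-- Read the system_health .xel files; each row is one event as XML.
-- The wildcard assumes the default file names in the instance's LOG directory.
SELECT
    CAST(event_data AS xml) AS event_xml,
    file_name,
    file_offset
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL);
```

From there you can shred the XML to pull out just the events you care about.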
The system_health Extended Events session misses a lot of stuff, but it’s quite useful when you don’t have a purpose-built monitoring solution in place.
I was troubleshooting an issue last week that led me to fire up Extended Events and look at records being written to the transaction log. I typed ‘Transaction’ into the search bar, hoping to find something that would do the trick, and didn’t quite find what I was looking for.
After a few more failed attempts I headed to the internet and found a post by Paul Randal describing exactly what I needed for this situation, using the [sqlserver].[transaction_log] event. Hold on, that’s exactly what I searched for. I ran the T-SQL within his blog post; the event session was successfully created and gave me the information I was looking for.
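Paul’s post has the full script; a rough sketch of that kind of session (the session name, database_id filter, and file path here are placeholders, not his exact code) looks like this:

```sql
-- Sketch only: capture transaction log records for one database.
-- Session name, database_id, and target path are placeholders.
CREATE EVENT SESSION [TrackLogRecords] ON SERVER
ADD EVENT sqlserver.transaction_log
(
    WHERE (sqlserver.database_id = 5)  -- filter to the database under investigation
)
ADD TARGET package0.event_file (SET filename = N'C:\Temp\TrackLogRecords.xel');
GO
ALTER EVENT SESSION [TrackLogRecords] ON SERVER STATE = START;
```

The transaction_log event is chatty, so scoping it to a single database (and stopping the session when you’re done) matters.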
But, as Jess points out, you can still get to it from the GUI. Read on to learn how.
Last year I wrote about Azure SQL Database extended events (https://blobeater.blog/2017/02/06/using-extended-events-in-azure/) and gave an example where I was capturing deadlocks via the ring buffer. Ever since then I wanted to do a follow-up post but using Azure storage as the target for my XEL files.
This is more complicated than using the ring buffer as the target and requires a couple of things:
Azure storage account where you create a dedicated container for the files.
Database master key.
Database scoped credential.
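The pieces fit together something like this sketch, with placeholder storage account and container names and a redacted SAS token:

```sql
-- 1. Master key to protect the credential.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

-- 2. Credential whose NAME must be the container URL.
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageacct.blob.core.windows.net/xe-container]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading ?>';

-- 3. Session targeting an event file in that container.
CREATE EVENT SESSION [DeadlockCapture] ON DATABASE
ADD EVENT sqlserver.database_xml_deadlock_report
ADD TARGET package0.event_file
    (SET filename = 'https://mystorageacct.blob.core.windows.net/xe-container/DeadlockCapture.xel');
```

Note that in Azure SQL Database, sessions are scoped ON DATABASE rather than ON SERVER.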
Also, fantastic obfuscation in the post. Hello, Shane!
This one isn’t bad, but imagine a multi-statement deadlock, or a server with several deadlocks in an hour – how do you easily see if there were other errors on the server at the same time?
With SQL Server 2012+, we have a better tool to see when deadlocks occur – and the deadlock graphs are saved by default, so we don’t have to read the text version to figure it out, or run a separate trace to capture them.
In SSMS, open Object Explorer and navigate to Extended Events > Sessions > system_health > package0.event_file. Double-click to view the data.
Click through for the entire process.
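If you’d rather skip the GUI, the same deadlock graphs can be pulled out of the system_health files with a query along these lines (a sketch, assuming the default file naming):

```sql
-- Extract saved deadlock graphs from system_health.
WITH events AS
(
    SELECT CAST(event_data AS xml) AS event_xml
    FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
)
SELECT
    event_xml.value('(event/@timestamp)[1]', 'datetime2')  AS event_time,
    event_xml.query('(event/data/value/deadlock)[1]')      AS deadlock_graph
FROM events
WHERE event_xml.value('(event/@name)[1]', 'varchar(50)') = 'xml_deadlock_report';
```

Saving a deadlock_graph result to a .xdl file lets you open it graphically in SSMS.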
Notice anything missing? Yeah, the first operator, the SELECT operator (technically, not really an operator, but they don’t have any name or designation in the official documentation, so I’m calling them operators). It’s not there. Why do I care?
Because it’s where all the information about the plan itself is stored. Stuff like Cached Plan Size, Compile Time, Optimizer Statistics Usage, and Reason for Early Termination is all there: properties and details about the plan itself. Now, the weird thing is, if you look at the XML, as shown here, all that data is available:
Read on for Grant’s best guess as to the root cause of the problem.
The AlwaysOn_health event session in Extended Events is intended to make analyzing problems with Availability Groups possible after they have occurred. While this event session goes a long way towards making it possible to piece together the puzzle of what went wrong in a lot of situations, it can still be a difficult task. One of the things I wish Microsoft had included in the AlwaysOn_health event session definition is the sqlserver.server_instance_name action for every event, and this is something that I usually recommend clients add to each of their AG servers using a script after I work with them the first time.

Why would this be useful, since if the files come from a specific server we should know the events are for that server, right? Well, when we are working with AG configurations with more than two servers and trying to see the big picture of what is happening across the servers, it can be difficult to follow timelines with multiple files from multiple servers open. It’s not impossible, but it does make things more difficult.
Click through to see how to do this through the UI or via T-SQL.
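One wrinkle worth knowing before you click: you can’t attach an action to an existing event in place; you drop the event from the session and re-add it with the action. A sketch for a single event (the same pattern repeats for each event in the session):

```sql
-- Actions can't be appended in place: drop the event, then re-add it
-- with the sqlserver.server_instance_name action attached.
ALTER EVENT SESSION [AlwaysOn_health] ON SERVER
DROP EVENT sqlserver.availability_replica_state_change;

ALTER EVENT SESSION [AlwaysOn_health] ON SERVER
ADD EVENT sqlserver.availability_replica_state_change
(
    ACTION (sqlserver.server_instance_name)
);
```

With the server name stamped on every event, files from several replicas can be merged into one timeline without losing track of who logged what.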
So why do people keep using traces? We compiled a list of reasons from Erin Stellato’s “Why do YOU avoid Extended Events?” And this list is LONG!
- Traces are straightforward and less complex than Extended Events
Totally seems that way!
- Traces provide a consistent interface for mixed environments
Whether you use SQL Server 7 or SQL Server 2017, the interface is pretty much the same.
- Traces are faster to set up
Just open up Profiler, connect to a server, click a few times and you’re set.
- “Extended Events are more efficient for the SQL Server engine, but not more efficient for the DBA”
Love this quote.
- People already have a library of Profiler templates
- Ignorance of XML / Querying all the generated XML is outrageous
When I first saw what it takes to query Extended Events, I bailed immediately. I am not learning XPATH, ever.
- Templates work remotely across all instances
This is also true for Extended Events, but the commenter did not know that.
- XEvents are persistent and must be stopped manually
Once you close Profiler or restart SQL Server, all non-default traces will disappear. Extended Events will persist until you delete them.
- Ability to import PerfMon data and look at Trace and PerfMon counter data at the same time
Most people who use Profiler don’t seem to know about this feature, but those who do LOVE it. You can read more in Brad McGehee’s Correlating SQL Server Profiler with Performance Monitor.
Microsoft reportedly has no plans to provide this functionality.
- Consistent user experience across SSAS and Database Engine
Gotta take their word, I don’t use SSAS.
- It’s easy to train others to use Profiler
Imagine – if it’s easier to learn Profiler, it’ll be far easier to teach.
- Traces can be easily replayed
There are a number of Microsoft tools to replay traces, but none to replay Extended Events.
- MS Premier Support still asks for traces
Likely because they also have tools that they want to work across all supported versions, which still includes SQL Server 2008 R2.
- xe_file_target_read_file is a CPU hog
This wasn’t listed on Erin’s page but was told to me while I was performing my research.
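For context on the XML complaint above, here’s the sort of query it takes just to see the statements a session captured to the ring buffer (a sketch; the session name is a placeholder, and it assumes the session collects the sqlserver.sql_text action):

```sql
-- Shred the ring buffer target of a running session to get timestamps
-- and captured SQL text. 'MySession' is a placeholder name.
SELECT
    xed.event_xml.value('(@timestamp)[1]', 'datetime2')                        AS event_time,
    xed.event_xml.value('(action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM
(
    SELECT CAST(t.target_data AS xml) AS ring_xml
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = 'MySession'
      AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.ring_xml.nodes('//RingBufferTarget/event') AS xed(event_xml);
```

It works, but you can see why “I am not learning XPath, ever” resonates.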
Whew! That’s a lot of compelling reasons not to make the switch. So let’s see how we can address each of them using PowerShell. All code listed here can be found at sqlps.io/xecode.
Chrissy & co can’t solve all of them, but they do solve a majority. Click through to see how.
If you have browsed XEvents to any extent, you should probably be familiar with at least one map object that Microsoft has given us. That particular object is sys.dm_xe_map_values, and I wrote a bit about it here. That object presents some good information about various data values within XEvents that are delivered with the event payload, as well as some hidden gems that can be extra useful for exploring SQL Server internals. The point is, maps are not necessarily a new concept.
While the concept of a map is not new within XEvents, the implementation in this case is a tad different. You see, the dm_xe_map_values object is more like an EAV object while the map I will introduce today is more of an ordinary lookup table. Let’s take a look.
The map I want to concentrate on for now is the sys.trace_xe_event_map table. You read that correctly – it is a table. We don’t have very many “tables” when dealing with XEvents, but this one happens to be one. As Microsoft Docs describes it, the table “contains one row for each Extended Events event that is mapped to a SQL Trace event class.”
Click through for a script which shows how to map them, as well as a couple of interesting points.
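The basic shape of that mapping query is simple enough to sketch: join the map table back to sys.trace_events to get human-readable trace event names next to their XEvents equivalents.

```sql
-- For each SQL Trace event class, show the matching Extended Events event.
SELECT
    te.name          AS trace_event,
    xem.package_name,
    xem.xe_event_name
FROM sys.trace_xe_event_map AS xem
JOIN sys.trace_events AS te
    ON xem.trace_event_id = te.trace_event_id
ORDER BY te.name;
```

This is a handy starting point when translating an old Profiler template into an Extended Events session.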