So how does this talk of AGs pertain to this T-SQL Tuesday topic? It should be pretty obvious: we need to periodically grab all of the .xel files generated by the cluster and move them to a different directory with a different retention policy. Yup… we need to back up these files. Sometimes we need to be backing up things other than the databases themselves.
I created a PowerShell script that takes a few parameters, moves the files from the source directory to the destination directory, and then deletes files from the destination directory that are over x days old.
Wayne goes into more detail, including permissions required to run the script.
Right here would be a good time to point out that your mileage may vary. Some deprecated items are more painful than others, and some may have been on the deprecation list for 10+ years at this point. The point is, know your environment and then use good judgement to determine which items in your results need the most attention to fix and update. Why? Because things really may break, especially if you are looking to upgrade to a new version of SQL Server. Just because an item is still available in your current edition does not ensure it will still be available in a future release of SQL Server.
Now for the juicy stuff. As I mentioned, when it comes to finding when and where a deprecated feature or bit of syntax is employed, there are a couple of neat little events within Extended Events that can help track each time a deprecated feature is accessed. How do we do that? Use the deprecation_announcement and deprecation_final_support events. To help create sessions to track these events, I have the following script to create an “audit” session to do exactly that.
Click through for Extended Event scripts and demo code.
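If you just want the general shape of such a session before clicking through, here is a minimal sketch (not Jason's exact script; the session name and file target path are placeholders):

```sql
-- Sketch: audit every use of a deprecated feature via Extended Events.
-- Session name and .xel path are placeholders, not from the original post.
CREATE EVENT SESSION [DeprecationAudit] ON SERVER
ADD EVENT sqlserver.deprecation_announcement
(
    ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.client_app_name)
),
ADD EVENT sqlserver.deprecation_final_support
(
    ACTION (sqlserver.sql_text, sqlserver.database_name, sqlserver.client_app_name)
)
ADD TARGET package0.event_file
(
    SET filename = N'DeprecationAudit.xel'
)
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [DeprecationAudit] ON SERVER STATE = START;
```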
This has been something I've wanted to investigate for a while now. I knew you could use Profiler and set up server-side traces to capture long-running events, but I was curious how to do the same with Extended Events. I then came across this post from Pinal Dave ( b | t ) that pointed me in the right direction. I followed along with the guidelines he was suggesting but had trouble finding the “Duration” filter. It turns out I had a bit too much selected in my filtering options, or perhaps the Wizard was giving me fits; once I selected just the Batch Completed and RPC Completed events, I could see and set the Duration filter. The one change I'd make to Dave's script is to set the duration to 5,000,000, because Duration in SQL 2012 is in microseconds, not milliseconds, and to start with I only want queries longer than 5 seconds.
Click through for the script.
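As a rough idea of what the finished session looks like, here is a hedged sketch built from the events and filter described above (the session name and file path are assumptions; 5,000,000 microseconds equals 5 seconds):

```sql
-- Sketch: capture batches and RPC calls that run 5 seconds or longer.
-- Session name and file path are placeholders.
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username)
    WHERE duration >= 5000000   -- microseconds
),
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name, sqlserver.username)
    WHERE duration >= 5000000
)
ADD TARGET package0.event_file
(
    SET filename = N'LongRunningQueries.xel'
)
WITH (STARTUP_STATE = ON);
```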
This past weekend, after one of my presentations on Analysis Services at SQL Saturday 520, an attendee described a performance issue in his environment that he thought could be related to locking and blocking and asked if there was a way to confirm if this was the cause. The short answer is yes, absolutely!
While I wouldn’t say this is a common issue across *all* Analysis Services environments, it’s definitely common amongst the subset of environments that do intra-day processing, which is when the SSAS database is being processed during hours of the day when users might be simultaneously querying it.
This is a very interesting article. I’d never thought about SSAS taking locks, but it makes perfect sense.
Here’s an example for DBCC CHECKDB on a corrupt database. Remember from the last post that in this scenario, control never passes to the CATCH block. So we’ll need to check the Event Session data after END CATCH. You can also run this as a single batch in SSMS, but you’ll need a corrupt database to get similar results. As before, replace “2016” with your SPID.
There are a lot of working parts to this, so read the scripts carefully if you’re interested in implementing something similar yourself.
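To make those moving parts a bit more concrete, here is a simplified sketch of the pattern (not the author's exact scripts): an error_reported session filtered to one SPID, a TRY/CATCH around DBCC CHECKDB, and a query against the ring buffer after END CATCH. The session name and database name are hypothetical.

```sql
-- Sketch only: session name, database name, and the SPID value are placeholders.
-- Replace the session_id filter value (2016 in the post) with your own SPID.
CREATE EVENT SESSION [ErrorCapture] ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text)
    WHERE sqlserver.session_id = 2016
)
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION [ErrorCapture] ON SERVER STATE = START;
GO

BEGIN TRY
    DBCC CHECKDB (N'CorruptDB') WITH NO_INFOMSGS;   -- hypothetical corrupt database
END TRY
BEGIN CATCH
    -- With a corrupt database, CHECKDB's errors never transfer control here,
    -- which is exactly why the event session is needed.
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
END CATCH;

-- After END CATCH: pull whatever error_reported events the session captured.
SELECT
    evt.value('(@timestamp)[1]', 'datetime2')                        AS event_time,
    evt.value('(data[@name="error_number"]/value)[1]', 'int')        AS error_number,
    evt.value('(data[@name="message"]/value)[1]', 'nvarchar(max)')   AS error_message
FROM
(
    SELECT CAST(t.target_data AS xml) AS target_data
    FROM sys.dm_xe_sessions AS s
    JOIN sys.dm_xe_session_targets AS t
        ON t.event_session_address = s.address
    WHERE s.name = N'ErrorCapture'
      AND t.target_name = N'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes('RingBufferTarget/event') AS x(evt);
```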
With the release of SQL Server 2016 there is another default session that gets installed. This session is probably the most difficult to find, because it is a private, hidden session. In addition, it is not present by default with SQL Server 2016 RTM, though it was present by default in various CTP versions. The session is specific to the new Stretch Database feature and is called rdaxesession.
Breaking down this particular session reveals pretty easily what feature the session is designed to support. The name rdaxesession breaks down to the following: rda = remote data archive, and the rest is self-explanatory. If I investigate the metadata for the session, I can discover exactly which events are traced in the session.
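If you want to poke at that metadata yourself, a couple of straightforward queries against the Extended Events views will do it (a sketch; whether the hidden session shows up in each view can vary by build):

```sql
-- Events wired into any running session whose name matches rdaxesession.
SELECT s.name AS session_name,
       e.event_name
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_events AS e
    ON e.event_session_address = s.address
WHERE s.name LIKE N'%rdaxesession%';

-- The stored definition, if it is exposed in the catalog views.
SELECT es.name    AS session_name,
       ese.package AS event_package,
       ese.name    AS event_name
FROM sys.server_event_sessions AS es
JOIN sys.server_event_session_events AS ese
    ON ese.event_session_id = es.event_session_id
WHERE es.name LIKE N'%rdaxesession%';
```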
The third part of the invitation was to write this. I covered what I did, and some of what I learned. I’ll add a bit more here.
I certainly was clumsy working with XE, and despite working my way through the course, I realize I have a lot of learning to do in order to become more familiar with how to use XE. While I got a basic session going, depending on when I started it and what I was experimenting with, I sometimes found myself with events that never went away, such as a commit or rollback with no corresponding opening transaction.
This T-SQL Tuesday was a bit broader in scope, so it has been interesting watching people respond.
So how do we handle the scenario where the server is rebooted?
- Option 1: always remember to restart the trace after server reboots
- Option 2: create a SQL Agent job to poll for the SSAS service status and start the xEvent trace if it's not already running
- Option 3: write a custom .NET watchdog service to poll for the SSAS service status and start the xEvent trace if it's not already running
Those are the options I’ve used or seen used in the past… and to be sure, all of them have their drawbacks in reliability and/or complexity.
…which is why I was so excited when it was brought to my attention that there is an “AutoRestart” option for SSAS xEvents!
Do read the whole thing.
I have to be a little honest here. Prior to somebody asking how they could possibly achieve a statement audit via Extended Events, I had not considered it as a tool for the job. I would have relied on Audit (which is Extended Events related) or some home-grown set of triggers. In this particular request, Audit was not fulfilling the need and custom triggers were not an option. Another option might have been the purchase of third-party software, but there are times when the budget does not allow for nice, expensive, shiny software.
So, with a little prodding, I hopped into the metadata and poked around a bit to see what I could come up with to achieve this low-budget audit solution.
Read the whole thing.
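For a sense of the general shape a low-budget statement audit could take (this is a sketch, not Jason's solution), an Extended Events session along these lines captures completed statements against a single database; the session name, database name, and file path are placeholders:

```sql
-- Sketch: audit completed statements hitting one database and keep them in
-- .xel files for later review. All names below are hypothetical.
CREATE EVENT SESSION [StatementAudit] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.username,
            sqlserver.client_app_name, sqlserver.client_hostname)
    WHERE sqlserver.database_name = N'AuditedDatabase'
),
ADD EVENT sqlserver.sp_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.username,
            sqlserver.client_app_name, sqlserver.client_hostname)
    WHERE sqlserver.database_name = N'AuditedDatabase'
)
ADD TARGET package0.event_file
(
    SET filename = N'StatementAudit.xel'
)
WITH (STARTUP_STATE = ON);
```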
In the following script, I have tried to accomplish just that – a single script to create the entire XML parser for me, for you, and for anybody wishing to use it. I don’t want to have to remember the subtle nuances of how to parse each of the events each time I need to parse the session data. I want something that is quick, easy, and repeatable.
With all of that said, here is the script that I now use to parse my session data. You should notice that it has been simplified and is more extensive now.
Jason also has sample usage. Check this out for sure.
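For readers who just want the gist of the technique such a parser automates, here is a minimal sketch that reads a session's .xel files with sys.fn_xe_file_target_read_file and shreds the event XML with XQuery (the file name pattern is a placeholder):

```sql
-- Sketch: read .xel files for a session and pull a few common columns out of
-- the event XML. The file pattern below is a placeholder.
WITH raw_events AS
(
    SELECT CAST(event_data AS xml) AS event_xml
    FROM sys.fn_xe_file_target_read_file(N'StatementAudit*.xel', NULL, NULL, NULL)
)
SELECT
    event_xml.value('(event/@name)[1]', 'nvarchar(128)')                       AS event_name,
    event_xml.value('(event/@timestamp)[1]', 'datetime2')                      AS event_time,
    event_xml.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)') AS sql_text
FROM raw_events;
```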