Command shell steps in SQL Agent jobs only process the first command. (Yeah, I know, it's the title of the post, so this wasn't much of a surprise. You'll live.)
In order to prove this, I created a job with a series of steps, each one a different test. I'm using two very simple commands: whoami, which tells you who is running the shell (very useful if you aren't certain), and dir, which I will assume everyone knows.
Kenneth ran through several tests to see what would trigger the “first command” rule and it turns out that pretty much everything does.
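One standard workaround, if you genuinely need multiple commands in a single CmdExec step, is to chain them with cmd's & operator so the step receives one command line. A minimal dbatools sketch, where the instance and job names are placeholders and the job is assumed to exist:

```powershell
# The '&' chaining is plain cmd.exe behavior: both commands run because
# the CmdExec step receives them as a single command line.
New-DbaAgentJobStep -SqlInstance 'SQLPROD1' -Job 'Shell test' `
    -StepName 'whoami then dir' -Subsystem CmdExec `
    -Command 'whoami & dir C:\temp'
```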
To rule statistics in or out as a factor, I want to UPDATE STATS on all the tables in my query at a specific time – sometime the day before we expect our slowdown to occur. I also want to be able to see easily how long the process ran and the duration of the update for each table. I could write all of this to a table, and maybe I will do that later, but viewing it from job history is what I want right now – it is easy to look at and easy for everyone else to find and see as well.
Creating a job with multiple steps can be a bit painful and tedious if the job is going to have A LOT of steps. With PowerShell, a server name, and a database, I can dynamically create this job.
One of many reasons to have at least a little bit of PowerShell knowledge if you are a SQL Server DBA.
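As a rough sketch of what that dynamic creation can look like with dbatools (instance, database, and job names are placeholders, and parameter names are from memory rather than from the post):

```powershell
# One T-SQL step per table, so job history shows each table's duration.
$instance = 'SQLPROD1'
$database = 'MyDatabase'
$job = New-DbaAgentJob -SqlInstance $instance -Job "UpdateStats - $database"

$stepId = 1
foreach ($table in Get-DbaDbTable -SqlInstance $instance -Database $database) {
    New-DbaAgentJobStep -SqlInstance $instance -Job $job.Name -StepId $stepId `
        -StepName "Stats: $($table.Schema).$($table.Name)" `
        -Subsystem TransactSql -Database $database `
        -Command "UPDATE STATISTICS [$($table.Schema)].[$($table.Name)];" `
        -OnSuccessAction GoToNextStep | Out-Null
    $stepId++
}
# The final step should quit with success rather than go to the next step,
# so flip its on-success action afterwards.
```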
Recently, at a client, I was challenged to create a stored procedure that would process a tabular model. This stored procedure would then be executed from a web application. The process behind it: a user enters data into a web application, which gets written to a database. That data then needs to be surfaced immediately in reports, with additional calculations and measures along the way. Therefore the tabular model, which does all the additional calculations and measures, needs to be processed by a user from the web application.
Click through for the script.
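The script itself is behind the link; for comparison, if you start from PowerShell rather than a stored procedure, the SqlServer module can kick off the same kind of refresh. A minimal sketch, with the server and model names as placeholders:

```powershell
# Assumes the SqlServer module (with its Analysis Services cmdlets) is installed.
Import-Module SqlServer
Invoke-ProcessASDatabase -Server 'SSASTAB01' -DatabaseName 'SalesModel' -RefreshType Full
```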
What you don't see is how you make the job step succeed or fail. With most commands in modules, errors will still cause the step to fail because of the way they report failure (some kind of throw, or a Stop condition outside your control). But if you want the step to fail on errors that would normally just show red on the screen while execution continues, remember that the default ErrorAction is Continue: even though you get an error, PowerShell will just keep going.
Read on for more.
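A minimal sketch of the pattern, with a placeholder instance and procedure: make the errors terminating so the Agent step actually reports failure.

```powershell
# Default ErrorAction is Continue, so non-terminating errors would let the
# step finish 'successfully'; Stop turns them into terminating errors.
$ErrorActionPreference = 'Stop'
try {
    Invoke-Sqlcmd -ServerInstance 'SQLPROD1' -Query 'EXEC dbo.DoNightlyWork;'
}
catch {
    Write-Error $_   # surface the message in the job step output
    throw            # rethrow so the Agent step is marked failed
}
```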
On the plane home from PASS Summit, I was sat next to someone who had also attended, and when he saw on my laptop that I was part of the SQL community, we struck up a conversation. He asked me how he could compare SQL Agent jobs across availability group replicas to ensure that they were the same.
He already knew that he could use Copy-DbaAgentJob from dbatools to copy the jobs between replicas, and we discussed how to set up an Agent job to accomplish this. The best way to run an Agent job with a PowerShell script is described here.
Maybe I’m out in la-la land, but wouldn’t this be a perfect use case for MSX/TSX?
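Either way, a hedged dbatools sketch of the compare-then-copy idea (replica names are placeholders):

```powershell
# Compare job names across two replicas, then copy anything missing.
$primaryJobs   = (Get-DbaAgentJob -SqlInstance 'SQLREPLICA1').Name
$secondaryJobs = (Get-DbaAgentJob -SqlInstance 'SQLREPLICA2').Name

$missing = Compare-Object $primaryJobs $secondaryJobs |
    Where-Object SideIndicator -EQ '<=' |   # jobs on replica 1 only
    Select-Object -ExpandProperty InputObject

if ($missing) {
    Copy-DbaAgentJob -Source 'SQLREPLICA1' -Destination 'SQLREPLICA2' -Job $missing
}
```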
It’s usually a good idea to write the output of your SQL Agent jobs to a file, so you can investigate should any issue occur.
But when you define the output file, you have to choose between appending the output to the same file over and over or overwriting it – and overwriting defeats the purpose, IMHO.
On the other hand, if you forget to roll over the files, they can grow quite large and then finding any error can become a nightmare.
So some time ago, I wrote a stored procedure that rolls the files for me and places them sorted, so it's easy to find any particular date.
This is a clever solution, but read through to the bottom for a warning.
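The original is a stored procedure; as a sketch of the same rollover idea in PowerShell, with the log path as an assumption:

```powershell
# Sweep each Agent output file into a folder named for its last write date,
# so files stay small and any given day is easy to find.
$logDir = 'E:\AgentLogs'
Get-ChildItem -Path $logDir -Filter '*.txt' -File | ForEach-Object {
    $dated = Join-Path $logDir $_.LastWriteTime.ToString('yyyy-MM-dd')
    if (-not (Test-Path $dated)) {
        New-Item -ItemType Directory -Path $dated | Out-Null
    }
    Move-Item -Path $_.FullName -Destination $dated
}
```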
This solution runs sp_WhoIsActive every minute and saves the output into a global temp table. From there, I look for any processes that have been running for more than the low threshold setting. Any processes that have not already been identified and stored get logged, output to an HTML table, and sent in an email alert.
Next, I take a second look at the table for anything that’s been running longer than the high threshold. If a second email alert has not been sent for these processes, we output the same data and send the email. If two alerts have already been sent for these processes, I don’t do anything else at the moment. One of the next updates to this script will send an alert to our DevOps notification system for anything running longer than some final threshold (or maybe just the high threshold).
I particularly like this part about not re-alerting over and over for a long-running query. It’s a relatively minor part of the whole solution, but it gets annoying watching the same e-mail come in every 5 minutes, especially if there’s nothing you can (or at least want to) do about the cause.
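The post does this in T-SQL; here's the gist of the "alert at most twice" bookkeeping sketched in PowerShell, with every name hypothetical:

```powershell
# $longRunners stands in for rows already past the threshold (e.g. filtered
# sp_WhoIsActive output with a session_id column).
$longRunners = @()
$alertCounts = @{}   # session_id -> alerts sent so far

foreach ($proc in $longRunners) {
    $sent = [int]$alertCounts[$proc.session_id]
    if ($sent -lt 2) {   # first and second sighting only
        Send-MailMessage -SmtpServer 'smtp.example.com' `
            -From 'sql@example.com' -To 'dba@example.com' `
            -Subject "Long-running session $($proc.session_id)" `
            -Body ($proc | Out-String)
        $alertCounts[$proc.session_id] = $sent + 1
    }
}
```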
Today I ran into something on a client server that I unfortunately see too often: the DBA goes through the trouble of configuring and setting up alerts/operators but doesn't really understand what the options in the configuration mean. So, unfortunately, they take the CYA (cover your ass) approach and check all of them. I have seen this not only with alerts but also with things like security configurations. My advice is to always take a second and research what each option does before you check the little boxes, especially when it comes to security. Always follow the rule that less is more.
In the example below, the administrator enabled alerts for an operator using the CYA approach: they checked e-mail, pager, and net send.
The E-mail option is probably the only interesting one anymore; if you need paging, integrating with something like PagerDuty (or one of its competitors) is the better call nowadays.
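If you provision operators in code, a minimal dbatools sketch (instance and address are placeholders) that enables e-mail and nothing else:

```powershell
# E-mail only; pager and net send stay unconfigured.
New-DbaAgentOperator -SqlInstance 'SQLPROD1' -Operator 'DBA Team' `
    -EmailAddress 'dba@example.com'
```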
So here are the steps that I use to schedule my tasks:
Create a Windows-based Login in SQL Server
Ensure dbatools is available to the account
Create a SQL Server Credential
Create the Agent Proxy
Create the PowerShell .ps1 file
Create the Job and Job Step
Chrissy walks you through step by step, making the whole thing easy.
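Condensed into a dbatools sketch, with every name a placeholder and the parameter set from memory rather than from Chrissy's post:

```powershell
# The credential wraps the Windows account, the proxy exposes it to Agent,
# and the job step runs the .ps1 through CmdExec under that proxy.
$cred = Get-Credential 'AD\svc-dbatools'
New-DbaCredential   -SqlInstance 'SQLPROD1' -Name 'dbatools-cred' `
    -Identity $cred.UserName -SecurePassword $cred.Password
New-DbaAgentProxy   -SqlInstance 'SQLPROD1' -Name 'dbatools-proxy' `
    -ProxyCredential 'dbatools-cred' -SubSystem CmdExec
New-DbaAgentJob     -SqlInstance 'SQLPROD1' -Job 'Nightly dbatools task'
New-DbaAgentJobStep -SqlInstance 'SQLPROD1' -Job 'Nightly dbatools task' `
    -StepName 'Run script' -Subsystem CmdExec -ProxyName 'dbatools-proxy' `
    -Command 'powershell.exe -ExecutionPolicy Bypass -File "C:\scripts\task.ps1"'
```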
While doing a recent server move, I came across the need to script all agent jobs and save them to a file, so that if we needed to reference what was there after the move, we could get at the agent jobs without having to restart the old SQL Server.
It made me realize that sometimes what you want to do is available in SQL Server Management Studio, but that perhaps it is not always entirely obvious.
Click through for the demo. I'd probably use PowerShell over this, but if you just need to do a one-time job move, this gets you going quickly.
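The PowerShell route is a short dbatools pipeline; a sketch, with the instance and path as placeholders:

```powershell
# Script out every Agent job on the old instance to a single .sql file.
Get-DbaAgentJob -SqlInstance 'OLDSERVER' |
    Export-DbaScript -FilePath 'C:\temp\OLDSERVER-agent-jobs.sql'
```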