When I first started playing with this version, I noticed that SQL Agent was disabled. That’s unfortunate, since SQL Agent is a useful tool for various tasks in SQL Server. I can’t start the agent from here, as the underlying implementation is different and I’m not really a host OS admin when connecting in SSMS.
After checking which patch level I was at (CU6), I switched to my Linux console and ran the configuration utility. For Linux, this is mssql-conf.
It’s not an overly complicated process, but it is a bit different from Windows, so check it out.
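For reference, enabling the Agent with mssql-conf looks something like this, assuming the default install location and a login with sudo rights (the post walks through the full process):

```bash
# Enable SQL Agent via the mssql-conf utility (default install path)
sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true

# The setting only takes effect after restarting the service
sudo systemctl restart mssql-server
```

After the restart, SQL Agent shows up as running when you reconnect in SSMS.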
First, you need to decide how many time blocks, or hours, you want the jobs to run in. Let’s start with a scenario where you pick, for example, four time blocks. First, declare a variable to hold the time block, and feed in @@SERVERNAME to determine the time block value that server will run in. Then wrap the code around the time block. In our example, we will run index maintenance over a 12-hour period, spread out in three-hour blocks. Mind you, my index process (which I probably should blog about as well) handles one index at a time and BREAKs out of the procedure when it exceeds its time block. So below, we start the index maintenance job on a server between the hours of 6 PM and 5 AM, based on the time block value we got back.
Click through for a sample.
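A minimal T-SQL sketch of the idea — the block derivation from @@SERVERNAME and the procedure name are hypothetical; the linked post has the real logic:

```sql
DECLARE @TimeBlock tinyint,
        @StartHour int;

-- Hypothetical: hash the server name into one of four blocks (0-3)
-- so each server lands in a different three-hour window
SET @TimeBlock = ABS(CHECKSUM(@@SERVERNAME)) % 4;

-- Each block owns a three-hour window starting at 6 PM, so
-- block 0 = 18:00, block 1 = 21:00, block 2 = 00:00, block 3 = 03:00
SET @StartHour = (18 + @TimeBlock * 3) % 24;

IF DATEPART(HOUR, SYSDATETIME()) = @StartHour
BEGIN
    -- Hypothetical procedure; per the post, it processes one index
    -- at a time and BREAKs out when it exceeds its time block
    EXEC dbo.IndexMaintenance;
END;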
Lastly, create a SQL Agent alert to notify you as soon as tempdb grows past the threshold you stipulate. Using the GUI to create the alert, you need to fill out every field on the General page and make sure the Enabled checkbox is checked. Give the alert a name, then specify the Type as SQL Server performance condition alert. The Object should be Databases, the Counter is Data File(s) Size (KB), and the Instance will be tempdb. The alert will trigger if the counter rises above the Value, which will depend upon the cumulative size of your tempdb files. In this case there are eight tempdb files of 12GB (12,288,000 KB) each, so the total size is 98,304,000 KB.
I liked the approach of only firing the SQL Agent job after a trigger is met, rather than running a job which polls and then sends an e-mail afterward.
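The same alert can be scripted rather than clicked through. A sketch using msdb.dbo.sp_add_alert — the alert name and operator are placeholders, and on a named instance the Object portion of the performance condition needs the instance prefix (e.g. MSSQL$INSTANCENAME:Databases):

```sql
USE msdb;
GO
-- Performance condition format: Object|Counter|Instance|Comparator|Value
EXEC dbo.sp_add_alert
    @name = N'tempdb growth past threshold',   -- placeholder name
    @enabled = 1,
    @performance_condition = N'Databases|Data File(s) Size (KB)|tempdb|>|98304000';

-- Attach a notification (the operator must already exist)
EXEC dbo.sp_add_notification
    @alert_name = N'tempdb growth past threshold',
    @operator_name = N'DBA Team',              -- placeholder operator
    @notification_method = 1;                  -- 1 = e-mail
```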
I have been using dbatools heavily since I was introduced to it. I have automated processes and created new processes with it. There are new commands that come out almost daily that fill in certain gaps or enhance current commands. One way to stay current with these updates is to update your dbatools install frequently.
What better way to do this than an auto-update process that runs daily and gets the latest dbatools version for you…
I have put together two ways of doing this based on your preferred method. One is via a SQL Agent Job and the other is using a Windows Task Scheduler job.
Read on for examples of both techniques.
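The Task Scheduler flavor can be sketched in a few lines of PowerShell — the task name and schedule here are placeholders, not the post’s exact setup:

```powershell
# Daily action: pull the latest dbatools from the PowerShell Gallery
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "Update-Module dbatools -Force"'

# Placeholder schedule: every day at 3 AM
$trigger = New-ScheduledTaskTrigger -Daily -At 3am

# Placeholder task name
Register-ScheduledTask -TaskName 'Update dbatools' -Action $action -Trigger $trigger
```

For the SQL Agent variant, the same Update-Module command goes into a PowerShell job step instead.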
Command shell steps in SQL Agent jobs only process the first command. (Yea, I know it’s the title of the post so this wasn’t much of a surprise. You’ll live.)
In order to prove this, I created a job with a series of steps, each one a different test. I’m using two very simple commands: whoami, which tells you who is running the shell (very useful if you aren’t certain), and dir, which I will assume everyone knows.
Kenneth ran through several tests to see what would trigger the “first command” rule and it turns out that pretty much everything does.
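One generic cmd-level workaround (not from the post) is to collapse everything into a single command line, or to push the commands into a batch file and call that:

```bat
rem Chain with && so it is all one command line
rem (the second command runs only if the first succeeds)
whoami && dir

rem Or keep the step to one command by calling a script
rem (C:\Scripts\job-step.cmd is a placeholder path)
cmd /c C:\Scripts\job-step.cmd
```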
In an effort to rule out whether statistics are a factor, I want to UPDATE STATS on all the tables in my query at a specific time: sometime the day before we expect our slowdown to occur. I also want to be able to easily see how long the process ran and the duration of the update for each table. I could write all this to a table (and maybe I will later), but viewing it from job history is what I want right now: it is easy to look at and easy for everyone else to find and see as well.
Creating a job with multiple steps can be a bit painful and tedious if the job is going to have A LOT of steps. With PowerShell, a server name, and a database, I can dynamically create this job.
This is one of many reasons to have at least a little bit of PowerShell knowledge if you are a SQL Server DBA.
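A dbatools-flavored sketch of the idea — instance, database, and job name are placeholders, and the step command is a simplified stand-in for the post’s script:

```powershell
Import-Module dbatools

$instance = 'SQL01'    # placeholder instance
$database = 'Sales'    # placeholder database

# Hypothetical job name; we add one step per table
$job = New-DbaAgentJob -SqlInstance $instance -Job 'UpdateStats - Sales'

$tables = Get-DbaDbTable -SqlInstance $instance -Database $database
foreach ($t in $tables) {
    $cmd = "UPDATE STATISTICS $($t.Schema).$($t.Name) WITH FULLSCAN;"
    New-DbaAgentJobStep -SqlInstance $instance -Job $job.Name `
        -StepName "Stats: $($t.Schema).$($t.Name)" `
        -Subsystem TransactSql -Database $database -Command $cmd
}
```

With per-table steps, the job history shows start time and duration for each table individually, which is exactly the visibility described above.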
Recently, at a client, I was challenged to create a stored procedure that would process a tabular model. This stored procedure would then be executed from a web application. The process behind it: a user enters data into a web application, which gets written to a database. That data then needs to be surfaced immediately in reports, with additional calculations and measures along the way. Therefore the tabular model, which handles all the additional calculations and measures, needs to be processed by a user from the web application.
Click through for the script.
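One common pattern for this — not necessarily the author’s exact script — is a linked server to the SSAS instance plus a TMSL refresh command passed through it. The linked server, procedure, and model names below are placeholders:

```sql
-- Assumes a linked server named SSAS_TABULAR (MSOLAP provider) pointing
-- at the SSAS instance, with processing permissions for the executing login
CREATE PROCEDURE dbo.ProcessTabularModel  -- hypothetical name
AS
BEGIN
    DECLARE @tmsl nvarchar(max) = N'
    {
      "refresh": {
        "type": "full",
        "objects": [ { "database": "SalesModel" } ]
      }
    }';  -- placeholder model name

    -- Pass the TMSL command through to SSAS
    EXEC (@tmsl) AT SSAS_TABULAR;
END;
```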
What you don’t see is how you make the job step succeed or fail. When using most commands in modules, you may find that errors still cause the step to fail because of the way they report failure (some kind of throw or a Stop condition outside your control). Conversely, if you want the step to fail on things that would normally just show red on the screen while execution continues, remember that the default ErrorAction is Continue: even though you get an error, PowerShell will just carry on.
Read on for more.
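In practice that means being explicit in the step script, along these lines (the query and instance are placeholders, and Invoke-DbaQuery assumes dbatools is installed):

```powershell
# Promote non-terminating errors to terminating ones,
# so the Agent step actually fails instead of scrolling red text past
$ErrorActionPreference = 'Stop'

try {
    # Placeholder work; any error here now throws
    Invoke-DbaQuery -SqlInstance 'SQL01' -Query 'EXEC dbo.NightlyLoad;'
}
catch {
    Write-Error $_
    exit 1   # a non-zero exit code marks the job step as failed
}
```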
On the plane home from PASS Summit, I was seated next to someone who had also attended, and when he saw on my laptop that I was part of the SQL community, we struck up a conversation. He asked me how he could compare SQL Agent jobs across availability group replicas to ensure that they were the same.
He already knew that he could use Copy-DbaAgentJob from dbatools to copy the jobs between replicas, and we discussed how to set up an Agent job to accomplish this. The best way to run an Agent job with a PowerShell script is described here.
Maybe I’m out in la-la land, but wouldn’t this be a perfect use case for MSX/TSX?
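The dbatools side of the comparison is short; a sketch with placeholder replica names:

```powershell
# Compare job names between two replicas (placeholder instance names)
$a = Get-DbaAgentJob -SqlInstance 'SQLA' | Select-Object -ExpandProperty Name
$b = Get-DbaAgentJob -SqlInstance 'SQLB' | Select-Object -ExpandProperty Name
Compare-Object -ReferenceObject $a -DifferenceObject $b

# Then sync anything missing on the secondary
Copy-DbaAgentJob -Source 'SQLA' -Destination 'SQLB'
```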
It’s usually a good idea to write the output of your SQL Agent jobs to a file, so you can investigate should any issue occur.
But when you define the output file, you need to choose between appending the output to the same file over and over or overwriting it each time, which defeats the purpose IMHO.
On the other hand, if you forget to roll over the files, they can grow quite large and then finding any error can become a nightmare.
So some time ago, I wrote a stored procedure that rolls the files for me and places them sorted, so it’s easy to find any particular date.
This is a clever solution, but read through to the bottom for a warning.
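As an aside, a different technique for the same problem (not the post’s stored procedure) is to put an Agent token in the output file name, which yields one file per run date with no rollover step. The path, job, and step here are placeholders:

```sql
-- STRTDT expands to the job's start date (YYYYMMDD), producing
-- e.g. NightlyLoad_20240131.txt -- one output file per day
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'NightlyLoad',
    @step_id = 1,
    @output_file_name = N'L:\AgentLogs\NightlyLoad_$(ESCAPE_SQUOTE(STRTDT)).txt';
```

You still have to clean up old files eventually, but finding a particular date becomes trivial.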