It is not unusual to find a server where some random user created a bunch of jobs to be run by SQL Agent. Sometimes, the user creating the job(s) sets themselves as the owner of the job. There are certain cases where this behavior is hard to avoid, like when creating a maintenance plan.
And of course, there are times when the user just doesn’t know any better. There is, of course, the rare occasion when setting the job owner to be oneself makes the most sense, but that is few and far between in the grand scheme. Usually, you will want a non-expiring account, such as a service account or a principal without “logon” permissions, to be the owner.
The primary reason is simple: humans have an expiration date for every job they will ever hold. When that expiration occurs, you may end up with any number of unwanted side effects, and unwanted side effects are exactly what we try to avoid in our jobs run via SQL Agent.
Click through for the script. It’s also a good rule of thumb not to have your name attached to too many things; that way, your successor has no way of knowing you’re to blame for whatever awful hack is in place…
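If you only need to repoint a single job, a minimal sketch looks like this, with msdb.dbo.sp_update_job doing the work (the job and login names are made up for illustration):

    -- Reassign one job to a non-expiring owner (hypothetical names).
    EXEC msdb.dbo.sp_update_job
        @job_name = N'NightlyBackups',
        @owner_login_name = N'svc_SQLAgentJobs';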
Now we all know that having SQL Server Agent jobs owned by ‘Real’ users isn’t a good idea. But I don’t keep that close an eye on some of our test instances, so I wasn’t surprised when I spotted this showing up in the monitoring:
The job failed. Unable to determine if the owner (OldDeveloper) of job important_server_job has server access (reason: Could not obtain information about Windows NT group/user 'OldDeveloper', error code 0x534. [SQLSTATE 42000] (Error 15404)).
Wanting to fix this as quickly and simply as possible, I decided to bulk-move them all to our job-owning account (let’s use the imaginative name of ‘JobOwner’).
Click through for both scripts.
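A bulk version of that move might look like the sketch below, which hands every job owned by the departed login over to JobOwner. The login name is illustrative, and the author’s actual scripts may differ.

    -- Reassign every job owned by the departed login (sketch; the
    -- OldDeveloper login name is illustrative).
    DECLARE @job sysname;
    DECLARE jobs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name
        FROM msdb.dbo.sysjobs
        WHERE SUSER_SNAME(owner_sid) = N'DOMAIN\OldDeveloper';
    OPEN jobs;
    FETCH NEXT FROM jobs INTO @job;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC msdb.dbo.sp_update_job
            @job_name = @job,
            @owner_login_name = N'JobOwner';
        FETCH NEXT FROM jobs INTO @job;
    END;
    CLOSE jobs;
    DEALLOCATE jobs;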
When I first started playing with this version, I noticed that SQL Agent was disabled. That’s unfortunate, since SQL Agent is a great tool for various tasks in SQL Server. I can’t start the agent from SSMS, as the underlying implementation is different, and I’m not really a host OS admin when connecting that way.
After checking which patch level I was at (CU6), I changed to my Linux console and ran the configuration utility. For Linux, this is mssql-conf.
It’s not an overly complicated process, but it is a bit different from Windows, so check it out.
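For reference, on recent builds (SQL Server 2017 CU4 and later), enabling the agent through mssql-conf is a two-liner; the paths below are the defaults and may vary by distribution:

    sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true
    sudo systemctl restart mssql-server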
First, you need to decide how many time blocks (and how many hours each) you want the jobs to run in. Let’s start with scenario one, where you pick, for example, four time blocks. First, declare a variable to hold the time block, feeding in @@SERVERNAME to determine which time block that server will run in. Then wrap your code inside that time block; in our example, we will run index maintenance across a 12-hour period spread out in three-hour blocks. Mind you, in my index process (which I should probably blog about as well), I process one index at a time and BREAK out of the procedure when it exceeds its time block. So below, we start the index maintenance job on a server between the hours of 6 PM and 5 AM, based on the time block value we got back.
Click through for a sample.
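As a rough sketch of the idea (assuming four three-hour blocks starting at 6 PM; the hash-to-block mapping and job name are invented, so click through for the author’s actual sample):

    -- Derive a stable time block (0-3) from the server name, then only kick
    -- off maintenance when the current hour falls inside that block.
    -- Blocks: 0 = 18:00-21:00, 1 = 21:00-00:00, 2 = 00:00-03:00, 3 = 03:00-06:00.
    DECLARE @TimeBlock int = ABS(CHECKSUM(@@SERVERNAME)) % 4;
    DECLARE @CurrentBlock int = ((DATEPART(HOUR, GETDATE()) + 24 - 18) % 24) / 3;
    IF @CurrentBlock = @TimeBlock
        EXEC msdb.dbo.sp_start_job @job_name = N'IndexMaintenance';  -- hypothetical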
Lastly, create a SQL Agent alert to notify you as soon as tempdb grows past the threshold you stipulate. Using the GUI to create the alert, you need to fill out every field on the General page and make sure the Enabled checkbox is marked. Create a Name for the alert, then specify the Type as SQL Server performance condition alert. The Object should be Databases, the Counter is Data File(s) Size (KB), and the Instance will be tempdb. The alert will trigger if the counter rises above the value, which will depend upon the cumulative size of your tempdb files. In this case there are eight tempdb files of 12 GB (12,288,000 KB) each, so the total size is 98,304,000 KB.
I liked the approach of only firing the SQL Agent job once the alert condition is met, rather than running a scheduled job which queries and then creates an e-mail afterward.
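If you prefer a script to the GUI, a hedged T-SQL equivalent uses msdb.dbo.sp_add_alert; the alert name is invented, and the exact performance-condition object name can vary by instance (e.g. SQLServer:Databases on a default instance versus MSSQL$INSTANCE:Databases on a named one):

    -- Sketch: fire when tempdb data files exceed 98,304,000 KB in total.
    EXEC msdb.dbo.sp_add_alert
        @name = N'tempdb past size threshold',
        @enabled = 1,
        @performance_condition = N'SQLServer:Databases|Data File(s) Size (KB)|tempdb|>|98304000';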
I have been using dbatools heavily since I was introduced to it. I have automated processes and created new processes with it. There are new commands that come out almost daily that fill in certain gaps or enhance current commands. One way to stay current with these updates is to update your dbatools install frequently.
What better way to do this than an auto-update process that runs daily and gets the latest dbatools version for you…
I have put together two ways of doing this, depending on your preferred method. One is via a SQL Agent job and the other uses a Windows Task Scheduler job.
Read on for examples of both techniques.
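A minimal sketch of the SQL Agent flavor might look like this (assuming dbatools was installed from the PowerShell Gallery, so Update-Module applies, and that the Agent service account has rights to update the module; job and schedule names are invented):

    USE msdb;
    EXEC dbo.sp_add_job @job_name = N'Update dbatools';
    EXEC dbo.sp_add_jobstep
        @job_name = N'Update dbatools',
        @step_name = N'Run Update-Module',
        @subsystem = N'PowerShell',
        @command = N'Update-Module dbatools -Force';
    EXEC dbo.sp_add_jobschedule
        @job_name = N'Update dbatools',
        @name = N'Daily at 6 AM',
        @freq_type = 4,          -- daily
        @freq_interval = 1,      -- every day
        @active_start_time = 060000;
    EXEC dbo.sp_add_jobserver @job_name = N'Update dbatools';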
Command shell steps in SQL Agent jobs only process the first command. (Yea, I know it’s the title of the post so this wasn’t much of a surprise. You’ll live.)
In order to prove this, I created a job with a series of steps, each one a different test. I’m using two very simple commands: whoami, which tells you who is running the shell (very useful if you aren’t certain), and dir, which I will assume everyone knows.
Kenneth ran through several tests to see what would trigger the “first command” rule and it turns out that pretty much everything does.
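To reproduce one of the tests yourself, here is a sketch of such a step (job and step names invented):

    -- A CmdExec step with two commands on separate lines; per the post,
    -- only the first (whoami) actually runs.
    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'CmdExec test',
        @step_name = N'Two commands',
        @subsystem = N'CmdExec',
        @command = N'whoami
    dir';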
In an effort to rule statistics in or out as a factor, I want to run UPDATE STATISTICS on all the tables in my query, and at a specific time: sometime the day before we expect our slowdown to occur. I also want to be able to easily see how long the process ran and the duration of the update for each table. I could write all this to a table, and maybe I will do that later, but viewing this from job history is what I want right now: it is easy to look at and easy for everyone else to find and see as well.
Creating a job with multiple steps can sometimes be a bit painful and tedious if the job is going to have A LOT of steps. With PowerShell, a server name, and a database, I can dynamically create this job.
One of many reasons to have at least a little bit of PowerShell knowledge if you are a SQL Server DBA.
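The post builds the job with PowerShell; purely to illustrate the same idea in T-SQL (all names invented, and run from the target database so sys.tables lists its tables), one UPDATE STATISTICS step per table means each table’s duration lands in job history:

    -- Sketch: generate one UPDATE STATISTICS job step per table.
    DECLARE @job sysname = N'Update Stats - MyDatabase',
            @tbl sysname, @cmd nvarchar(max), @step int = 1;
    EXEC msdb.dbo.sp_add_job @job_name = @job;
    DECLARE tbls CURSOR LOCAL FAST_FORWARD FOR
        SELECT QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON t.schema_id = s.schema_id;
    OPEN tbls;
    FETCH NEXT FROM tbls INTO @tbl;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @cmd = N'UPDATE STATISTICS ' + @tbl + N';';
        EXEC msdb.dbo.sp_add_jobstep @job_name = @job, @step_id = @step,
            @step_name = @tbl, @subsystem = N'TSQL',
            @database_name = N'MyDatabase', @command = @cmd,
            @on_success_action = 3;   -- go to the next step
        SET @step += 1;
        FETCH NEXT FROM tbls INTO @tbl;
    END;
    CLOSE tbls;
    DEALLOCATE tbls;
    -- The final step should quit reporting success rather than "go to next".
    EXEC msdb.dbo.sp_update_jobstep @job_name = @job, @step_id = @step - 1,
        @on_success_action = 1;
    EXEC msdb.dbo.sp_add_jobserver @job_name = @job;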
Recently, at a client, I was challenged to create a stored procedure that would process a tabular model. This stored procedure would then be executed from a web application. The process behind it: a user enters data into a web application, and that data gets written to a database. The data then needs to be immediately surfaced in reports, with additional calculations and measures along the way. Therefore the tabular model, which handles all the additional calculations and measures, needs to be processed by a user from the web application.
Click through for the script.
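One common way to do this, offered here as a sketch rather than the author’s exact method: create a linked server pointing at the SSAS instance (with RPC Out enabled) and push a TMSL refresh command through it. The linked server and model names are invented.

    -- Sketch: refresh (process) a tabular model via a linked server to SSAS.
    DECLARE @tmsl nvarchar(max) = N'{
      "refresh": {
        "type": "full",
        "objects": [ { "database": "MyTabularModel" } ]
      }
    }';
    EXEC (@tmsl) AT [SSAS_TABULAR];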
What you don’t see is how you make the job step succeed or fail. With many module commands, errors will still cause the step to fail because of the way they report the failure (some kind of throw or a Stop condition outside your control). But for errors that would normally just show red on the screen while execution continues, remember that the default ErrorAction is Continue: even though you get an error, PowerShell will just keep going and the step will report success. If you want those errors to fail the step, promote them with -ErrorAction Stop or $ErrorActionPreference = 'Stop'.
Read on for more.
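As a sketch of applying that inside a job step (the job, step, and paths are invented): setting the preference variable at the top of the step’s command promotes non-terminating errors to terminating ones, which makes the step fail.

    -- Sketch: a PowerShell job step that fails on otherwise-non-terminating errors.
    EXEC msdb.dbo.sp_add_jobstep
        @job_name = N'Nightly file copy',
        @step_name = N'Copy with strict errors',
        @subsystem = N'PowerShell',
        @command = N'$ErrorActionPreference = ''Stop'';
    Copy-Item -Path C:\Exports\* -Destination \\FileServer\Backups';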