What are Managed Disks, you ask? Well, just on February 8th, Corey Sanders announced the GA of Managed Disks. You can read all about Managed Disks here: https://azure.microsoft.com/en-us/services/managed-disks/
The reason Managed Disks would have helped in this outage is that by leveraging an Availability Set combined with Managed Disks, you ensure that each instance in your Availability Set is connected to a different “Storage scale unit”. So in this particular case, only one of your cluster nodes would have failed, leaving the remaining nodes to take over the workload.
Prior to Managed Disks being available (anything deployed before 2/8/2017), there was no way to ensure that the storage attached to your servers resided on different Storage scale units. Sure, you could use a different storage account for each instance, but in reality that did not guarantee that those Storage Accounts provisioned storage on different Storage scale units.
Read on for more details.
Now, the jobs are not going to run as-is: they have CmdExec steps, which are not supported in SQL Server on Linux, so we have to make some changes to those steps. As I blogged previously, this is really easy using PowerShell.
First, we need to grab the jobs into a variable. We will use Get-SqlAgentJob from the sqlserver module, which requires downloading SSMS 2016 or later; you can get it from https://sqlps.io/dl. As we are targeting a Linux SQL Server, we will use SQL authentication, which we will provide via Get-Credential, and then take a look at the jobs.
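The steps above can be sketched roughly as follows. The cmdlet and module names are real; the server name and the CmdExec filtering are illustrative assumptions, not the exact code from Rob's post:

```powershell
# Sketch only: "linuxsql01" is a hypothetical server name.
Import-Module SqlServer

# SQL authentication for the Linux instance
$cred = Get-Credential

# Grab the jobs into a variable
$jobs = Get-SqlAgentJob -ServerInstance 'linuxsql01' -Credential $cred

# Take a look at the jobs, flagging which subsystems each one uses so
# the unsupported CmdExec steps stand out
$jobs |
    Select-Object Name, Enabled,
        @{ Name = 'Subsystems'; Expression = { ($_.JobSteps.SubSystem) -join ', ' } } |
    Format-Table -AutoSize
```

From there, any job whose step list includes CmdExec is one you would need to modify before migrating.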
It’s not “point, click, done,” but Rob shows you certainly can do it.
There have been a lot of questions, posts, answers, guesses and such floating around the SQL blogs lately…most of which seem to suggest that the DBA is going away.
The DBA position is not going away. Ever. Or at least not before I retire to Utah to spend my days mountain biking 😉
That said, Kevin does point out that you shouldn’t rest on your laurels.
One fun anecdote I have about database administration: I recall some marketing for some NoSQL product about how, by adopting their software, you can get rid of those stodgy database administrators. Within a couple of years, said product’s parent company was offering developer training on “advanced” techniques, which included taking backups, tuning queries, implementing disaster recovery, and creating good indexes to help with performance. But hey, at least they don’t have DBAs!
Hidden schedulers are used to process requests that are internal to the engine itself. Visible schedulers are used to handle end-user requests. When you run a casual SELECT * query, it will utilize a visible scheduler to process the query. With this information, if I have a 64-core server and all is well, I should have 64 visible schedulers online to process requests.
However, I discovered that some of the schedulers were set to “VISIBLE OFFLINE”. This essentially means that those particular schedulers are unavailable to SQL Server for some reason. How many offline schedulers do I have? A quick query showed 24 schedulers currently offline. With hyper-threading, 24 offline logical cores means that 12 physical cores are offline.
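Scheduler state lives in the DMV sys.dm_os_schedulers; a minimal sketch of the kind of quick query described above (not necessarily the exact one from the linked post):

```sql
-- Count schedulers by status; "VISIBLE OFFLINE" schedulers are CPUs
-- SQL Server is not using for end-user requests.
SELECT [status],
       COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
GROUP BY [status];

-- Just the offline visible schedulers
SELECT COUNT(*) AS offline_visible_schedulers
FROM sys.dm_os_schedulers
WHERE [status] = N'VISIBLE OFFLINE';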
But why would a scheduler be set to “VISIBLE OFFLINE”?
Read on for the answer, and check the comments for a helpful plug for sp_Blitz.
These plans are from a SQL Server 2008 R2 SP3 instance that runs a version of Microsoft Dynamics. The MS Dynamics team knows their product issues a bunch of really small queries and recommends a Max Degree of Parallelism of 1. In my case, there is one query plan with a cost of 34,402 query bucks that a MAXDOP = 1 can’t afford. Increasing the MAXDOP from 1 while keeping the Cost Threshold at 5 will keep all of my little queries running in their single-threaded paradise while allowing some infrequent biguns to spread their lovely wings across a few more threads. My Cost Threshold For Parallelism is set to 5 and I hope to never have to defend my settings in a dark alley with Erik Darling.
Different systems will behave in different ways, so it makes sense that they might require different settings. In the case of Dynamics, it appears that almost all of the queries are extremely low-cost, so relatively few plans would go parallel, and the ones that do are probably the ones you want running in parallel.
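As a sketch of the configuration being described — the specific MAXDOP value below is illustrative, not a recommendation:

```sql
-- Raise MAXDOP from 1 while leaving Cost Threshold for Parallelism at 5,
-- so small Dynamics queries stay serial and only expensive plans go parallel.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 4;       -- illustrative value
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;
```

Test a change like this on a non-production system first; the right MAXDOP depends on your core count and workload.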
One day, a manager asked me if I could help on an urgent matter: the application suddenly could no longer execute transactions on the production database and the database connection was intermittently failing. The system admin was busy with other duties, so I was the closest thing they had to a DBA. All they could tell me was the production database had crashed and they got an error message about insufficient disk space.
Click through for the rest of the story.
It could be databases on an instance, indexes on a table, columns in a table, and so on. And in case you were wondering, you can get the answers here.
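Counts like these come straight from the catalog views; a minimal sketch, where the table name is a hypothetical stand-in:

```sql
-- How many databases are on this instance?
SELECT COUNT(*) AS database_count
FROM sys.databases;

-- How many indexes are on a given table? (dbo.YourTable is hypothetical)
SELECT COUNT(*) AS index_count
FROM sys.indexes
WHERE [object_id] = OBJECT_ID(N'dbo.YourTable');

-- How many columns are in that table?
SELECT COUNT(*) AS column_count
FROM sys.columns
WHERE [object_id] = OBJECT_ID(N'dbo.YourTable');
```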
You see this come up every once in a while. A forum question, a question on #sqlhelp, even in *shudder* your own systems. And the answer always comes back the same: Limits are not goals! Usually accompanied by a few jokes.
For more on this, Michael Swart coined Swart’s Ten Percent Rule.
Microsoft set the default value for the Cost Threshold for Parallelism back in the 1990s. They were developing SQL Server 2000. That means this value was determined as a good starting point for query plan costs over 17 years ago. In case you’re not aware, technology, T-SQL, SQL Server, and all the databases and database objects within them shifted, just a little, in the intervening 17 years. We can argue whether or not this value made sense as a starting point (and remember, the default settings are meant to be starting points covering a majority of cases, not a final immutable value) for determining your Cost Threshold for Parallelism 17 years ago. I think we can agree that it’s no longer even a good starting point.
For more thoughts, check out a prior post on figuring out the cost threshold.
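One common way to pick a better value is to survey the estimated costs already sitting in your plan cache. This XQuery sketch is a general technique, not necessarily the method from the linked post, and it can be expensive against a large cache, so run it off-hours:

```sql
-- Pull the estimated subtree cost out of cached plan XML so you can see
-- where your workload's costs actually cluster.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]',
                           'float') AS estimated_cost,
       cp.usecounts,
       st.[text] AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY estimated_cost DESC;
```

If almost everything falls well below your threshold and only genuinely expensive queries sit above it, you are in a reasonable spot.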
Below is a link to my GitHub repo containing my personal collection of scripts. BTW, this isn’t all of my scripts, but it is probably 50% of the ones I have lurking around.
These are either scripts I have developed in my own time over the years, or adapted from various other websites which I found handy at the time for whatever reason.
Where I have remembered the website (which, like most people, is unfortunately almost never), it was typically a site like Stack Overflow, and I have tried to cite it as such to give credit.
Click through for the link to Rolf’s repo.
Step 2: Verify if the log is full or “empty”
Verify whether the log file is actually full or not. If you are backing up and the file still grew to a ridiculous size, it may have just been a one-time thing and you can deal with that easily. Right-click the database, go to Reports, Standard Reports, Disk Usage. This will give you two pie charts: the left is the data file, the right is the log. If the log shows almost or completely full AND the huge size, you need to back up. If the log file is huge and mostly empty, you simply need to shrink it to an acceptable size.
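The same check and fix can be sketched in T-SQL, assuming a hypothetical database named YourDatabase with a log file named YourDatabase_log; the backup path and shrink target are illustrative:

```sql
-- See how full each database's log actually is (percentages, no pie charts)
DBCC SQLPERF (LOGSPACE);

-- Full recovery model: back up the log so the space becomes reusable
BACKUP LOG [YourDatabase] TO DISK = N'D:\Backups\YourDatabase_log.trn';

-- If the file itself is huge but mostly empty, shrink it to a sensible size
USE [YourDatabase];
DBCC SHRINKFILE (N'YourDatabase_log', 1024);  -- target size in MB
```

Shrinking should be a one-time correction after an unusual event, not routine maintenance, since the file will just grow again under normal load.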
Great read for a junior-level DBA.