A frequent complaint we receive comes from a client that has an in-house sysadmin monitoring tool like Microsoft SCOM/SCCM. They turn the tool on and it starts red-alerting because the Windows server hosting SQL Server is at 90%+ used RAM. The sysadmin (or local DBA) logs on to the server and finds that there is 5GB free (~4%), and that the sqlservr.exe process is using 120GB of the 128GB on the server!
In my experience, VMware administrators tend to be most hung up about this concept.
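This behavior is by design: SQL Server caches as much data as it is allowed to and only trims its caches under memory pressure. The usual remedy is to cap the buffer pool with the `max server memory` setting so the OS and other processes keep headroom. A minimal sketch, assuming the 128GB server from the excerpt (the 110GB cap is a hypothetical value; size it for your own server and workloads):

```sql
-- Cap SQL Server's memory so Windows and other processes keep headroom.
-- 112640 MB = 110 GB; the right number depends on what else runs on the box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 112640;
RECONFIGURE;
```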
By default when you install SQL Server, the TempDB database is not optimized. For SQL Servers that use TempDB even moderately, I recommend optimizing it for the best performance. TempDB is the storage area that SQL Server uses to store data for short periods of time; information that can be found in TempDB includes temporary tables and data that is being sorted for query results. The data in TempDB is so temporary that each time the SQL Server service restarts, TempDB is recreated.
Good advice within.
One of the most important duties of a DBA is to ensure that CHECKDB is run frequently, to ensure that the database is both logically and physically correct. So when inheriting an instance of SQL, it’s usually a good idea to check when CHECKDB was last run. And ironically enough, it is actually quite difficult to get this information quickly, especially if you have a lot of databases that you need to check. The obvious way is to run DBCC DBINFO against the specific database. This returns more than just the last time CHECKDB was run, and it is not especially clear which row tells us the last CHECKDB (FYI, the field is “dbi_dbccLastKnownGood”.)
It’s a bit of a shame that this information isn’t made available in an easily-queryable DMV.
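In the meantime, a sketch of how to pull that one field out of the noise, assuming a database named `MyDB` (the name is illustrative); `WITH TABLERESULTS` makes the DBCC output insertable:

```sql
-- Capture DBCC DBINFO output and filter for the one field we care about.
DECLARE @info TABLE (ParentObject sysname, Object sysname,
                     Field sysname, Value sql_variant);

INSERT INTO @info
EXEC ('DBCC DBINFO (''MyDB'') WITH TABLERESULTS, NO_INFOMSGS');

SELECT Value AS LastKnownGoodCheckDb
FROM @info
WHERE Field = 'dbi_dbccLastKnownGood';
</imports>
```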
Turns out SQL 2008R2 (where the original script worked) returns different fields than 2012 and 2014 (where it didn’t).
I figured I didn’t want to work out which version of the script to use every time I needed to run it on a server, so I told the script to figure that out by itself, and then run the appropriate hunk of code (example below).
This is a good explanation of how to back out of a complex situation.
You can change the compatibility level of an Azure SQL Database.
It’s true! I know!
OK, so I’m a little excited about this one. See, I’ve been giving this talk on cardinality for the past couple of years now, so this is a hidden gem to me. When I found out this was possible I took out my demo scripts to see if changing the compatibility level would have any effect.
This is interesting, especially given that Management Studio doesn’t give you that option. Know your T-SQL, folks.
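Since Management Studio doesn’t expose the option, the change has to be done in T-SQL. A minimal sketch, assuming a database named `MyAzureDb` and a target level of 110 (both illustrative):

```sql
-- Lower the compatibility level, e.g. to get pre-2014 cardinality
-- estimator behavior.
ALTER DATABASE MyAzureDb
SET COMPATIBILITY_LEVEL = 110;

-- Verify the change took effect.
SELECT name, compatibility_level
FROM sys.databases;
```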
Recently I needed to apply data compression to a particularly large table. One of the main reasons for applying compression was that the database was extremely low on space, in both the data and the log files. To make matters worse, the data and log files were nowhere near big enough to accommodate compressing the entire table in one go. If the table had been partitioned, I could have done one partition at a time and all my problems would go away. No such luck.
Best way to eat an elephant, etc. etc. Read the whole thing; you might be in a similar situation someday.
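For reference, here is the single-shot approach the excerpt says wouldn’t fit, and the per-partition variant it wished for (table name and partition number are illustrative):

```sql
-- Whole-table rebuild: needs roughly the table's size free in the
-- data file, plus substantial log space, during the operation.
ALTER TABLE dbo.BigTable REBUILD
WITH (DATA_COMPRESSION = PAGE);

-- If the table were partitioned, each partition could be rebuilt
-- separately, keeping the space needed per step much smaller.
ALTER TABLE dbo.BigTable REBUILD PARTITION = 3
WITH (DATA_COMPRESSION = PAGE);
```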
The initial thought was to enable the trace flag at session level. We ran into two challenges. First, the application needs a code change (which they couldn’t make) to enable it. Secondly, DBCC TRACEON requires sysadmin rights, and the customer’s application used a non-sysadmin user. These two restrictions made it seem impossible to use the trace flag.
However, we eventually came up with a way of using a logon trigger coupled with wrapping the DBCC TRACEON command inside a stored procedure. In doing so, we solved both problems. We were able to isolate the trace flag to just that application without requiring a sysadmin login.
This is the very edge of an edge case. In normal practice, change the code.
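For the curious, a hedged sketch of the shape of that workaround (the trace flag number, login name, and procedure name are all illustrative, and it assumes the wrapper procedure has been granted server-level rights, e.g. via module signing, so the non-sysadmin login never holds them directly):

```sql
-- Wrapper so the application login never needs sysadmin itself.
CREATE PROCEDURE dbo.EnableSessionTraceFlag
AS
    DBCC TRACEON (4199);   -- session-level trace flag
GO

-- Fire the wrapper only for the one application login at connect time.
CREATE TRIGGER trg_AppTraceFlag ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'AppLogin'
        EXEC master.dbo.EnableSessionTraceFlag;
END;
```

As the comment above says, this is an edge case; a code change in the application is the normal fix.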
I wasn’t surprised (although a little disappointed) that out of the 9 people that answered, only one person was right, and of the rest 5 didn’t even know what TRUSTWORTHY is. I even had one person ask me about it later. That’s somewhat scary, because under the right circumstances, if you give me a database with TRUSTWORTHY turned on I can take over your instance. I’m NOT going to show you how, but it isn’t terribly difficult.
I’ll admit that I have been a bit nonchalant about TRUSTWORTHY in the past, but keeping it turned off is the smart move.
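To audit this on your own instances, a minimal sketch (the database name in the second statement is illustrative):

```sql
-- Find databases with TRUSTWORTHY enabled (msdb is ON by design).
SELECT name
FROM sys.databases
WHERE is_trustworthy_on = 1;

-- Turn it off wherever it isn't genuinely required.
ALTER DATABASE SuspectDb SET TRUSTWORTHY OFF;
```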
There was a blog post by Boris Hristov with some good images of the various places to look in the GUI to make sure that the updates have been picked up correctly. Using these, and through experimentation, I was able to answer those questions.
Interesting questions and good answers.
I’m seeing lots of CXPACKET waits, how do I fix these?
Bad Advice = Set the maximum degree of parallelism to 1, no more CXPACKET waits!
I’m seeing index fragmentation occur quite frequently on some of my indexes, what should I do?
Bad Advice = Set fill factor on the server to 70, no more fragmentation problems!
I’m worried about TempDB contention. What should I do?
Bad Advice = Set the number of files to the number of cores, no more contention issues!
Read the post for better advice.