Press "Enter" to skip to content

Category: Administration

Retaining a Few Tables From a Large Set

Jana Sattainathan has a PowerShell-based solution to eliminate all but a few tables in a database:

Recently, I received a request to back up a dozen or so tables out of 12 thousand tables. I had to retain all the indexes, statistics, etc. The goal was to hand this over to the vendor for analysis as a database backup.

I could have copied the selected tables over to a new database using the PowerShell function I had published earlier and backed that up, but since the tables to back up were quite large, I skipped that route.

Read on to see Jana’s solution.
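
For a rough sense of the approach (a T-SQL sketch of the same "keep only these tables" idea, not Jana's PowerShell function; the database and table names below are made up), one option is to restore a copy of the database, drop every table that is not on the keep list, and back up what remains:

-- Run against a restored copy of the source database, never the original.
USE VendorCopy;

DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'DROP TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name NOT IN (N'Orders', N'OrderLines', N'Customers');  -- the tables to keep

EXEC sys.sp_executesql @sql;

BACKUP DATABASE VendorCopy TO DISK = N'C:\Backups\VendorCopy.bak' WITH COMPRESSION;

Foreign keys between kept and dropped tables will make the drops fail unless they run in the right order, so a real script has to deal with those constraints first.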

Comments closed

Fixing High VLF Counts

Ajay Dwivedi shares a technique for optimizing VLF counts on log files:

DBAs! I guess everyone knows that a huge number of Virtual Log Files (VLFs) in SQL Server can make the backup/restore and database recovery processes slow. In rare cases, it can even introduce slowness at the transaction level.
https://sqlperformance.com/2013/02/system-configuration/transaction-log-configuration

Even though we are all aware of it, removing high VLF counts from SQL Server log files has remained a challenge, since it involves a lot of manual effort shrinking and re-growing the log files.

This is where my ‘Space-Capacity-Automation’ open source project comes to your rescue. It has a parameter option @optimizeLogFiles that can help you optimize your log files with the tasks below:

Click through to see Ajay’s technique.
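
The manual fix the project automates looks roughly like this (a sketch only; the sizes and the logical log file name are assumptions, not the project's code): check the VLF count, shrink the log file, then regrow it in a few large, deliberate increments so far fewer VLFs get created.

-- VLF count per database (sys.dm_db_log_info is available from SQL Server 2016 SP2 onward)
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'YourDatabase'));

USE YourDatabase;

-- Shrink the log as far as it will go (ideally during a quiet window, after a log backup
-- in FULL recovery), then grow it back out in big chunks.
DBCC SHRINKFILE (N'YourDatabase_log', 1);

ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8GB, FILEGROWTH = 8GB);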

Comments closed

Beware CPU Oversubscription with SQL Server

Monica Rathbun shares a tale of terror:

Recently I had a client complain of chronic high CPU utilization. The performance of their SQL Server had degraded, and it appeared to be related to higher than normal CPU utilization in conjunction with symptoms of unresponsive user queries. The root cause was twofold: a third party hosting provider had overallocated virtual processors on the physical host where the virtual machine (VM) running SQL Server was residing, as well as a recent upgrade from a version of VMWare that was not patched for Spectre and Meltdown. The host had 16 physical cores and was hyperthreading (making it effectively 32 cores) until the hosting provider patched from VMWare 5.5 to a newer release (we believe 6.5) which was required for Meltdown and Spectre processor vulnerabilities. This patch disabled hyperthreading from the hypervisor to mitigate the security risk from speculative execution. Note, this patch is over a year old and addresses a critical security risk; most software vendors (including VMWare) put this out as an immediate requirement after the announcement of the vulnerabilities.

Given this was a virtual machine, it shared a physical host with many other VMs; this is a very common configuration. However, this host was VERY overallocated. As mentioned above, there were 16 cores; however, 61 additional vCPUs had been allocated to other machines. That’s 4.3 times the number of CPUs available for allocation. The screenshot below shows this single host, highlighting the vCPUs allocated.

So, uh, that’s a bad thing. Monica explains in detail why exactly it’s a bad thing, which is helpful when you’re trying to explain to the server admin why it’s a bad thing. CPU oversubscription can work for things like dev boxes or web servers, where they typically aren’t anywhere near 100% utilization. It does not work at all for busy database servers.

Comments closed

When Deletes Increase Data Size

Brent Ozar shows a case where indexes can grow in size as you delete data:

Here’s one way that deletes can cause a table to grow:
– The rows were originally written when the database didn’t have Read Committed Snapshot (RCSI), Snapshot Isolation (SI), or Availability Groups (AGs) enabled
– RCSI or SI was enabled, or the database was added into an Availability Group
– During the deletions, a 14-byte timestamp was added to each deleted row to support RCSI/SI/AG reads

Click through for a demo and takeaways.
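
If you want a feel for the mechanics before clicking through, here is a compressed sketch of that scenario (not Brent's exact demo; the object names are made up): load the rows first, enable RCSI afterward, delete, and compare leaf-level page counts.

CREATE TABLE dbo.VersioningDemo
(
    id     int IDENTITY PRIMARY KEY,
    filler char(100) NOT NULL DEFAULT 'x'
);

INSERT dbo.VersioningDemo (filler)
SELECT TOP (100000) 'x'
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- Leaf-level page count before the delete
SELECT page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.VersioningDemo'), NULL, NULL, N'DETAILED')
WHERE index_level = 0;

-- Enable RCSI only after the rows exist, mirroring the scenario above
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- The delete now stamps a 14-byte versioning tag onto rows sitting on already-full pages,
-- which can split pages and grow the index before ghost cleanup catches up.
DELETE dbo.VersioningDemo WHERE id % 2 = 0;

-- Leaf-level page count after the delete: compare with the earlier number
SELECT page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.VersioningDemo'), NULL, NULL, N'DETAILED')
WHERE index_level = 0;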

Comments closed

Baselining SQL Server with the First Responder Kit

Ajay Dwivedi has a GitHub project showing a method to collect baseline measures for SQL Server using Brent Ozar’s First Responder Kit:

With sp_BlitzFirst & sp_WhoIsActive in a SQL Agent job scheduled to run every 10-15 minutes, you can look back in time to see what was running, along with its execution stats, file stats, wait stats, and Perfmon counters. This would help you answer anybody asking why your server was slow at a particular point in time.

Click through for the post and check out the GitHub repo as well. Baselining is extremely important for proper administration.
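
The collection step in a job like that looks something like this (a sketch; the DBA database and table names are assumptions, and the parameter names should be checked against the First Responder Kit and sp_WhoIsActive versions you actually have installed):

-- Log sp_BlitzFirst results, file stats, wait stats, and Perfmon counters to tables
EXEC dbo.sp_BlitzFirst
    @OutputDatabaseName = N'DBA',
    @OutputSchemaName = N'dbo',
    @OutputTableName = N'BlitzFirst',
    @OutputTableNameFileStats = N'BlitzFirst_FileStats',
    @OutputTableNameWaitStats = N'BlitzFirst_WaitStats',
    @OutputTableNamePerfmonStats = N'BlitzFirst_PerfmonStats';

-- Capture what is running right now; the destination table must already exist
-- (sp_WhoIsActive's @return_schema = 1 option will script it out for you)
EXEC dbo.sp_WhoIsActive
    @get_plans = 1,
    @destination_table = N'DBA.dbo.WhoIsActive';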

Comments closed

Preventing SQL Server Startup With A Simple INI File

Solomon Rutzky is a month early with this:

In the event shown directly above, towards the bottom, in the final “<Data>” element that starts with “\\?\C:\ProgramData...”, that entry does point to a folder containing a Report.wer file. It is a plain text file containing a bunch of error dump info, but nothing that would seem to indicate where to even start looking to fix this. And, nothing useful for searching on, at least not as far as my searching around revealed.

Conclusion
There you have it: a nearly untraceable way to prevent SQL Server from starting.

Read on to see what Solomon did.

Comments closed

Fixing Orphaned Users In All Databases

Max Vernon shows how we can find orphaned users across all databases in an instance and fix them using dynamic SQL:

The next piece of code helps fix orphaned users by reconnecting them to logins that have precisely the same name, but a differing SID. This code is a variant of the above code that dynamically creates ALTER USER statements. A statement is created for each orphaned user where there is a match-by-name in the list of server logins. Once the list of dynamically created ALTER USER statements is compiled, the commands to fix orphaned users are automatically executed.

Click through for the scripts.
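
The core of that idea, shown for a single database, looks roughly like this (a sketch, not Max's script, which also loops over every database and runs what it generates):

-- Orphaned users: the name matches a server login, but the SID does not
SELECT N'ALTER USER ' + QUOTENAME(dp.name)
       + N' WITH LOGIN = ' + QUOTENAME(sp.name) + N';' AS fix_command
FROM sys.database_principals AS dp
JOIN sys.server_principals AS sp
    ON sp.name = dp.name
WHERE dp.type IN ('S', 'U')          -- SQL and Windows users
  AND dp.authentication_type <> 0    -- exclude users created WITHOUT LOGIN
  AND dp.sid <> sp.sid;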

Comments closed

Disable Priority Boost

Randolph West wants you to disable priority boost if you have it turned on:

It turns out that on one of the benchmarks, Microsoft was able to achieve higher throughput by setting the SQL Server process to HIGH_PRIORITY_CLASS, and the thread priority level over and above that to THREAD_PRIORITY_HIGHEST. While it was extremely helpful to beat artificial benchmarks, it came at the cost of giving SQL Server higher execution context on the CPU than almost every other process on Windows.

And that’s the only good use of priority boost ever. If you’ve never heard of priority boost, just keep on ignoring it.
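
Checking and disabling it is a one-liner apiece (the change only takes effect after the next service restart):

-- priority boost is an advanced option, so expose those first
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- run_value = 1 means it is currently enabled
EXEC sys.sp_configure N'priority boost';

-- turn it off; SQL Server picks the change up at the next service restart
EXEC sys.sp_configure N'priority boost', 0;
RECONFIGURE;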

Comments closed

Script Update Mode Should Be Parallel

Andy Levy explains why he wants script update mode to run in parallel:

When you have 8000+ databases on an instance, this is a huge deal. You’re looking at over two and a half hours just to bring SQL Server online after installing an SP or CU. While the instance is in script update mode, incoming connections are locked down and the service remains in the Starting status. Only the Dedicated Administrator Connection can be used to connect to the instance remotely.

Taking advantage of having a Failover Cluster Instance to patch the passive node in advance is great for minimizing downtime for Windows updates. But whether you have an FCI or not, SQL Server will remain in the “Starting” state until all of your databases have been through this process after installing an update. What was once a 10-minute failover is now a multi-hour ordeal, and maintenance windows become a lot harder to negotiate.

Andy’s pretty far over on the right-hand side of that bell curve, but I like his SQL Server suggestion because even with just a few hundred or a couple thousand databases, you’re still talking real time savings.

Comments closed

Creating Multi-Column Statistics From Missing Index DMVs

Max Vernon shows how you can use the missing index DMVs to find potential candidates for multi-column statistics:

SQL Server does have a fairly useful set of dynamic management views, or DMVs, which provide insight that can be leveraged in this area. The DMVs I’m talking about are the missing index DMVs, consisting of sys.dm_db_missing_index_groups, sys.dm_db_missing_index_details, etc. I’m not saying the missing index DMVs are a panacea that will enable you to fix every performance situation you run into, but they can be useful if you know where to look. This post doesn’t go into a lot of depth about how to use those DMVs for the purpose of actually creating indexes; however, I will show you how you can create multi-column stats objects as an interim performance booster while evaluating the need for those indexes.

I’ve never had great luck with multi-column stats versus simply creating indexes, but that could just be a case of me doing it wrong.
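
To give a flavor of what gets generated (a sketch only, not Max's script; the naming convention and the impact threshold are arbitrary choices here):

-- Build CREATE STATISTICS statements from the equality/inequality columns that the
-- missing index DMVs suggest for the current database
SELECT N'CREATE STATISTICS '
       + QUOTENAME(N'stat_missing_ix_' + CONVERT(nvarchar(20), mid.index_handle))
       + N' ON ' + mid.statement
       + N' (' + ISNULL(mid.equality_columns, N'')
       + CASE WHEN mid.equality_columns IS NOT NULL
               AND mid.inequality_columns IS NOT NULL THEN N', ' ELSE N'' END
       + ISNULL(mid.inequality_columns, N'') + N');' AS create_stats_command
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
WHERE mid.database_id = DB_ID()
  AND migs.avg_user_impact > 80;   -- arbitrary cutoff for the sketch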

1 Comment