SharePoint And MAXDOP=1

Daniel Glenn has a public service announcement on SharePoint and the maximum degree of parallelism setting:

For my first install test, I was using a service account that did have administrative rights on SQL as well. I looked at the setting in SQL and SharePoint 2016 did change the Maximum Degree Of Parallelism setting to 1. So, the story is, as of now anyway (we are dealing with Preview software), SharePoint Server 2016 requires MAXDOP=1.

Kidding about SharePoint aside, if you do have SharePoint in your environment, it’s worth knowing how to configure it correctly.
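If you want to check or set the value yourself, the instance-level knob lives in sp_configure. A minimal sketch (the SharePoint installer normally handles this when the service account has the rights):

-- View and then set max degree of parallelism at the instance level.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';     -- current value
EXEC sp_configure 'max degree of parallelism', 1;  -- what SharePoint expects
RECONFIGURE;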

Tracking DDL Events

Kenneth Fisher has a simple database trigger to track certain data definition language events:

A couple of notes before testing the code. The event groups I’m using will pull CREATE, ALTER, and DROP events for those objects. For a more complete list of events (you might want to add service broker events, for example) go here. Also, I’m using ORIGINAL_LOGIN because it will return who made the change even if they are impersonating someone else.

For my test, I created a user that only has db_DDLADMIN on the database. That means it can make DDL changes but can’t insert, update, delete or even run a select against any table in the database. That’s why I grant INSERT to public for the logging table.

It’s a good way of knowing when unexpected changes happen, too.
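If you want a skeleton to start from, here’s a minimal sketch of that kind of database trigger; the table and trigger names are mine, not Kenneth’s, and the event groups shown are just a sample.

-- Logging table; public gets INSERT so low-privilege DDL users can still write to it.
CREATE TABLE dbo.DDLEventLog
(
    EventDate DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName SYSNAME NOT NULL,
    EventType NVARCHAR(128) NOT NULL,
    EventXml XML NOT NULL
);
GO
GRANT INSERT ON dbo.DDLEventLog TO public;
GO
CREATE TRIGGER trg_LogDDLEvents
ON DATABASE
FOR DDL_TABLE_EVENTS, DDL_PROCEDURE_EVENTS, DDL_VIEW_EVENTS
AS
BEGIN
    DECLARE @e XML = EVENTDATA();

    INSERT INTO dbo.DDLEventLog (LoginName, EventType, EventXml)
    VALUES
    (
        ORIGINAL_LOGIN(),  -- still reports the real login under impersonation
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
        @e
    );
END;
GO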

Reading Drive Files Recursively From T-SQL

Slava Murygin shows how to use a common table expression to read a recursive drive listing:

In this blog I’ll use the undocumented SQL Server extended stored procedure “xp_dirtree”.
That procedure is well described in Patrick Keisler’s blog.

Briefly, the “xp_dirtree” extended procedure returns the tree of sub-directories under a given folder and takes three parameters:
1. Starting or root folder
2. Depth level – determines how deep SQL Server will dig into the sub-directory structure. If it is set to zero or omitted, the whole tree will be returned.
3. File flag – if omitted or set to zero, only folders are returned; if set to 1, files are returned as well.
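As a quick illustration of those parameters, here’s a sketch that captures the output into a temp table (the folder path is just an example, and the column names are assumptions matching the proc’s three-column result set when the file flag is 1):

-- Capture the directory tree, files included, into a temp table.
CREATE TABLE #DirTree
(
    subdirectory NVARCHAR(512),
    depth        INT,
    is_file      BIT
);

INSERT INTO #DirTree (subdirectory, depth, is_file)
EXEC master.sys.xp_dirtree 'C:\Temp', 0, 1;  -- root folder, full depth, include files

SELECT * FROM #DirTree ORDER BY depth, subdirectory;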

If you want to be really mean to somebody, create a shortcut to the parent directory within a directory.

Managing Drives On Linux

David Klee walks through some basics of Linux administration with respect to drives and mountpoints:

We see that all four of the drives show up in the list. Because of the nature of how I presented the disks to the VM, the bootable drives (sdd) show up at the end of the chain instead of the beginning, but that’s OK. It doesn’t change whether the disks are bootable.

We are going to use the Logical Volume Manager (LVM) to manage the disks for us instead of using regular partitions. On Windows-based servers, Windows has the ability to expand partitions without incurring any downtime, and we want to have the same flexibility from a Linux standpoint. With elevated privileges (for the rest of the commands), let’s scan the drives to look at what LVM sees.

Read the whole thing.

Environmental Considerations With SQL Server

Jeff Mlakar wraps up his environmental diagnostics series:

Let’s start by defining terms. By environmental I mean factors that affect database performance outside of the code. The remediation of issues can be accomplished without altering SQL: no code change request, no release management necessary. Poor performance can often be addressed by doing a quick review of the overall system and ensuring there are no horrible defaults in use.

This is the low-hanging fruit we can get before deep diving into the code.

Before you start banging your head against the hood of the car, take a walk around and look at the big picture first. It is a common thing to see junior DBAs go right for the most complex area they can identify and ignore the obvious issues that are plain to the eyes of a veteran DBA.

Read on for some resources to help you learn more.

Finding The Largest Tables

Andy Galbraith has a script to find the largest tables in a database:

In the previous post in this series “Toolbox – Where Did All My Space Go?” I shared a script for finding which database files consumed the most space and which of those files had free space in them.  The next step after finding out which databases are using the space is determining which tables in those databases are occupying that space to consider purge/archive opportunities.  In many cases you will find a single large table (often called a “mother table”) taking up most of the space in your database.

There are cases when you wouldn’t want to use the Disk Usage By Table report, so here is a viable alternative.
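Andy’s script is worth grabbing as-is; if you just want the flavor of the approach, a rough sketch against sys.dm_db_partition_stats (not his exact query) looks something like this:

-- Top 20 tables by reserved space in the current database.
SELECT TOP (20)
    s.name AS schema_name,
    t.name AS table_name,
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
    SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = t.object_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;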

Narrowing Down Deprecated Feature Usage

Dave Mason now skips msdb when he looks for deprecated feature usage:

In my previous post, I took a stab at monitoring deprecation events for SQL Server. It didn’t go so well. A deprecation event occurred more than 5,000 times in a very short period of time, and I got one email for every occurrence. Not good. Here’s what I kept seeing over and over:

USER_ID will be removed from a future version of SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use the feature. Use DATABASE_PRINCIPAL_ID instead.

It turns out the system stored proc msdb.dbo.sp_send_dbmail has a USER_ID() reference. I suspect an unrelated alert/email happened once, which executed sp_send_dbmail, which generated a DEPRECATION_FINAL_SUPPORT event, which ultimately led to another execution of sp_send_dbmail, which generated yet another DEPRECATION_FINAL_SUPPORT event, and round and round we go.
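One way to quiet that loop is to ignore events coming from msdb inside the activation procedure. A hedged sketch of the relevant check, assuming the event XML carries a DatabaseName element and that the RECEIVE logic from the previous post already populated the message variable:

-- Inside the activation proc, after pulling the event off the queue:
DECLARE @msg XML;  -- populated by RECEIVE ... CAST(message_body AS XML)

IF @msg.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'NVARCHAR(128)') <> N'msdb'
BEGIN
    -- Build the HTML table and call sp_send_dbmail as before.
    PRINT 'deprecation event worth alerting on';
END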

Click through for examples of deprecated features that various Microsoft products, including Reporting Services and Team Foundation Server, still use.

Identifying Deprecated Features

Dave Mason provides a method for determining if you’re using deprecated functionality on your SQL Server instance:

I’ve wanted to do some Event Notification testing for SQL Server deprecation events for quite some time. The thought process here is that I could send myself an alert to identify usage of SQL Server features that will be removed from the next major version (or future version) of SQL Server. I could then forward this info to development and let them take action…or not (I kid, I kid). Today is the day I finally got around to the testing. I didn’t get very far, though.

Without rehashing the basics of event notifications (this post may help if you need some context), I created an EVENT NOTIFICATION for the TRC_DEPRECATION event group. That group includes the DEPRECATION_FINAL_SUPPORT and DEPRECATION_ANNOUNCEMENT child events. I also created a QUEUE, a SERVICE, and an activation PROCEDURE (for the QUEUE). The proc is simplistic. It takes the EVENTDATA() XML data, transforms it into an HTML <table>, and emails the info to me.
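For reference, the moving parts Dave describes look roughly like this; the object names are mine and the activation procedure body is elided.

-- Queue and service in the database that will receive the events.
CREATE QUEUE dbo.DeprecationQueue;
GO
CREATE SERVICE DeprecationService
    ON QUEUE dbo.DeprecationQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- Server-scoped notification for the TRC_DEPRECATION event group.
CREATE EVENT NOTIFICATION DeprecationEvents
    ON SERVER
    FOR TRC_DEPRECATION
    TO SERVICE 'DeprecationService', 'current database';
GO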

Watch out, though:  Dave discovered something quite funny when he set this up.

VMware Configuration For SQL Server

Jeff Mlakar talks about things you want to look at if you’re running SQL Server on VMware:

In a virtual data center, CPU is spread across many guest VMs. This is one of the key drivers behind the effort to virtualize – CPU cores mostly sit unused. For example, we can take a host with maybe 48 cores and virtualize many machines that present logically with > 48 cores. The hypervisor can swap cores in and out as it needs based on what the guest VMs are doing. If the baseline for a guest VM is only 10% CPU usage then this is easy. However, when an intense application like SQL Server is virtualized, it must have CPU available, otherwise performance will suffer noticeably.

Generally for CPU on a guest VM:

  • Reservations on CPU are not often possible, but consider them if your data center allows for it.

  • You want more cores than sockets. So if you are aiming for 8 cores, you want something like 2 sockets with 4 cores each instead of 8 sockets with 1 core each.

  • If priority can be given to the SQL VM for CPU, then change the Shares Resource Allocation from normal to high.

Click through for more helpful hints.

Disk Usage By Table Report

Kenneth Fisher shows off my favorite built-in SSMS report:

Every now and again you need to know how big a table is. Or several tables. Or all of the tables. The number of rows is frequently handy when you’re going to create a new index or otherwise modify the table. The amount of space used by the indexes can be helpful in deciding how much space you need to do a re-index. The tables with the most unused space are nice to know about if you have a problem with ever-growing heaps.

In the past, my go-to solution here was sp_spaceused. It’s a really handy procedure.

USE AdventureWorks2014;
GO
EXEC sp_spaceused 'Person.Person';
GO

Great information, but it has a few problems. You can only run it for one table at a time (sp_msforeachtable is a workaround, if an undocumented one), the file sizes aren’t consistent (sometimes KB, sometimes MB or even GB), and it only returns the name of the object, not the schema. So if the same table name exists under multiple schemas, it can get tricky.
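The undocumented workaround mentioned above looks like this, if you’re willing to rely on it; note that it returns one result set per table:

-- Run sp_spaceused once for every user table in the current database.
EXEC sp_msforeachtable N'EXEC sp_spaceused ''?'';';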

Read on for how to access and use this report.
