Press "Enter" to skip to content


Multi-Server Diagnostics with dbatools

Gianluca Sartori shows how we can collect diagnostic information from multiple SQL Server instances by way of dbatools:

What I really love about PowerShell is how simple it is to filter, extend and manipulate tabular data using the pipeline, in a way that resonates a lot with the experience of T-SQL developers.

The main part of the script is the one that invokes all the diagnostic queries included in the list $queries. This is done by invoking the cmdlet Invoke-DbaDiagnosticQuery, which takes care of using a version of the diagnostic query that matches the version of the target server and selecting the data. As usual with dbatools, the -SqlInstance parameter accepts a list of servers, so you can pass in the list of all the SQL Servers in your infrastructure.

Gianluca has a GitHub repo for the script and explains it in more detail as well.
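
To give a feel for the shape of it, here is a minimal sketch of the multi-server call; this is not Gianluca's full script, and the instance and query names are placeholders:

```powershell
Import-Module dbatools

# Instance and query names are placeholders; -QueryName limits the run to
# specific queries from Glenn Berry's set instead of running all of them.
$servers = 'SQL01', 'SQL02', 'SQL03'
$queries = 'Memory Clerk Usage', 'Top Waits'

# One call fans out to every instance, picking the variant of each query
# that matches that server's version. Output property names here are as
# in recent dbatools releases.
Invoke-DbaDiagnosticQuery -SqlInstance $servers -QueryName $queries |
    Select-Object SqlInstance, Name, Result
```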

Finding Power BI V1 Workspace Owners

Brett Powell has a process to find Power BI V1 workspace owners:

As most readers of this blog likely know, there are two very different kinds of workspaces in Power BI – V1 or ‘classic’ workspaces, which are tied to Office 365 groups, and V2 or ‘modern’ workspaces, which are not. V2 workspaces have many advantages beyond their independence from Office 365, which you can read about elsewhere, but for a bit of context you can read the GA announcement of V2 workspaces from back in April.

Since upgrading to V2 workspaces has been a manual process thus far, most Power BI tenants contain a mix of V1 and V2 workspaces. You may also have read the recent announcement of a new feature in the Power BI service available to workspace admins to upgrade their V1 workspaces. This blog post is all about identifying these V1 workspaces and their admins.

Click through for the process, as well as Brett’s recommendation regarding migration to V2 right now.
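
If you would rather script the discovery than click around the admin portal, here is a hedged sketch using the MicrosoftPowerBIMgmt module; this is not necessarily Brett's exact approach, and the user property names can vary between module versions:

```powershell
Import-Module MicrosoftPowerBIMgmt

# -Scope Organization requires Power BI admin rights.
Connect-PowerBIServiceAccount

# V1 ('classic') workspaces are backed by Office 365 groups, so they come
# back with Type = 'Group'; V2 ('modern') workspaces report 'Workspace'.
Get-PowerBIWorkspace -Scope Organization -All -Include All |
    Where-Object Type -eq 'Group' |
    Select-Object Name, Id, @{
        Name       = 'Admins'
        Expression = { ($_.Users |
                Where-Object AccessRight -eq 'Admin').UserPrincipalName -join '; ' }
    }
```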

Partition Switching to Make Table Changes

Daniel Hutmacher shows a couple things you can change with near-zero downtime using partition switching:

Look, I’m not saying that you’re the type that would make a change in production while users are working.

But suppose that you would want to add an identity column to dbo.Demo, and change the clustered index to include that identity column, and make the index unique? Because it’s the table’s clustered index, you’re effectively talking about rebuilding the table (remember, the clustered index is the table), which involves reorganizing all of the rows into a new b-tree structure. While SQL Server is busy doing that, nobody will be able to read the contents of the table.

Daniel mentions a read-only table, though you could also do this with a read-write table as long as you have triggers to keep the two tables in sync until go time. That adds to the complexity, but it is an option if you need it.
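
To make the pattern concrete, here is a minimal sketch of the switch-and-rename approach for just one of the changes SWITCH permits, adding the IDENTITY property to an existing column; the table, database, and instance names are invented, and Daniel's post covers the index changes as well:

```powershell
$sql = @'
BEGIN TRAN;

-- Shadow table: identical structure, except Id now carries the IDENTITY
-- property (one of the few things SWITCH does not require to match).
CREATE TABLE dbo.Demo_New (
    Id      int IDENTITY(1, 1) NOT NULL,
    Payload varchar(100) NOT NULL
);
CREATE CLUSTERED INDEX CIX_Demo_New ON dbo.Demo_New (Id);

-- Metadata-only: the rows change tables without being read or rewritten.
ALTER TABLE dbo.Demo SWITCH TO dbo.Demo_New;

EXEC sp_rename 'dbo.Demo', 'Demo_Old';
EXEC sp_rename 'dbo.Demo_New', 'Demo';

-- Push the identity seed past the values already in the table.
DBCC CHECKIDENT ('dbo.Demo');

COMMIT;
'@
Invoke-DbaQuery -SqlInstance 'SQL01' -Database 'Sandbox' -Query $sql
```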

Creating a Failover Cluster Instance with Shared Storage

Ryan Adams wraps up a video series on setting up a SQL Server lab environment:

You are going to create a SQL Server Failover Cluster Instance in Part 4 of our series on how to build a SQL Cluster Lab. The FCI will only be installed on Node1 and Node2. FCIs require shared storage, so you will make your domain controller an iSCSI target. Lastly, you will create your FCI using the iSCSI drives you presented to the cluster.

Click through for links to the entire series.
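
If you prefer to script the shared-storage piece, the built-in Windows Server iSCSI target cmdlets look roughly like this; Ryan demonstrates the process in the videos, and every name, IQN, and size below is invented:

```powershell
# On the domain controller: install the iSCSI target feature, carve out a
# virtual disk, and present it to the two cluster nodes.
Install-WindowsFeature -Name FS-iSCSITarget-Server

New-IscsiServerTarget -TargetName 'SQLClusterTarget' -InitiatorIds @(
    'IQN:iqn.1991-05.com.microsoft:node1.lab.local',
    'IQN:iqn.1991-05.com.microsoft:node2.lab.local'
)

New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\SQLData.vhdx' -SizeBytes 50GB
Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLClusterTarget' `
    -Path 'C:\iSCSIVirtualDisks\SQLData.vhdx'
```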

Optimizing for Sequential Keys

Milos Radivojevic is excited about OPTIMIZE_FOR_SEQUENTIAL_KEY:

The results show that the feature should be used for tables where latch convoys already happen or are about to happen. In not-so-heavy workloads, it brings a small overhead. You can see that inserts with 100 parallel sessions are 16% slower for a table with the optimized sequential key. However, when you hit the convoy, the feature definitely helps.

This graf is critical: if you don’t have high enough concurrency on the table, insertion can be a little slower than otherwise, so don’t go setting this for every table.
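
For reference, the option is set per index rather than instance-wide, and enabling it on an existing index is a metadata-only change. A quick syntax sketch (object and instance names invented; this requires SQL Server 2019):

```powershell
$sql = @'
CREATE TABLE dbo.Orders (
    OrderId   bigint IDENTITY(1, 1) NOT NULL,
    OrderDate datetime2 NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId)
        WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
);

-- Or turn it on for an index that already exists:
ALTER INDEX PK_Orders ON dbo.Orders
    SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);
'@
Invoke-DbaQuery -SqlInstance 'SQL2019' -Database 'Sandbox' -Query $sql
```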

Things a New DBA Should Learn

John McCormack has three things new DBAs should learn first:

Help, the SQL Server is on fire and my really important process is not working. Why is the server so slow?

This scenario will happen to you. Invariably, the person (often a manager) will stand over you and expect you to knock out one or two lines of T-SQL wizardry to get things running along smoothly again. First of all, I should say that in a perfect world, you will ask them to raise a ticket and you will work on it according to its priority against your other tasks. Then you can let them know what the issue was and what can be done to prevent it from happening again. But we rarely work in a perfect world. In this scenario, you need one primary line of T-SQL to get started.

John’s three points are a really good starting point.
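
John shares his own go-to statement in the post. As one hedged example of the kind of first-responder query he means (not necessarily his line), this shows everything currently running and who is blocking whom:

```powershell
$sql = @'
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
'@
Invoke-DbaQuery -SqlInstance 'SQL01' -Query $sql
```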

SQL Server 2019 Installation Enhancements

Aaron Bertrand looks at a couple things you can configure on installation with SQL Server 2019:

One of the long-standing defaults in SQL Server has been the maximum degree of parallelism (MAXDOP), which has always been 0, meaning use (up to) all cores when the engine believes that will help. For many OLTP workloads, 0 is not the optimal setting, and you may want to use a different number depending on the behavior of your workload. I don’t want to belabor the thought process here, but this will be based on settings like the number of cores exposed to SQL Server, whether they are divided into NUMA nodes, and if there are other instances, applications, or services running on the same Windows Server. Microsoft’s guidelines are published here.

Aaron shows us how to set MAXDOP as well as min and max server memory for our new instance. It’s nice to see these types of additions to the setup process; it makes it less likely that a DBA who only installs instances occasionally will forget to set these afterward.
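
If you script your installs, the same settings surface as setup.exe switches in SQL Server 2019. Here is a sketch with placeholder values and the usual unattended-install minimum, plus the dbatools equivalents for instances that already exist:

```powershell
# New SQL Server 2019 setup switches (values are placeholders):
.\setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=MSSQLSERVER `
    /SQLSYSADMINACCOUNTS="DOMAIN\DBAGroup" /IACCEPTSQLSERVERLICENSETERMS `
    /SQLMAXDOP=8 /SQLMINMEMORY=4096 /SQLMAXMEMORY=28672

# After the fact, dbatools can apply the same settings to an existing instance.
Set-DbaMaxDop -SqlInstance 'SQL01' -MaxDop 8
Set-DbaMaxMemory -SqlInstance 'SQL01' -Max 28672   # max server memory, in MB
```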

Error Log Files and SQL Server Upgrades

Mike Hays points out that the number of error log files to keep is a setting that resets on SQL Server upgrade:

This morning, while working with one of my SQL Server instances, I went looking for an error log from last August. After finding that I was only able to go back six days, I checked the directory that stores SQL Server’s error logs and noticed a gap in the logs from October to last March.

When I checked the configuration for the error logs in SQL Server Management Studio, I saw that the setting was configured at the default setting of 6 error logs for history.

Read on to understand why this happened.
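
One way to catch this across the estate, and to put the setting back after an upgrade, is dbatools' error log configuration cmdlets; the instance names and the count of 30 below are placeholders:

```powershell
# Check the current retention everywhere, then raise it back to 30 files.
Get-DbaErrorLogConfig -SqlInstance 'SQL01', 'SQL02'
Set-DbaErrorLogConfig -SqlInstance 'SQL01', 'SQL02' -LogCount 30
```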
