Press "Enter" to skip to content

Category: Administration

Error Logs with Windows Containers Running SQL Server

Jamie Wick walks us through several locations for error logs when you’re running SQL Server on a Windows container:

So far this series has covered: installing Docker for Windows, the basic commands for managing images and containers, and creating a new image. This post will cover troubleshooting containers & the Docker application using the various log files that are available.

Depending on the type of process being run in a container (interactive or non-interactive), event and error information may be collected by the host logs, application logs, or by the Docker logs (or by all 3, in the case of SQL Server and IIS for Windows).

Read on, as it’s not just “The same places as you’d see on SQL Servers outside of containers.”
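For a rough idea of where to look, here is a minimal sketch using the Docker CLI from PowerShell. The container name and the ERRORLOG path inside the container are placeholders that depend on your image and SQL Server version.

```powershell
# Console output from the container's main process (what `docker logs` surfaces)
docker logs sql2019

# Follow new output live, starting from the last 50 lines
docker logs --tail 50 --follow sql2019

# SQL Server still writes its own ERRORLOG inside the container; the MSSQL15
# path below assumes a SQL Server 2019 default instance and may differ per image
docker exec sql2019 cmd /c type "C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\LOG\ERRORLOG"
```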


Changing Azure SQL DB Service-Level Objectives

Monica Rathbun notes that SSMS lets you change service-level objectives for Azure SQL Databases:

Sometimes as a DBA, I am lazy and want the ability to execute all of my tasks in one place. Lucky for me, I discovered the other day that I can change my Azure SQL Database Service Level Objective options within SQL Server Management Studio (SSMS) without ever having to go to the Azure Portal.

Read on to learn how, as well as what you can change.
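The post shows the SSMS dialog; the same change can also be scripted. A hedged sketch (server, database, and target tier are placeholders, and authentication parameters are omitted):

```powershell
$server = 'myserver.database.windows.net'

# Check the current service level objective
Invoke-DbaQuery -SqlInstance $server -Database master -Query "
    SELECT DATABASEPROPERTYEX('MyAppDb', 'ServiceObjective') AS CurrentSLO;"

# Scale the database to Standard S3; the change is applied asynchronously
Invoke-DbaQuery -SqlInstance $server -Database master -Query "
    ALTER DATABASE [MyAppDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');"
```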


Creating a Fail-Safe Agent in SQL Server

Garry Bargsley wants the buck to stop somewhere:

Did you know it is possible for SQL Server Agent to alert you to problems if something goes haywire with your Agent? Have you ever had an issue with Alerts not being sent after critical events? Then you might need to configure the SQL Server Agent Fail-Safe Operator to save the day. A Fail-Safe WHAT, you might say? This is a special SQL Agent Operator, configured in the SQL Agent Alert System, in case any of the following situations occurs.

Click through for the situations as well as configuration steps using PowerShell + dbatools.
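As one hedged sketch of the end state (not necessarily the exact cmdlets Garry uses), the fail-safe operator can be set through the SMO objects that dbatools' Connect-DbaInstance exposes, assuming an operator named 'DBA Team' already exists:

```powershell
$server = Connect-DbaInstance -SqlInstance 'SQL01'   # instance name is a placeholder

# The fail-safe settings live on the Agent's alert system object
$alertSystem = $server.JobServer.AlertSystem
$alertSystem.FailSafeOperator   = 'DBA Team'     # must be an existing operator
$alertSystem.NotificationMethod = 'NotifyEmail'
$alertSystem.Alter()                             # persist the change on the server
```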


Cleaning Up the SQL Server Error Log

Garry Bargsley does some spring cleaning:

If you are like me, you inherited variously configured SQL Servers when you took over as the DBA for your company. After almost two years, I have gotten all the standards in place where I feel that the environment is clean. One of the last things I accomplished was to standardize SQL Server Error Log configurations, Error Log Cycle schedules and cleaning up of old Error Logs.

Accomplishing this involved several steps to get all SQL Servers onto a unified set of configurations.

This is pretty easy to do and to script out.
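As a rough sketch of the building blocks (instance names are placeholders, and the cmdlet parameters are worth verifying against your dbatools version):

```powershell
$instances = 'SQL01', 'SQL02'

# Keep 30 error logs instead of the default 6
Set-DbaErrorLogConfig -SqlInstance $instances -LogCount 30

# Cycle the current error log now; scheduling this in an Agent job keeps
# each log to a predictable time window
Invoke-DbaQuery -SqlInstance $instances -Database master -Query 'EXEC sys.sp_cycle_errorlog;'
```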


Configuring Perfmon for Ongoing Data Collection

David Klee killed the radio star:

I just released a new training video (one of many to be released in the upcoming months) showing you how to set up Windows Perfmon for ongoing 24×7 collection. Recording ongoing performance information is vital to have a running history of the system state in case issues arise. More importantly, having a running stream of performance information gives you a running history that you can refer to for generating an ongoing baseline of your system’s resource consumption. Come learn how to set up Perfmon for ongoing collection!

Click through for the video and check out the Heraflux channel.
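If you prefer to script the collector rather than build it in the Perfmon GUI, the built-in logman utility can do it. A sketch (the counter list, sample interval, and output path are placeholders, and this is not necessarily the configuration shown in the video):

```powershell
# Create a data collector that samples every 15 seconds into a 1 GB circular binary log
logman create counter SQLBaseline -si 00:00:15 -f bincirc -max 1024 -o "C:\PerfLogs\SQLBaseline" `
    -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\SQLServer:SQL Statistics\Batch Requests/sec"

# Start the collector (it keeps running until stopped)
logman start SQLBaseline
```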


Killing Idle Analysis Services Sessions

Shabnam Watson shows us how to kill idle SQL Server Analysis Services sessions:

Think of this method as an emergency procedure only. As always, have database backups and try this on a development server first. Always take a backup of msmdsrv.ini before you modify any server properties. The default location of the file is this: C:\Program Files\Microsoft SQL Server\MSAS15.MSSQLSERVER\OLAP\Config

If you set the timeout values too low on a server that is under resource pressure, you may not be able to get to the server properties using SSMS and change them quickly within the time you set for the timeout. For this reason, I prefer the use of XMLA in this case, which makes the process faster.

Read on to see how to do this.
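The post's approach is to change the idle-timeout server properties via XMLA. A related sketch for finding sessions and cancelling one explicitly, using Invoke-ASCmd from the SqlServer module (server name and session ID are placeholders):

```powershell
Import-Module SqlServer

# List current sessions and how long each has been idle (milliseconds)
Invoke-ASCmd -Server 'localhost' -Query 'SELECT SESSION_ID, SESSION_USER_NAME, SESSION_IDLE_TIME_MS FROM $SYSTEM.DISCOVER_SESSIONS'

# Cancel one session by its SessionID
Invoke-ASCmd -Server 'localhost' -Query @'
<Cancel xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <SessionID>00000000-0000-0000-0000-000000000000</SessionID>
</Cancel>
'@
```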


Working with Central Management Servers in dbatools

Mikey Bronowski continues a series on dbatools:

The built-in feature of SSMS allows us to configure a group of SQL instances and run queries against multiple instances at once. With registered servers, you can also build a list of SQL Servers in one place, so everyone with access to the CMS can see them. First, we will start by creating registered servers and server groups.

This is an underrated set of functionality for SQL Server and dbatools does a good job working with it.
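A minimal sketch of that workflow with dbatools, assuming a CMS instance named 'CMS01' (all names are placeholders):

```powershell
# Create a group on the CMS and register a couple of servers into it
Add-DbaRegServerGroup -SqlInstance 'CMS01' -Name 'Production'
Add-DbaRegServer -SqlInstance 'CMS01' -ServerName 'SQL01' -Group 'Production'
Add-DbaRegServer -SqlInstance 'CMS01' -ServerName 'SQL02' -Group 'Production'

# Run a query against everything registered under that group
Get-DbaRegServer -SqlInstance 'CMS01' -Group 'Production' |
    Invoke-DbaQuery -Query 'SELECT @@SERVERNAME AS ServerName, @@VERSION AS Version;'
```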


Dropping Unused Indexes in Azure SQL DB

Monica Rathbun gives an important lesson around tracking index utilization in Azure SQL Database:

If the index has not shown any utilization, I investigate to determine if it is one that can be removed. However, this week something caught my attention. I was looking at a client’s indexes and noted the values for these were not as high as I would have expected. I know that these index statistics are reset upon every SQL Server service restart, but in this case I was working on an Azure SQL Database, which got me wondering exactly how that worked. With an Azure Virtual Machine or an on-premises SQL Server instance this is easy to figure out. But with an Azure SQL Database we do not have control over when restarts are done, and what about the Serverless offering (which pauses unutilized databases to reduce costs): how do those behave? I really want to make sure, before I remove any indexes from a database, that I am examining the best data possible to make that decision. So, I did some digging.

Read on to see what Monica discovered.
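For reference, the usage numbers in question come from sys.dm_db_index_usage_stats. A hedged sketch of the kind of check involved (not necessarily Monica's exact query; connection details and names are placeholders):

```powershell
$query = @'
SELECT o.name AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates,
       s.last_user_seek, s.last_user_scan
FROM sys.indexes AS i
JOIN sys.objects AS o
    ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.object_id = i.object_id
   AND s.index_id = i.index_id
   AND s.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
ORDER BY s.user_seeks + s.user_scans + s.user_lookups;
'@

# These counters reset when the engine restarts, which is exactly the caveat
# the post digs into for Azure SQL Database
Invoke-DbaQuery -SqlInstance 'myserver.database.windows.net' -Database 'MyAppDb' -Query $query
```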


How a 60GB Database Backup Became 1TB

Garry Bargsley points out the importance of a tiny flag:

I put on my investigator hat and began looking around. I started with the Windows file server to see what was actually on the drive in question. Just as I thought, three SQL backup files were in their proper folder. Although there were only three files, something else caught my attention. One backup file was 800GB and another 1TB. That was strange, as I didn't think the source databases were that big. Sure enough, I looked, and one database is 60GB and the other is 45GB.

Something is not right here!! So, next, I run a RESTORE HEADERONLY against one of the backup files. What did I see?

Read on to learn what Garry saw, and then what Garry didn’t see.
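For reference, RESTORE HEADERONLY returns one row per backup set stored in a file, so a file that has been appended to repeatedly will show many rows. The same check is easy to reproduce (instance and path are placeholders):

```powershell
# Straight T-SQL against the backup file
Invoke-DbaQuery -SqlInstance 'SQL01' -Database master -Query "
    RESTORE HEADERONLY FROM DISK = N'\\backupshare\SQL01\MyAppDb.bak';"

# dbatools exposes the same header information as objects
Read-DbaBackupHeader -SqlInstance 'SQL01' -Path '\\backupshare\SQL01\MyAppDb.bak'
```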
