Press "Enter" to skip to content


Azure SQL Managed Instance Prerequisites

Frank Gill has started a series on Azure SQL Managed Instances and has two posts up already.  First, an introduction:

The drawbacks of Azure SQL Database make it difficult to migrate existing applications, because of the number of application changes required.  Azure SQL Database is designed to be used for new development in Azure and for multi-tenant environments, where each tenant requires their own copy of a database.

The benefits of SQL Server on an Azure VM make it much easier to migrate an existing application to Azure.  However, the VMs underlying the application still have to be managed by the client.  This fails to take advantage of the management of resources in Azure, and uses Azure as a VM host.

A third option, Azure SQL Managed Instance, was released at the beginning of October 2018.  Managed Instance combines the best of the previous options.  With Managed Instance, the infrastructure is fully managed and the majority of the SQL Server feature set is available.  The full list of differences between a traditional install of SQL Server and Managed Instance can be found here.  A number of the most dramatic differences are listed below.

Then a post covering prerequisites:

Before creating an Azure SQL Managed Instance, a number of prerequisite resources must be provisioned.  These are:

  • An Azure Virtual Network

  • A dedicated subnet for Managed Instances

  • A route table

It looks like this is part of a longer series Frank is building out, so stay tuned.


Reading Error Logs Outside Of SQL Server

Kenneth Fisher shows us where error logs are located and how to read them outside of SQL Server:

Quick and easy post today. Hopefully you’ve opened the error log on a SQL instance. However, what happens if you don’t have the log viewer in SSMS? In fact, the instance won’t come up at all, so you really need to see what went wrong.

Fortunately, the error logs in SQL are just text files, even though they don’t actually have that extension. The trick is knowing where they are.

Read on for the answer.
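A quick way to find the path ahead of time, while the instance is still up, is to ask the instance itself. A minimal sketch (this is not from Kenneth’s post):

    SELECT SERVERPROPERTY('ErrorLogFileName') AS errorlog_path;  -- full path to the current ERRORLOG

    -- While the instance is running, the log can also be read from T-SQL
    -- via the undocumented but widely used xp_readerrorlog:
    EXEC xp_readerrorlog 0, 1;  -- log 0 = current, type 1 = SQL Server error log

The files themselves (ERRORLOG, ERRORLOG.1, and so on) sit in the instance’s Log directory and open in any text editor.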


Beautiful Deadlock Graphs And Tying RIDs Back To Object Names

Josh Simar shares a deadlock graph which I have entitled The Pit Of Despair:

I can’t make heads or tails of that but I can tell you that seems like a really bad brawl for resources. It’s like a Jerry Springer show with a few extras thrown in. Since I knew that my graph wasn’t going to be helpful in this instance I went to the actual xml and tried to figure out how I could tune this to make it better in the future. I needed to know exactly where the issue was so the waitresource pointer is a good place to start.

You will see many blog articles on how to find SQL wait resources when the resource type is a key, a page, or an object (I suggest Kendra Little’s blog post). There is, however, a noticeable dearth of articles explaining RIDs (a RID is the row identifier on a table with no clustered index). I finally found how to tie a RID to an actual resource name, but it was used for corruption so the details were a bit hazy at first.

Click through for this work of database art as well as a script which links RIDs back to specific object names.
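As a rough illustration of the mapping (this is not Josh’s script; the dbid:fileid:pageid:slot values are made up), a RID wait resource such as RID: 7:1:12345:0 can be tied back to an object on SQL Server 2019 and later with sys.dm_db_page_info:

    SELECT OBJECT_NAME(object_id, database_id) AS object_name, index_id
    FROM sys.dm_db_page_info(7, 1, 12345, 'LIMITED');  -- dbid 7, file 1, page 12345 (hypothetical)

On older versions, DBCC PAGE(7, 1, 12345, 0) WITH TABLERESULTS exposes the owning ObjectId in its metadata rows.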


Monitoring Apache NiFi With A Custom Dashboard

Tim Spann has started a new series on monitoring Apache NiFi:

In this little proof of concept work, we grab some of these flows, process them in Apache NiFi, and then store them in Apache Hive 3 tables for analytics. We should probably push the data to HBase for aggregates and Druid for time series. We will see as this expands.

There are also other data access options including the NiFi REST API and the NiFi Python APIs.

Bootstrap Notifier

  • Send a notification when NiFi starts, stops, or dies unexpectedly
  • Two OOTB notifications
  • Email notification service
  • HTTP notification service
  • It’s easy to write a custom notification service

Reporting Tasks

  • AmbariReportingTask (global, per process group)

  • MonitorDiskUsage(Flowfile, content, provenance repositories)

  • MonitorMemory

Much of this is an overview of the tools and measures available.


Finding Databases With Multiple Data Or Log Files

Lori Brown has a couple of quick scripts to help find databases made up of several data or log files:

This might be kind of basic, but since I am working on a comprehensive script to discover things that a DBA really needs to know about, I made a couple of queries that will produce a list of the databases that have multiple files along with the locations of the physical files.  One query finds multiple database files (mdf’s) and the other looks for multiple transaction log files (ldf’s).  This will also find the Filestream file locations.  Since I often have to take on instances without ever having seen them, it is good to know about little things like this.

These scripts might be helpful in finding minor performance gains by looking for places to add data files or remove extraneous log files.
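For comparison, a minimal version of the idea (a sketch, not Lori’s queries) runs against sys.master_files:

    SELECT d.name AS database_name, mf.type_desc, COUNT(*) AS file_count
    FROM sys.master_files AS mf
        INNER JOIN sys.databases AS d
            ON d.database_id = mf.database_id
    GROUP BY d.name, mf.type_desc
    HAVING COUNT(*) > 1;  -- more than one ROWS, LOG, or FILESTREAM file

Lori’s versions go further and return the physical file locations as well.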


Learning Why A Plan Was Removed From Cache

Grant Fritchey shows us that there is some limited information to tell us why an execution plan was removed from cache:

You’ll note that the second statement in the sequence is “CREATE OR AL…” in the batch_text. That’s me modifying the procedure. The very next event is sp_cache_remove. It shows the remove_method as “Compplan Remove”. This is the plan being removed in an automated way from cache. The next three events are all for query_cache_removal_statistics.

What are they?

These are the statement level statistical information being removed from the DMVs. That’s right, we can observe that information getting removed from the system along with the plan from cache.

Unless I’m missing something, it seems like this is more helpful for pedagogical reasons than for auditing; I’d be concerned that on a busy production system, we’d see too many messages to correlate things all that well.
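For anyone who wants to watch for these events, a bare-bones Extended Events session built from the event names Grant mentions (a sketch; his session may differ) would look like:

    CREATE EVENT SESSION [PlanCacheRemoval] ON SERVER
        ADD EVENT sqlserver.sp_cache_remove,
        ADD EVENT sqlserver.query_cache_removal_statistics
        ADD TARGET package0.ring_buffer;

    ALTER EVENT SESSION [PlanCacheRemoval] ON SERVER STATE = START;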


Generating E-Mail Alerts From Perfmon

Dave Bermingham shows how to send e-mail alerts based on Perfmon counter values:

The first thing that you need to do is write a PowerShell script that, when run, can send an email. While researching this I discovered many ways to accomplish this task, so what I’m about to show you is just one way, but feel free to experiment and use what is right for your environment.

In my lab I do not run my own SMTP server, so I had to write a script that could leverage my Gmail account. You will see in my PowerShell script that the password to the email account that authenticates to the SMTP server is in plain text. If you are concerned that someone may have access to your script and discover your password, then you will want to encrypt your credentials. Gmail requires an SSL connection, so your password should be safe on the wire, just like any other email client.

It’s an interesting use of built-in Windows functionality to perform alerting.
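As an aside, if the alert originates inside SQL Server rather than Perfmon, Database Mail avoids keeping credentials in a script entirely. This is a different technique from Dave’s PowerShell approach, and the profile name here is hypothetical:

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'AlertsProfile',   -- hypothetical Database Mail profile
        @recipients   = N'dba@example.com',
        @subject      = N'Perfmon threshold exceeded',
        @body         = N'A monitored counter crossed its configured threshold.';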


Baselining Modern Versions Of SQL Server

Erin Stellato goes back over an older baselining article and gives us some updates in what we should consider for more recent versions:

Last week I got an email from a community member who had read this older article of mine on baselining, and asked if there were any updates related to SQL Server 2016, SQL Server 2017, or vNext (SQL Server 2019). It was a really good question. I haven’t visited that article in a while and so I took the time to re-read it. I’m rather proud to say that what I said then still holds up today.

The fundamentals of baselining are the same as they were back in 2012 when that article was first published. What is different about today? First, there are a lot more metrics in the current release of SQL Server that you can baseline (e.g., more events in Extended Events, new DMVs, new PerfMon counters, sp_server_diagnostics_component_results). Second, options for capturing baselines have changed. In the article I mostly talked about rolling your own scripts for baselining. If you’re looking to establish baselines for your servers you still have the option to develop your own scripts, but you also can use a third-party tool, and if you’re running SQL Server 2016+ or Azure SQL Database, you can use Query Store.

Read on for more details.
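As one concrete example of rolling your own, a wait statistics baseline can be as simple as snapshotting a DMV on a schedule (the table here is made up for illustration):

    CREATE TABLE dbo.WaitStatsBaseline
    (
        capture_time        datetime2     NOT NULL,
        wait_type           nvarchar(60)  NOT NULL,
        wait_time_ms        bigint        NOT NULL,
        waiting_tasks_count bigint        NOT NULL
    );

    INSERT INTO dbo.WaitStatsBaseline (capture_time, wait_type, wait_time_ms, waiting_tasks_count)
    SELECT SYSDATETIME(), wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats;  -- schedule via an Agent job to build history over time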


Oddity With User Write Count In dm_db_index_usage_stats

Shaun J. Stuart looks at an oddity with the user_updates column on sys.dm_db_index_usage_stats:

This pulls both reads and writes from the sys.dm_db_index_usage_stats dynamic management view. A read is defined as either a seek, scan, or lookup and a write is defined as an update. All seemed good until I noticed something strange. One of the top written-to tables was, based on our naming convention, a lookup table. That seemed odd. A lookup table should have lots of reads, but only a few writes. The query above showed my lookup table had almost twice as many writes as reads!

I dug around a bit and found two stored procedures that referenced that particular table. I checked them out, but nothing seemed out of the ordinary to me, so I dug a little deeper and discovered something strange: the user_updates value of sys.dm_db_index_usage_stats can get incremented even when there is no actual update to the table!!

Read on for the explanation.
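A query in the same spirit as the one Shaun describes (a sketch, not his script):

    SELECT OBJECT_NAME(ius.object_id) AS table_name,
           SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
           SUM(ius.user_updates) AS writes
    FROM sys.dm_db_index_usage_stats AS ius
    WHERE ius.database_id = DB_ID()
    GROUP BY ius.object_id
    ORDER BY writes DESC;

The surprise in his post is that user_updates can climb even when a statement modifies nothing.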


Safely Dropping Databases

Bob Pusateri notes a little issue when it comes to dropping databases:

At a previous employer, we had a well-defined process when dropping databases for a client. It went something like this:

  1. Confirm in writing which databases on which servers/instances are to be dropped
  2. Take a final full backup of databases
  3. Take databases offline
  4. Wait at least two weeks to make sure nothing breaks in the absence of this database
  5. Drop databases

This is a pretty good and safe method. If taking the database offline causes some unforeseen system to stop working, it can be very quickly brought back online in-place, instead of having to locate the backup and restore it. But there’s just one problem…

Read on for that problem and its solution.
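The offline-then-drop portion of that process, sketched in T-SQL (the database name is hypothetical):

    ALTER DATABASE [SalesArchive] SET OFFLINE WITH ROLLBACK IMMEDIATE;  -- step 3: close connections, take it offline
    -- ALTER DATABASE [SalesArchive] SET ONLINE;  -- the quick in-place recovery if something breaks
    DROP DATABASE [SalesArchive];  -- step 5, after the waiting period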
