Press "Enter" to skip to content

Day: April 25, 2018

Azure Data Lake Alerting

Jose Lara shows how to send alerts if you hit a utilization threshold:

If you want to see the step-by-step guide to create a new Log Analytics alert, check out our recent blog post on creating Log Analytics Alerts.

For the alert signal logic, use the following values:

  • Use the query from the previous step

  • Set the sum of AUs to 50 as the threshold (you can use any number that reflects your own threshold)

  • Set the trigger to 0: whenever the threshold is breached

  • Set the period and frequency to 24 hours.

Read the whole thing if you use Azure Data Lake Analytics; an unexpectedly large bill is a tough thing to swallow.


Creating Map Plots With ggmap

Laura Ellis shows how to use the ggmap package to create choropleth maps in R:

In the last map, it was a bit tricky to see the density of the incidents because all the graphed points were sitting on top of each other.  In this scenario, we are going to make the data all one color and we are going to set the alpha variable which will make the dots transparent.  This helps display the density of points plotted.

Also note, we can re-use the base map created in the first step “p” to plot the new map.

Check it out.  This is just an introduction to creating choropleths, but it's a good start.


Running The Azure DTU Calculator On An Older Server

Jim Donahoe shows us how to get the Azure DTU calculator running on an older server without PowerShell:

I recently had to do an analysis of a client’s database workload using the Azure DTU Calculator and thought it might be interesting to share just how I did that.  I have run this tool numerous times on other clients via the PowerShell method and the Command Line method; however, this client’s environment was Windows Server 2008 R2 and SQL Server 2008 R2 SP3, so it had to be done differently.

Now, the DTU Calculator page itself tells you how the process works.  It essentially runs a perfmon trace for an hour with the following counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

Unfortunately, my client did not have PowerShell accessible for me to use.  I normally prefer the PowerShell script; however, in this case I had to use the Command Line Interface.  They both return the same results.

Click through to see how Jim did it.


Finding Database Files In The Wrong Folder

Matthew McGiffen has a script which shows which database log and data files are outside of the default folder for that instance:

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore so they get recreated in the data\log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.
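This isn’t Matthew’s script, but a rough sketch of the general idea: compare each file’s path in sys.master_files against the instance default data and log paths (the InstanceDefault* server properties assume SQL Server 2012 or later, and the names here are just placeholders):

-- Rough sketch: list data/log files living outside the instance's default folders.
-- SERVERPROPERTY('InstanceDefaultDataPath') / ('InstanceDefaultLogPath') require SQL Server 2012+.
DECLARE @DefaultData nvarchar(260) = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS nvarchar(260));
DECLARE @DefaultLog  nvarchar(260) = CAST(SERVERPROPERTY('InstanceDefaultLogPath') AS nvarchar(260));

SELECT
    d.name AS database_name,
    mf.type_desc,
    mf.physical_name
FROM sys.master_files AS mf
    INNER JOIN sys.databases AS d
        ON d.database_id = mf.database_id
WHERE (mf.type_desc = 'ROWS' AND mf.physical_name NOT LIKE @DefaultData + N'%')
   OR (mf.type_desc = 'LOG'  AND mf.physical_name NOT LIKE @DefaultLog + N'%')
ORDER BY d.name, mf.type_desc;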

Click through for the script.


Availability Group Improvements In SQL Server

Allan Hirt points out two improvements to Availability Groups in the latest SQL Server 2016 and 2017 patches:

First and foremost, SQL Server 2016 Service Pack 2 was just released today. There are two major improvements in it for AGs:

1. SQL Server 2016 now has full Microsoft Distributed Transaction Coordinator (DTC) support. SQL Server 2016 had partial support for DTC with one of the two scenarios (cross instance/cross platform), but not intra-instance DBs. SQL Server 2017 had both, and now that was backported so SQL Server 2016 supports all DTC scenarios with AGs. This is great news.
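For context, DTC support is set per availability group via the DTC_SUPPORT option.  A minimal sketch with hypothetical names (the AG, database, and replica details below are placeholders, not from Allan’s post; secondary replicas are omitted for brevity):

-- Hypothetical names throughout; secondary replicas omitted for brevity.
-- DTC_SUPPORT = PER_DB enables distributed transaction support for the AG's databases.
CREATE AVAILABILITY GROUP [AG_Orders]
WITH (DTC_SUPPORT = PER_DB)
FOR DATABASE [OrdersDB]
REPLICA ON N'SQLNODE1' WITH (
    ENDPOINT_URL      = N'TCP://SQLNODE1.contoso.com:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = AUTOMATIC
);

If I’m reading the release notes right, these newer builds also let you change the setting on an existing group with ALTER AVAILABILITY GROUP … SET (DTC_SUPPORT = PER_DB), rather than dropping and recreating the AG.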

Click through for the other major improvement.  This is in addition to yesterday’s notice regarding the distribution database.


DateTime Arithmetic

Claudio Silva points out a difference between the DATETIME and DATETIME2 data types:

I’m currently working on a SQL code migration from Firebird to SQL Server and I hit an error that I haven’t seen for some time.

The error message is the following:

Msg 206, Level 16, State 2, Line 4
Operand type clash: datetime2 is incompatible with int

This rang a bell right away! Somewhere in the code, someone was trying to do an arithmetic calculation without using the proper function.
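A quick illustration of what’s going on, and the DATEADD form that avoids it:

-- GETDATE() returns datetime, which still allows integer arithmetic: this adds one day.
SELECT GETDATE() + 1;

-- SYSDATETIME() returns datetime2, so this raises
-- "Operand type clash: datetime2 is incompatible with int":
-- SELECT SYSDATETIME() + 1;

-- DATEADD works for both types and is the safer habit.
SELECT DATEADD(DAY, 1, SYSDATETIME());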

Read on for the solution.


Viewing Deadlock Graphs With The system_health Session

Jes Borland shows how you can avoid using trace flags 1204 and 1222 and view deadlocks from the system_health Extended Events session:

This one isn’t bad, but imagine a multi-statement deadlock, or a server with several deadlocks in an hour – how do you easily see if there were other errors on the server at the same time?

With SQL Server 2012+, we have a better tool to see when deadlocks occur – and the deadlock graphs are saved by default, so we don’t have to read the text version to figure it out, or run a separate trace to capture them.

In SSMS, open Object Explorer and navigate to Extended Events > Sessions > system_health > package0.event_file. Double-click to view the data.
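If you’d rather query than click, something along these lines reads the deadlock graphs straight from the system_health file target (a sketch, assuming the default system_health*.xel files in the instance’s LOG folder):

-- Sketch: pull xml_deadlock_report events from the system_health event files.
SELECT CAST(xe.event_data AS XML) AS deadlock_graph
FROM sys.fn_xe_file_target_read_file(N'system_health*.xel', NULL, NULL, NULL) AS xe
WHERE xe.object_name = N'xml_deadlock_report';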

Click through for the entire process.


Copying Azure SQL Databases

Arun Sirpal noticed a problem when he tried to copy an Azure SQL Database:

Now, I was looking at the following code.

CREATE DATABASE CodeDBP1
AS COPY OF CodeDB (SERVICE_OBJECTIVE = 'P1');

You would think this is okay? I did, especially since it parsed and was executing. I was thinking a copy of the CodeDB database would be created as a premium P1 database regardless of what the source database’s service tier was. This source database is 0.5 GB in size under the basic tier, and 40 minutes later the copy was still executing. It just didn’t seem right.
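While a copy is grinding away, you can check on its progress from the logical server’s master database; a sketch using the copy DMV:

-- Run against master on the Azure SQL logical server while the copy is in flight.
SELECT partner_server,
       partner_database,
       start_date,
       modify_date,
       percent_complete,
       error_code,
       error_desc
FROM sys.dm_database_copies;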

Click through for the solution.  If this is going to be normal behavior, I’d really like to see an error message.
