Finding Database Growth Events

Arun Sirpal has an Extended Events session which tracks growth events on a database file:

A quick post that is hopefully useful: I wanted a quick way to find the time and size of a database file size change, and who caused it.

I went down the Extended Events route using the sqlserver.database_file_size_change event. By the way, I am no Extended Events expert; I write a lot via trial and error as I try to wean myself off Profiler.

Read on for the script as well as a query which shreds the XML and returns a result set.
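
For context, here is a minimal sketch of what such a session and shredding query can look like. This is not Arun's script: the session name, file path, and exact payload field names are assumptions (verify them against sys.dm_xe_object_columns before relying on them).

```sql
-- Capture file size changes, along with who ran the statement that caused them.
CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION (sqlserver.sql_text, sqlserver.username, sqlserver.client_hostname)
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\TrackFileGrowth.xel')
WITH (STARTUP_STATE = ON);
GO
ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
GO

-- Shred the captured events into a result set.
SELECT
    x.event_data.value('(event/@timestamp)[1]', 'datetimeoffset')                        AS event_time,
    x.event_data.value('(event/data[@name="database_name"]/value)[1]', 'nvarchar(128)')  AS database_name,
    x.event_data.value('(event/data[@name="size_change_kb"]/value)[1]', 'bigint')        AS size_change_kb,
    x.event_data.value('(event/action[@name="username"]/value)[1]', 'nvarchar(128)')     AS username,
    x.event_data.value('(event/action[@name="sql_text"]/value)[1]', 'nvarchar(max)')     AS sql_text
FROM sys.fn_xe_file_target_read_file(N'C:\XE\TrackFileGrowth*.xel', NULL, NULL, NULL) AS f
CROSS APPLY (SELECT CAST(f.event_data AS xml)) AS x(event_data);
```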

Azure AD On Azure SQL DB

Arun Sirpal shows how to set up Azure SQL Database to use Azure Active Directory accounts:

I think it is important to highlight a couple of points, more specifically around the requirement for ADALSQL.DLL and the proper setup of AD, which I will cover below with reference links. Please do this, as it lays the foundation for you.

ADALSQL.DLL

You need ADALSQL.DLL, which is part of the latest SQL Server Management Studio (SSMS), to test access. It stands for Active Directory Authentication Library for SQL Server.

This goes through some of the issues Arun had setting everything up and provides workarounds and explanations.
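
Once the Azure AD admin is set on the logical server, granting an Azure AD identity access to a database comes down to a contained user. A minimal sketch (the user name and role are placeholders), run while connected as an Azure AD principal:

```sql
-- In the target user database: create a contained user mapped to an Azure AD identity
-- and add it to whichever database role is appropriate.
CREATE USER [someone@yourdomain.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [someone@yourdomain.com];
```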

Creating An Azure Database For MySQL

Arun Sirpal shows how to create a new MySQL database in Azure:

Here you have the concept of Compute Units. No such thing as DTUs here, but just as confusing.

Compute Units are a measure of CPU processing throughput guaranteed to be available to a single Azure Database for MySQL server. A Compute Unit is a blended measure of CPU and memory resources. In general, 50 Compute Units equate to half a core, 100 Compute Units equate to one core, and 2000 Compute Units equate to twenty cores of guaranteed processing throughput available to your server. I am not going to rehash official documentation on these concepts, so I recommend reading https://docs.microsoft.com/en-gb/azure/mysql/concepts-compute-unit-and-storage

Different database product, different metric, it seems.  Check out Arun’s post as he walks you through the process step by step.

Breakpoint Extended Event

Arun Sirpal is a dangerous man of mystery and danger, but mostly danger:

I did a dangerous thing, and I want to make sure that YOU DO NOT do the same.

I was creating a couple of Extended Events sessions and was playing around with some actions. I ended up with the following code, where I was after a guy called Shane:

The probability that you intend to set a breakpoint in SQL Server via an Extended Events session is quite low (low enough that if you’re doing it, you should already know what you’re doing), but click through to see exactly what damage you can do.

Comparing Query Store Plans

Arun Sirpal explains how to compare two plans in the Query Store:

A small but nice little feature I have been using recently can be found within Query Store.

Let’s say you have two Plan IDs for a query; naturally, you want to view the execution plan for each of them. In the past I did it the manual way, by which I mean individually clicking on each Plan ID to see its plan, then moving on to the next one.

It is much easier to explain with some images.

Click through to see those images.
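
As a starting point for this kind of comparison, something like the following (an assumed query against the standard Query Store catalog views, not taken from Arun's post) lists queries that have more than one plan on file:

```sql
-- Queries with multiple Query Store plans: candidates for the plan-comparison view.
SELECT
    q.query_id,
    qt.query_sql_text,
    COUNT(p.plan_id) AS plan_count
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
GROUP BY q.query_id, qt.query_sql_text
HAVING COUNT(p.plan_id) > 1
ORDER BY plan_count DESC;
```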

Interleaved Execution

Arun Sirpal looks at how Interleaved Execution affects table cardinality estimates with multi-statement table-valued functions in SQL Server 2017:

Joe states in the article “MSTVFs have a fixed cardinality guess of ‘100’ in SQL Server 2014 and SQL Server 2016, and ‘1’ for earlier versions. Interleaved execution will help workload performance issues that are due to these fixed cardinality estimates associated with multi-statement table valued functions.”

This is exactly what I saw; the below is just a basic screenshot of one of many tests that I carried out.
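
For context, this is the kind of multi-statement table-valued function that picks up the fixed estimate; the function and the dbo.Orders table are hypothetical, purely for illustration:

```sql
-- A multi-statement TVF: under pre-2017 compatibility levels, the optimizer guesses
-- 100 rows (or 1 on older versions) for the returned table variable, regardless of
-- how many rows it actually produces.
CREATE OR ALTER FUNCTION dbo.OrdersByCustomer (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderDate date)
AS
BEGIN
    INSERT INTO @Result (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
```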

Read the whole thing for more details.

Back Up Those System Databases

Arun Sirpal had gremlins infest his server, causing a service pack installation to go sideways:

I try to start the service but it fails. (I was desperate, OK!)

Script level upgrade for database ‘master’ failed because upgrade step ‘msdb110_upgrade.sql’ encountered error 5846, state 1, severity 16. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the ‘master’ database, it will prevent the entire SQL Server instance from starting. Examine the previous errorlog entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

To be honest, rebuilding master would be my last option.

Read the whole thing, and then double-check that you have good copies of master & msdb somewhere.
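
The fix is far less painful when recent backups of the system databases exist. A minimal sketch (the backup path is a placeholder):

```sql
-- Regular backups of the system databases; adjust the path and schedule to taste.
BACKUP DATABASE master TO DISK = N'D:\Backups\master.bak' WITH INIT, CHECKSUM;
BACKUP DATABASE msdb   TO DISK = N'D:\Backups\msdb.bak'   WITH INIT, CHECKSUM;
BACKUP DATABASE model  TO DISK = N'D:\Backups\model.bak'  WITH INIT, CHECKSUM;
```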

Adaptive Query Processing In CTP 2.0

Joe Sack has a couple of blog posts on adaptive query processing enhancements in SQL Server 2017 CTP 2.0.  First, Batch Mode Adaptive Joins:

We have seen numerous cases where providing a specific join hint solved query performance issues for our customers.  However, the drawback of adding a hint is that we remove join algorithm decisions from the optimizer for that statement. While fixing a short-term issue, the hard-coded hint may not be the optimal decision as data distributions shift over time.

Another scenario is where we do not know up front what the optimal join should be, for example, with a parameter sensitive query where a low or high number of rows may flow through the plan based on the actual parameter value.

With these scenarios in mind, the Query Processing team introduced the ability to sense a bad join choice in a plan and then dynamically switch to a better join strategy during execution.

That one’s really cool.  Joe also talks about interleaved execution for multi-statement TVFs:

SQL Server has historically used a unidirectional “pipeline” for optimizing and executing queries.  During optimization, the cardinality estimation process is responsible for providing row count estimates for operators in order to derive estimated costs.  The estimated costs help determine which plan gets selected for use in execution.  If cardinality estimates are incorrect, we will still end up using the original plan despite the poor original assumptions.

Interleaved execution changes the unidirectional boundary between the optimization and execution phases for a single-query execution and enables plans to adapt based on the revised estimates. During optimization if we encounter a candidate for interleaved execution, which for this first version will be multi-statement table valued functions (MSTVFs), we will pause optimization, execute the applicable subtree, capture accurate cardinality estimates and then resume optimization for downstream operations.

The goal here is to make Table-Valued Functions viable from a performance perspective.  The team has a long way to go there, but they’re working on it.  Also, Joe gives a shout-out to Arun Sirpal, frequent curatee.
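
Both features light up simply by running the database under compatibility level 140; if I remember the CTP 2.0 surface area correctly, they can also be switched off per database with scoped configurations. A sketch, with a placeholder database name:

```sql
-- Opt in to the SQL Server 2017 query processing behavior, including
-- batch mode adaptive joins and interleaved execution for MSTVFs.
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 140;

-- Run inside the database to opt back out of either feature without
-- changing the compatibility level.
ALTER DATABASE SCOPED CONFIGURATION SET DISABLE_BATCH_MODE_ADAPTIVE_JOINS = ON;
ALTER DATABASE SCOPED CONFIGURATION SET DISABLE_INTERLEAVED_EXECUTION_TVF = ON;
```

Note that in this release, adaptive joins only apply to plans already eligible for batch mode, so a columnstore index needs to be in play.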

Azure SQL Automation

Arun Sirpal thinks about Azure automation in the context of how the job market is changing:

This ultimately maps to Query ID 297, where if you click the bar you can see the actual code.

Now, a debate occurred. This code was pretty awful: implicit conversions, GUIDs as cluster keys, etc. I took the above code, analysed the execution plan, and made some recommendations. I was quickly shut down; I was told to bump up the DTU of the database! Talk about masking the issue with hardware.
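
As an illustration of the implicit-conversion problem (a hypothetical example, not the code behind Query ID 297): when a varchar column is compared to an nvarchar parameter, SQL Server converts the column side, which can prevent an efficient index seek.

```sql
-- Hypothetical: dbo.Accounts.AccountNumber is varchar(20), but the parameter is
-- nvarchar, so the column side gets a CONVERT_IMPLICIT and the seek suffers.
DECLARE @AccountNumber nvarchar(20) = N'ABC123';

SELECT AccountID
FROM dbo.Accounts
WHERE AccountNumber = @AccountNumber;
```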

Check it out.

Taking Advantage Of Azure Elasticity

Arun Sirpal migrated a number of Azure SQL Databases into an elastic pool and configured a series of elastic jobs to support them:

I want to show you how I went from having multiple single SQL databases in Azure to a database elastic pool within a new dedicated SQL Server. Once set up, I create and use elastic jobs. This post is long, but I am sure you will find it useful.

APPROACH TAKEN

  • Create a new “logical” SQL Server.

  • Create a new elastic pool within this logical SQL Server.

  • Move the data from the old single SQL databases to the above elastic pool (couple of ways to do this but I used built-in backups).

  • Confirm application connection.

  • Decommission single SQL databases.

  • Create / set up an elastic job.

  • Check the controller database.

Definitely worth reading if you are looking at hosting multiple databases in Azure.
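
Arun moves the databases between servers using built-in backups, but it is worth noting that when a database already lives on the target logical server, it can be dropped into a pool with a one-liner (the database and pool names below are placeholders):

```sql
-- Run against the logical server (master database): move an existing database
-- into an elastic pool on that same server.
ALTER DATABASE [SalesDb] MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = MyElasticPool ) );
```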
