Joe Sack has a couple of blog posts on adaptive query processing enhancements in SQL Server 2017 CTP 2.0. First, Batch Mode Adaptive Joins:
We have seen numerous cases where providing a specific join hint solved query performance issues for our customers. However, the drawback of adding a hint is that we remove join algorithm decisions from the optimizer for that statement. While it fixes a short-term issue, the hard-coded hint may no longer be the optimal decision as data distributions shift over time.
Another scenario is where we do not know up front what the optimal join should be, for example, with a parameter sensitive query where a low or high number of rows may flow through the plan based on the actual parameter value.
With these scenarios in mind, the Query Processing team introduced the ability to sense a bad join choice in a plan and then dynamically switch to a better join strategy during execution.
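If you want to try this out, here is a minimal sketch of the prerequisites, assuming a database you can alter and a hypothetical dbo.Sales table (in CTP 2.0, batch mode, and therefore the Adaptive Join operator, requires a columnstore index):

```sql
-- Adaptive query processing features light up at compatibility level 140.
ALTER DATABASE [AdaptiveDemo] SET COMPATIBILITY_LEVEL = 140;  -- hypothetical database
GO
-- Batch mode currently requires a columnstore index on a table in the query.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Sales_CS
    ON dbo.Sales (SaleDate, CustomerID, Amount);  -- hypothetical table
GO
-- A join against dbo.Sales can now show an Adaptive Join operator in the plan,
-- choosing between hash join and nested loops at run time based on actual rows.
```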
That one’s really cool. Joe also talks about interleaved execution for multi-statement TVFs:
SQL Server has historically used a unidirectional “pipeline” for optimizing and executing queries. During optimization, the cardinality estimation process is responsible for providing row count estimates for operators in order to derive estimated costs. The estimated costs help determine which plan gets selected for use in execution. If cardinality estimates are incorrect, we will still end up using the original plan despite the poor original assumptions.
Interleaved execution changes the unidirectional boundary between the optimization and execution phases for a single-query execution and enables plans to adapt based on the revised estimates. During optimization, if we encounter a candidate for interleaved execution (which, for this first version, means multi-statement table-valued functions, or MSTVFs), we will pause optimization, execute the applicable subtree, capture accurate cardinality estimates, and then resume optimization for downstream operations.
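For reference, the qualifying object is a multi-statement TVF along these lines (a sketch with made-up names):

```sql
-- The table variable's cardinality is a fixed guess (100 rows under the new CE)
-- unless interleaved execution supplies the real count.
CREATE FUNCTION dbo.ufn_OrdersByCustomer (@CustomerID int)
RETURNS @Orders TABLE (OrderID int, OrderDate date)
AS
BEGIN
    INSERT INTO @Orders (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO
-- With interleaved execution, the optimizer pauses, runs this subtree, and
-- re-optimizes downstream operators using the actual row count.
```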
The goal here is to make table-valued functions viable from a performance perspective. The team has a long way to go there, but they’re working on it. Also, Joe gives a shout-out to Arun Sirpal, frequent curatee.
This ultimately maps to Query ID 297; if you click the bar, you can see the actual code.
Now, a debate occurred. This code was pretty awful: implicit conversions, GUIDs as clustering keys, and so on. I took the above code, analysed the execution plan, and made some recommendations. I was quickly shut down; I was told to bump up the DTUs of the database! Talk about masking the issue with hardware.
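If you want to pull the statement behind a Query Store query id yourself, a quick sketch (297 is the id from Arun’s example):

```sql
-- Fetch the SQL text for a given Query Store query id.
SELECT qt.query_sql_text
FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt
        ON qt.query_text_id = q.query_text_id
WHERE q.query_id = 297;
```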
Check it out.
I want to show you how I went from having multiple single SQL databases in Azure to a database elastic pool within a new dedicated SQL Server. Once everything is set up, I create and use elastic jobs. This post is long, but I am sure you will find it useful.
1. Create a new “logical” SQL Server.
2. Create a new elastic pool within this logical SQL Server.
3. Move the data from the old single SQL databases to the above elastic pool (there are a couple of ways to do this, but I used built-in backups; see the sketch after this list).
4. Confirm the application connection.
5. Decommission the single SQL databases.
6. Create and set up an elastic job.
7. Check the controller database.
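Arun used built-in backups for step 3, but it is worth noting that if a database already lives on the target logical server, T-SQL can move it into a pool directly; a sketch with hypothetical names:

```sql
-- Move an existing database on this logical server into an elastic pool.
ALTER DATABASE [SalesDb]
    MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = [MyElasticPool]));
```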
Definitely worth reading if you are looking at hosting multiple databases in Azure.
This time we get more information: the server name, the database name, when the database was created, the last good DBCC CHECKDB, how long since the database was created, how long since the last known good DBCC CHECKDB, a status, and a Data Purity enabled flag. The image above shows that the DBA_Admin database has a status of “New database, not checked yet” even though it has a date for the last known good DBCC CHECKDB. This is because it was restored after this server was upgraded from CTP 1.3 to CTP 1.4, and DBCC CHECKDB has not been run since. The system databases have a status of “CheckDb should be performed” because the last known good DBCC CHECKDB was more than 7 days ago. Let’s run DBCC CHECKDB and check again.
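A sketch of reading and refreshing that value yourself (DBCC DBINFO is undocumented, and I am assuming the LastGoodCheckDbTime property is available on this build):

```sql
-- The dbi_dbccLastKnownGood field holds the last clean DBCC CHECKDB date.
DBCC DBINFO (N'DBA_Admin') WITH TABLERESULTS;

-- Newer builds expose it as a database property as well.
SELECT DATABASEPROPERTYEX(N'DBA_Admin', 'LastGoodCheckDbTime') AS last_good_checkdb;

-- Refresh it by running a clean check, as the post does next.
DBCC CHECKDB (N'DBA_Admin') WITH NO_INFOMSGS;
```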
Do read the caveats, and also check out a previous Arun Sirpal blog post on DBCC DBINFO.
I am not sure what Microsoft classifies as IO intensive; personally, I have not seen any IOPS figures for what we could expect from each service tier, and it’s not like I can just run DiskSpd and find out. Maybe the underlying storage for Premium RS databases is more geared toward complex analytical queries; unfortunately, I do not have the funds in my Azure account to start playing around with Premium vs. Premium RS tests (I would love to).
Also, and just as importantly, Premium RS databases run with fewer redundant copies than Premium or Standard databases, so if you hit a service failure, you may need to recover your database from a backup with up to a 5-minute lag. If you can tolerate 5 minutes of data loss and you are happy with a reduced number of redundant copies of your database, then this is a serious option for you, because the price is very different.
It’s a lot less expensive (just under 1/3 the cost of Premium in Arun’s example), so it could be worth checking out.
OK, great: let’s check this backup file using a cool tool (XVI32). I really want the contact number of a guy called SQLDOUBLEG because I need his help tuning my SQL Servers, so I go looking for a text string; hopefully the phone number will be close by.
As Arun points out, you don’t need TDE to get encrypted backups, but TDE does the job as well.
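For completeness, native backup encryption (around since SQL Server 2014) closes the same hole without TDE; a sketch assuming a certificate named BackupCert already exists in master:

```sql
-- Encrypt the backup itself rather than the data files.
BACKUP DATABASE [YourDb]  -- hypothetical database name
TO DISK = N'C:\Backups\YourDb.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = BackupCert);
```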
Looks like I had open transactions when my transaction log got lost during an outage. I tried switching the database online, but that failed.
Msg 5181, Level 16, State 5, Line 4
Could not restart database “FAT”. Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 4
ALTER DATABASE statement failed.
Accessing the database is the real challenge now.
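The post covers how he got at the data; the commonly cited last resort for a lost log with open transactions looks roughly like this (a sketch, not necessarily Arun’s solution, and REPAIR_ALLOW_DATA_LOSS means exactly what it says):

```sql
-- Last-resort log rebuild; the open transactions' work will be lost.
ALTER DATABASE [FAT] SET EMERGENCY;
ALTER DATABASE [FAT] SET SINGLE_USER;
DBCC CHECKDB (N'FAT', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
ALTER DATABASE [FAT] SET MULTI_USER;
```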
Moral of the story: have backups and have good luck.
If you ever need to move a copy of a SQL database in Azure across servers, here is a quick and easy way.
So let’s say you need to take a copy of a database called [Rack] within Subscription A that is on server ABCSQL1, and name it [NewRack] within Subscription B on a server called RBARSQL1 (the SQL Servers are in totally different data centers, too).
Read on for the answer.
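The core of it is the database copy command, run in master on the target server; a sketch using the post’s names (the cross-subscription wrinkle is what makes the full answer worth the read):

```sql
-- Run in master on the target server (RBARSQL1).
-- Cross-subscription copies take additional steps; see the post.
CREATE DATABASE [NewRack] AS COPY OF [ABCSQL1].[Rack];
```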
Target Server Memory (KB) is the amount of memory that SQL Server is willing, potentially, to allocate to the buffer pool under its current load. Total Server Memory (KB) is what SQL Server currently has allocated.
I am using SQL Server 2014 Developer Edition (64-bit); my machine has 12GB of RAM, maximum server memory is currently set to 8GB, and for the purpose of this post I have set minimum server memory to 1GB (Lock Pages in Memory has not been set).
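You can watch the same two counters from T-SQL instead of Perfmon; a quick sketch:

```sql
-- Target vs. Total Server Memory from the DMV behind those Perfmon counters.
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value AS memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'%Server Memory (KB)%';
```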
Read on for a nice description laden with Perfmon pictures.
My ZoraDB database was clearly stuck in an encryption-in-progress state. With the database being small, the encryption process should not have taken long at all, so I was confused.
I then decided to try and turn off the encryption.
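The state he’s describing shows up in the encryption DMV, and backing out is a one-liner; a sketch (whether that alone fixes the stuck state is the point of the post):

```sql
-- encryption_state 2 = encryption in progress; percent_complete tracks the scan.
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;

-- The attempt to back out of encryption.
ALTER DATABASE [ZoraDB] SET ENCRYPTION OFF;
```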
Read on for the solution.