
Month: April 2017

Understanding Lock Escalation

Kendra Little explains some of the rules behind lock escalation, including which locks don’t cause escalation:

Books Online has a good article about this, which explains a lot of the details about how many locks you need to take out to trigger lock escalation. Here are the (simplified) basics:

  • The ‘magic’ number to trigger escalation for the first time is 5,000 locks on a single table reference

  • Locks do NOT escalate from row level to page level. Row locks escalate to table. Page level locks also escalate to table level. In other words, forcing row level locking will not make it less likely to escalate locks to the table level, but rather it will do the opposite.
    • Note: for partitioned tables, you have the option to enable partition level escalation

  • If you’re modifying data, the escalated table lock will be exclusive. That means nobody else can party with the table if lock escalation succeeds while you’re doing your work.

Not all locks count, though, and Kendra has provided a test to show this.
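
To watch the threshold in action, here is a minimal T-SQL sketch (the table dbo.EscalationDemo and its column are hypothetical, not Kendra's demo code): update more than 5,000 rows in a single statement, then check the lock granularity for your session.

BEGIN TRANSACTION;

-- Touch more than 5,000 rows in a single table reference.
UPDATE TOP (6000) dbo.EscalationDemo
SET SomeColumn = SomeColumn;

-- If escalation succeeded, you should see one OBJECT-level X lock
-- instead of thousands of KEY or PAGE locks.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

ROLLBACK TRANSACTION;

For tuning this behavior per table, ALTER TABLE ... SET (LOCK_ESCALATION = AUTO | TABLE | DISABLE) is the relevant knob; AUTO is what enables the partition-level escalation mentioned in the note above.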


Taking Advantage Of Azure Elasticity

Arun Sirpal migrated a number of Azure SQL Databases into an elastic pool and configured a series of elastic jobs to support them:

I want to show you how I went from having multiple single SQL databases in Azure to a database elastic pool within a new dedicated SQL Server. Once set up, I create and use elastic jobs. This post is long, but I am sure you will find it useful.

APPROACH TAKEN

  • Create a new “logical” SQL Server.

  • Create a new elastic pool within this logical SQL Server.

  • Move the data from the old single SQL databases to the above elastic pool (there are a couple of ways to do this, but I used built-in backups).

  • Confirm application connection.

  • Decommission single SQL databases.

  • Create / setup an elastic job.

  • Check the controller database.

Definitely worth reading if you are looking at hosting multiple databases in Azure.
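
As a rough illustration, the data-movement and pool-assignment steps can also be done in T-SQL. This is only a sketch (server, database, and pool names are placeholders, and Arun used built-in backups rather than a database copy), run against master on the new logical server:

-- Copy a database from the old logical server to the new one.
CREATE DATABASE [SalesDb]
    AS COPY OF [oldserver].[SalesDb];

-- Once the copy completes, move it into the elastic pool.
ALTER DATABASE [SalesDb]
    MODIFY ( SERVICE_OBJECTIVE = ELASTIC_POOL ( name = [MyElasticPool] ) );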


Generating R Services Stored Procedures From R

David Smith describes sqlrutils, an R function to generate SQL Server R Services stored procedures:

If you’ve created an R function (say, a routine to clean up missing values in a data set, or a function to make forecasts using a machine learning model), and you want to make it easy for DBAs to use it, it’s now possible to publish R functions as a SQL Server 2016 stored procedure. The sqlrutils package provides tools to convert an existing R function to a stored procedure which can then be executed by anyone with authenticated access to the database — even if they don’t know any R.

To use an R function as a stored procedure, you’ll need SQL Server 2016 with R Services installed. You’ll also need to use the sqlrutils package to publish the function as a stored procedure: it’s included with both Microsoft R Client (available free) and Microsoft R Server (included with SQL Server 2016), version 9.0 or later.

Compare this against R Tools for Visual Studio, with which you can generate stored procedures from the IDE.
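
Under the covers, the published procedure is effectively a wrapper around sp_execute_external_script. As a hand-written illustration of the shape only (procedure, table, and column names are made up; this is not what sqlrutils emits verbatim):

-- Illustrative only: a stored procedure wrapping a small R cleanup routine.
-- Requires R Services and 'external scripts enabled' on the instance.
CREATE PROCEDURE dbo.usp_CleanMissingValues
AS
BEGIN
    EXEC sp_execute_external_script
        @language     = N'R',
        @script       = N'
            OutputDataSet <- InputDataSet;
            OutputDataSet$Amount[is.na(OutputDataSet$Amount)] <- 0;',
        @input_data_1 = N'SELECT OrderID, Amount FROM dbo.Orders'
    WITH RESULT SETS ((OrderID int, Amount money));
END;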


Handling Runbook Alerts

Grant Fritchey shows how to set up alerting when an Azure Automation job fails:

Believe it or not, there’s not an immediately obvious “Oh, you had an error in your Automation script, here’s how you alert someone” setting in the Azure portal. Now, you could simply put error handling in your PowerShell script. In fact, it’s probably not at all a bad idea to do that as well. However, what you would not get by setting things up that way is a mechanism for managing the alerts, history, and additional possible responses (like firing off another Runbook, although there is a way to do that from PowerShell too). Instead, what I want is a way to manage alerts through the Azure fabric.

If you do a search, there is an Azure Alert service. However, it didn’t really seem to be what I was looking for. Further, I found it extremely difficult (OK, I couldn’t make it work) to connect the alerts directly to the Jobs related to my Runbooks. Instead, after quite a bit of research, what I found is that a combination of Azure Log Analytics with the Operations Management Suite (OMS) will do exactly what I’m looking for.

Click through to read how to set this up.


Last Known Good DBCC CHECKDB In Powershell

Rob Sewell shows off a cmdlet to check DBCC DBINFO for each database to get the last known good CHECKDB run:

This time we get more information: the server name, database name, when the database was created, the last good DBCC CHECKDB, how long since the database was created, how long since the last known good DBCC CHECKDB, a status, and a Data Purity enabled flag. If you look at the image above, it shows that the DBA_Admin database has a status of “New database, not checked yet” even though it has a date for the last known good DBCC CHECKDB. This is because it was restored after this server was upgraded from CTP 1.3 to CTP 1.4, and there has not yet been a DBCC CHECKDB run. The system databases have a status of “CheckDb should be performed” because the last known good DBCC CHECKDB is more than 7 days ago. Let’s run a DBCC CHECKDB and check again.

Do read the caveats, and also check out a previous Arun Sirpal blog post on DBCC DBINFO.
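
If you just want the raw value the cmdlet is reading, a common T-SQL approach (a sketch, not Rob’s code) is to dump DBCC DBINFO into a table and pull out dbi_dbccLastKnownGood:

-- DBCC DBINFO is undocumented, so treat the field name as subject to change.
DECLARE @dbinfo TABLE
(
    ParentObject nvarchar(255),
    [Object]     nvarchar(255),
    Field        nvarchar(255),
    Value        nvarchar(255)
);

INSERT INTO @dbinfo
EXEC ('DBCC DBINFO WITH TABLERESULTS, NO_INFOMSGS');

-- Last known good CHECKDB for the current database.
SELECT Value AS LastKnownGoodCheckDb
FROM @dbinfo
WHERE Field = 'dbi_dbccLastKnownGood';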


Troubleshooting Cluster Creation Errors

Mark Broadbent diagnoses an error which seems misleading at first:

One such problem is when you use the New-Cluster command to add all your nodes in one go.

New-Cluster -Name magrathea -node server5,server6,server7 -staticaddress 192.168.1.70

Simple right? Well no. In this instance I ran into the following error:

New-Cluster : There was an error adding node 'server7' to the cluster.
The node cannot be contacted. Ensure that the node is powered on and is connected to the network.

Read on for an example of piecemeal debugging. Mark’s advice is to keep things simple, since, in this case at least, you can’t count on the error messages coming back being completely accurate.


The Central Limit Theorem

Mala Mahadevan explains the Central Limit Theorem with an example:

The central limit theorem states that the sampling distribution of the mean of any independent, random variable will be normal or nearly normal if the sample size is large enough. How large is “large enough”? The answer depends on two factors.

  • Requirements for accuracy. The more closely the sampling distribution needs to resemble a normal distribution, the more sample points will be required.
  • The shape of the underlying population. The more closely the original population resembles a normal distribution, the fewer sample points will be required. (from stattrek.com).

The main use of the sampling distribution is to verify the accuracy of many statistics and the population they were based upon.

Read on for an example and to see how to calculate this in T-SQL.
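
As a very rough illustration of the idea (not Mala’s code; dbo.Payments and Amount are hypothetical names), draw many small samples from a possibly skewed column, average each one, and look at the distribution of those averages:

-- Draw 1,000 samples of 50 rows each and collect the sample means;
-- by the CLT, their distribution should be approximately normal.
DECLARE @SampleMeans TABLE (SampleId int, SampleMean float);
DECLARE @i int = 1;

WHILE @i <= 1000
BEGIN
    INSERT INTO @SampleMeans (SampleId, SampleMean)
    SELECT @i, AVG(Amount * 1.0)
    FROM (SELECT TOP (50) Amount
          FROM dbo.Payments
          ORDER BY NEWID()) AS s;   -- a crude (and slow) random sample

    SET @i += 1;
END;

-- The mean of the sample means approximates the population mean,
-- and STDEV of the sample means estimates the standard error.
SELECT AVG(SampleMean) AS MeanOfSampleMeans,
       STDEV(SampleMean) AS StandardErrorEstimate
FROM @SampleMeans;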


Indexed Views On Clustered Columnstore Indexes

If you’re using SQL Server 2014 and want to create a non-clustered rowstore index on top of your clustered columnstore index, Niko Neugebauer has you covered:

Here we have a beautiful and simple execution plan, which delivers what we need – great performance with a relatively low overall cost. And in this way we enjoy the possibility to get the best out of the two worlds – Columnstore & Rowstore.
Should we need to run a similar query, but against the whole dataset, it will be redirected to our Columnstore Index, which will deliver great performance.

Alternatively, upgrade to SQL Server 2016 and you get this without introducing an indexed view into the mix.
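
For reference, the general shape of the 2014-era pattern looks something like the sketch below; the table, view, and column names are illustrative, not Niko’s schema.

-- dbo.FactSales has a clustered columnstore index; the schema-bound view
-- plus a unique clustered index gives point lookups a rowstore B-tree path.
CREATE VIEW dbo.vSales_Lookup
WITH SCHEMABINDING
AS
SELECT SaleId, CustomerId, SaleDate
FROM dbo.FactSales;
GO

CREATE UNIQUE CLUSTERED INDEX CIX_vSales_Lookup
    ON dbo.vSales_Lookup (SaleId);
GO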


Columnstore Indexes On Cloned Databases

Parikshit Savjani has a script to update columnstore index statistics before running DBCC CLONEDATABASE:

Unlike traditional B-tree indexes, when a columnstore index is created, there are no index statistics created on the columns of the columnstore index. However, an empty stats object is created with the same name as the columnstore index, and an entry is added to sys.stats at the time of index creation. The stats object is populated on the fly when a query is executed against the columnstore index or when executing DBCC SHOW_STATISTICS against the columnstore index, but the columnstore index statistics aren’t persisted in storage. The index statistics are different from the auto-created statistics on the individual columns of columnstore indexes, which are generated on the fly and persisted in the statistics object. Since the index statistics are not persisted in storage, the cloned database will not contain those statistics, leading to inaccurate stats and different query plans when the same query is run against the database clone as opposed to the production database.

Click through for the script.
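
The gist, heavily simplified (object names below are placeholders; Parikshit’s script discovers the columnstore statistics dynamically), is to persist the columnstore index’s stats object before cloning:

-- Force the columnstore index's statistics to be written to storage,
-- then clone, so the clone carries usable stats for that index.
UPDATE STATISTICS dbo.FactSales (CCI_FactSales) WITH FULLSCAN;

DBCC CLONEDATABASE (ProductionDb, ProductionDb_Clone);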


Trick Co-Workers With This Extended Property

Kenneth Fisher shows how to use extended properties to hide a table from SQL Server Management Studio:

FYI I’ve tried this at the column and schema levels and it didn’t work.

Using this you can hide the object from SSMS object explorer without restricting its use in any way.

I’m curious if there are any other hidden uses of extended properties. I haven’t been able to find any documentation so if you’ve seen any please let me know!

I don’t think I’ve ever had cause to hide objects from Management Studio, but if you’re looking for next year’s April Fools prank, maybe?
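
For the curious, the trick looks roughly like the sketch below. I believe the property involved is the one SSMS uses for its own support objects, but treat that name as an assumption and check Kenneth’s post for the exact value; the table name is hypothetical.

-- Assumption: 'microsoft_database_tools_support' is the property SSMS checks.
EXEC sys.sp_addextendedproperty
     @name       = N'microsoft_database_tools_support',
     @value      = 1,
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'SecretTable';

-- The table is still fully usable; it just disappears from Object Explorer.
SELECT COUNT(*) AS RowsStillThere FROM dbo.SecretTable;

-- To make it visible again:
EXEC sys.sp_dropextendedproperty
     @name       = N'microsoft_database_tools_support',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'SecretTable';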
