Press "Enter" to skip to content

Category: Administration

Why Increase Cost Threshold For Parallelism

Randolph West explains why the default value of cost threshold for parallelism is too low at 5:

Unfortunately, the default setting of 5 means that queries are likely to hit that threshold more often than not on modern hardware, and the optimizer is forced to look at parallel plans unnecessarily. A side-effect to this is that queries running in parallel will block queries running on a single thread, so short-running queries will be delayed by long-running queries needing the same resources.

Therefore, if I am setting up a SQL Server instance from scratch, I will set this value to 50 by default, and monitor the performance counters. Once a query cost exceeds 50, then it can use all the cores up to the limit defined by the max degree of parallelism.

50 is a reasonable first stab at a default. I’ll be honest, though: I’m not sure I know exactly where to set the cost threshold for parallelism. I agree that 5 is too low and 50 is better, but I don’t have a great feel for when it should move up or down short of “everything’s going parallel in your database, so increase the value” or “nothing’s ever going parallel, so maybe decrease it a skosh.”
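
If you want to try that starting point on your own instance, here's a minimal sketch of making the change, assuming you have rights to run sp_configure; cost threshold for parallelism is an advanced option, so you have to expose advanced options first:

```sql
-- Cost threshold for parallelism is an advanced option,
-- so show advanced options before changing it.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;

-- Set Randolph's suggested starting point of 50,
-- then monitor and adjust from there.
EXEC sys.sp_configure N'cost threshold for parallelism', 50;
RECONFIGURE;
```

This is a dynamic setting, so the change takes effect immediately with no restart required.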


One More Data Gateway Is All You Need

Meagan Longoria explains when you might need a data gateway while implementing an Azure BI architecture:

Let’s start with what services may require you to use a data gateway.

You will need a data gateway when you are using Power BI, Azure Analysis Services, PowerApps, Microsoft Flow, Azure Logic Apps, Azure Data Factory, or Azure ML with a data source/destination that is in a private network that isn’t connected to your Azure subscription with a VPN gateway. Note that a private network includes on-premises data sources and Azure Virtual Machines as well as Azure SQL Databases and Azure SQL Data Warehouses that require use of VNet service endpoints rather than public endpoints.  

There are a few of them, so check out Meagan's post and take notes.


Azure VM Boot Diagnostics

John Morehouse shows us how to enable and use boot diagnostics on Azure VMs to troubleshoot why that server isn’t coming up the way you’d expect:

The next blade will show you an active console of the virtual machine. From here you are able to determine what the current status of the virtual machine might be. You will also notice that you can gain access to the serial log (shown below), which will give you more detailed information about the boot process.
Once we click on Boot Diagnostics, we will then see the initial startup screens of the server:

This is useful if you have some huge misconfiguration and the server is failing to boot.


BPE: The Answer To A Question Nobody Asked

Randolph West is not a big fan of Buffer Pool Extension in practice:

Unfortunately there are some practical problems with the Buffer Pool Extension in 2019.
Firstly, let’s talk about the licensing contradiction with SQL Server Standard Edition. Since 2016 Service Pack 1, we have been able to access 128 GB of memory for the buffer pool, plus additional RAM for in-memory OLTP and Columnstore indexes. The practical limits of Standard Edition have been — for the most part — all but lifted. By the same token, if we can afford Enterprise Edition, we can afford more physical RAM. In other words, if we need more than 128 GB RAM for the buffer pool, perhaps we should move away from Standard Edition.

Title aside, Randolph makes a really good point: Buffer Pool Extension was one of those features which sounded great during development, but hardware quickly overtook it and made it all but irrelevant. On the whole, that’s a positive for us as SQL Server users.
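
If you've never seen the feature in action, enabling it is a single statement; the file path and size here are hypothetical, and the extension file should live on fast SSD storage:

```sql
-- Enable Buffer Pool Extension; the path and size below are
-- hypothetical, and the file should sit on low-latency SSD.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'S:\BPE\BufferPool.bpe', SIZE = 64 GB);

-- Check the current state of the extension.
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```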


Monitoring SQL Server with Telegraf

I have a post up on monitoring SQL Server instances with Telegraf:

Not too long ago, I had the opportunity to put into place a free solution for monitoring SQL Server instances. I saw Tracy’s series on collecting performance metrics with InfluxDB + Telegraf + Grafana, and then I saw her talk on the topic (Collecting Performance Metrics), but until I implemented it myself, I couldn’t believe how easy it was. I thought it was going to take two or three days of hard work to get done, but I had everything going within a few hours.

Let’s walk through the process together.

I keep saying this in the post, but it’s much easier than I expected. There are still more steps than with a commercial off-the-shelf product, but part of what you’re paying for there is convenience, so it had better be easier.
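
One bit of T-SQL worth having ready before you start: Telegraf’s sqlserver input plugin connects with a dedicated login that can read the DMVs. A minimal sketch of that setup, with a placeholder password, looks like this:

```sql
-- Create a login for Telegraf to poll the DMVs;
-- the password is a placeholder.
USE master;
GO
CREATE LOGIN [telegraf] WITH PASSWORD = N'<StrongPasswordHere>';
GO
-- VIEW SERVER STATE covers the performance DMVs the plugin
-- queries; VIEW ANY DEFINITION lets it read object metadata.
GRANT VIEW SERVER STATE TO [telegraf];
GRANT VIEW ANY DEFINITION TO [telegraf];
```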


Monitoring Entity Framework

Grant Fritchey loves Entity Framework:

Yes, Entity Framework will improve your job quality and reduce stress in your life.

With one caveat: it gets used correctly.

That’s the hard part right? There is tons of technology that makes things better, if used correctly. There are all sorts of programs that make your life easier, if used correctly. Yet, all of these, used incorrectly, can make your life a hell.

One nit that I’ve always had with Entity Framework is that it’s very difficult to tell what part of the code the call was coming from. You really have no idea. So when my friend, Chris Woodruff, asked me on Twitter what would be the best way to monitor TagWith queries in Entity Framework, well, first, I had to go look up what TagWith was, then I got real excited, because, hey, here’s a solution.

That “I love Entity Framework” is the lead-in to a one-act play of mine featuring people with pitchforks, tar, and feathers. Nevertheless, Grant shows us how we can tag code in C# and capture that data in Extended Events. I’d read it, but I’m too busy sharpening my pitchfork.
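
The trick underneath it all is that TagWith injects your tag into the generated T-SQL as a leading comment, which means you can filter on it server-side. Here’s a rough sketch of the sort of Extended Events session Grant describes; the session name and tag text are my own placeholders:

```sql
-- Capture completed calls whose text contains a specific TagWith
-- tag; EF emits the tag as a comment at the top of the query text.
-- The session name and tag text below are hypothetical.
CREATE EVENT SESSION [TaggedEFQueries] ON SERVER
ADD EVENT sqlserver.rpc_completed
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE sqlserver.like_i_sql_unicode_string(
              sqlserver.sql_text, N'%Get Customer Orders%')
),
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE sqlserver.like_i_sql_unicode_string(
              sqlserver.sql_text, N'%Get Customer Orders%')
)
ADD TARGET package0.event_file (SET filename = N'TaggedEFQueries');
GO
ALTER EVENT SESSION [TaggedEFQueries] ON SERVER STATE = START;
```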


Automated Query Capture With Logic Apps

Arun Sirpal shows how we can use Azure Logic Apps to automate periodic capture of running queries in Azure SQL Database:

Have you ever wanted to capture the T-SQL, waits, session IDs, etc. at a specific time for Azure SQL Database? Sure, there are a few ways to do this. Extended Events comes to mind, but I wanted to do something different.

For this blog post I decided to use Brent Ozar’s famous sp_BlitzWho stored procedure (in expert mode) coupled with Azure Logic Apps. At a high level it is simple: at a specific time, trigger the execution of sp_BlitzWho and store the output to query later.

Click through to see how to set this up.
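
At the heart of it is a single procedure call, which the Logic App fires on a schedule. Here’s a sketch of what that looks like, assuming a build of sp_BlitzWho recent enough to support output tables; the database, schema, and table names are placeholders:

```sql
-- Run sp_BlitzWho in expert mode and persist the results;
-- the database, schema, and table names are placeholders.
EXEC dbo.sp_BlitzWho
    @ExpertMode = 1,
    @OutputDatabaseName = N'DBAtools',
    @OutputSchemaName   = N'dbo',
    @OutputTableName    = N'BlitzWhoOutput';

-- Query the captured snapshots later (placeholder table name).
SELECT TOP (100) *
FROM DBAtools.dbo.BlitzWhoOutput
ORDER BY CheckDate DESC;
```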


Creating A Big Data Cluster

Chris Adkin continues a series on big data clusters in SQL Server 2019:

This post will focus on creating a big data cluster so that you can get up and running as fast as possible. As such, the storage type used will be ephemeral; this is perfectly acceptable for “kicking the tyres”. For production-grade installations, integration with a production-grade storage platform is required via a storage plugin. Before we create our cluster, with the assumption that we are doing this with an on-premises infrastructure, the following pre-requisites need to be met:

Read the whole thing, but wait until part 4 before putting anything valuable in it.


Apps To Manage SQL Server On Azure VMs

Kevin Chant has a list of tools you can use to manage SQL Server on Azure VMs:

From experience I know it’s important to know what applications you can use locally with Azure to manage SQL Server solutions, so you have the right tools for the job.

For instance, I was talking with some people at a client’s site the other day about deciding what application to use to future-proof themselves.

In this post I will cover applications for use with Windows, macOS, and Linux distributions.

I don’t think I’m spoiling too much in saying that about 80% of these are the same tools you would use for on-prem work.
