Press "Enter" to skip to content

Author: Kevin Feasel

Balancing SQL Server Core Licenses Across NUMA Nodes

Glenn Berry explains how to get yourself out of a pickle in Standard Edition:

Ok, that is bad enough, but it gets worse. Unless you fix it, SQL Server 2019 Standard Edition will use 32 logical cores on NUMA node 0, but only 16 logical cores on NUMA node 1. This can have a significant negative effect on performance. What you want is for SQL Server to use 24 logical cores on each NUMA node.

Glenn then explains how to pull this off.
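
As a quick illustration of the technique (a sketch of my own, assuming logical processors 0-31 live on NUMA node 0 and 32-63 on node 1; check sys.dm_os_schedulers for your actual layout):

    -- Pin SQL Server to 24 logical processors on each NUMA node.
    ALTER SERVER CONFIGURATION
    SET PROCESS AFFINITY CPU = 0 TO 23, 32 TO 55;

    -- Verify: each node should now report 24 VISIBLE ONLINE schedulers.
    SELECT parent_node_id, COUNT(*) AS visible_online_schedulers
    FROM sys.dm_os_schedulers
    WHERE status = N'VISIBLE ONLINE'
    GROUP BY parent_node_id;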

Creating a Time Dimension with Time Bands in Power BI

Soheil Bakhshi shares how you can create a time dimension with a granularity of seconds in Power BI and SSAS Tabular:

I wrote some other posts on this topic in the past; you can find them here and here. In the first post, I explain how to create a “Time” dimension with time bands at minute granularity. Then one of my customers required the “Time” dimension at second granularity, which encouraged me to write the second blog post. In the second blog post, though, I didn’t include time bands, so here I am, writing the third post, which is a variation of the second one supporting time bands of 5 min, 15 min, 30 min, 45 min, and 60 min while the grain of the “Time” dimension is down to the second. In this quick post I jump directly to the point and show you how to generate the “Time” dimension in three different ways: using T-SQL in SQL Server, using Power Query (M), and using DAX. Here it is then:

Click through for the code, which includes several sample bands (e.g., 5 minutes, 15 minutes) that you can also control.
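
To give a taste of the T-SQL route, here is a minimal sketch of my own (not Soheil's exact code) that generates a one-second-grain time column plus a 5-minute band:

    -- 86,400 rows, one per second of the day.
    WITH Seconds AS
    (
        SELECT TOP (86400)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS SecondOfDay
        FROM sys.all_objects AS o1
        CROSS JOIN sys.all_objects AS o2
    )
    SELECT
        DATEADD(SECOND, SecondOfDay, CAST('00:00:00' AS time(0))) AS [Time],
        -- Integer division floors each second down to its 5-minute band.
        DATEADD(MINUTE, (SecondOfDay / 300) * 5, CAST('00:00:00' AS time(0))) AS [Time Band 5 Min]
    FROM Seconds;

The other bands (15, 30, 45, and 60 minutes) follow the same pattern with different divisors.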

CTEs Don’t Control Plan Shape

Erik Darling dispels a myth:

I’ve heard it claimed, incorrectly, many times over the years that CTEs somehow materialize data.

But a new one to me was that CTEs execute procedurally, and you could use that to influence plan shapes by always doing certain things first.

Unfortunately, that’s not true of them either, even when you use TOP.

Read the whole thing. I do chain common table expressions for readability’s sake, but that’s usually because I’m performing a series of repetitive calculations that I can’t simplify via APPLY.
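
The underlying behavior is easy to see for yourself: SQL Server inlines a CTE at every spot it is referenced, so referencing it twice runs it twice. A small sketch (the table name is hypothetical):

    -- The execution plan will show dbo.SomeBigTable read once per
    -- reference to x, not once overall; no materialization takes place.
    WITH x AS
    (
        SELECT COUNT(*) AS c
        FROM dbo.SomeBigTable
    )
    SELECT a.c, b.c
    FROM x AS a
    CROSS JOIN x AS b;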

Publishing Azure Data Factory via Azure DevOps

Kamil Nowinski shares how to deploy Azure Data Factory flows via Azure DevOps:

Struggling with #ADF deployment? adf_publish branch doesn’t suit your purposes? Don’t have skills with PowerShell? I have good news for you. There is a new tool on the market. It’s a task for an Azure DevOps Release Pipeline that deploys a whole ADF from code (JSON files) to an ADF instance in Azure. Behind the scenes, it runs a PowerShell module which does all the work for you.
Sounds unbelievable? But it’s real! Check it out for yourself.

Click through for a video.

Thoughts on Snowflake Database Provisioning

David Stelfox takes us through some thoughts on provisioning instances of Snowflake:

For this example, I’ve chosen an open dataset of 2017 taxi rides in New York City. There are a few options for interacting with Snowflake: a dialog-box approach in the web-based GUI, SQL statements in the Worksheets tab of the GUI, or a CLI called SnowSQL. For this example, I used SQL statements, as I find it easier to follow what’s happening. Once you have set up your account (or trial) and logged in, you need to create your first database.

Click through for some how-to as well as thoughts about cost and performance.
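
If you want to follow along, the first steps look something like this (a sketch with placeholder names, not David's exact script):

    -- Create a database for the taxi data and a small warehouse to query it.
    CREATE DATABASE nyc_taxi;

    CREATE WAREHOUSE load_wh
      WITH WAREHOUSE_SIZE = 'XSMALL'
           AUTO_SUSPEND = 60
           AUTO_RESUME = TRUE;

    USE DATABASE nyc_taxi;
    USE WAREHOUSE load_wh;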

Using INLA for Spatial Regression in R

Lionel Hertzog continues a series on spatial regression:

INLA is a package that allows you to fit a broad range of models. It uses the Laplace approximation to fit Bayesian models much, much faster than algorithms such as MCMC. INLA allows for fitting geostatistical models via stochastic partial differential equations (SPDE); a good place for more background information on this is these two gitbooks: spde-gitbook and inla-gitbook.

This is not the gentlest introduction, so if you’re new to the concept, go back and read part 1.
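
For a bit of orientation, the trick INLA uses is to approximate each posterior marginal by numerically integrating Laplace approximations over the hyperparameters, roughly (my summary of the standard formulation, not Lionel's notation):

    \pi(x_i \mid y) \approx \sum_k \tilde{\pi}(x_i \mid \theta_k, y)\, \tilde{\pi}(\theta_k \mid y)\, \Delta_k

where the sum runs over a grid of hyperparameter values \theta_k with integration weights \Delta_k, avoiding MCMC sampling entirely.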

Accelerated Database Recovery and tempdb Usage

Jason Hall takes a look at how much tempdb space Accelerated Database Recovery might use:

You might have heard me talk about tempdb parasites in the past, or maybe you’ve read my blog post on the same topic, “Be Mindful of SQL Server Tempdb Use (aka Tempdb Parasites!).” I know that at least one person did, because they recently asked a great question.

In that blog post, I reviewed how triggers use the version store in tempdb for access to the “special” trigger tables we can use from within the trigger code to access the previous and new versions of data being modified. One person on Twitter was wondering if that is still the case for triggers on databases using Accelerated Database Recovery (ADR) in SQL Server 2019. I really wasn’t sure, so I decided to find out.

Click through for the answer.
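
If you'd like to reproduce the test, turning ADR on and watching the tempdb version store is straightforward (a sketch; the database name is hypothetical, and ADR requires SQL Server 2019 or later):

    ALTER DATABASE [TriggerTest] SET ACCELERATED_DATABASE_RECOVERY = ON;

    -- tempdb version store usage, broken out by source database:
    SELECT DB_NAME(database_id) AS database_name,
           reserved_page_count,
           reserved_space_kb
    FROM sys.dm_tran_version_store_space_usage;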

Creating a New Container from a SQL Server on Windows Dockerfile

Jamie Wick continues a series on SQL Server and Windows containers:

The docker build command sends the contents of the working directory, along with a dockerfile, to the Docker daemon, as a build context, to create the new image. A dockerfile is a plain text file that contains the name of a (base) image, along with a set of instructions for modifying the image. By default, the dockerfile is assumed to be in the root of the working directory, but a separate location can be specified using the -f parameter in the build command. Additionally, the -t parameter can be used to specify a repository and tag for the new image. Finally, the working directory can be specified using a Path or URL. In the example below, the current directory (.) is being used as the working directory (the docker build command is being run at the root level of the working directory).

Read on for examples.
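
In concrete terms, the command shapes being described look like this (image name, tag, and dockerfile path are my own placeholders):

    # Build using the dockerfile at the root of the current directory:
    docker build -t sqlserver-windows:dev .

    # Or name a dockerfile elsewhere with -f, tagging the image with -t:
    docker build -f C:\docker\mssql.dockerfile -t sqlserver-windows:dev .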

Distributed Transactions Across Multiple Log Files

Eric Cobb notices something strange:

When the new log files were added, SQL Server immediately filled them to match the fullness percentage of the original log files. As I manually increased the size of the log file, SQL Server again shifted the transactions around to keep both log files at the exact same fullness percentage. So, if the first log was 95% full, the second log was 95% full, regardless of the actual log file sizes. If the first log was 80% full, so was the second. The more I expanded the second log, the more transactions SQL Server would move to it, always keeping the fullness percentage the same on both log files. The larger the second log became, the more space it freed up on the first log, but the 2 logs were always exactly in sync in terms of fullness.

This is some interesting behavior, especially because transaction log files don’t use proportional fill.
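
If you want to poke at this yourself, here is a sketch of the setup plus a per-file fullness check (database name and path are hypothetical; sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

    -- Add a second transaction log file.
    ALTER DATABASE [LogTest]
    ADD LOG FILE
    (
        NAME = N'LogTest_log2',
        FILENAME = N'D:\SQLLogs\LogTest_log2.ldf',
        SIZE = 8GB
    );

    -- Fullness per log file: active VLF space versus total VLF space.
    SELECT file_id,
           SUM(vlf_size_mb) AS size_mb,
           SUM(CASE WHEN vlf_active = 1 THEN vlf_size_mb ELSE 0 END) AS active_mb
    FROM sys.dm_db_log_info(DB_ID())
    GROUP BY file_id;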
