Press "Enter" to skip to content

Author: Kevin Feasel

Accelerated Database Recovery and tempdb Usage

Jason Hall takes a look at how much tempdb space Accelerated Database Recovery might use:

You might have heard me talk about tempdb parasites in the past, or maybe you’ve read my blog post on the same topic, “Be Mindful of SQL Server Tempdb Use (aka Tempdb Parasites!).” I know that at least one person did, because they recently asked a great question.

In that blog post, I reviewed how triggers use the version store in tempdb for access to the “special” trigger tables we can use from within the trigger code to access the previous and new versions of data being modified. One person on Twitter was wondering if that is still the case for triggers on databases using Accelerated Database Recovery (ADR) in SQL Server 2019. I really wasn’t sure, so I decided to find out.

Click through for the answer.
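
A minimal sketch of the kind of check involved, assuming a hypothetical TestDB database and dbo.Sales table: the trigger reads the inserted and deleted pseudo-tables, and the two DMVs contrast the tempdb version store with the persistent version store that ADR keeps inside the user database.

-- Enable ADR on a test database (SQL Server 2019+); TestDB is a placeholder name
ALTER DATABASE TestDB SET ACCELERATED_DATABASE_RECOVERY = ON;
GO

-- A trigger that uses the "special" inserted and deleted tables (illustrative only)
CREATE OR ALTER TRIGGER dbo.tr_Sales_Audit ON dbo.Sales
AFTER UPDATE
AS
BEGIN
    SELECT d.SalesID,
           d.Amount AS OldAmount,
           i.Amount AS NewAmount
    FROM deleted AS d
    INNER JOIN inserted AS i
        ON i.SalesID = d.SalesID;
END;
GO

-- Version store space consumed in tempdb
SELECT DB_NAME(database_id) AS database_name,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

-- Persistent version store kept in the user database when ADR is enabled
SELECT DB_NAME(database_id) AS database_name,
       persistent_version_store_size_kb
FROM sys.dm_tran_persistent_version_store_stats;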

Creating a New Container from a SQL Server on Windows Dockerfile

Jamie Wick continues a series on SQL Server and Windows containers:

The docker build command sends the contents of the working directory, along with a dockerfile, to the Docker daemon, as a build context, to create the new image. A dockerfile is a plain text file that contains the name of a (base) image, along with a set of instructions for modifying the image. By default, the dockerfile is assumed to be in the root of the working directory, but a separate location can be specified using the -f parameter in the build command. Additionally, the -t parameter can be used to specify a repository and tag for the new image. Finally, the working directory can be specified using a Path or URL. In the example below, the current directory (.) is being used as the working directory (the docker build command is being run at the root level of the working directory).

Read on for examples.
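
As a rough illustration of the pieces Jamie describes, here is a hypothetical dockerfile and build command; the base image, file names, and repository/tag are placeholders rather than anything from his series.

# dockerfile.sql2019 - a base image plus instructions that modify it
FROM mcr.microsoft.com/windows/servercore:ltsc2019
COPY setup/ C:/setup/
RUN powershell -Command "Write-Output 'SQL Server installation steps would go here'"

From the root of the working directory, -t names the repository and tag, -f points at the non-default dockerfile, and the trailing . uses the current directory as the build context:

docker build -t myrepo/sqlserver-windows:dev -f dockerfile.sql2019 .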

Distributed Transactions Across Multiple Log Files

Eric Cobb notices something strange:

When the new log files were added, SQL Server immediately filled them to match the fullness percentage of the original log files. As I manually increased the size of the log file, SQL Server again shifted the transactions around to keep both log files at the exact same fullness percentage. So, if the first log was 95% full, the second log was 95% full, regardless of the actual log file sizes. If the first log was 80% full, so was the second. The more I expanded the second log, the more transactions SQL Server would move to it, always keeping the fullness percentage the same on both log files. The larger the second log became, the more space it freed up on the first log, but the 2 logs were always exactly in sync in terms of fullness.

This is some interesting behavior, especially because transaction log files don’t use proportional fill.
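
A small sketch of how one might poke at this, assuming a hypothetical TestDB database and file path; the DMV query gives a rough per-file fullness picture by counting active VLFs in each log file.

-- Add a second transaction log file (logical name and path are placeholders)
ALTER DATABASE TestDB
ADD LOG FILE (NAME = TestDB_log2, FILENAME = 'D:\SQLLogs\TestDB_log2.ldf', SIZE = 1GB);
GO

-- Active VLFs per log file, compared to each file's total VLFs and size
SELECT file_id,
       COUNT(*) AS vlf_count,
       SUM(CASE WHEN vlf_active = 1 THEN 1 ELSE 0 END) AS active_vlf_count,
       SUM(vlf_size_mb) AS log_file_size_mb
FROM sys.dm_db_log_info(DB_ID('TestDB'))
GROUP BY file_id;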

Calculating Business Hours with DAX

Matt Allington combines DAX and a calendar table to calculate business hours:

I was helping a client this past week to calculate the total business hours between a start date/time and an end date/time, taking into account the working days, public holidays and non-working weekends, etc. As is often the case, I thought it would be a great blog article that I could share with my readers. This is a pretty involved problem with lots of moving parts, so I have decided to record a video showing you how I solved the problem, 1 step at a time.

Click through for the video as well as a description and the code.

Shredding XML for Service Broker

Chris Johnson continues a series on Service Broker:

So, what we have here is an element called people. This contains 2 person elements with personID attributes attached to them. These both contain firstName and lastName elements, but personID 124 also has a middleName and two pet elements. These pet elements each hold a name element, but only fluffles has an animalType element.

In this way, we can use XML to hold only the data we actually know. Unless we have specified an XML Schema (outside the scope of this class) there is no validation on what elements or attributes an element should contain.

When Service Broker was launched, XML was the primary markup language people pretended was easily readable but really wasn’t.
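
A minimal sketch of shredding XML shaped like Chris's example with the nodes() and value() methods; the first personID and the element values are stand-ins, since only personID 124 and the pet name fluffles appear in the excerpt.

DECLARE @xml XML = N'
<people>
  <person personID="123">
    <firstName>Jane</firstName>
    <lastName>Smith</lastName>
  </person>
  <person personID="124">
    <firstName>John</firstName>
    <middleName>Q</middleName>
    <lastName>Jones</lastName>
    <pet>
      <name>fluffles</name>
      <animalType>cat</animalType>
    </pet>
    <pet>
      <name>rex</name>
    </pet>
  </person>
</people>';

-- One row per person; elements that are missing simply come back as NULL
SELECT p.value('@personID', 'INT') AS PersonID,
       p.value('(firstName/text())[1]', 'NVARCHAR(50)') AS FirstName,
       p.value('(middleName/text())[1]', 'NVARCHAR(50)') AS MiddleName,
       p.value('(lastName/text())[1]', 'NVARCHAR(50)') AS LastName
FROM @xml.nodes('/people/person') AS people(p);

-- One row per pet, joined back to its owner; the second pet has no animalType, so that column is NULL
SELECT p.value('@personID', 'INT') AS PersonID,
       pet.value('(name/text())[1]', 'NVARCHAR(50)') AS PetName,
       pet.value('(animalType/text())[1]', 'NVARCHAR(50)') AS AnimalType
FROM @xml.nodes('/people/person') AS people(p)
CROSS APPLY p.nodes('pet') AS pets(pet);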

Data Compression with Elasticsearch

Hakan Altindag takes us through compression options when working with Elasticsearch:

On the 18th of June 2020, Elastic released Elasticsearch 7.8, along with a version of their Java library which makes handling compressed data easier; see the release notes here: Elasticsearch 7.8 release notes

Even though you have enabled Elasticsearch to send you compressed data, Elasticsearch will only compress it when the client requests it. The Java client can request it by sending additional request options within the HTTP request; see below for an example:

Read on to see how to enable this, as well as how clients can use it.

Optimizing Read Performance of Heaps

Uwe Ricken continues a series on heaps in SQL Server:

Heaps are not necessarily the developer’s favourite child, as they are not very performant, especially when it comes to selecting data (or so most people think!). Certainly, there is some truth to this opinion, but in the end, it is always the workload that decides. In this article, I describe how a Heap works when data are selected. If you understand the process SQL Server follows when reading data from a Heap, you can easily decide whether a Heap is the best solution for your workload.

Uwe hits on a couple of the (few) use cases where heap performance can match and sometimes surpass clustered index performance.
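
As a rough illustration of the read path, a scan of a heap has no B-tree to navigate and is driven by the IAM chain instead; a minimal sketch, with a made-up table and row count:

-- A heap: no clustered index, so a full read is an allocation-ordered scan
CREATE TABLE dbo.HeapDemo
(
    ID      INT IDENTITY(1,1) NOT NULL,
    Payload CHAR(200)         NOT NULL
);
GO

-- Load some rows to have something worth scanning
INSERT INTO dbo.HeapDemo (Payload)
SELECT TOP (10000) 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;
GO

SET STATISTICS IO ON;

-- Shows up as a Table Scan operator; pages are located via the heap's IAM pages
SELECT COUNT(*) FROM dbo.HeapDemo;

SET STATISTICS IO OFF;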
