Press "Enter" to skip to content

Author: Kevin Feasel

Distributed Transactions Across Multiple Log Files

Eric Cobb notices something strange:

When the new log files were added, SQL Server immediately filled them to match the fullness percentage of the original log files. As I manually increased the size of the log file, SQL Server again shifted the transactions around to keep both log files at the exact same fullness percentage. So, if the first log was 95% full, the second log was 95% full, regardless of the actual log file sizes. If the first log was 80% full, so was the second. The more I expanded the second log, the more transactions SQL Server would move to it, always keeping the fullness percentage the same on both log files. The larger the second log became, the more space it freed up on the first log, but the 2 logs were always exactly in sync in terms of fullness.

This is some interesting behavior, especially because transaction log files don’t use proportional fill.
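If you want to see the behavior for yourself, a minimal sketch follows; the database name and file path are placeholders, and FILEPROPERTY gives the per-file fullness that Eric describes.

-- Add a second log file to a hypothetical database named LogDemo
ALTER DATABASE LogDemo
ADD LOG FILE
(
    NAME = 'LogDemo_log2',
    FILENAME = 'D:\SQLLogs\LogDemo_log2.ldf',  -- placeholder path
    SIZE = 1GB
);

-- Compare fullness per log file (run inside the database in question)
SELECT name,
       size / 128.0 AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS UsedMB,
       100.0 * FILEPROPERTY(name, 'SpaceUsed') / size AS PercentFull
FROM sys.database_files
WHERE type_desc = 'LOG';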


Calculating Business Hours with DAX

Matt Allington combines DAX and a calendar table to calculate business hours:

I was helping a client this past week to calculate the total business hours between a start date/time and an end date/time, taking into account the working days, public holidays and non-working weekends, etc. As is often the case, I thought it would be a great blog article that I could share with my readers. This is a pretty involved problem with lots of moving parts, so I have decided to record a video showing you how I solved the problem, one step at a time.

Click through for the video as well as a description and the code.
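Matt's solution is in DAX, but the core idea, joining the interval to a calendar table and clamping each day's working window to the requested start and end, carries over to other tools. As a loose analogue only (not Matt's code), here is a minimal T-SQL sketch assuming a hypothetical dbo.Calendar table with an IsWorkingDay flag and a fixed 09:00-17:00 working day:

-- Rough sketch: total business hours between two timestamps.
-- Simplification: assumes the start and end times fall within working hours.
DECLARE @Start DATETIME2 = '2020-07-06 14:00',
        @End   DATETIME2 = '2020-07-08 11:00';
DECLARE @DayStart TIME = '09:00',
        @DayEnd   TIME = '17:00';

SELECT SUM(DATEDIFF(MINUTE,
           -- clamp each day's working window to the requested interval
           CASE WHEN c.CalendarDate = CAST(@Start AS DATE)
                 AND CAST(@Start AS TIME) > @DayStart
                THEN CAST(@Start AS TIME) ELSE @DayStart END,
           CASE WHEN c.CalendarDate = CAST(@End AS DATE)
                 AND CAST(@End AS TIME) < @DayEnd
                THEN CAST(@End AS TIME) ELSE @DayEnd END
       )) / 60.0 AS BusinessHours
FROM dbo.Calendar AS c
WHERE c.CalendarDate BETWEEN CAST(@Start AS DATE) AND CAST(@End AS DATE)
  AND c.IsWorkingDay = 1;

Public holidays fall out naturally here: they are simply rows where IsWorkingDay = 0.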


Shredding XML for Service Broker

Chris Johnson continues a series on Service Broker:

So, what we have here is an element called people. This contains 2 person elements with personID attributes attached to them. These both contain firstName and lastName elements, but personID 124 also has a middleName and two pet elements. These pet elements each hold a name element, but only fluffles has an animalType element.

In this way, we can use XML to hold only the data we actually know. Unless we have specified an XML Schema (outside the scope of this class) there is no validation on what elements or attributes an element should contain.

When Service Broker was launched, XML was the primary markup language people pretended was easily readable but really wasn’t.
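For a flavor of what shredding such a document looks like in T-SQL, here is a small sketch using nodes() and value(); the element names come from Chris's example, but the sample values are made up:

DECLARE @people XML = N'
<people>
  <person personID="123">
    <firstName>Alice</firstName>
    <lastName>Smith</lastName>
  </person>
  <person personID="124">
    <firstName>Bob</firstName>
    <middleName>James</middleName>
    <lastName>Jones</lastName>
    <pet><name>fluffles</name><animalType>cat</animalType></pet>
    <pet><name>rex</name></pet>
  </person>
</people>';

-- One row per person; elements that are absent simply come back as NULL
SELECT p.value('@personID', 'INT')                      AS PersonID,
       p.value('(firstName/text())[1]', 'NVARCHAR(50)')  AS FirstName,
       p.value('(middleName/text())[1]', 'NVARCHAR(50)') AS MiddleName,
       p.value('(lastName/text())[1]', 'NVARCHAR(50)')   AS LastName
FROM @people.nodes('/people/person') AS x(p);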


Data Compression with Elasticsearch

Hakan Altindag takes us through compression options when working with Elasticsearch:

On the 18th of June 2020, Elastic released Elasticsearch 7.8 along with its Java library, which makes handling compressed data easier; see the Elasticsearch 7.8 release notes.

Even though you have enabled Elasticsearch to send you compressed data, Elasticsearch will only compress it when the client requests it. The Java client can request it by sending additional request options within the HTTP request; see below for an example:

Read on to see how to enable this, as well as how clients can use it.


Optimizing Read Performance of Heaps

Uwe Ricken continues a series on heaps in SQL Server:

Heaps are not necessarily the developer’s favourite child, as they are not very performant, especially when it comes to selecting data (most people think so!). Certainly, there is something true about this opinion, but in the end, it is always the workload that decides it. In this article, I describe how a Heap works when data are selected. If you understand the process in SQL Server when reading data from a Heap, you can easily decide if a Heap is the best solution for your workload.

Uwe hits on a couple of the (few) use cases where heap performance can match and sometimes surpass clustered index performance.
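If you want to watch a heap read happen, here is a quick sketch (the table and row count are made up):

-- A heap: a table with no clustered index
CREATE TABLE dbo.HeapDemo
(
    Id      INT       IDENTITY(1, 1) NOT NULL,
    Payload CHAR(200) NOT NULL DEFAULT 'x'
);

INSERT INTO dbo.HeapDemo (Payload)
SELECT TOP (100000) 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

SET STATISTICS IO ON;

-- Reads the heap in allocation order by following the IAM chain;
-- the execution plan shows this as a Table Scan
SELECT COUNT(*)
FROM dbo.HeapDemo;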


Backing Up Databases to Azure Blob Storage

David Fowler shows how you can back up databases to Azure Blob Storage:

SQL Server has given us the option to back up our databases directly to Azure BLOB storage for a while now, but it's not something that I've had all that much call to use until recently.

So this is just going to be a quick walk-through of how you can back up your on-premises SQL Servers to Azure BLOB storage. I'm going to assume that you've already got an Azure account; if you haven't, you can set up a free trial which will see you good for this demo.

Performance typically won’t be as good as backing up locally to disk, so if you need the fastest backup performance and cloud storage, the best route would be to write backups to disk and have a separate process which migrates them to Blob Storage, S3, or wherever. But in many cases, doing this directly can work out just fine, especially if you are already using an Azure-based VM.
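The T-SQL side is pleasantly short once a credential is in place. Here is a sketch using a Shared Access Signature, where the storage account, container, and token are all placeholders:

-- The credential name must exactly match the container URL, and the
-- SECRET is the SAS token without the leading '?'
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = 'sv=...&sig=...';  -- placeholder SAS token

BACKUP DATABASE MyDatabase
TO URL = 'https://mystorageaccount.blob.core.windows.net/sqlbackups/MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;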


OpenShift and SQL Server Big Data Clusters

Chris Adkin explains why support for OpenShift is important for SQL Server Big Data Clusters:

One thing that should become immediately apparent when installing and administering an OpenShift cluster is that it is a lot more prescriptive and opinionated than vanilla Kubernetes. The simple reason for this is that OpenShift is intended to be deployed to environments that require enterprise-grade levels of hardening and security. For example, Red Hat mandates the operating system distributions you must use, to the extent that when deploying a cluster on VMware, Red Hat's documentation recommends the use of OVAs: compressed files containing installable virtual machines.

Read on for the full story.


Partitioning Tricks

Raul Gonzalez shows us five things you can do with partitioning in SQL Server:

Once we have rebuilt that old data to minimise its footprint and moved it to a cheaper storage tier, if we know no one will have to modify it, it’d be a good idea to make it READ_ONLY.

By making the data READ_ONLY, we can not only prevent accidental deletion or modification, but also reduce the workload required to maintain it, because as we’ve seen before, we can action index maintenance only on the READ_WRITE parts (partitions) of the data where fragmentation might still happen.

Read on for the rest of the tips and note that none of these are directly of the “Make your queries faster” variety, though a couple can have positive performance implications.
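For the READ_ONLY trick specifically, the moving parts are a filegroup flip plus partition-level index maintenance. A sketch with made-up object names:

-- Once the historical partitions live on their own filegroup, lock it down
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_Archive READ_ONLY;

-- Index maintenance then only needs to touch the read-write partitions,
-- e.g. rebuilding just the current partition
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD PARTITION = 5;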
