Press "Enter" to skip to content

Author: Kevin Feasel

Creating a New Container from a SQL Server on Windows Dockerfile

Jamie Wick continues a series on SQL Server and Windows containers:

The docker build command sends the contents of the working directory, along with a dockerfile, to the Docker daemon as a build context to create the new image. A dockerfile is a plain text file that contains the name of a (base) image, along with a set of instructions for modifying that image. By default, the dockerfile is assumed to be in the root of the working directory, but a separate location can be specified using the -f parameter of the build command. Additionally, the -t parameter can be used to specify a repository and tag for the new image. Finally, the working directory can be specified using a path or URL. In the example below, the current directory (.) is used as the working directory (the docker build command is being run at the root level of the working directory).
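
As a rough sketch of what that looks like in practice (the base image, script name, and sa password below are placeholders, and Microsoft's Windows-based SQL Server images have moved repositories over time, so this is illustrative rather than Jamie's exact example):

    # Minimal dockerfile for a customized SQL Server on Windows image
    FROM microsoft/mssql-server-windows-developer

    # Accept the EULA and set the sa password (placeholder value)
    ENV ACCEPT_EULA=Y
    ENV sa_password=P@ssw0rd!

    # Copy a setup script from the build context into the image
    COPY setup.sql C:/scripts/setup.sql

    # Built from the root of the working directory with, for example:
    #   docker build -t myrepo/mssql-custom:1.0 .
    # or, if the dockerfile lives elsewhere:
    #   docker build -t myrepo/mssql-custom:1.0 -f docker/dockerfile .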

Read on for examples.

Calculating Business Hours with DAX

Matt Allington combines DAX and a calendar table to calculate business hours:

I was helping a client this past week to calculate the total business hours between a start date/time and an end date/time, taking into account working days, public holidays, and non-working weekends. As is often the case, I thought it would make a great blog article that I could share with my readers. This is a pretty involved problem with lots of moving parts, so I have decided to record a video showing you how I solved the problem, one step at a time.
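
Matt's full solution is in the video, but as a very rough sketch of the shape such a measure can take (the Calendar columns IsWorkingDay and WorkingHours are hypothetical, this is not Matt's code, and it ignores the partial first and last days, which are the fiddly part the video addresses):

    Business Hours :=
    VAR StartDate = DATE ( 2020, 7, 1 )  // in practice, from the start date/time
    VAR EndDate = DATE ( 2020, 7, 31 )   // in practice, from the end date/time
    RETURN
        SUMX (
            FILTER (
                'Calendar',
                'Calendar'[Date] >= StartDate
                    && 'Calendar'[Date] <= EndDate
                    && 'Calendar'[IsWorkingDay] = TRUE ()
            ),
            'Calendar'[WorkingHours]
        )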

Click through for the video as well as a description and the code.

Shredding XML for Service Broker

Chris Johnson continues a series on Service Broker:

So, what we have here is an element called people. This contains two person elements with personID attributes attached to them. These both contain firstName and lastName elements, but personID 124 also has a middleName and two pet elements. These pet elements each hold a name element, but only fluffles has an animalType element.

In this way, we can use XML to hold only the data we actually know. Unless we have specified an XML Schema (outside the scope of this class) there is no validation on what elements or attributes an element should contain.
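
To make that concrete, here is a reconstruction of the sort of document Chris describes, shredded with the nodes() and value() XML methods in T-SQL (the people's names and the first personID are invented for illustration):

    DECLARE @people xml = N'
    <people>
      <person personID="123">
        <firstName>Anne</firstName>
        <lastName>Smith</lastName>
      </person>
      <person personID="124">
        <firstName>Bob</firstName>
        <middleName>James</middleName>
        <lastName>Jones</lastName>
        <pet><name>fluffles</name><animalType>cat</animalType></pet>
        <pet><name>rex</name></pet>
      </person>
    </people>';

    SELECT
        p.value('@personID', 'int') AS PersonID,
        p.value('(firstName/text())[1]', 'nvarchar(50)') AS FirstName,
        p.value('(middleName/text())[1]', 'nvarchar(50)') AS MiddleName, -- NULL where absent
        p.value('(lastName/text())[1]', 'nvarchar(50)') AS LastName
    FROM @people.nodes('/people/person') AS t(p);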

When Service Broker was launched, XML was the primary markup language people pretended was easily readable but really wasn’t.

Data Compression with Elasticsearch

Hakan Altindag takes us through compression options when working with Elasticsearch:

On the 18th of June 2020, Elastic released Elasticsearch 7.8, along with a Java library that makes handling compressed data easier; see the Elasticsearch 7.8 release notes.

Even if you have enabled Elasticsearch to send you compressed data, it will only compress the response when the client requests it. The Java client can do so by sending additional request options within the HTTP request; see below for an example:
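
The example itself didn't carry over into this excerpt, so here is a sketch of the idea using the low-level Java REST client (the host, port, and endpoint are placeholders, and it assumes HTTP compression is enabled on the cluster): the client advertises gzip support through an Accept-Encoding header in the request options.

    import org.apache.http.HttpHost;
    import org.elasticsearch.client.Request;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.Response;
    import org.elasticsearch.client.RestClient;

    public class CompressedSearch {
        public static void main(String[] args) throws Exception {
            try (RestClient client = RestClient.builder(
                    new HttpHost("localhost", 9200, "http")).build()) {
                // Ask Elasticsearch to gzip the response body
                RequestOptions options = RequestOptions.DEFAULT.toBuilder()
                        .addHeader("Accept-Encoding", "gzip")
                        .build();

                Request request = new Request("GET", "/_search");
                request.setOptions(options);

                Response response = client.performRequest(request);
                System.out.println(response.getStatusLine());
            }
        }
    }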

Read on to see how to enable this, as well as how clients can use it.

Optimizing Read Performance of Heaps

Uwe Ricken continues a series on heaps in SQL Server:

Heaps are not necessarily the developer's favourite child, as they are not very performant, especially when it comes to selecting data (or so most people think!). Certainly, there is some truth to this opinion, but in the end, it is always the workload that decides. In this article, I describe how a Heap works when data are selected. If you understand the process SQL Server follows when reading data from a Heap, you can easily decide whether a Heap is the best solution for your workload.
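
If you want to see that behaviour for yourself, a quick demo along these lines (a minimal sketch, not taken from Uwe's article) shows the full-scan cost of reading from a heap:

    -- Build a heap: no clustered index, no nonclustered indexes
    CREATE TABLE dbo.HeapDemo
    (
        Id      int       NOT NULL,
        Payload char(200) NOT NULL DEFAULT ('x')
    );

    INSERT INTO dbo.HeapDemo (Id)
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b;

    SET STATISTICS IO ON;

    -- With no index to navigate, SQL Server scans the heap via its IAM
    -- chain and reads every page, no matter how selective the filter is
    SELECT * FROM dbo.HeapDemo WHERE Id = 42;

    SET STATISTICS IO OFF;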

Uwe hits on a couple of the (few) use cases where heap performance can match and sometimes surpass clustered index performance.

OpenShift and SQL Server Big Data Clusters

Chris Adkin explains why support for OpenShift is important for SQL Server Big Data Clusters:

One thing that should become immediately apparent when installing and administering an OpenShift cluster is that it is a lot more prescriptive and opinionated than vanilla Kubernetes. The simple reason for this is that OpenShift is intended to be deployed to environments that require enterprise-grade levels of hardening and security. For example, Red Hat mandates the operating system distributions you must use, to the extent that when deploying a cluster on VMware, Red Hat's documentation recommends the use of OVAs: compressed files containing installable virtual machines.

Read on for the full story.

Backing Up Databases to Azure Blob Storage

David Fowler shows how you can back up databases to Azure Blob Storage:

SQL Server has given us the option to back up our databases directly to Azure Blob Storage for a while now, but it's not something that I've had all that much call to use until recently.

So this is just going to be a quick walk-through of how you can back up your on-premises SQL Servers to Azure Blob Storage. I'm going to assume that you've already got an Azure account; if you haven't, you can set up a free trial, which will see you good for this demo.
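
The core pattern looks something like this (the storage account, container, database name, and SAS secret are all placeholders): create a credential scoped to the container, then back up to a URL.

    -- Credential named after the container URL, using a Shared Access Signature
    CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqlbackups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token, without the leading question mark>';

    -- Back up straight to blob storage
    BACKUP DATABASE [MyDatabase]
    TO URL = 'https://mystorageaccount.blob.core.windows.net/sqlbackups/MyDatabase.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;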

Performance typically won’t be as good as backing up locally to disk, so if you need the fastest backup performance and cloud storage, the best route would be to write backups to disk and have a separate process which migrates them to Blob Storage, S3, or wherever. But in many cases, doing this directly can work out just fine, especially if you are already using an Azure-based VM.

Partitioning Tricks

Raul Gonzalez shows us five things you can do with partitioning in SQL Server:

Once we have rebuilt that old data to minimise its footprint and moved it to a cheaper storage tier, if we know no one will have to modify it, it’d be a good idea to make it READ_ONLY.

By making the data READ_ONLY, we can not only prevent accidental deletion or modification, but also reduce the workload required to maintain it because, as we've seen before, we can run index maintenance only on the READ_WRITE parts (partitions) of the data, where fragmentation might still happen.
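
In T-SQL terms, that combination boils down to something like the following (the database, filegroup, table, index names, and partition number are hypothetical):

    -- Mark the filegroup holding the historical partitions as read-only
    -- (this needs exclusive access to the database)
    ALTER DATABASE [SalesDB] MODIFY FILEGROUP [FG_Archive] READ_ONLY;

    -- Index maintenance can then target only the active, READ_WRITE partition
    ALTER INDEX [IX_Sales_OrderDate] ON dbo.Sales
    REBUILD PARTITION = 5;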

Read on for the rest of the tips and note that none of these are directly of the “Make your queries faster” variety, though a couple can have positive performance implications.
