Press "Enter" to skip to content

June 24, 2020

Data Compression with Elasticsearch

Hakan Altindag takes us through compression options when working with Elasticsearch:

On the 18th of June 2020, Elastic released Elasticsearch 7.8, along with their Java library, which makes handling compressed data easier; see the release notes here: Elasticsearch 7.8 release notes

Even though you have enabled Elasticsearch to send you compressed data, Elasticsearch will only compress it when the client requests it. The Java client can request it by sending additional request options within the HTTP request; see below for an example:
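
The example itself is in the full post; roughly, it looks like this with the low-level Java REST client, where the host, port, and index name are placeholders of my own:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class CompressedSearchDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder cluster address; adjust for your environment.
        try (RestClient client = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build()) {

            Request request = new Request("GET", "/my-index/_search");

            // Ask Elasticsearch for a gzip-compressed response body.
            // (The server honors this when http.compression is enabled.)
            RequestOptions options = RequestOptions.DEFAULT.toBuilder()
                    .addHeader("Accept-Encoding", "gzip")
                    .build();
            request.setOptions(options);

            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
```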

Read on to see how to enable this, as well as how clients can use it.

Optimizing Read Performance of Heaps

Uwe Ricken continues a series on heaps in SQL Server:

Heaps are not necessarily the developer’s favourite child, as they are not very performant, especially when it comes to selecting data (most people think so!). Certainly, there is some truth to this opinion, but in the end, it is always the workload that decides. In this article, I describe how a heap works when data are selected. If you understand how SQL Server reads data from a heap, you can easily decide whether a heap is the best solution for your workload.

Uwe hits on a couple of the (few) use cases where heap performance can match and sometimes surpass clustered index performance.
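
As a quick sketch of the reading behavior in question (the table and data below are invented for the demo): a heap has no ordering structure, so even a highly selective predicate is answered by scanning every page via the heap's IAM chain.

```sql
-- Demo heap: no clustered index and no nonclustered indexes.
CREATE TABLE dbo.CustomerHeap
(
    CustomerID   int          NOT NULL,
    CustomerName varchar(100) NOT NULL
);

-- Load 100,000 rows of filler data.
INSERT INTO dbo.CustomerHeap (CustomerID, CustomerName)
SELECT TOP (100000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),
       'Customer'
FROM sys.all_columns AS c1
CROSS JOIN sys.all_columns AS c2;

-- Show logical reads per statement.
SET STATISTICS IO ON;

-- Even though this matches a single row, SQL Server has no way to
-- navigate to it: the whole heap is scanned, page by page.
SELECT CustomerName
FROM dbo.CustomerHeap
WHERE CustomerID = 42;

SET STATISTICS IO OFF;
```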

Backing Up Databases to Azure Blob Storage

David Fowler shows how you can back up databases to Azure Blob Storage:

SQL Server has given us the option to back up our databases directly to Azure BLOB storage for a while now, but it’s not something that I’ve had all that much call to use until recently.

So this is just going to be a quick walk-through of how you can back up your on-premises SQL Servers to Azure BLOB storage. I’m going to assume that you’ve already got an Azure account; if you haven’t, you can set up a free trial, which will see you good for this demo.

Performance typically won’t be as good as backing up locally to disk, so if you need the fastest backup performance and cloud storage, the best route would be to write backups to disk and have a separate process which migrates them to Blob Storage, S3, or wherever. But in many cases, doing this directly can work out just fine, especially if you are already using an Azure-based VM.
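The core pattern is a credential holding a shared access signature plus BACKUP TO URL; the storage account, container, database name, and token below are placeholders:

```sql
-- Credential named after the container URL; SECRET is the SAS token
-- (without the leading '?'). All names here are placeholders.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS-token>';

-- Back up straight to blob storage, compressed and checksummed.
BACKUP DATABASE [MyDatabase]
TO URL = 'https://mystorageaccount.blob.core.windows.net/sqlbackups/MyDatabase.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
```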

OpenShift and SQL Server Big Data Clusters

Chris Adkin explains why support for OpenShift is important for SQL Server Big Data Clusters:

One thing that should become immediately apparent when installing and administering an OpenShift cluster is that it is a lot more prescriptive and opinionated than vanilla Kubernetes. The simple reason for this is that OpenShift is intended to be deployed to environments that require enterprise-grade levels of hardening and security. For example, Red Hat mandates the operating system distributions you must use, to the extent that, when deploying a cluster on VMware, Red Hat’s documentation recommends the use of OVAs, compressed files containing installable virtual machines.

Read on for the full story.

VM Firmware and Windows Secure Boot

David Klee gives us the lowdown on firmware specifications in virtual machines:

The Register is reporting that future versions of the Windows Server OS are going to require the TPM 2.0 chip and Secure Boot enabled by default. Secure Boot is quite helpful to validate that servers boot into trusted environments. It sounds basic and straightforward, but if your VM administrators are not preparing for this change now, a much-overlooked setting in the hypervisor might backfire, and you might not be able to enable this setting. That scenario would be a disaster if your security team suddenly issued a decree stating that you must enable this setting by some date.

Read on to see what this means if you’re using Hyper-V or VMware.

Partitioning Tricks

Raul Gonzalez shows us five things you can do with partitioning in SQL Server:

Once we have rebuilt that old data to minimise its footprint and moved it to a cheaper storage tier, if we know no one will have to modify it, it’d be a good idea to make it READ_ONLY.

By making the data READ_ONLY, we can not only prevent accidental deletion or modification but also reduce the workload required to maintain it because, as we’ve seen before, we can run index maintenance only on the READ_WRITE parts (partitions) of the data, where fragmentation might still happen.

Read on for the rest of the tips, and note that none of these are directly of the “make your queries faster” variety, though a couple can have positive performance implications.
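
For the READ_ONLY tip specifically, here is a sketch of the two statements involved (the database, filegroup, table, index, and partition number are all invented names):

```sql
-- Mark the filegroup holding the archived partitions as read-only.
-- (Needs exclusive access to the database while it runs.)
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_Archive READ_ONLY;

-- Maintenance can then target just the read-write partitions,
-- e.g. rebuild only partition 12, where fragmentation can still occur.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD PARTITION = 12;
```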
