Press "Enter" to skip to content

Category: Virtualization

Cost versus Performance Optimization for SQL Server on VMs in Azure

Pam Lahoud takes a look at multi-constraint optimization:

So how do you get the best price-performance possible when configuring your SQL Server on Azure VM? In this blog, we’re going to cover three key aspects to right-sizing (and right-configuring) your Azure VM for SQL Server that are based on some common pitfalls customers face when migrating their on-premises workloads to Azure VM:

– Choosing the best VM series and size for your workload
– Configuring storage for maximum throughput and lower cost
– Leveraging features unique to Azure, such as host caching, to boost performance at no additional cost

One key point of the article is that there are several factors which can make a big difference in price and performance, but which you might not think about on-premises. It’s definitely worth taking the time to research this. It’s also a great example of how administrators are still important in a cloud-based world—having an admin who understands these settings and can get the most out of a given server can save a lot of money very quickly.
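
To put rough numbers on the storage point: striping a pool of smaller premium disks can beat a single larger disk on both throughput and cost, subject to the VM size's own uncached disk caps. Here's a back-of-the-envelope sketch in Python; the IOPS and MB/s figures are Azure's published P30/P40 caps, but the monthly prices are placeholder assumptions, so substitute current rates for your region.

```python
# Rough price-performance comparison: striping smaller premium disks
# versus buying one larger disk. Disk caps are Azure's published values
# for P30/P40; the monthly prices are PLACEHOLDERS -- look up current
# rates for your region before drawing conclusions.

DISKS = {
    # name: (size_gib, max_iops, max_mbps, est_monthly_usd)
    "P30": (1024, 5000, 200, 135.0),   # price is an assumed example
    "P40": (2048, 7500, 250, 260.0),   # price is an assumed example
}

def pool(disk: str, count: int) -> dict:
    """Striping N disks (e.g. via Storage Spaces) sums the per-disk
    IOPS/throughput caps, up to the VM's own uncached disk limits."""
    size, iops, mbps, usd = DISKS[disk]
    return {"size_gib": size * count, "iops": iops * count,
            "mbps": mbps * count, "usd": usd * count}

for label, p in (("2 x P30", pool("P30", 2)), ("1 x P40", pool("P40", 1))):
    print(f"{label}: {p['size_gib']} GiB, {p['iops']} IOPS, "
          f"{p['mbps']} MB/s, ~${p['usd']:.0f}/mo, "
          f"{p['iops'] / p['usd']:.1f} IOPS per dollar")
```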

Early Thoughts on Dremio

Meagan Longoria gives us a review of Dremio:

I’ve been working on a project for the last few months with a client who has chosen to implement Dremio in Azure. Dremio is a data lake engine that creates a semantic layer and supports interactive queries.

It uses Apache Arrow, Gandiva, and Parquet files under the hood. It runs on either Linux VMs or Kubernetes containers. Like most big data systems, there is at least one coordinator node and one or more executor nodes. These nodes communicate and are managed using Apache ZooKeeper. Client applications connect to Dremio via ODBC, JDBC, REST APIs, or Arrow Flight. Dremio can read from storage accounts, external databases, and a few other sources.

Read on for good and bad aspects of the product.
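
Since Dremio speaks Arrow Flight, the pyarrow Flight client is a quick way to see the Arrow-native path in action. This is a minimal sketch, not Dremio's official client: the hostname, port 32010 (Dremio's usual Flight port), credentials, and the sample query are all assumptions.

```python
# Minimal Arrow Flight query against a Dremio coordinator -- a sketch.
# Host, port, credentials, and the query are assumed example values.
import pyarrow.flight as flight

client = flight.FlightClient("grpc+tcp://dremio-coordinator:32010")

# Dremio's Flight endpoint uses basic auth to mint a bearer token.
token_pair = client.authenticate_basic_token("user", "password")
options = flight.FlightCallOptions(headers=[token_pair])

# Describe the query, then pull the result stream as Arrow batches.
descriptor = flight.FlightDescriptor.for_command(
    'SELECT * FROM my_space."my_dataset" LIMIT 10')
info = client.get_flight_info(descriptor, options)
reader = client.do_get(info.endpoints[0].ticket, options)
table = reader.read_all()   # an Arrow table, no row-by-row conversion
print(table)
```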

Preparing an Availability Group for VM-Level Replication

David Klee takes us through an interesting scenario:

If you have a SQL Server Availability Group (AG) and the VMs are being replicated to a disaster recovery site (cloud or on-prem), chances are the networking topology is not the same at the second site. These replication technologies can include VM replication, SAN LUN replication, or replicating server-level backups to the second site. It is quite complex to have the same network subnet exist at both sites, so the secondary site usually contains a different networking subnet structure. This means that the servers being brought up at the secondary site are going to receive different IP addresses.

The Availability Group architecture, especially with its dependency on the Windows Server Failover Cluster (WSFC) layer, is quite intolerant of having these IP addresses changed. The utilities performing the failover might not even be aware of the WSFC-specific components that need to be adjusted.

Click through to see what you can do.
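
The underlying rule generalizes nicely: for every subnet a node might come up in, the cluster needs a matching IP resource, and an AG listener in a multi-subnet WSFC needs an OR-dependency IP per subnet. A toy illustration using Python's ipaddress module, with made-up subnets and addresses:

```python
# Which sites can the AG listener serve? In a multi-subnet WSFC, the
# listener needs an OR-dependency IP resource in each site's subnet.
# Subnets and addresses below are made-up examples.
from ipaddress import ip_address, ip_network

site_subnets = {
    "primary": ip_network("10.10.1.0/24"),
    "dr-site": ip_network("172.16.5.0/24"),  # different subnet after failover
}
listener_ips = [ip_address("10.10.1.50")]    # only covers the primary today

for site, subnet in site_subnets.items():
    covered = any(ip in subnet for ip in listener_ips)
    status = "ok" if covered else "MISSING an IP resource in this subnet"
    print(f"{site:8} {subnet}: {status}")
```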

Launching Linux VMs with Firecracker

Julia Evans gives us an introduction to Firecracker:

Firecracker says this about performance in their specification:

It takes <= 125 ms to go from receiving the Firecracker InstanceStart API call to the start of the Linux guest user-space /sbin/init process.

So far I’ve been using Firecracker to start relatively large VMs – Ubuntu VMs running systemd as an init system – and it takes maybe 2-3 seconds for them to boot. I haven’t been measuring that closely because honestly 5 seconds is fast enough and I don’t mind too much about an extra 200ms either way.

That’s pretty fast. Click through for more info on installation and configuration.
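
Firecracker is driven entirely through a REST API on a Unix socket, so the path from InstanceStart to a booted guest is just a handful of HTTP PUTs. Here's a standard-library-only sketch of that sequence; the socket path, kernel, and rootfs paths are placeholders, and it assumes a firecracker process is already listening on the socket.

```python
# Configure and boot a Firecracker microVM via its Unix-socket REST API.
# A sketch: paths are assumptions and error handling is minimal. The
# endpoints match Firecracker's getting-started guide (PUT /machine-config,
# /boot-source, /drives/..., /actions).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client speaking HTTP over an AF_UNIX socket."""
    def __init__(self, sock_path):
        super().__init__("localhost")
        self.sock_path = sock_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

def api_put(conn, path, body):
    conn.request("PUT", path, body=json.dumps(body),
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain so the connection can be reused
    assert resp.status in (200, 204), f"{path}: HTTP {resp.status}"

conn = UnixHTTPConnection("/tmp/firecracker.sock")   # assumed --api-sock path
api_put(conn, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})
api_put(conn, "/boot-source", {
    "kernel_image_path": "vmlinux",                  # placeholder
    "boot_args": "console=ttyS0 reboot=k panic=1"})
api_put(conn, "/drives/rootfs", {
    "drive_id": "rootfs", "path_on_host": "rootfs.ext4",  # placeholder
    "is_root_device": True, "is_read_only": False})
api_put(conn, "/actions", {"action_type": "InstanceStart"})
```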

Recommendations for Hosting SQL Server on VMware

Michelle Gutzait walks us through recommendations on hosting SQL Server in Windows on VMware:

VMware has created a very detailed best-practice document for us, specifically for SQL Server. You may find the latest one here.

In case the link doesn’t work for you, or you have a different version of VMware, you can search for the proper SQL Server best practices on the VMware site.

Here are the main best practices VMware recommends, and the most important based on Pythian’s experience (SQL Server on Windows):

Click through for a detailed checklist.

The Problem with VM Backups of SQL Server

Sean Gallardy turns a problem on its head:

Now let’s get to the main point, which is how long the VM stays paused or stunned – remember, this is a “small” or “short” amount of time, one might even say “trivial”. When it is kept this short, to where it’s “trivial” (as in less than a second), then all is good and you most likely won’t notice it except in very high workloads… but we should be running with VSS integration and not VM level, so it’s still incorrect, but hey. When this time is not short or trivial, then GOOD things start to happen, most notably that high availability kicks in.

I appreciate the framing of this post, as the failover wasn’t a problem; it merely exposes the actual problem.
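
To see why a "trivial" pause stops being trivial, compare the stun time against the timeouts an Availability Group is watching. A back-of-the-envelope sketch; the thresholds are the common defaults (20-second lease timeout, 10-second session timeout, 30-second health check), but your cluster's settings may differ.

```python
# Will a VM snapshot "stun" trip Availability Group health checks?
# Thresholds are common SQL Server/WSFC defaults in milliseconds --
# verify your own cluster's configured values.
THRESHOLDS_MS = {
    "AG lease timeout": 20_000,       # a paused VM cannot renew its lease
    "session timeout (replica ping)": 10_000,
    "cluster health check": 30_000,
}

def assess(stun_ms: int) -> None:
    print(f"stun of {stun_ms} ms:")
    for name, limit in THRESHOLDS_MS.items():
        verdict = "EXCEEDED -> failover likely" if stun_ms >= limit else "ok"
        print(f"  {name:32} {limit:>6} ms  {verdict}")

assess(800)      # sub-second stun: unnoticed except under heavy load
assess(25_000)   # long stun: the lease expires and HA "kicks in"
```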

VirtualBox Network Configuration for Kubernetes

Praveen Sripati looks at some VirtualBox network settings:

From the feature matrix and the required features, the only options left around VirtualBox networking are NAT Network and Bridged Networking. The problem with Bridged Networking is that, as mentioned above, it always requires a connection to the network, and switching to a different network changes the IP of the K8S master and breaks down the entire setup. The certificates created during the K8S setup are tied to a specific IP and need to be generated again each time the IP address of the master changes (1). This is not impossible, but it is tedious to do every time we change networks and the IP address of the master changes. So, the only practical option left is to use the NAT Network.

Read on for more advice.
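
In VBoxManage terms, that conclusion comes down to creating a NAT Network once and pointing each node's NIC at it, so guest IPs survive a change of host network. A minimal sketch driving VBoxManage from Python; the network name, CIDR, and VM names are assumptions, and the VMs must be powered off when modified.

```python
# Create a VirtualBox NAT Network and attach the K8s node VMs to it,
# keeping node IPs stable when the host switches networks. A sketch:
# network name, CIDR, and VM names are assumed examples; VBoxManage
# must be on PATH.
import subprocess

def vbox(*args: str) -> None:
    print("+ VBoxManage", " ".join(args))
    subprocess.run(["VBoxManage", *args], check=True)

# One NAT Network with its own DHCP server, independent of the host NIC.
vbox("natnetwork", "add", "--netname", "k8s-natnet",
     "--network", "192.168.15.0/24", "--enable", "--dhcp", "on")

# Point each node's first NIC at that NAT Network.
for vm in ("k8s-master", "k8s-worker1", "k8s-worker2"):
    vbox("modifyvm", vm, "--nic1", "natnetwork",
         "--nat-network1", "k8s-natnet")
```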

Kubernetes on Virtualized Hardware

Chris Adkin gives us the pros and cons of running Kubernetes on virtual hardware:

A full discussion on Kubernetes security is beyond the scope of this blog post. However, the MITRE ATT&CK Framework provides a comprehensive matrix of security attack patterns. Microsoft have produced a similar style of matrix to cover Kubernetes in this blog. As per the blog, resource hijacking and lateral movement have ramifications for multi-tenant platforms and Kubernetes application delivery techniques via things such as GitOps – where you may have one Kubernetes cluster per code branch. Putting nodes in their own virtual machines provides an extra layer of defense that can reduce the impact of pods that might become malicious as the result of an attack. VMware vSphere 7.0 (more on this later) takes this concept further by running each pod in its own lightweight virtual machine.

Click through for a breakdown of each side’s arguments.

VM Firmware and Windows Secure Boot

David Klee gives us the lowdown on firmware specifications in virtual machines:

The Register is reporting that future versions of the Windows Server OS are going to require the TPM 2.0 chip and Secure Boot enabled by default. Secure Boot is quite helpful to validate that servers boot into trusted environments. It sounds basic and straightforward, but if your VM administrators are not preparing for this change now, a much-overlooked setting in the hypervisor might backfire and you might not be able to enable this setting. That scenario would be a disaster if your security team suddenly issued a decree stating that you must enable this setting by some date.

Read on to see what this means if you’re using Hyper-V or VMware.
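
Since Secure Boot requires UEFI firmware rather than legacy BIOS, one quick way to audit an existing Windows guest from the inside is the classic GetFirmwareEnvironmentVariable probe: on a legacy BIOS machine the call fails with ERROR_INVALID_FUNCTION. A sketch, assuming it runs on Windows:

```python
# Detect whether a Windows guest booted via UEFI or legacy BIOS.
# Calling GetFirmwareEnvironmentVariableW with a dummy variable fails
# with ERROR_INVALID_FUNCTION (1) on legacy BIOS, and with a different
# error (typically privilege-related) on UEFI. Windows-only sketch.
import ctypes

ERROR_INVALID_FUNCTION = 1

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetFirmwareEnvironmentVariableW(
    "", "{00000000-0000-0000-0000-000000000000}", None, 0)

if ctypes.get_last_error() == ERROR_INVALID_FUNCTION:
    print("Legacy BIOS: Secure Boot cannot be enabled on this VM as-is.")
else:
    print("UEFI firmware: Secure Boot is at least possible.")
```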

Hyperthreading and VMs

David Klee shares some thoughts on hyperthreading in virtual environments:

I recommend leaving the hyper-threaded logical cores enabled in the host BIOS, but not depending on them for performance gains. Hyper-threaded CPU cores, or logical cores, should not be factored into CPU overcommitment ratios as if they were full processor cores.

Every task that is triggered inside a virtual machine must be scheduled to run on a physical compute resource. These scheduled tasks must be placed into a scheduling queue inside the hypervisor layer before they get their time on the physical compute resource. If the hypervisor is overloaded, or if the vCPU scheduling queues are imbalanced from an incorrect vCPU configuration, these queues can grow and vCPU performance can suffer.

Click through for an explanation of hyperthreading and David’s guidance on the topic.
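
David's "count physical cores only" advice is easy to turn into a capacity check. A minimal sketch; the host core counts, per-VM vCPU allocations, and the 3:1 ceiling are all assumed example values, not recommendations.

```python
# Compute vCPU overcommitment against PHYSICAL cores only, treating
# hyper-threaded logical cores as scheduling headroom, not capacity.
# All numbers below are assumed examples.
def overcommit_ratio(total_vcpus: int, cores: int) -> float:
    return total_vcpus / cores

host_physical_cores = 32      # e.g. 2 sockets x 16 cores
host_logical_cores = 64       # hyper-threading doubles the count
vcpus_allocated = sum([8, 8, 16, 4, 24, 12])   # vCPUs across all VMs

true_ratio = overcommit_ratio(vcpus_allocated, host_physical_cores)
naive_ratio = overcommit_ratio(vcpus_allocated, host_logical_cores)
print(f"true overcommit:  {true_ratio:.2f}:1 (vs physical cores)")
print(f"naive overcommit: {naive_ratio:.2f}:1 (vs logical cores -- misleading)")
if true_ratio > 3.0:
    print("over an assumed 3:1 ceiling: expect vCPU scheduling queues to grow")
```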
