Press "Enter" to skip to content

Category: Containers

Combining On-Demand and Spot VMs in AKS

Prakash P covers a topic near and dear to my heart—saving money by using spot instances:

While it’s possible to run the Kubernetes nodes either in on-demand or spot node pools separately, we can optimize the application cost without compromising the reliability by placing the pods unevenly on spot and OnDemand VMs using the topology spread constraints. With baseline amount of pods deployed in OnDemand node pool offering reliability, we can scale on spot node pool based on the load at a lower cost.

I like this idea a lot, as spot instances trade big savings (up to 90%) for unreliability: you lose the spot instance as soon as someone else comes in willing to pay more. This gives you the best of both worlds with AKS: emphasize spot instances for the money savings but keep the ability to fall back to on-demand VMs when spot capacity isn't available. If I'm understanding the post correctly, this also reduces the downside risk of service instability when spot instances are bought out from under you, as Kubernetes will automatically spin pods up and down to keep a consistent number of replicas available to users across the remaining nodes.
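
To make the topology-spread idea concrete, here is a minimal sketch assuming the standard AKS spot node pool label and taint (kubernetes.azure.com/scalesetpriority); the deployment name, replica count, skew, and image are illustrative and not taken from Prakash's post:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # illustrative name
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      # AKS spot node pools are tainted by default; tolerate it so pods can land there.
      tolerations:
        - key: kubernetes.azure.com/scalesetpriority
          operator: Equal
          value: spot
          effect: NoSchedule
      topologySpreadConstraints:
        # Spread across the spot/regular values of the priority label. A maxSkew above 1
        # with ScheduleAnyway permits an uneven split, so most replicas can sit on cheap
        # spot nodes while a baseline stays on on-demand nodes.
        # (On-demand nodes are assumed to carry this label with value "regular";
        # if yours don't, add an equivalent label to that node pool.)
        - maxSkew: 2
          topologyKey: kubernetes.azure.com/scalesetpriority
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web-app
      containers:
        - name: web-app
          image: nginx:1.25   # stand-in workload
          ports:
            - containerPort: 80
```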

Notifying when MCR Has New SQL Server Images

Andrew Pruski builds an alert:

A while back I wrote a post on how to retrieve the SQL Server images in the Microsoft Container Registry (MCR).

It’s pretty simple to check the MCR but what about automating that check to alert if there are new images present? Let’s have a look at one method of doing just that with a powershell script using the BurntToast module, and a scheduled task.

Click through for the process and keep those Docker images up to date.
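
As a rough sketch of the idea (not Andrew's actual script), the check boils down to pulling the tag list from the MCR registry API, diffing it against a cached copy, and raising a toast when something new appears; the cache path here is an arbitrary placeholder:

```powershell
# Requires the BurntToast module: Install-Module BurntToast
$tagsUrl   = 'https://mcr.microsoft.com/v2/mssql/server/tags/list'
$cacheFile = 'C:\temp\mssql-server-tags.json'   # placeholder cache location

# Current list of SQL Server image tags on the MCR.
$current = (Invoke-RestMethod -Uri $tagsUrl).tags

# Previously seen tags, if the check has run before.
$previous = @()
if (Test-Path $cacheFile) {
    $previous = Get-Content $cacheFile -Raw | ConvertFrom-Json
}

# Anything new since the last run?
$new = $current | Where-Object { $_ -notin $previous }

if ($new) {
    New-BurntToastNotification -Text 'New SQL Server images on the MCR', ($new -join ', ')
}

# Save the current list for the next scheduled run.
$current | ConvertTo-Json | Set-Content $cacheFile
```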

Registering AKS Endpoints on Azure DNS

Denny Cherry notes that the DNS server is in another castle:

If you have an Azure environment where your DNS servers are in a separate vNet from your new AKS environment, you'll notice that you get an error when deploying the AKS environment which looks something like this.

Agents are unable to resolve Kubernetes API server name. It’s likely custom DNS server is not correctly configured, please see https://aka.ms/aks/private-cluster#hub-and-spoke-with-custom-dns for more information.

The fix for this is actually pretty straightforward, but I’m going to give you a little background on why this happens.

Click through for the answer.
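
Without spoiling Denny's explanation: the general shape of the fix is to link the cluster's private DNS zone to the vNet that hosts the custom DNS servers (and to make sure those servers forward to Azure DNS at 168.63.129.16). A hedged sketch with placeholder names; the zone lives in the cluster's node resource group:

```bash
# Placeholder resource names throughout; substitute your own.
az network private-dns link vnet create \
  --resource-group MC_rg-aks_aks-cluster_eastus \
  --zone-name "<guid>.privatelink.eastus.azmk8s.io" \
  --name link-to-dns-vnet \
  --virtual-network /subscriptions/<sub-id>/resourceGroups/rg-dns/providers/Microsoft.Network/virtualNetworks/vnet-dns \
  --registration-enabled false
```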

PolyBase and S3 Integration in SQL Server 2022 on Containers

Amit Khandelwal combines a bunch of things together:

One of the new features introduced with SQL Server 2022 is the ability to connect to any S3-compatible object storage and SQL Server supports both Backup/Restore and data lake virtualization with Polybase integration. In this blog, we will demonstrate both of these features for SQL Server 2022 Containers running on Kubernetes. As usual, I will use the Azure Kubernetes Service as my Kubernetes environment.

Most of the work is in the container configuration, which is good on net, as it means you only have to do it once.
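
To give a flavour of the two features once the container side is sorted, here's a hedged T-SQL sketch; the endpoint, bucket, database name, and keys are placeholders, not details from Amit's walkthrough:

```sql
-- Backup to S3-compatible storage: the credential name must match the URL,
-- IDENTITY is literally 'S3 Access Key', and the secret is 'accesskey:secretkey'.
CREATE CREDENTIAL [s3://s3.example.internal:9000/sqlbackups]
WITH IDENTITY = 'S3 Access Key',
     SECRET   = 'myAccessKey:mySecretKey';

BACKUP DATABASE [AdventureWorks2022]
TO URL = 's3://s3.example.internal:9000/sqlbackups/AdventureWorks2022.bak'
WITH COMPRESSION, STATS = 10;

-- Data virtualization via PolyBase uses a database scoped credential instead
-- (PolyBase must be installed/enabled and a database master key must exist).
CREATE DATABASE SCOPED CREDENTIAL s3_dv_cred
WITH IDENTITY = 'S3 Access Key',
     SECRET   = 'myAccessKey:mySecretKey';

CREATE EXTERNAL DATA SOURCE s3_dv_source
WITH (LOCATION = 's3://s3.example.internal:9000', CREDENTIAL = s3_dv_cred);

-- Query a parquet file in place rather than loading it first.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK '/sqlbackups/data/sample.parquet',
    FORMAT = 'PARQUET',
    DATA_SOURCE = 's3_dv_source'
) AS s3_rows;
```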

Business Continuity with Arc-Enabled Data Services

Warwick Rudd continues a series on Azure Arc-Enabled Data Services. Part 11 covers high availability:

So far in this series of posts, you have been able to deploy and configure your newly provisioned Azure Arc-enabled SQL MI environment. Out of the box you get High Availability without having to do or implement anything.

The Recovery Time Objective (RTO) that is achievable with Azure Arc-enabled Data Services is dependent on the tier you choose to deploy. But regardless of that, this post is only concerned about informing you what you get out of the box with this technology.

Part 12 turns to disaster recovery:

In the previous post, we introduced you to how Azure Arc-enabled SQL MI provides High Availability based on the tier you have deployed.  If your environment requires disaster recovery, regardless of the tier level you have deployed, Azure Arc-enabled Data Services covers the job for you.

Read on to learn more about what options are available and what you need to do.

Running SQL Server on an M2 Processor

Anthony Nocentino operates a Mac:

Last week I purchased a shiny new MacBook Air with an M2 processor. After I got all the standard stuff up and running, I set out to learn how to run SQL Server containers on this new hardware. This post shows you how to run SQL Server on Apple Silicon using colima.

Colima is a container runtime that runs a Linux VM on your Mac. This Linux VM runs using the Virtualization framework hypervisor native in MacOS. Your containers will run inside this virtual machine.

Read on to see what you’d need for the task.
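
The broad strokes look something like the sketch below rather than Anthony's exact steps; the VM memory size and SA password are placeholders:

```bash
# Install colima and the Docker CLI via Homebrew.
brew install colima docker

# Start an x86_64 Linux VM; SQL Server images are amd64-only, so Apple Silicon
# relies on emulation inside the VM.
colima start --arch x86_64 --memory 4

# Run SQL Server 2022 as usual against the colima-backed Docker runtime.
docker run -d --name sql2022 \
  -e 'ACCEPT_EULA=Y' \
  -e 'MSSQL_SA_PASSWORD=S0meth1ngStr0ng!' \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest
```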

Shiny App Dockerfile Automation

Jamie Owen and Colin Gillespie don’t have time to write dockerfiles:

For creating a production deployment of a {shiny} application it is often useful to be able to provide a Docker image that contains all the dependencies for that application. Here we explore how one might go about automating the creation of a Dockerfile that will allow us to build such an image for a {shiny} application.

There are some neat tricks in here.
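
For context, the sort of Dockerfile such tooling generates for a {shiny} app tends to look roughly like this; the base image, paths, and the use of {renv} are my assumptions, not details from the post:

```dockerfile
# Start from a Shiny Server base image with a pinned R version.
FROM rocker/shiny:4.2.1

# Restore the app's package dependencies from a lockfile (here via {renv}).
COPY renv.lock /srv/shiny-server/myapp/renv.lock
RUN R -e "install.packages('renv'); renv::restore(lockfile = '/srv/shiny-server/myapp/renv.lock', prompt = FALSE)"

# Copy the application itself; the base image's shiny-server serves /srv/shiny-server.
COPY . /srv/shiny-server/myapp

EXPOSE 3838
```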

Portworx and Kubernetes Storage Failover

Andrew Pruski digs into a problem:

In a nutshell, the issue is that the attachdetach-controller in Kubernetes won’t detach storage from an offline node until that node is either brought back online or is removed from the cluster. What this means is that a pod spinning up on a new node that requires that storage can’t come online.

Aka, if you’re running SQL Server in Kubernetes and a node fails, SQL won’t be able to come back online until someone manually brings the node online or deletes the node.

Not great tbh, and it’s been a blocker for my PoC testing.

However, there are ways around this…one of them is by a product called Portworx which I’m going to demo here.

After a short disclaimer, there’s plenty of good content.
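
To make the manual workaround Andrew mentions concrete (the node name below is a placeholder), this is the "delete the node" path:

```bash
# Find the node that has gone NotReady.
kubectl get nodes

# Removing the node object lets the attachdetach-controller release the volume,
# after which the replacement SQL Server pod on another node can mount it and start.
kubectl delete node aks-nodepool1-12345678-vmss000002   # placeholder node name
```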

Using the DAC with Dockerized SQL Server

Joey D’Antoni needed to use the Dedicated Administrator Connection:

Because as shown in the image above, the table in question is a system_table, in order to query it directly, you need to use the dedicated administrator connection (DAC) in SQL Server. The DAC is a piece of SQL Server that dedicates a CPU scheduler, and some memory for a single admin session. This isn’t designed for ordinary use–you should only use it when your server is hosed, and you are trying to kill a process, or when you need to query a system table to answer a twitter post. The DAC is on by default, with a caveat–it can only be accessed locally on the server by default. This would be connected to a server console or RDP session on Windows, or in the case of a container, by shelling into the container itself. However, Microsoft gives you the ability to turn it on for remote access (and you should, DCAC recommends this as a best practice), by using the following T-SQL.

Read on to see how, as well as what else you’d need to do to get it working.
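
The quote cuts off before the T-SQL itself, but the setting in question is presumably the "remote admin connections" server option; a sketch:

```sql
-- Presumably the option Joey refers to: allow the DAC over the network rather
-- than only from a session local to the host or container.
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;

-- Connections then use the ADMIN: prefix, e.g. sqlcmd -S ADMIN:<server>;
-- for a container, the DAC port (1434 by default) also needs to be published.
```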

Deploying SQL Server via AKS

Rajendra Gupta needs to deploy a SQL Server container:

This article uses Azure Kubernetes Service (AKS) to deploy and manage the Kubernetes cluster. It is a fully managed service that offers serverless Kubernetes with integrated CI/CD solutions, enterprise-grade security, and governance.

You can navigate to https://azure.microsoft.com/en-in/services/kubernetes-service/#overview and try Azure Kubernetes Service (AKS).

Read on for an overview of Azure Kubernetes Service and how you can get a SQL Server on Linux container running atop it.
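
As a rough outline of the moving parts, under placeholder names (the article walks through the full flow):

```bash
# Create a small AKS cluster and pull its credentials into kubectl.
az aks create --resource-group rg-sql --name aks-sql --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group rg-sql --name aks-sql

# A SQL Server on Linux deployment then typically needs a secret for the SA password,
# a PersistentVolumeClaim for the data files, and a LoadBalancer service on port 1433.
kubectl create secret generic mssql-secret \
  --from-literal=MSSQL_SA_PASSWORD='S0meth1ngStr0ng!'
```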
