Press "Enter" to skip to content

Category: Containers

Decoding Helm Secrets with a kubectl Plugin

Andrew Pruski didn’t want to type that much:

The post goes through deploying a Helm chart to Kubernetes and then running the following to decode the secrets that Helm creates in order to be able to roll back a release:

kubectl get secret sh.helm.release.v1.testchart.v1 -o jsonpath="{ .data.release }" | base64 -d | base64 -d | gunzip -c | jq '.chart.templates[].data' | tr -d '"' | base64 -d

But that’s a bit long-winded, eh? I don’t really fancy typing that every time I want to have a look at those secrets. So I’ve created a kubectl plugin that’ll do it for us!
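
For context, kubectl treats any executable on the PATH named kubectl-<something> as a plugin. A minimal sketch of the idea (an illustration, not Andrew’s actual code; the plugin name and arguments are hypothetical):

#!/bin/bash
# kubectl-helmsecret: saved on the PATH, kubectl exposes this as "kubectl helmsecret"
# Usage: kubectl helmsecret <release-name> [revision]
RELEASE=$1
REVISION=${2:-1}
kubectl get secret "sh.helm.release.v1.${RELEASE}.v${REVISION}" -o jsonpath="{ .data.release }" \
  | base64 -d | base64 -d | gunzip -c \
  | jq '.chart.templates[].data' | tr -d '"' | base64 -d

Make the file executable (chmod +x) and kubectl discovers it automatically.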

Click through to see the code, how you install the plugin, and how you use it.


Release Rollback with Helm

Andrew Pruski shows the secret of how Helm lets you roll back releases even when deployments are deleted:

If we roll back with kubectl rollout undo, the pods in the newest replicaset are deleted, and pods in an older replicaset are spun back up, rolling back the upgrade.

But there’s a potential problem here. What happens if that old replicaset is deleted?

If that happens, we wouldn’t be able to roll back the upgrade. Well, we wouldn’t be able to roll it back with kubectl rollout undo, but what happens if we’re using Helm?
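
For reference, this works because Helm stores a complete record of every revision in those sh.helm.release.v1.* secrets, so a rollback doesn’t depend on the old replicaset still existing. The commands themselves are short (the release name is illustrative):

# List the revisions Helm has recorded for a release
helm history testchart
# Roll back to a specific revision, even if the original replicaset is gone
helm rollback testchart 1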

Read on to learn how the whole thing works.


Using Docker Desktop on WSL2

Chris Taylor walks us through updating Docker Desktop for Windows to support Windows Subsystem for Linux 2:

I won’t go too much into what this is, as you can read the article in the links above, but to summarise, this will improve the experience of Docker on Windows:

– Improvements in resource consumption
– Starting up docker daemon is significantly quicker (Docker says 10s as opposed to ~1min previously)
– Avoid having to maintain both Linux and Windows build scripts
– Improvements to file system sharing and boot time
– Allows access to some cool new features for Docker Desktop users.

Some of these are improvements we’ve been crying out for over the last couple of years, so in my opinion they’re a very welcome addition.

In order to get started using WSL2, there are a couple of steps you need to run through, which I’ll try to show below with a few screenshots.
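
As a rough sketch of those steps (the distro name is illustrative; run these from an elevated PowerShell prompt on a recent Windows 10 build):

# Check your installed distros and which WSL version each one uses
wsl --list --verbose
# Make WSL 2 the default for new distros
wsl --set-default-version 2
# Convert an existing distro to WSL 2
wsl --set-version Ubuntu-18.04 2

From there, it’s a matter of enabling the WSL 2 based engine in Docker Desktop’s settings.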

Read on for the process.


Docker Compose and SQL Server

Andrew Pruski makes it easy to launch a fully-featured Docker container running SQL Server:

The solution here is to create a custom image with the volume created and permissions set.

But wouldn’t it be easier to just have to run one command to spin up a custom 2019 image, with volumes created and permissions set?

Enter Docker Compose.
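
As a hedged sketch of the idea (not Andrew’s exact files, which build a custom image with the volume permissions pre-set; the image tag and password here are illustrative):

cat <<'EOF' > docker-compose.yml
version: "3.7"
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Testing1122"
    ports:
      - "1433:1433"
    volumes:
      - sqldata:/var/opt/mssql
volumes:
  sqldata:
EOF

# One command to spin the whole thing up
docker-compose up -d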

Andrew has a GitHub repo with everything set up and includes plenty of screenshots to demonstrate.


Kubernetes, SQL Server, and Kerberos

Raul Gonzalez walks us through one problem with configuring SQL Server to run in an Availability Group over Kubernetes:

The problem with Kerberos is that it is not easy to configure and often results in the well-known Anonymous Logon error, aka the Double Hop.

That’s why you will find plenty of IIS and other applications out there using SQL logins (impersonation users), because Windows Authentication can be really frustrating and applications won’t be able to connect to SQL Server otherwise.

There are multiple resources on the internet that explain the Double-Hop issue, so that won’t be the scope of this post, but I will show how to correctly configure SPNs for SQL Server Availability Groups, which is the first link in the Kerberos chain.
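
For reference, registering SPNs for an availability group listener looks something like this (run from an elevated PowerShell prompt; the listener and account names are hypothetical):

# Register SPNs for the AG listener, with and without the port
setspn -S MSSQLSvc/aglistener.contoso.com:1433 CONTOSO\sqlsvc
setspn -S MSSQLSvc/aglistener.contoso.com CONTOSO\sqlsvc
# Verify what is registered against the service account
setspn -L CONTOSO\sqlsvc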

Kubernetes isn’t the only place where you’ll find the need to set SPNs, either.


Using Docker Volumes to Hold SQL Server Databases

John Morehouse shows how to use volumes to expose data—such as SQL Server data and log files—to a Docker container:

Over the past couple of blog posts, I have been talking about the versatility of deploying SQL Server with Docker. This combination is a great way to quickly and easily spin up local SQL Server instances. In the most recent post, I talked about a method to copy and restore a sample database into a Docker container that is running SQL Server. In this post, I am going to talk about an easier way to accomplish this by attaching a persistent volume to the container. With this method you don’t have to copy any files into the container, and it makes the overall process easier and repeatable.

First, before we get into the code, let’s talk about what a volume is. Essentially, a volume is a location on the host machine that can be referenced by the container. I think of this as a shared folder that the container can see. Once the volume is attached, the container can read from or write to it. You can easily declare the volume when you create the container with a simple switch in the command.
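
The switch in question is -v; a sketch, with illustrative paths and password (John’s post has the real commands):

# Bind-mount a host folder into the container as SQL Server's data directory
docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Testing1122" \
  -p 1433:1433 \
  -v /Users/me/docker/sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest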

John’s examples are on a Mac, but the concepts are essentially the same for Windows or Linux.


Kubernetes on Virtualized Hardware

Chris Adkin gives us the pros and cons of running Kubernetes on virtual hardware:

A full discussion on Kubernetes security is beyond the scope of this blog post. However, the MITRE ATT&CK framework provides a comprehensive matrix of security attack patterns. Microsoft have produced a similar style of matrix to cover Kubernetes in this blog. As per the blog, resource hijacking and lateral movement have ramifications for multi-tenant platforms and Kubernetes application delivery techniques such as GitOps, where you may have one Kubernetes cluster per code branch. Putting nodes in their own virtual machines provides an extra layer of defense that can reduce the impact of pods that might become malicious as the result of an attack. VMware vSphere 7.0 (more on this later) takes this concept further by running each pod in its own lightweight virtual machine.

Click through for a breakdown of each side’s arguments.


Using Specific R Package Versions in Docker Images

Roman Lustrik shares how to fix package versions in Docker images:

Using packages in R is easy. You install from CRAN using install.packages("packagename"), it resolves dependencies, and you’re good to go. What R natively doesn’t handle so well is installing a particular package version without jumping through hoops. Technically you need the source file of the package version you want to install AND all source files of the dependencies (in the correct versions, of course). This has been made almost seamless with the packages packrat and, recently, renv.

This comes in handy when you are constructing a Dockerfile to run in production. Usually you want to run this defensively and do not want things to change from one image build to another. To get there, you can save all your package names and versions into a file (renv.lock) and use that to reconstruct the now-defined package structure with predictable versions (see the renv vignette here).
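
A hedged sketch of how that lockfile slots into an image build (the base image, versions, and script name are illustrative):

# Illustrative Dockerfile: restore the exact package versions recorded in renv.lock
FROM rocker/r-ver:4.0.2
RUN R -e "install.packages('renv', repos = 'https://cloud.r-project.org')"
WORKDIR /app
COPY renv.lock renv.lock
RUN R -e "renv::restore()"
COPY . .
CMD ["Rscript", "main.R"]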

This is quite useful, as R package developers tend not to prize backwards compatibility, and one of the key benefits of containers is having the option to keep the same code base and configuration in all environments.


Creating a New Container from a SQL Server on Windows Dockerfile

Jamie Wick continues a series on SQL Server and Windows containers:

The docker build command sends the contents of the working directory, along with a dockerfile, to the Docker daemon as a build context, to create the new image. A dockerfile is a plain text file that contains the name of a (base) image, along with a set of instructions for modifying the image. By default, the dockerfile is assumed to be in the root of the working directory, but a separate location can be specified using the -f parameter in the build command. Additionally, the -t parameter can be used to specify a repository and tag for the new image. Finally, the working directory can be specified using a path or URL. In the example below, the current directory (.) is being used as the working directory (the docker build command is being run at the root level of the working directory).
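
To make those parameters concrete (the repository, tag, and paths are illustrative):

# Build using the dockerfile in the root of the current working directory (.)
docker build -t myrepo/sqlserver2019:v1 .
# Specify a dockerfile stored elsewhere with -f, and a repository/tag with -t
docker build -f C:\docker\sql2019.dockerfile -t myrepo/sqlserver2019:v1 .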

Read on for examples.


OpenShift and SQL Server Big Data Clusters

Chris Adkin explains why support for OpenShift is important for SQL Server Big Data Clusters:

One thing that should become immediately apparent when installing and administering an OpenShift cluster is that it is a lot more prescriptive and opinionated than vanilla Kubernetes. The simple reason for this is that OpenShift is intended to be deployed to environments that require enterprise-grade levels of hardening and security. For example, Red Hat mandates the operating system distributions you must use, to the extent that, when deploying a cluster on VMware, Red Hat’s documentation recommends the use of OVAs: compressed files containing installable virtual machines.

Read on for the full story.
