Press "Enter" to skip to content

Category: Containers

Containerizing a Shiny App

Peter Solymos takes us through the process of running a Shiny app in a Docker container:

All the general advantages of containerized applications apply to Shiny apps. Docker provides isolation to applications. Images are immutable: once built, an image cannot be changed, and if the app works today, it will work the same way in the future. Another important consideration is scaling. Shiny apps are single-threaded, but running multiple instances of the same image can serve many users at the same time. Let’s dive into the details of how to achieve this.

Click through for a walkthrough. Containerizing these sorts of apps has been a boon for my team, as it lets us spin up appropriately-sized servers on the cheap. H/T R-Bloggers
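
As a minimal sketch of the idea, assuming the off-the-shelf rocker/shiny image (the post builds its own image; paths and container names here are placeholders):

    # Shiny Server in this image listens on port 3838 and serves apps
    # from /srv/shiny-server:
    docker run -d --name shiny1 -p 3838:3838 \
      -v /path/to/app:/srv/shiny-server/app:ro rocker/shiny
    # Shiny is single-threaded, so scale by running further instances of the
    # same immutable image on other host ports, behind a load balancer:
    docker run -d --name shiny2 -p 3839:3838 \
      -v /path/to/app:/srv/shiny-server/app:ro rocker/shiny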

Aligning Kubernetes Nodes to Physical Infrastructure

Frank Denneman has some advice for us:

With the new VM service and the customizable VM classes, you can help developers align their nodes to the infrastructure. Infrastructure details are not always visible at the Kubernetes layers, and maybe not all developers are keen to learn about the intricacies of your environment. The VM service allows you to publish only the VM classes you see fit for that particular application project. One of the reasons could be the avoidance of monster-VM deployment. Before this update, developers could have deployed a six-worker-node Kubernetes cluster using the guaranteed 8XLarge class (each worker node equipped with 32 vCPUs and 128Gi, all reserved), granted your host config is sufficient. But the restriction is only one angle to this situation. Long-lived relationships are typically symbiotic in nature, and power plays typically don’t help build relationships between developers and the InfraOps team. What would be better is to align the VM classes with the NUMA configuration of the ESXi hosts within the cluster.

Click through for more detail. This is aimed particularly at operations people running Kubernetes clusters over VMware.
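
To make the alignment concrete, a hedged sketch with made-up numbers: on a host with two NUMA nodes of 16 cores and 128 GiB each, you would publish only VM classes that fit within a single NUMA node.

    # Hypothetical host: 2 NUMA nodes, each 16 cores / 128 GiB. Publish only VM
    # classes of at most 16 vCPU / 128Gi so no worker node spans a NUMA boundary.
    # In vSphere with Tanzu, developers can list what has been published to
    # their namespace with:
    kubectl get virtualmachineclasses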

Kubernetes Alone is Insufficient

Chris Adkin does some explaining:

Someone I know had worked at an organization that needed to scale out their OpenShift clusters/footprint. They were constrained by the speed of their procurement department and were wondering if they could get by with vanilla Kubernetes. Following on from this, I posted a thread on Twitter as to why Kubernetes on its own is not enough. Much to my pleasant surprise, it generated a lot of interest, and as such I wanted to do this subject justice in the form of a blog post.

Read on for the full argument as well as some objection-handling.

Installing Kubernetes on EC2

Praveen Sripati eschews EKS:

There are tons of ways of setting up K8S on AWS. Today we will see one of the easiest ways to get started with K8S on AWS. The good thing is that we would be using the t2.micro instance type, which falls under the AWS free tier. This configuration is good enough to get started with K8S, but not for a production setup. This assumes that the reader is familiar with the basic concepts of AWS.

Click through for the process.
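
The post has the full process; as a rough, hedged sketch of the usual kubeadm flow (a t2.micro falls below kubeadm’s default two-CPU/two-GiB preflight checks, hence the ignore flag):

    # On the control-plane instance:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
      --ignore-preflight-errors=NumCPU,Mem
    # Point kubectl at the new cluster:
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Install a pod network add-on (flannel shown here as one common choice):
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml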

Running a Docker Container as a WSL2 Distribution

Andrew Pruski has a wacky idea that just might work:

I’ve been playing around a bit with WSL2 and noticed that you can import TAR files into it to create your own custom distributions.

This means that we can export docker containers and run them as WSL distros!

So, let’s build a custom SQL Server 2019 docker image, run a container, and then import that container into WSL2…so that we have a custom distro running SQL Server 2019.

Read on to see how.
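
The core of the trick is only a couple of commands; a hedged sketch, with container, distro, and path names as placeholders:

    # Export the container's filesystem as a tarball:
    docker export sqlcontainer -o C:\wsl\sql2019.tar
    # Import the tarball as a new WSL2 distribution:
    wsl --import sql2019 C:\wsl\sql2019 C:\wsl\sql2019.tar --version 2
    # Open a shell in the new distro:
    wsl -d sql2019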

Deploying an Azure Arc Enabled Data Services Controller

Chris Adkin continues a series:

If you have been following this series, you should have:

– a basic understanding of Terraform
– a Kubernetes cluster that you can connect to using kubectl
– a basic understanding of Kubernetes services
– a working MetalLB load balancer
– a basic understanding of how storage works in the world of Kubernetes
– a Kubernetes storage solution in the form of PX Store; alternatively, for the purposes of this series, you can use any solution which supports persistent volumes, though to use the backup solution in part 9 of the series you will need something that supports CSI

From here, Chris explains the importance of the data controller and then deploys one.
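
For a flavor of what the deployment step looks like, an indicative sketch only: the series predates the move from the azdata CLI to az arcdata, and every value below is a placeholder rather than Chris’s configuration.

    az arcdata dc create \
      --name arc-dc --k8s-namespace arc \
      --connectivity-mode indirect \
      --profile-name azure-arc-kubeadm \
      --subscription <subscription-id> --resource-group <resource-group> \
      --location eastus --use-k8s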

Deploying a Storage Solution to a Kubernetes Cluster

Chris Adkin continues a series:

Before we dive into deploying a storage solution to our Kubernetes cluster, we need to understand the basics of storage in the world of Kubernetes, which can appear to be both exotic and mysterious to the uninitiated. To dispel some confusion around Kubernetes and storage: the storage IO path is exactly the same as that of common garden-variety Unix or Linux. The Kubernetes storage ecosystem introduces two extra things we need to concern ourselves with above and beyond conventional Unix/Linux storage. Firstly, there are some extra layers of abstraction between the physical storage and the filesystems that pods use, what I like to refer to as . . .

Read the whole thing. And that was a particularly mean cut-off point on my part, if I do say so.
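
In stock Kubernetes terms, those extra layers are PersistentVolumes, which pods claim through PersistentVolumeClaims against a StorageClass. A minimal, hedged example (the storage class name is a placeholder; in Chris’s series it would come from Portworx):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: px-storage
      resources:
        requests:
          storage: 10Gi
    EOF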

Deploying Azure Data Services via Terraform

Chris Adkin has two additional parts of a series. Part 3 shows us how to deploy a virtual machine on VMware:

To do this you require an Ubuntu virtual machine. I’ve tested this with Ubuntu 18.04 LTS and I will get around to testing it with Ubuntu 20.10 at some stage. If, for example, the virtual machine was created with a user called azuser, the deployment server should also have an azuser account under which all Terraform commands are executed.

Part 4 takes those VMs and sets up a Kubernetes cluster across them:

Whatever you do when deploying a Kubernetes cluster, somewhere along the line you have to use kubeadm. There is a wealth of material available in blog posts and on the internet in general in which people roll their own scripts using kubeadm. I often suspect that many of these efforts are the result of Kelsey Hightower’s Kubernetes the Hard Way. In this post we are emphatically going to do things the easy way, […]
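
However the cluster is stood up, each worker node ultimately runs a join command along these lines (a hedged sketch; kubeadm init prints the real endpoint, token, and CA hash):

    sudo kubeadm join 10.0.0.10:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>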

And now we’re caught up on the series…for the moment, at least.

Deploying Azure Data Services via Terraform

Chris Adkin has started a series on deploying Azure Arc enabled Data Services. Part 1 serves as an introduction:

One of the most significant things to change the landscape for Azure data professionals will be the general release of Azure Arc enabled Data Services. To provide an expedient means of experiencing all that Azure Arc has to offer, Microsoft has come up with Jumpstart – a collection of GitHub repos for deploying Arc in different scenarios. Last Christmas I had a few vacation days and took the opportunity to try out Jumpstart for Azure Arc enabled data services on AWS. AWS was my choice because it made a certain amount of sense to try out Azure Managed SQL Server instances and Postgres Hyperscale on a cloud that they are not natively available on. After all, the whole point of Azure Arc enabled Data Services is to bring Azure to you, on your terms, if for any reason you cannot use the Azure cloud.

Part 2 gives us an introduction to Terraform:

Before diving into what the various Terraform modules that make up the Arc-PX-VMware-Faststart repo do, I’m going to provide an introduction to Terraform in this blog post. Terraform comes from HashiCorp; it is a tool that works on the principle of infrastructure-as-code. Resources are specified in what are called configuration files using HashiCorp Configuration Language in a declarative manner, i.e. you state what you want and, to the best of its ability, Terraform attempts to create those resources for you. ‘Providers’ are used to create resources for particular types of entity; for example, you might use the local file, Helm (the Kubernetes package manager), Azure, or VMware providers. Using providers requires plugins, most of which are provided by HashiCorp, but third parties can write their own plugins as well.

Check out the first two posts in what promises to be an interesting series.
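
The declarative loop described above boils down to the standard three-command workflow, shown here as a minimal sketch:

    terraform init    # download the provider plugins the configuration declares
    terraform plan    # show what would be created, changed, or destroyed
    terraform apply   # create the declared resources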

A Mini-Lab: SQL Server 2019 on Docker

Ayman El-Ghazali has a three-parter for us. Part one involves installing Docker Desktop on Windows and creating a SQL Server container:

Recently, with the help of a colleague at work, I’ve started to dabble a little with containers. I had a customer that requested some specific code to be tested, and I realized that I didn’t have my own local instance of SQL Server running (always good to have a local one). I decided to try to make this process easier instead of going the traditional route of creating a virtual machine, and also to help me learn a new technology. In this series of posts, I’m going to document my process of creating a Mini Data Lab for SQL Server on my desktop using Docker. It is intended for beginners and is in no way an article on best practices or production deployments.
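
That first container boils down to a single docker run; a hedged sketch, with the password as a placeholder:

    docker run -d --name sql1 \
      -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
      -p 1433:1433 \
      mcr.microsoft.com/mssql/server:2019-latest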

Part two includes persistent storage and some of the other niceties of hosting a database in a container:

Let’s first take a look at the way I have my disk/folder structure laid out. Again, this is on my personal computer so it’s not a best practice for production and more suitable for development environments.

For each container, I’m creating a separate folder with the MSSQL paths that I need to put my databases, transaction log, and backup files on. Additionally, under the DockerMount folder I have a folder called sqldockershared (which I will put some shared content in later).
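
Mapping that folder layout into the container looks roughly like this (a sketch; the host paths follow the description above but are placeholders):

    docker run -d --name sql1 \
      -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
      -p 1433:1433 \
      -v C:\DockerMount\sql1\data:/var/opt/mssql/data \
      -v C:\DockerMount\sql1\log:/var/opt/mssql/log \
      -v C:\DockerMount\sql1\backup:/var/opt/mssql/backup \
      -v C:\DockerMount\sqldockershared:/sqlshare \
      mcr.microsoft.com/mssql/server:2019-latest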

Part three is about configuration in existing containers and deploying a second container side-by-side:

For those who are curious about changing other instance-level properties, here is a list of configurable properties for SQL Server on Linux via Bash:
https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-configure-mssql-conf?view=sql-server-ver15

Now we have a great foundation to create another container, so let us go and do that now. The code is similar to the previously created container, with the exception of the file paths for the data, log, and backup files and the port number. The SQLShare path will be the same so that we can run our initialization script from there.
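
A hedged sketch of both steps, changing an instance-level setting with mssql-conf and then standing up the second container on a different host port (values are placeholders):

    # mssql-conf must run as root inside the SQL Server 2019 container:
    docker exec -u root sql1 /opt/mssql/bin/mssql-conf set memory.memorylimitmb 4096
    docker restart sql1
    # Second container: new name, new host port, new data folder,
    # same shared folder:
    docker run -d --name sql2 \
      -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
      -p 1434:1433 \
      -v C:\DockerMount\sql2\data:/var/opt/mssql/data \
      -v C:\DockerMount\sqldockershared:/sqlshare \
      mcr.microsoft.com/mssql/server:2019-latest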

I’m bought-in on containers. There are still some pains around containers for production databases, but “some pain” is a much better experience than a few years ago, when the answer to the question of whether you want to use containers in production for databases was “Are you mad?”
