Press "Enter" to skip to content

Category: Containers

Running Dask on AKS

Tsuyoshi Matsuzaki sets up Dask as a distributed service:

In my last post, I showed you a tutorial for running Apache Spark on managed Kubernetes, Azure Kubernetes Service (AKS).
In this post, I’ll show you a tutorial for running distributed Dask workloads on AKS.

By using Dask, you can run scikit-learn-compliant functions and jobs on data which cannot fit in memory, or run them in a distributed manner. For simplicity, I’ll use a built-in Dask ML function (dask_ml.linear_model.LinearRegression) in this tutorial. (In the same manner, you can also run regular sklearn functions.)
Cloud-managed Kubernetes will help you speed up these large ML workloads.
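
To give a rough sense of what that looks like in code, here is a minimal sketch of fitting the Dask ML estimator against a remote cluster. This is not Tsuyoshi's tutorial code, and the scheduler address is a placeholder for whatever the Dask service on AKS exposes.

```python
# Minimal sketch: fit a Dask-ML linear regression against a remote Dask cluster.
# The scheduler address below is a placeholder for the service exposed on AKS.
from dask.distributed import Client
import dask.array as da
from dask_ml.linear_model import LinearRegression

client = Client("tcp://dask-scheduler:8786")  # connect to the remote scheduler

# Build a chunked dataset; each 10,000-row chunk is scheduled across the
# workers, so the full array never needs to fit on one machine.
X = da.random.random((1_000_000, 20), chunks=(10_000, 20))
y = da.random.random((1_000_000,), chunks=(10_000,))

model = LinearRegression()   # scikit-learn-compatible estimator from dask_ml
model.fit(X, y)              # training runs across the cluster's workers
print(model.coef_)
```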

Click through for the process. I’ve had some positive experiences with Dask for distributed data processing. It’s definitely one of the better options if you’re big into Python.

Comments closed

Running SQL Server Containers from Scratch

Andrew Pruski tells us there is no spoon:

I’ve been interested (obsessed?) with running SQL Server in containers for a while now, ever since I saw how quick and easy it was to spin one up. That interest has led me down some rabbit holes for the last few years as I’ve been digging into exactly how containers work.

The weirdest concept I had to get my head around was that containers aren’t actually a thing.

Containers are just processes running on a host that implement a set of Linux constructs in order to achieve isolation.

So if we know what constructs are used…shouldn’t we be able to build our own container from scratch?

Read on as Andrew breaks out the three necessary constructs and dives into it.
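
To make the "containers are just processes" idea concrete, here is a minimal sketch of the namespace-plus-chroot trick in Python. It is an illustration of the concept rather than Andrew's code, and it assumes Linux, root privileges, and a root filesystem already unpacked at /tmp/rootfs.

```python
# Sketch of the isolation a container runtime provides: new namespaces and
# a private filesystem root. Cgroup resource limits are omitted here.
import ctypes
import os

CLONE_NEWUTS = 0x04000000  # own hostname
CLONE_NEWPID = 0x20000000  # own PID numbering
CLONE_NEWNS  = 0x00020000  # own mount table

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def run_container(rootfs="/tmp/rootfs"):
    # Detach from the host's UTS, PID, and mount namespaces.
    if libc.unshare(CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (needs root on Linux)")

    if os.fork() == 0:                      # child becomes PID 1 in the new PID namespace
        libc.sethostname(b"container", 9)   # hostname visible only inside
        os.chroot(rootfs)                   # private view of the filesystem
        os.chdir("/")
        os.execv("/bin/sh", ["/bin/sh"])    # the "container" is just this process
    else:
        os.wait()

if __name__ == "__main__":
    run_container()
```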

Comments closed

Containerizing a Shiny App

Peter Solymos takes us through the process of running a Shiny app in a Docker container:

All the general advantages of containerized applications apply to Shiny apps. Docker provides isolation to applications. Images are immutable: once built, an image cannot be changed, and if the app is working, it will work the same way in the future. Another important consideration is scaling. Shiny apps are single-threaded, but running multiple instances of the same image can serve many users at the same time. Let’s dive into the details of how to achieve this.

Click through for a walkthrough. Containerizing these sorts of apps has been a boon for my team, as it lets us spin up appropriately-sized servers on the cheap. H/T R-Bloggers
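
The scaling point is easy to demonstrate: because a Shiny app is single-threaded, you serve more users by running more copies of the image. Here is a hedged sketch with the Docker SDK for Python, assuming an image called shiny-app has already been built from a Dockerfile like the one in Peter's post and listens on Shiny's default port 3838.

```python
# Launch several instances of the same immutable Shiny image, each mapped to
# a different host port; a reverse proxy would balance users across them.
import docker

client = docker.from_env()

for i in range(3):
    client.containers.run(
        "shiny-app",                    # hypothetical image name
        name=f"shiny-app-{i}",
        detach=True,
        ports={"3838/tcp": 3838 + i},   # host ports 3838, 3839, 3840
    )

for c in client.containers.list(filters={"name": "shiny-app"}):
    print(c.name, c.status)
```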

Comments closed

Aligning Kubernetes Nodes to Physical Infrastructure

Frank Denneman has some advice for us:

With the new VM service and the customizable VM classes, you can help the developer align their nodes to the infrastructure. Infrastructure details are not always visible at the Kubernetes layers, and maybe not all developers are keen to learn about the intricacies of your environment. The VM service allows you to publish only the VM classes you see fit for that particular application project. One of the reasons could be the avoidance of monster-VM deployments. Before this update, developers could have deployed a six-worker-node Kubernetes cluster using the guaranteed 8XLarge class (each worker node equipped with 32 vCPUs and 128Gi, all reserved), granted that your host config is sufficient. But the restriction is only one angle to this situation. Long-lived relationships are typically symbiotic in nature, and power plays don’t help build relationships between developers and the InfraOps team. What would be better is to align the classes with the NUMA configuration of the ESXi hosts within the cluster.

Click through for more detail. This is aimed particularly at operations people running Kubernetes clusters over VMware.
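
The NUMA advice comes down to arithmetic: keep the worker-node VM class small enough to fit inside a single NUMA node of the host. A back-of-the-envelope sketch, with made-up host figures:

```python
# Toy calculation: the largest VM class that still fits in one NUMA node.
# Host figures below are hypothetical; substitute your ESXi host's real specs.
host_sockets = 2
cores_per_socket = 24
memory_gib = 512
hypervisor_overhead_gib = 32      # rough allowance for ESXi overhead

numa_node_cores = cores_per_socket                              # one socket ~= one NUMA node
numa_node_mem = (memory_gib - hypervisor_overhead_gib) / host_sockets

print(f"Max NUMA-aligned class: {numa_node_cores} vCPUs, {numa_node_mem:.0f} GiB")
# With these numbers: 24 vCPUs and 240 GiB. A guaranteed 8XLarge worker
# (32 vCPUs, 128Gi reserved) would span NUMA nodes on this host.
```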

Comments closed

Kubernetes Alone is Insufficient

Chris Adkin does some explaining:

Someone I know had worked at an organization that needed to scale out their OpenShift clusters/footprint. They were constrained by the speed of their procurement department and were wondering if they could get by with vanilla Kubernetes. Following on from this, I posted a thread on Twitter as to why Kubernetes on its own is not enough; much to my pleasant surprise, it generated a lot of interest, so I wanted to do this subject justice in the form of a blog post.

Read on for the full argument as well as some objection-handling.

Comments closed

Installing Kubernetes on EC2

Praveen Sripati eschews EKS:

There are tons of ways of setting up K8S on AWS. Today we will see one of the easiest ways to get started with K8S on AWS. The good thing is that we will be using the t2.micro instance type, which falls under the AWS free tier. This configuration is good enough to get started with K8S, but not for a production setup. It’s assumed that the reader is familiar with the basic concepts of AWS.

Click through for the process.
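
If you'd rather script the free-tier instances than click through the console, a minimal boto3 sketch looks roughly like this. The AMI ID, key pair, and security group are placeholders.

```python
# Launch a handful of t2.micro instances (AWS free tier) to use as K8S nodes.
# The AMI ID, key pair, and security group below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",     # an Ubuntu AMI in your region
    InstanceType="t2.micro",
    KeyName="my-key-pair",
    SecurityGroupIds=["sg-xxxxxxxx"],
    MinCount=3,                          # one control plane + two workers
    MaxCount=3,
)

for instance in response["Instances"]:
    print(instance["InstanceId"], instance["PrivateIpAddress"])
```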

Comments closed

Running a Docker Container as a WSL2 Distribution

Andrew Pruski has a wacky idea that just might work:

I’ve been playing around a bit with WSL2 and noticed that you can import TAR files into it to create your own custom distributions.

This means that we can export docker containers and run them as WSL distros!

So, let’s build a custom SQL Server 2019 docker image, run a container, and then import that container into WSL2…so that we have a custom distro running SQL Server 2019.

Read on to see how.
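
At its core the trick is two commands, docker export and wsl --import. Here is a rough Python wrapper around them, with container, distro, and path names made up for illustration.

```python
# Export a running SQL Server container's filesystem to a TAR file, then
# import that TAR as a new WSL2 distribution. Names and paths are examples.
import subprocess

container = "sqlcontainer1"
tar_path = r"C:\temp\sqlcontainer1.tar"
distro_name = "sqlserver2019"
install_dir = r"C:\wsl\sqlserver2019"

# Flatten the container's filesystem into a TAR archive.
subprocess.run(["docker", "export", container, "-o", tar_path], check=True)

# Register the TAR as a WSL distribution.
subprocess.run(["wsl", "--import", distro_name, install_dir, tar_path], check=True)

# Start a shell in the new distro.
subprocess.run(["wsl", "-d", distro_name])
```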

Comments closed

Deploying an Azure Arc Enabled Data Services Controller

Chris Adkin continues a series:

If you have been following this series, you should have:

– a basic understanding of Terraform
– a Kubernetes cluster that you can connect to using kubectl
– a basic understanding of Kubernetes services
– a working MetalLB load balancer
– a basic understanding of how storage works in the world of Kubernetes
– a Kubernetes storage solution in the form of PX-Store; alternatively, you can use any solution (for the purposes of this series) which supports persistent volumes, though to use the backup solution in part 9 of the series you will need something that supports CSI

From here, Chris explains the importance of the data controller and then deploys one.
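
On that last CSI point, one quick sanity check before continuing is to list the cluster's storage classes and their provisioners. Here is a small sketch with the official Kubernetes Python client; the "csi" substring test is only a heuristic and is not part of Chris's series.

```python
# List storage classes and flag provisioners that look CSI-based.
# The "csi" substring check is only a heuristic.
from kubernetes import client, config

config.load_kube_config()                      # uses your current kubectl context

for sc in client.StorageV1Api().list_storage_class().items:
    looks_csi = "csi" in sc.provisioner.lower()
    print(f"{sc.metadata.name:30} {sc.provisioner:40} CSI-like: {looks_csi}")
```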

Comments closed

Deploying a Storage Solution to a Kubernetes Cluster

Chris Adkin continues a series:

Before we dive into deploying a storage solution to our Kubernetes cluster, we need to understand the basics of storage in the world of Kubernetes, which can appear to be both exotic and mysterious to the uninitiated. To dispel some confusion around Kubernetes and storage: the storage IO path is exactly the same as that of garden-variety Unix or Linux. The Kubernetes storage ecosystem introduces two extra things we need to concern ourselves with above and beyond conventional Unix/Linux storage. Firstly, there are some extra layers of abstraction between the physical storage and the filesystems that pods use, what I like to refer to as . . .

Read the whole thing. And that was a particularly mean cut-off point on my part, if I do say so.
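
Those extra layers usually come down to a StorageClass, a PersistentVolumeClaim, and the volume a pod mounts. As a hedged illustration of the middle layer, here is a PVC created with the Kubernetes Python client; the storage class name is a placeholder rather than anything tied to Chris's PX-Store setup.

```python
# Request 5Gi of storage from a (placeholder) storage class; the cluster's
# provisioner turns this claim into a PersistentVolume that a pod can mount.
from kubernetes import client, config

config.load_kube_config()

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-pvc"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "example-storage-class",   # placeholder name
        "resources": {"requests": {"storage": "5Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc_manifest
)
```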

Comments closed

Deploying Azure Data Services via Terraform

Chris Adkin has two additional parts of a series. Part 3 shows us how to deploy a virtual machine on VMware:

To do this you require an Ubuntu virtual machine; I’ve tested this with Ubuntu 18.04 LTS and will get around to testing it with Ubuntu 20.10 at some stage. If, for example, the virtual machine was created with a user called azuser, the deployment server should also have an azuser account under which all Terraform commands are executed.

Part 4 takes those VMs and sets up a Kubernetes cluster across them:

Whatever you do when deploying a Kubernetes cluster, somewhere along the line you have to use kubeadm. There is a wealth of material available in blog posts and on the internet in general in which people roll their own scripts using kubeadm. I often suspect that many of these efforts are the result of Kelsey Hightower’s Kubernetes the Hard Way. In this post we are emphatically going to do things the easy way, […]
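
For context on what those roll-your-own kubeadm scripts tend to boil down to (and emphatically not what Chris does here), the skeleton is roughly: initialize a control plane, then print the join command for the workers. A hedged sketch:

```python
# Skeleton of a roll-your-own kubeadm bootstrap: init the control plane,
# then generate the join command the worker nodes would run.
import subprocess

# Initialize the control plane (run on the first node, as root).
# The pod network CIDR here matches the common Flannel default.
subprocess.run(
    ["kubeadm", "init", "--pod-network-cidr=10.244.0.0/16"],
    check=True,
)

# Produce the "kubeadm join ..." command to run on each worker node.
join_cmd = subprocess.run(
    ["kubeadm", "token", "create", "--print-join-command"],
    check=True, capture_output=True, text=True,
).stdout.strip()

print("Run this on each worker:", join_cmd)
```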

And now we’re caught up on the series…for the moment, at least.

Comments closed