In the previous blog we explored setting up a K8S cluster on AWS without using any additional software or tools. The cloud providers make it easy to create a K8S cluster in the cloud; the tough parts are securing it, fine-tuning it, upgrading it, managing access, and so on. Rancher provides centralized management of different K8S clusters, whether they run in any of the clouds (AWS, GCP, Azure) or on-prem. More on what Rancher has to offer on top of K8S here. The good thing about Rancher is that it’s 100% open source and there is no vendor lock-in: we can simply remove Rancher and interact with the K8S cluster directly.
I like this kind of tooling because it reduces cloud lock-in. For something like Kubernetes, where the whole point is orchestration of ephemeral containers, there’s a lot of benefit in being able to shift between services as needed.
When I deploy SQL Server to Kubernetes I usually create a load balanced service so that I can get an external IP to connect from my local machine to SQL running in the cluster. So how about creating a plugin that will grab that external IP and drop it into mssql-cli?
Let’s have a go at creating that now.
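The core of such a plugin is just reading the LoadBalancer’s external IP out of `kubectl` output. A minimal sketch of that step, assuming a Python helper (the service name, function names, and the wrapper around `kubectl get svc -o json` are illustrative, not part of mssql-cli):

```python
import json
import subprocess
from typing import Optional


def parse_external_ip(svc: dict) -> Optional[str]:
    """Pull the external address out of a service object as returned
    by 'kubectl get svc <name> -o json'."""
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    if not ingress:
        return None  # load balancer not provisioned yet
    # Some cloud providers report a hostname instead of an IP.
    return ingress[0].get("ip") or ingress[0].get("hostname")


def get_external_ip(service: str, namespace: str = "default") -> Optional[str]:
    """Shell out to kubectl and parse the result (service name is hypothetical)."""
    raw = subprocess.check_output(
        ["kubectl", "get", "svc", service, "-n", namespace, "-o", "json"]
    )
    return parse_external_ip(json.loads(raw))
```

With an address in hand, the plugin could hand it straight to mssql-cli as the server to connect to.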
Click through for two demos including the appropriately-named
Now that we know some basic definitions, it’s time to ask the main question: why do I care?
There are many reasons you might want to use Docker. I will give my perspective on why I started learning about it.
I had to test my Kafka producers and consumers locally, rather than deploying my code to DEV/QA before I was sure things were working, while also being confident that the same code would behave the same when deployed to other environments.
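For that kind of local testing, a single-broker Kafka in Docker Compose is usually enough. A sketch under the assumption that the Confluent community images are acceptable (image tags, ports, and service names are illustrative):

```yaml
version: "3.7"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Clients on the host connect via localhost:9092.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # Single broker, so the internal offsets topic can't replicate.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

`docker-compose up -d` then gives producers and consumers a broker on `localhost:9092` to test against.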
There are a few really good reasons for containers and testing is one of them.
A question that often crops up is “Can I use local storage?”, and the answer is “It depends.” Kubernetes is, at its most basic and fundamental level, a container scheduler. The pod is the unit of scheduling; containers in the same pod share the same life cycle and always run on the same node. For stateless pods, life is reasonably simple and straightforward; for stateful pods, it is a bit more nuanced. If a node fails for any reason, the pods that ran on that node have to be rescheduled onto a working node, and their storage needs to follow them. This involves un-mounting the volume from the failed node and then mounting it on the node the pod(s) are rescheduled to run on. With basic vanilla hyper-converged storage, i.e. storage and compute in the same chassis, this will ultimately lead to scheduling problems. However, software-defined solutions exist that turn this kind of infrastructure into a storage cluster, allowing state to follow pods around the cluster. Some people automatically associate HDFS with local storage, probably because “back in the day” the most cost-efficient way for Google to scale out its infrastructure was commodity servers with local disks.
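That trade-off shows up directly in the Kubernetes API: a `local` PersistentVolume must declare node affinity, which pins any pod that claims it to that node. A sketch of such a manifest (names, sizes, and paths are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1        # a disk physically attached to one node
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1         # pods using this PV can only run here
```

If `node-1` fails, a pod bound to this volume cannot be rescheduled elsewhere, which is exactly the scheduling problem described above.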
Read the whole thing.
Andy Leonard continues a series on Integration Services in Docker. Part 5 shows how you can deploy a package to a containerized SSIS instance:
Returning to Matt Masson’s PowerShell script – combined with the docker volume added earlier – I have a means to deploy an SSIS Project to the SSIS Catalog in the container.
An aside regarding attempting SSIS package execution from SSMS connected to an instance of SQL Server in a container (using the runas /netonly trick shared earlier): it appears to work, but doesn’t. The package execution is created but “hangs” in Pending Execution status:
Read both to learn more about Andy’s travails in getting this working.
In this article we will look at how to inform Kubernetes about pods’ resources and how we can optimize for different scenarios.
A scenario that typically comes up is a cluster with a bunch of pods where many of them are dormant, i.e. they don’t consume CPU or memory. Do we have to carve out space for them that they won’t use most of the time? The answer is no. As usual, it’s safer to provision capacity for a workload than to rely on the optimistic heuristic that not all workloads will require resources at the same time. So, we can configure Kubernetes optimistically or pessimistically.
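In Kubernetes terms, that choice lives in a pod’s resource requests and limits: the scheduler reserves the request (the pessimistic floor), while the limit is the burst ceiling, so setting requests below limits overcommits the node in favor of mostly-dormant pods. A sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mostly-dormant-app
spec:
  containers:
    - name: app
      image: example/app:latest
      resources:
        requests:        # what the scheduler actually reserves
          cpu: 100m
          memory: 128Mi
        limits:          # how far the pod may burst when active
          cpu: "1"
          memory: 512Mi
```

Setting requests equal to limits is the pessimistic configuration; a wide gap between them is the optimistic one.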
Read the whole thing.
Andy Leonard takes two shots at adding the SSIS Catalog to a Docker container. First up is the version which doesn’t work:
I have been working on getting an SSIS Catalog running in a container for a couple years.
I share this post not to discourage you.
I share it to let you know one way I failed.
I thought I had succeeded when the PowerShell in this post worked. The PowerShell works, by the way – just not in a container configured thus.
This is but one failure.
I failed more than once, I promise.
I can hear some of you thinking, “How do we accomplish this, Andy?”
I’m glad you asked.
The answer is “We modify our container.”
Disclaimer: I’m about 100% certain there’s another way to do this and about 99% sure there’s a better way. I’m going to show you what I did. Cool?
Read on to see how Andy did it.
Containers are perfectly suited for testing, allow fast deployment of a solution, and can be easily deployed to the cloud. It’s cost-effective!
Very important to understand! A container’s disk data only exists as long as the container itself does. If the container is removed, that data is gone.
So you have to configure your container environment properly to make the data persist on disk.
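The usual fix is a volume that outlives the container. A minimal sketch with a Docker named volume (the container name, volume name, and password are illustrative; the image tag is Microsoft’s public SQL Server image):

```shell
# Create a named volume and mount it at SQL Server's data directory.
docker volume create sqldata

docker run -d --name sql1 \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=Str0ng!Passw0rd" \
  -p 1433:1433 \
  -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2019-latest

# Removing the container leaves the volume (and the databases) intact:
docker rm -f sql1
docker volume ls
```

A new container started with the same `-v sqldata:/var/opt/mssql` mount picks the databases back up.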
Click through for an example.
In this post, I show my next step: executing an SSIS package in a container. Spoilers:
1. It’s more work than you think;
2. This is merely one way to do it; and
3. This is not my ultimate goal.
Read on to see how to do this.
I was reading his latest blog post Using docker named volumes to persist databases in SQL Server and decided to give it a try.
His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.
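A compose file along those lines might look like this (the service name, volume name, and password are illustrative; the essential part is the named volume declared at the bottom and mounted at SQL Server’s data path):

```yaml
version: "3.7"
services:
  sql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "Str0ng!Passw0rd"   # illustrative only
    ports:
      - "1433:1433"
    volumes:
      - sqldata:/var/opt/mssql   # databases live in the named volume
volumes:
  sqldata:                       # survives 'docker-compose down'
```

`docker-compose down` removes the container but keeps `sqldata`, so the databases reappear on the next `docker-compose up`.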
Read on for Rob’s travails, followed by great success. And never go into something named “the spooky basement;” that’s just good life advice.