The steps it performs are:
- Installs the Docker Community Edition
- Installs the SQL Server command line tools
- Pulls the latest SQL Server on Linux image from the Docker Hub
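On Ubuntu, those three steps might look something like the sketch below. The repository setup, package names, and image tag are assumptions rather than the script's exact contents, so check the official Docker and Microsoft documentation for your distribution:

```shell
# Sketch of the three steps on Ubuntu (assumes the Microsoft
# package repository has already been registered for mssql-tools)
curl -fsSL https://get.docker.com | sh                            # 1. Docker Community Edition
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y mssql-tools unixodbc-dev    # 2. sqlcmd/bcp command line tools
sudo docker pull microsoft/mssql-server-linux:latest              # 3. latest SQL Server on Linux image
```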
Read on for more details and some limitations.
One of the problems that I’ve encountered since moving my Dev/QA departments to using SQL Server within containers is that the host machine is now a single point of failure.
Now there’s a whole bunch of posts I could write about this but the one point I want to bring up now is…having to start up all the containers after patching the host.
I know, I know…technically I shouldn’t bother. After patching and bouncing the host, the containers should all be blown away and new ones deployed. But this is the real world and sometimes people want to retain the container(s) that they’re working with.
It’s pretty easy to set up a restart policy, as Andrew shows.
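For reference, a restart policy can be applied either when the container is created or retrofitted onto one that's already running. The flags below are standard Docker CLI; the container name and sa password are placeholders:

```shell
# Set the policy at creation time...
docker run -d -p 1433:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing11@@' \
  --restart unless-stopped --name sqlcontainer1 microsoft/mssql-server-linux
# ...or add it to an existing container without recreating it
docker update --restart unless-stopped sqlcontainer1
```

With `unless-stopped`, the container comes back automatically after the host is patched and bounced, but stays down if someone deliberately stopped it.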
The databases that I store in my container image are updated on a weekly basis and, currently, the process to update our containers is manual. Once the updated image has been created, the existing running containers are dropped and new ones created from the updated image.
But what if we could automatically refresh our containers with the updated image? If we could do that then the only process that’s manual is updating the image. We would no longer have to worry about any containers running SQL instances with databases that are out of date.
Luckily, there’s a way to do this and it’s accessible via an image on the Docker Hub called Watchtower. What Watchtower does is monitor the Docker Hub and if there’s an update to an image it will automatically refresh all running containers that are on the same host.
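A minimal Watchtower deployment looks something like the following. The image name and polling interval are assumptions, so check the Watchtower page on the Docker Hub for current details:

```shell
# Watchtower needs the Docker socket mounted so it can pull
# updated images and recreate the running containers on this host
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower --interval 300   # check for image updates every 5 minutes
```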
Read on for a step-by-step solution.
This post is a step-by-step guide to getting Linux containers running on your Windows 10 machine. The first thing to do is install the Docker Engine.
Installing Docker on Windows 10 is different from installing on Windows Server 2016; you’ll need to grab the Community Edition installer from the Docker Store.
Once installed, you’ll then need to switch the engine from Windows Containers to Linux Containers by right-clicking the Docker icon in the taskbar and selecting “Switch to Linux Containers…”
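Once the switch completes, you can confirm which backend the engine is now serving with a quick check:

```shell
docker version --format '{{.Server.Os}}'   # prints "linux" after the switch
```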
Andrew walks us through step by step, so check it out.
As SQL Server people we’re only going to be interested in one application but that doesn’t mean we can’t use compose to our advantage.
What I’m going to do is go through the steps to spin up 5 containers running SQL Server, all listening on different ports with different sa passwords.
Bit of prep before we run any commands. I’m going to create a couple of folders on my C:\ drive that’ll hold the compose and dockerfiles:
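To give a flavour of where this is heading, a docker-compose.yml for this kind of setup has one service per container, each with its own host port and sa password. Only two of the five services are sketched below, and the ports and passwords are placeholders:

```yaml
version: '3'
services:
  sql1:
    build: .            # builds from the dockerfile in this folder
    ports:
      - "15789:1433"    # unique host port per container
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Testing1122"
  sql2:
    build: .
    ports:
      - "15790:1433"
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Testing3344"
```

A single `docker-compose up -d` then brings the whole set of containers online in one go.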
Andrew also explains a couple of common errors as he walks us through the process.
Last week in Part Two I went through how to create named volumes and map them to containers in order to persist data.
However, there is also another option available in the form of data volume containers.
The idea here is that we create a volume in a container and then mount that volume into another container. Confused? Yeah, me too, but it’s best to learn by doing, so let’s run through how to use these things now.
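The pattern itself boils down to two commands; here's a sketch with placeholder names, port, and password:

```shell
# Create a container whose only job is to own the volume...
docker create -v /var/opt/mssql --name datastore microsoft/mssql-server-linux
# ...then mount that container's volume into a working container
docker run -d -p 15777:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing11@@' \
  --volumes-from datastore --name sqlcontainer microsoft/mssql-server-linux
```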
Read through for the step-by-step description.
Awesome stuff! We’ve got a database that was created in one container successfully attached into another.
So at this point you may be wondering what the advantage is of doing this over mounting folders from the host? Well, to be honest, I really can’t see what the advantages are.
The volume is completely contained within the docker ecosystem so if anything happens to the docker install, we’ve lost the data. OK, OK, I know it’s in C:\ProgramData\docker\volumes\ on the host but still I’d prefer to have more control over its location.
It’s worth reading the whole thing, even though this isn’t the best way to keep data long-term. It’s important to know about this strategy even if only to keep it from accidentally ruining your day later.
Andrew Pruski has started a series on persisting data in Docker containers. He starts off the series with an easy method of keeping data around after you delete the container:
Normally when I work with SQL instances within containers I treat them as throw-away objects. Any modifications that I make to the databases within will be lost when I drop the container.
However, what if I want to persist the data that I have in my containers? Well, there are options to do just that. One method is to mount a directory from the host into a container.
Full documentation can be found here, but I’ll run through an example step by step.
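The shape of the command is worth seeing up front. The paths, port, and names below are examples only (and on Docker for Windows, the host drive has to be shared in the Docker settings before `-v` will work):

```shell
# Map a host folder onto the SQL Server data directory in the container,
# so databases survive the container being dropped
docker run -d -p 15766:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing11@@' \
  -v C:/docker/sqlserver:/var/opt/mssql \
  --name sqlpersist microsoft/mssql-server-linux
```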
Statefulness has been a tough nut to crack for containers. I’m interested in seeing what Andrew comes up with.
There’s a switch that you can use when starting up the docker service that will allow you to specify the container/image backend. That switch is -g.
Now, I’ve gone the route of not altering the existing service but creating a new one with the -g switch. Mainly because I’m testing and like rollback options but also because I found it easier to do it this way.
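On Windows, registering a second service with its own backend location looks roughly like this. The service name and path are examples, and `-g` is the older spelling of what later became `--data-root`:

```shell
# Register a parallel Docker service whose image/container store lives on D:\
dockerd --register-service --service-name docker2 -g D:\docker
# Start it when testing, leaving the original service untouched for rollback
sc start docker2
```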
Read the whole thing.
We start with an Ubuntu 16.04 image, run some upgrades, install Python, upgrade pip, install our requirements and expose port 8888 (Jupyter’s default port).
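A Dockerfile matching that description might look like the following; the exact package commands are assumptions rather than the author's file:

```dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y python3 python3-pip
RUN pip3 install --upgrade pip
COPY requirements.txt .
RUN pip3 install -r requirements.txt
# Jupyter's default port
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--allow-root"]
```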
Here is our requirements.txt file:
Notice how Jupyter is in there; I also added a few other things that I very commonly use, including numpy, pandas, plotly, scikit-learn and some Azure packages.
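The file itself would be just a list of package names along these lines (version pins and the exact Azure package names are assumptions):

```
jupyter
numpy
pandas
plotly
scikit-learn
azure-storage
```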
The big benefit to doing this is that your installation of Jupyter can exist independently from your notebooks, so if you accidentally mess up Jupyter, you can kill it and reload from the image in a couple of commands.
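That recovery really is just a couple of commands, assuming the notebooks live in a host folder mounted into the container (the image and container names below are placeholders):

```shell
# Throw away the broken container and start fresh from the image;
# the notebooks survive because they live on the host, not in the container
docker rm -f jupyter
docker run -d -p 8888:8888 -v ~/notebooks:/notebooks --name jupyter my-jupyter-image
```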