This post is a step-by-step guide to getting Linux containers running on your Windows 10 machine. The first thing to do is install the Docker Engine.
Installing Docker on Windows 10 is different from installing on Windows Server 2016; you’ll need to grab the Community Edition installer from the Docker Store.
Once installed, you’ll then need to switch the engine from Windows Containers to Linux Containers by right-clicking the Docker icon in the taskbar and selecting “Switch to Linux Containers…”
Andrew walks us through step by step, so check it out.
As SQL Server people, we’re only going to be interested in one application, but that doesn’t mean we can’t use compose to our advantage.
What I’m going to do is go through the steps to spin up 5 containers running SQL Server, all listening on different ports with different sa passwords.
Bit of prep before we run any commands. I’m going to create a couple of folders on my C:\ drive that’ll hold the compose and dockerfiles:
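To give a sense of what the compose file looks like, here is a minimal sketch of a docker-compose.yml for the multi-container setup described above. The image tag, host ports, and passwords are illustrative assumptions, not the author’s actual file; only two of the five services are shown, with the rest following the same pattern.

```yaml
version: '3'
services:
  sql1:
    image: microsoft/mssql-server-linux   # assumed image tag
    ports:
      - "15785:1433"                      # unique host port per container
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing1122!          # unique sa password per container
  sql2:
    image: microsoft/mssql-server-linux
    ports:
      - "15786:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Testing3344!
  # sql3, sql4, sql5 follow the same pattern with their own ports/passwords
```

A single `docker-compose up -d` from the folder holding this file would then bring all of the containers up together.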
Andrew also explains a couple of common errors as he walks us through the process.
Last week in Part Two I went through how to create named volumes and map them to containers in order to persist data.
However, there is also another option available in the form of data volume containers.
The idea here is that we create a volume in a container and then mount that volume into another container. Confused? Yeah, me too, but it’s best to learn by doing, so let’s run through how to use these things now.
Read through for the step-by-step description.
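As a rough sketch of the data-volume-container pattern he describes, the commands look something like the following. Container names, the volume path, and the password are hypothetical, not taken from the post.

```shell
# 1. Create a container whose only job is to declare a volume;
#    it never needs to actually run.
docker create -v /sqldata --name datastore microsoft/mssql-server-linux /bin/true

# 2. Start a SQL Server container that mounts the first container's volume
#    via --volumes-from.
docker run -d --name sqltest --volumes-from datastore \
    -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing1122!' \
    -p 15789:1433 microsoft/mssql-server-linux

# Database files written under /sqldata now live in the datastore
# container's volume, so they survive dropping the sqltest container.
```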
Awesome stuff! We’ve got a database that was created in one container successfully attached to another.
So at this point you may be wondering what the advantage is of doing this over mounting folders from the host? Well, to be honest, I really can’t see what the advantages are.
The volume is completely contained within the docker ecosystem so if anything happens to the docker install, we’ve lost the data. OK, OK, I know it’s in C:\ProgramData\docker\volumes\ on the host but still I’d prefer to have more control over its location.
It’s worth reading the whole thing, even though this isn’t the best way to keep data long-term. It’s important to know about this strategy even if only to keep it from accidentally ruining your day later.
Andrew Pruski has started a series on persisting data in Docker containers. He starts off the series with an easy method of keeping data around after you delete the container:
Normally when I work with SQL instances within containers I treat them as throw-away objects. Any modifications that I make to the databases within will be lost when I drop the container.
However, what if I want to persist the data that I have in my containers? Well, there are options to do just that. One method is to mount a directory from the host into a container.
Full documentation can be found here but I’ll run through an example step-by-step here.
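The host-directory approach he walks through can be sketched as a single `docker run` with a bind mount. The host path, container name, and password here are assumptions for illustration.

```shell
# Mount a folder from the Windows host (C:\docker\sqldata) into the
# container at /sqldata; database files placed there live on the host.
docker run -d --name sqlpersist \
    -v C:\docker\sqldata:/sqldata \
    -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing1122!' \
    -p 15789:1433 microsoft/mssql-server-linux

# Dropping the container (docker rm -f sqlpersist) leaves
# C:\docker\sqldata and its files intact on the host.
```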
Statefulness has been a tough nut to crack for containers. I’m interested in seeing what Andrew comes up with.
There’s a switch that you can use when starting up the docker service that will allow you to specify where Docker stores its containers and images. That switch is -g.
Now, I’ve gone the route of not altering the existing service but creating a new one with the -g switch. Mainly because I’m testing and like rollback options but also because I found it easier to do it this way.
Read the whole thing.
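The new-service approach might look something like the sketch below. The service name and target path are hypothetical, and the exact dockerd flags vary by version, so treat this as an outline rather than the author’s commands.

```shell
# Register a second Docker service whose daemon keeps its images and
# containers under D:\docker (the -g switch), leaving the original
# "docker" service untouched for easy rollback.
dockerd --register-service --service-name docker2 -g D:\docker

# Start the new service.
sc.exe start docker2
```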
We start with an Ubuntu 16.04 image, run some upgrades, install Python, upgrade pip, install our requirements, and expose port 8888 (Jupyter’s default port).
Here is our requirements.txt file
Notice how Jupyter is in there. I also added a few other things that I very commonly use, including numpy, pandas, plotly, scikit-learn, and some Azure packages.
The big benefit to doing this is that your installation of Jupyter can exist independently from your notebooks, so if you accidentally mess up Jupyter, you can kill it and reload from the image in a couple of commands.
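The Dockerfile steps described above can be sketched roughly as follows. The requirements file path and the startup command are assumptions, not necessarily what the original post uses.

```dockerfile
FROM ubuntu:16.04

# Run upgrades, then install Python and pip; upgrade pip itself.
RUN apt-get update && apt-get upgrade -y \
 && apt-get install -y python3 python3-pip \
 && pip3 install --upgrade pip

# Install the packages listed in requirements.txt (Jupyter, numpy, etc.).
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

# Jupyter's default port.
EXPOSE 8888

# Assumed startup command for the notebook server.
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--allow-root"]
```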
As of CTP 2.1 for SQL Server 2017, a set of new environment variables is available. These variables allow us to configure each SQL Server container as we spin it up. The first version of SQL Server on Linux came with:
These had to be set for the container to start. The SA_PASSWORD has to be a complex password or the container will not start. CTP 2.1 introduced:
Read on for the new variables and an example on how to use them.
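As a flavor of how these variables get used, a `docker run` might look like the sketch below. The image tag, password, and chosen values are illustrative assumptions.

```shell
# Spin up a SQL Server 2017 container, configuring it via environment
# variables: Developer edition, listening on TCP port 1444.
docker run -d --name sqlctp21 \
    -e ACCEPT_EULA=Y \
    -e SA_PASSWORD='Testing1122!' \
    -e MSSQL_PID=Developer \
    -e MSSQL_TCP_PORT=1444 \
    -p 1444:1444 \
    microsoft/mssql-server-linux
```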
Last week I was having an issue with a SQL install within a container, and to fix it I needed to copy the setup log files out of the container onto the host so that I could review them.
But how do you copy files out of a container?
Well, thankfully there’s the docker cp command. A really simple command that lets you copy whatever files you need out of a running container into a specified directory on the host.
I’ll run through a quick demo, but I won’t install SQL; I’ll use an existing SQL image and grab its Summary.txt file.
Read on for the demo.
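The shape of the command is simple; a sketch follows, where the container name and file paths are assumptions for illustration rather than the demo’s actual values.

```shell
# General form: docker cp <container>:<path-in-container> <path-on-host>
# Copy a setup log file out of a running container named sqltest
# into C:\temp on the host.
docker cp sqltest:"C:\Program Files\Microsoft SQL Server\140\Setup Bootstrap\Log\Summary.txt" C:\temp
```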
Now we are ready to attach the database using the TSQL below. For this demo, I used Management Studio from my laptop to connect to SQL Server.
In the TSQL we need to use the FOR ATTACH_REBUILD_LOG argument, as we have no log file to attach. It will create a 1MB log file in the default log file directory.
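The attach statement has this general shape; the database name and data file path below are placeholders, not the demo’s actual values.

```sql
-- Attach a data file that has no matching log file; SQL Server
-- rebuilds a 1MB log in the default log directory.
CREATE DATABASE [DemoDB]
ON (FILENAME = N'/sqldata/DemoDB.mdf')
FOR ATTACH_REBUILD_LOG;
```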
It’s better to restore a full backup, but there’s more than one way to connect a database.