liftr 📦 by Nan Xiao
— Mara Averick (@dataandme) October 15, 2017
liftr aims to solve the problem of persistent reproducible reporting. To achieve this goal, it extends the R Markdown metadata format, and uses Docker to containerize and render R Markdown documents.
Click through for those resources as well as an addictive 8-bit animated GIF.
In only a few seconds you have a SQL Server 2017 instance up and running. (Take a look at Andrew's blog at dbafromthecold.com for a great container series with much greater detail.)
Now that we have our container, we need to connect to it. First, we need to gather its IP address. We can do this using the docker inspect command, but I like to make things a little more programmatic. This works on my Windows 10 machine for Windows SQL Server containers. It appears to throw errors on some other machines, but there is an alternative below.
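As a minimal sketch, the IP address can be pulled straight out of docker inspect's JSON with a Go template (the container name `sqlcontainer` is hypothetical; iterating over the networks works whether the container is on the default `nat` network for Windows containers or `bridge` for Linux ones):

```shell
# Print just the IP address of a container named "sqlcontainer" (hypothetical name),
# using a Go template against the docker inspect JSON output.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' sqlcontainer
```

This avoids scraping the full JSON output by hand and is easy to drop into a script.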
Read the whole thing.
Jan said that he had gotten the SQL Agent running in Linux containers so I asked if he could send on his code and he very kindly obliged.
So, the disclaimer for this blog post is that I didn't write the code here, Jan did. All I've done is drop it into a Dockerfile so that an image can be built. Thank you very much, Jan!
Click through for Jan’s code and Andrew’s presentation of the process.
The flagship of the OpenCPU system is the OpenCPU server: a mature and powerful Linux stack for embedding R in systems and applications. Because OpenCPU is completely open source, we can build and ship on Docker Hub. A ready-to-go Linux server with both OpenCPU and RStudio can be started using the following (use port 8004 or 80):
docker run -t -p 8004:8004 opencpu/rstudio
Now simply open http://localhost:8004/ocpu/ and http://localhost:8004/rstudio/ in your browser! Login via rstudio with user: opencpu (password: opencpu) to build or install apps. See the readme for more info.
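Once the container is up, the OpenCPU HTTP API can be exercised directly. A quick sketch, following the OpenCPU URL convention of `/ocpu/library/{pkg}/R/{function}/{format}` (the choice of `rnorm` here is just an illustration):

```shell
# Call R's stats::rnorm() through the OpenCPU HTTP API,
# asking for the result as JSON (three random draws).
curl http://localhost:8004/ocpu/library/stats/R/rnorm/json -d n=3
```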
This is in the context of one particular product, but the reasons fit other scenarios too. H/T R-Bloggers
I have been using SQL Server 2017 running on Linux for a while now (blog post pending) and use the official images from:
To get the latest image, I used to run:
docker pull microsoft/mssql-server-linux:latest
However, today I noticed that the :latest tag had been removed:
Click through to see the tag you probably want to use.
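With :latest gone, the fix is to pin an explicit tag from the repository's Tags page on Docker Hub. A sketch, assuming a versioned tag along these lines exists (check the Tags page for the exact names):

```shell
# Pull a specific, pinned tag instead of the removed :latest
docker pull microsoft/mssql-server-linux:2017-latest
```

Pinning a tag also means a rebuild won't silently pick up a new major version.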
What I've done here is use the --cpus and --memory switches to limit that container to a maximum of 2 CPUs and 2GB of RAM. There are other options available; more info is available here.
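The switches slot into a normal docker run invocation like this (the container name, image tag, and SA password are all illustrative placeholders):

```shell
# Cap the container at 2 CPUs and 2GB of RAM (limits and names are illustrative)
docker run -d --name sqlcontainer \
    --cpus=2 --memory=2g \
    -e ACCEPT_EULA=Y -e SA_PASSWORD='Testing11@@' \
    -p 1433:1433 \
    microsoft/mssql-server-linux:2017-latest
```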
Simple, eh? But it does show something interesting.
I’m running Docker on my Windows 10 machine, using Linux containers. The way this works is by spinning up a Hyper-V Linux VM to run the containers (you can read more about this here).
Read on to learn more.
When running demos and experimenting with containers I always clear down my environment. It’s good practice to leave a clean environment once you’ve finished working.
To do this I blow all my containers away, usually by running the docker stop command.
But there's a quicker way to stop containers: the docker kill command.
Sending SIGKILL isn't particularly polite and doesn't let processes clean up, which could leave your process in an undesirable state during future runs. But if you're just re-deploying a container, you don't really care about the prior state of the now-disposed container.
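The difference between the two commands can be sketched as follows (the container name is hypothetical): docker stop sends SIGTERM and waits for the process to exit — falling back to SIGKILL after a timeout, 10 seconds by default — while docker kill sends SIGKILL immediately.

```shell
# Graceful: SIGTERM first, SIGKILL only after the timeout (default 10s)
docker stop sqlcontainer

# Immediate: SIGKILL straight away -- faster, but no chance to clean up
docker kill sqlcontainer

# Clear down the whole environment: kill all running containers, then remove all
docker kill $(docker ps -q)
docker rm $(docker ps -aq)
```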
But not only can existing objects be viewed, new ones can be created.
In my last post I created a single pod running SQL Server. I want to move on from that, as you'd generally never deploy just one pod. Instead, you would create what's called a deployment.
The dashboard makes it really simple to create deployments. Just click Deployments on the right-hand side menu and fill out the details:
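For reference, the same thing can be done from the command line instead of the dashboard. A sketch using kubectl (the deployment name, image, and replica count are illustrative, and `kubectl create deployment` is only available on more recent kubectl versions):

```shell
# Create a deployment and scale it out (names and counts are illustrative)
kubectl create deployment sqlserver --image=microsoft/mssql-server-linux:2017-latest
kubectl scale deployment sqlserver --replicas=2

# Confirm the pods came up
kubectl get deployments
kubectl get pods
```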
Check it out; this looks like a good way of managing Kubernetes on the small, or getting an idea of what it can do.
Looks pretty good to me! SQL is up and has accepted the config value within our yaml file to change the SA password. But how are we going to connect to it?
What we need to do now is define a Kubernetes service. A service is a layer of abstraction over pods that allows connections to the containers within the pods, regardless of where a pod is located in the cluster. So let's set up a service.
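A minimal sketch of such a service definition, assuming the deployment's pods carry a label like `app: sqlserver` (all names, the service type, and the ports here are illustrative):

```yaml
# service.yaml -- exposes the SQL Server pods on port 1433 (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: sqlserver-service
spec:
  type: NodePort           # or LoadBalancer on a cloud provider
  selector:
    app: sqlserver         # must match the pod labels in the deployment
  ports:
    - port: 1433
      targetPort: 1433
```

Applying it with `kubectl apply -f service.yaml` gives clients one stable endpoint, and the service routes traffic to whichever pods currently match the selector.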
Andrew does a good job of taking us through the process step by step.
The FROM statement declares that we want to lay some instructions on top of the microsoft/mssql-server-windows image. The beauty of this approach is that when I pull down a new version of the microsoft/mssql-server-windows image, my image will be updated too. The microsoft/mssql-server-windows Dockerfile does the same thing with the microsoft/windowsservercore image.
The rest of the Dockerfile sets some metadata, downloads the installer, and adds the Advanced Analytics feature.
SSIS, SSAS, SSRS or any other SQL Server feature could be added to a containerised SQL Server deployment in the same way.
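A sketch of what such a Dockerfile might look like — the download URL and the unattended setup switches are assumptions for illustration; the actual installer arguments vary by SQL Server version and feature:

```dockerfile
# Build on top of the official image; rebuilding picks up upstream updates
FROM microsoft/mssql-server-windows

SHELL ["powershell", "-Command"]

# Download the setup media (URL is illustrative)
RUN Invoke-WebRequest -Uri 'https://example.com/sqlsetup.exe' -OutFile 'C:\setup.exe'

# Add the feature unattended (switches are illustrative and vary by version)
RUN Start-Process 'C:\setup.exe' -ArgumentList '/q','/ACTION=Install','/FEATURES=AdvancedAnalytics','/IACCEPTSQLSERVERLICENSETERMS' -Wait
```

Because the feature installation is its own layer on top of the base image, swapping in a different feature is just a change to the RUN instructions.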
With this approach, you do run the risk that upstream changes will break your image, but for something like this, it’s a very useful approach.