How ’bout CoreOS as your Cloud base?

I’ve heard the name CoreOS around a little bit over the last two months or so, but it hadn’t really jumped out at me until last week when Mark McCahill mentioned it in a meeting. He’d read some pretty cool things about it: minimal OS, designed for running Docker containers, easy distributed configuration, default clustering and service discovery. In particular the use of etcd to manage data between clustered servers caught my eye – we’ve been struggling at $WORK with how to securely get particular types of data into Docker containers in a way that will scale out well if we ever need to start bringing up containers on hosts that don’t physically reside in our datacenters.
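To give a flavor of why etcd caught my eye: every machine in a CoreOS cluster shares a simple key/value store, so distributing a piece of configuration looks something like this (the key name and value are made up for illustration):

# On any host in the cluster: store a piece of configuration
etcdctl set /myapp/db_host "db1.example.com"

# On any other host in the cluster: read it back
etcdctl get /myapp/db_host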

I haven’t even gotten into the meat of CoreOS yet, but just now I accomplished a task that was surely lifted out of science fiction. With the assistance of Cobbler as a PXE server and a Docker container (what else!) as a quick host for a cloud-config file, I was able to install CoreOS in seconds and SSH into it with an SSH public key that I provided in the cloud-config file. I was legitimately shocked by how quick and easy it was.
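The cloud-config file itself is tiny. This is a minimal sketch along the lines of what I used – the hostname, public key, and discovery token are placeholders you’d swap in for your own:

#cloud-config

hostname: coreos-test01

ssh_authorized_keys:
  - ssh-rsa AAAA...your-public-key... user@workstation

coreos:
  etcd:
    # generate a fresh token at https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
  units:
    - name: etcd.service
      command: start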

Docker containers start instantly – it’s one of their best features. It allows us to ship them around with impunity and perform seamless maintenance. CoreOS hosts live in the same timescale, meaning we can PXE boot and configure new hosts, specifically designed for hosting Docker containers, in seconds, and from anywhere. CoreOS offers support for installing onto bare metal, and that would surely give you the best performance, but take a moment to comprehend the flexibility given to you by using virtual machines instead.
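If you’re curious what the PXE side looks like, the pxelinux entry Cobbler hands out is roughly this – the image filenames come from CoreOS’s PXE docs, and the cloud-config URL here is just a placeholder pointing at that little Docker-hosted web server:

default coreos
prompt 1
timeout 15

label coreos
  kernel coreos_production_pxe.vmlinuz
  append initrd=coreos_production_pxe_image.cpio.gz cloud-config-url=http://pxe-config.example.com/cloud-config.yml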

Make an API call to your VMware or Xen/KVM cluster in your local or remote datacenters to create and start a virtual machine. Or do the same thing with an Amazon host. Or Google. The VM PXE boots into CoreOS within seconds, joins its cluster, and begins getting data from the rest of the cluster. Within minutes, Docker images are downloaded and built, and containers are spinning up into production. It doesn’t get any more flexible than that. At this point scaling and disaster recovery are hindered only by your ability to produce applications that can handle it. It doesn’t matter what happens to a particular Docker container; you just bring up another somewhere else. Along those lines, it doesn’t matter if an entire host is up or down. That’s what it means to be in the same timescale. Containers and their hosts can be brought up and down with impunity, with no impact to the service.

Another benefit to abstracting CoreOS away from the bare metal is freeing yourself from ties to a particular technology. If you can design your systems to use the APIs of your local VM solution and the remote APIs of various cloud vendors, then you can move your services wherever you need them. As long as you can control the routing of traffic in some way (load balancers, DNS, Hipache, some of the cool new things being done by Cisco), and the DHCP PXE options for your host servers, then your services are effectively ephemeral and not tied to a particular location or vendor.

For now this is all still very beta, both Docker and CoreOS, but the promise being shown is real. Everyone, from the largest internet giants to the smallest one-room startups, will benefit from the coming revolution in computing.

Docker "Best Practices" (that don’t exist yet)

I’ve noticed the ways in which I set up new Docker images have shifted the more I work with the technology. For example, when I first started with Docker, I put almost all my configuration into the Dockerfile. This is easy – and the way Docker suggests it on their site – and the biggest benefit is that each command ends up being its own layer, and the layers can be cached for quick rebuilding if you make a mistake. However, it gets kind of tedious trying to manage tons of bash commands or copy a bunch of files using the RUN and ADD commands. Also, each line counts against the layer limit (though hopefully that will be fixed in a newer version of Docker). The kicker, though, is that complex bash commands are just hard to pull off inside the Dockerfile, and lack the real flexibility that running a script offers. Summary: complex bash in a Dockerfile is complex.
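For illustration, the everything-in-the-Dockerfile style looks something like this – the package names and files are made up; the point is that every RUN and ADD becomes its own cached layer:

FROM centos
# Each of these lines is its own layer in the image
RUN yum install -y httpd php
RUN yum clean all
ADD httpd.conf /etc/httpd/conf/httpd.conf
ADD drupal/ /var/www/html/
EXPOSE 80
# -DFOREGROUND keeps httpd running in the foreground (Apache 2.4)
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]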

So next I moved to having my Dockerfile copy in a large bash script, and just run that script. This method has the advantage of easy configuration (hey – two whole lines!) and a lot of flexibility. Unfortunately, there’s really only a single checkpoint layer to use with the Docker cache – any single change to the script and the entire image has to be rebuilt from scratch. This makes development time considerably longer, and quick development is one of the big selling points of Docker. Don’t get me wrong – spinning up a container is much quicker than configuring a whole server, but with this method of building, I end up spending a lot of time staring at the screen as my images build. The big, single script also contained a lot of code that ended up being very similar between different images. I’d copy whole swaths of useful base configs out into another giant script to run in another container.
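Those two lines look roughly like this (the script name is just a placeholder):

FROM centos
# Copy in one big setup script and run it – a single, fragile cache checkpoint
ADD build.sh /build.sh
RUN bash /build.sh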

So that brought me to the current way of doing things: copying the entire image directory (the one that contains the Dockerfile) to /build in the container, and then having the Dockerfile run a few scripts to build the image (three, in fact). Each of THOSE scripts, in turn, runs other scripts specifically designed to set up one aspect of the image (one for email, if needed, one for syslog, etc.). This has the advantage of letting me drop in only those scripts I need for that particular image, and makes the scripts themselves a little friendlier to look at. Unfortunately, it doesn’t solve the long build times. The gotcha is that because I’m copying everything into /build, I don’t have to enumerate every file I’m using within the Dockerfile – but that means that each script I change rewinds Docker back to the ADD where everything is copied in, and every script has to run from scratch without cache.
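The layout looks roughly like this – the script names are placeholders, but the structure is the point:

FROM centos
# Copy the whole image directory (Dockerfile, scripts, configs) into the image
ADD . /build
# Three top-level scripts, each of which calls the per-service setup scripts
RUN bash /build/01-base.sh
RUN bash /build/02-services.sh
RUN bash /build/03-app.sh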

This is now slowly leading me to have a separate script for every piece except the one that distinguishes the container’s main function. This makes for very portable, easy-to-organize configurations, but requires that each image carry dozens of scripts with it. Managing all these scripts is usually accomplished (in other technology worlds) with a central version control repository like git. That way you can clone the scripts you need, and they’re all maintained in one location for easy updating. This method has an inherent drawback too, though: each Dockerfile and its resulting image are dependent on an entirely separate repo in order to work, or even build.

I thought about making a base image, similar to what the Phusion guys are doing with phusion/baseimage-docker. This would allow me to put all the bits that I copy into each image every time (the email, syslog, etc.) into a single image, and then add on only the relevant bits for each container’s main function. This makes managing each image easier, but also makes them less portable, since you have to have both the Dockerfile for the image you want and the base image for it to pull from when it builds.

It’s not yet clear what the best way to accomplish this will be. I imagine there will be best practices for any given situation.

Fully generic, basic demo image shared with the world? All in the Dockerfile.
Complicated, multi-service images shared with the world? Many little scripts, or a single large one.
Many images with custom configs more easily managed for $WORK? Base Image.

I am looking forward to seeing how other people use Docker, and what evolves as the technology, and the community using it, matures.

Email and Docker-based Drupal Containers


Got email support working in my DockerDemos Drupal Docker image. SSMTP is the way to go with these containers. There’s no running daemon to have to manage, or to take up resources.

The image is set up to either do nothing with mail, use a default SSL setup if you pass your own SMTP server as an environment variable (perfect for $WORK!), or let you use your own custom ssmtp.conf file.
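In practice that looks something like the following – the variable name and image name here are just illustrations (check the DockerDemos repo for the real ones); /etc/ssmtp/ssmtp.conf is SSMTP’s standard config path:

# Point the container at your own SMTP server (variable name assumed)
sudo docker run -d -P -e SMTP_SERVER="smtp.example.com" dockerdemos/drupal

# Or mount a fully custom ssmtp.conf into the container instead
sudo docker run -d -P \
  -v /path/to/ssmtp.conf:/etc/ssmtp/ssmtp.conf dockerdemos/drupal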

Yeah, EMAIL! Lost password messages galore!


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 19 May 2014: Email and Docker Drupal Containers

"Cloud-style" Docker Demo Container

Completed a first pass at a minimal “Cloud-style” Docker container. It’s sort of like an EC2 instance. You generate an SSH pem file, and pass the public key in as an environment variable at docker run time:

sudo docker run -i -t -d -P \
  -e PUBKEY="$(cat ~/.ssh/my.pem.pub)" cloudbase

You end up with a CentOS container, and a user “clouduser” that has passwordless sudo rights.
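Since -P publishes the container’s exposed ports to random high ports on the host, logging in looks roughly like this (the container ID and host port below are just examples):

# Find which host port was mapped to the container's SSH port
sudo docker port <container-id> 22

# Then connect with the matching private key (49153 is just an example port)
ssh -i ~/.ssh/my.pem -p 49153 clouduser@localhost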

I think this would be a good way to get some folks interested in Docker – perhaps offering something like this as a playground/sandbox to build interest.

Visit a website, get a Docker CentOS container!

Code: https://github.com/DockerDemos/CloudBase


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 19 May 2014: Cloud-style Docker Container

Quick Bash Script to Update Docker

Quick Docker Tip

To run the latest version of Docker each time you start it, it’s as easy as creating and running this script:

#!/bin/bash
# Daemon options; add flags here (e.g. -g /path/to/dir) as needed
OPTS="-d"

# Remove any previously downloaded binary
if [[ -f ~/dockerbin/docker ]] ; then
  rm ~/dockerbin/docker
fi

# Make sure the download directory exists
if [[ ! -d ~/dockerbin/ ]] ; then
  mkdir ~/dockerbin
fi

# Grab the latest build and make it executable
wget https://get.docker.io/builds/Linux/x86_64/docker-latest \
  -O ~/dockerbin/docker
chmod +x ~/dockerbin/docker

# Start the Docker daemon with the options defined above
sudo ~/dockerbin/docker $OPTS &

If you need to add special options (like -g to change the directory where Docker stores its images and containers), you can edit the OPTS variable at the top of the script.

I don’t recommend using this for production systems, but while Docker is under heavy development, this is an easy way to stay up-to-date and get the bugfixes.


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 13 May 2014: Quick Docker Tip

Ramble on Docker and Open Source Learning

Warning: Ramble about Docker and Open Source Learning incoming!

I spent ALL DAY creating demo Docker containers to share with the Docker community at $WORK, and elsewhere. I’m trying to drum up support for Docker as a technology in general, and I think the best way is to give folks some really easy images that they can just clone from the repo and build. (You can check out what I have so far here: https://github.com/DockerDemos)

In order to build the widest possible user base, I need a pretty large variety of images for folks to play with. I’m working on converting the very $WORK-specific images I have been testing into more generic, out-of-the-box images for the public. That should take care of the web-y folks, since that’s what I do. I also created a container with a demo of my Pyku app, just for fun, and FullScreenMario, because it’s cool.

One of the more challenging areas at the moment, and one that Mark DeLong has been really interested in, is using Docker for research computing. (Update: Mark has sponsored two “Duke Docker Days” since I wrote this article, with great success.) To that end, I’ve started to create an image for the Berkeley Open Infrastructure for Network Computing (BOINC) client software. That’s slower going because there are some small bugs in the latest branch of their software. I also looked at Folding@Home as a possible Docker container app. That would be more of a proof of concept because it wants to do GPU computing. Not that you CAN’T with Docker – I just don’t know how yet.

I’m looking for other examples of self-contained apps that could be Docker-ized. WordPress will be a no-brainer (and will join the Drupal one I have in progress at the moment). What about a Dogecoin mining app? That’s probably lumped in there with Folding@Home because of the GPU dependencies. On the “Just for Fun/Get Attention” front, I think a Minecraft server is on the horizon.

I’ve also got this hair-brained (hare-brained?) idea about auto-updating containers that check out the latest stable branch of some pre-compiled code or website on a scheduled basis, making them completely locked-down, scalable nodes that just run forever, or until they’re no longer needed.
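Nothing like this exists yet, but as a sketch of the idea, the container’s entrypoint could be a little loop like this (the repo URL, branch, paths, and interval are all made up):

#!/bin/bash
# Hypothetical entrypoint: clone the stable branch, then refresh it on a schedule
REPO="https://github.com/example/website.git"
CHECKOUT="/srv/site"

git clone --branch stable "$REPO" "$CHECKOUT"

while true ; do
  sleep 3600                       # refresh once an hour
  (cd "$CHECKOUT" && git pull --ff-only)
done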

On more of Mark’s end of things, I’ve been thinking a lot about single-use, disposable Linux servers for use in teaching beginners the basics of Linux. You’ll recall (or you won’t, but trust me, it happened) that I and a few other folks (Drew Stinnett, Jimmy Dorff) taught an Intro to Unix class a few weeks ago. We were, for lack of a better phrase, ABSOLUTELY PLAGUED with little bugs and inconsistencies between the lab computers we were trying to use and the course book. It reminded me of when I was a sysadmin-wannabe and a previous boss let me take an online Linux System Administration course offered by Illinois University. They were no doubt using little Linux servers installed on blades somewhere, but it allowed me to log in with some predetermined credentials and work along with the book. I think we could improve on that concept with Docker.

I’d almost forgotten about that when Danny Williford contacted me and asked about more Intro to Unix courses. Danny works with teens, trying to get them involved in computing. He made a comment that resonated with me about how, for some students, learning that they can control every aspect of their computing experience [if they use Linux] really excites them into delving deeper into the technology. Now, of course, I’m completely obsessed with trying to come up with the best possible way of making an Intro to Linux class available, freely and openly, to anyone, and using Docker containers to do so. The beauty here is that we could host containers for people to use that just kill themselves and re-spawn on logout – or offer the Docker images for them to use locally on their own!
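The self-destructing part is already easy today: Docker’s --rm flag removes a container as soon as the process inside it exits, so a throwaway practice shell could be as simple as this (the image choice is just an example):

# An interactive CentOS sandbox that deletes itself when the student logs out
sudo docker run -i -t --rm centos /bin/bash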

(Edit: Since I wrote this article, we did something similar and set up a more formal Intro to Linux course, using VMs created on demand for students to practice with.)

I’m a HUGE proponent of Open Source learning, and I think technology is going to both inform the way it works (taking its cues from the Open Source world) and provide the platform for this new type of learning. Finding more ways to bring these two worlds together is both challenging and exciting. In my opinion, there should be more full-time positions at both institutions of learning and technology companies dedicated to this cause. Schools and universities because, despite the ever-present lack of funds, their primary function is to teach the world. Technology companies because the investment in education will pay dividends for them when these students complete their schooling and get jobs, or when adult employees take advantage of free learning to improve their skills and knowledge.

These aren’t even long-term investments, if you think about it. Adult employees are learning on the job; the education pays back immediately. Let’s take a look at the longest case: a high school freshman starts to learn computer science from some Open Source learning platform. In just eight years, that student is in the workforce using their skills. Most companies commit to their marketing campaign for longer than that.

Kudos to Red Hat (with whom we met recently on these very topics) for recognizing this and getting involved. I look forward to this taking off around the world.


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 08 May 2014: Ramble on Docker and Open Source Learning

A Dream of Docker

Those of you who know me from $WORK know (oh, you know) that I’m currently heavily involved in testing out Docker for use here, and that I’ve fully drunk not only the Kool-Aid, but also any other Docker-related beverage that might be out there.

I’ve presented the Docker concept to a couple of our groups here, evangelized to individual co-workers, spent hours testing, developing, deploying, re-testing, evaluating, etc, etc, all the things about Docker. I’m working on processes, policies, infrastructure and code all related to how Docker can potentially work in our environment.

I literally cannot think of a single thing that my small team – part of the larger Unix team at $WORK, itself a part of the Systems Infrastructure department – …I can’t think of a single thing that my team does that could not be entirely encompassed in Docker containers. I can’t think of anything that we do that wouldn’t be improved, that wouldn’t be automated, that wouldn’t be scaled, that wouldn’t be made better in at least ONE way by moving it to Docker.

I am so on-board with Docker that I literally dreamed about it. I dreamed about presenting Docker to a group of co-workers last night. I, fully asleep in my bed, laid out all the good things about Docker, all the challenges we would face, all the policies and procedures that would change, and all the processes that would be streamlined.

At this point, I woke up and thought perhaps, perhaps, I’d been working a little too much with Docker recently, and maybe I needed to take a break and work on something else.

Naaaaahhhhhhh…it’s too cool! Vive la Docker!


I’m beginning to copy over my technology-related posts from Google+ to this blog, mostly so I have an easy-to-read record of them. This one was originally published on 26 March 2014: A Dream of Docker