The Road to Kubernetes Goes Through Swarm ...
Until Docker Inc.'s announcement last week at DockerCon Europe it didn't hit me (or at least not hard enough): the path to Kubernetes, in many cases, goes through Swarm. Don't get me wrong, I am not looking at Kubernetes as something that complex; I and others just recently published "KUBERNETES IMPLEMENTATIONS: THE GOOD, THE BAD AND THE UGLY", and the "complexity" aura around Kubernetes is a little overrated! In this post I would like to emphasize the important steps towards establishing any Docker orchestration engine, steps which I think are achievable by going through Swarm (regardless of whether you end up sticking with it or not).
If you're a Professional Services Engineer like yours truly, deciding on K8s on behalf of a customer always seems to be a very big step for companies just starting off with Docker.
The first step(s) - “getting started” :
Starting off: the first "Docker Daemon / Docker Machine / Docker Host" (and why, for god's sake, for that "small little nifty detail" there are 3 names which all mean the same damn thing …)
So your dockerization journey begins: you first meet the Docker Toolbox and learn all about the different tools, from docker-machine to docker-compose, and about tweaking and playing with dockerd, a.k.a. the Docker daemon, and its various flags and configurations (which also had their own evolution…)
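To make that daemon tweaking concrete: on Linux, dockerd reads its configuration from /etc/docker/daemon.json. The keys below are real daemon options, though the values (registry address, DNS server) are purely illustrative:

```json
{
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.example.local:5000"],
  "dns": ["8.8.8.8"]
}
```

After editing this file you restart the daemon (e.g. `systemctl restart docker`) for the changes to take effect.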
As a Developer, the learning curve is enormous, going from: "I don't care where you stick that war file" to: "I need it to be exactly that way!"
As an Ops Engineer it's most likely something like:
"I will just run that Tomcat Docker image, stick the war file in it and get it over with" … with a big smile of course …
Again, as a Professional Services Engineer, sometimes we need to accompany the customer step by step in adopting a certain tool or methodology, especially if the end user has no idea what a Dockerfile looks like, or what his own development environment looks like, rather than starting by explaining Kubernetes & Helm Charts (and all that comes with them…)
The 2nd step(s) - “Scaling up” :
So, you’ve climbed the first ladder and converged your initial Docker steps and now you know how to:
docker-machine create my_docker
If you’re lucky, the requirements you have are as simple as using a base image and pushing your code on top of it … If that’s the case, even if you go with Kubernetes this is something you need to do anyway.
The Dockerfile
This is just an example (not a best practice):
FROM ubuntu:16.04
LABEL maintainer="Haggai Philip Zagury"
VOLUME ["/var/cache/apt-cacher-ng"]
RUN apt-get update && apt-get install -y apt-cacher-ng
EXPOSE 3142
CMD chmod 777 /var/cache/apt-cacher-ng && \
    /etc/init.d/apt-cacher-ng start && \
    tail -f /var/log/apt-cacher-ng/*
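Assuming the Dockerfile above is saved in the current directory, building and running it looks something like this (the image and container names are my own choices, not from the original post):

```shell
# Build the apt-cacher-ng image from the Dockerfile in the current directory
docker build -t apt-cacher-ng .

# Run it detached, publishing the proxy port the Dockerfile EXPOSEs
docker run -d --name apt-cacher -p 3142:3142 apt-cacher-ng
```

Other machines on the network can then point their apt proxy at port 3142 of the Docker host.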
The above is the most fundamental part of Docker. It enables us to do things we dreamed of doing in the physical world, and learned how to do in the virtual world - with many complexities, though, until tools like Packer (and others) evolved (tools which were traditionally reserved mainly for the Ops guys to deal with…)
Another side note: I see implementations where the development team isn't even aware that "dockerization is taking place" in the CI, staging or production environment, and they have no idea how to grasp that yet.
At the stage where you start tying your application to other components, the natural step is using something like docker-compose.
A simple example:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
This is where you start "learning" the API versions and compatibilities, where Compose scales (and where it fails), stuff like "depends_on", "container links", "overlay networks", etc. IMHO these are also the things you need under your belt when you want to debug an application, no matter if it's running in a cluster or on a bunch of nodes running Docker …
Baby steps are vital for any Docker implementer, no matter which orchestration engine he/she ends up choosing …
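To make "depends_on" and user-defined networks concrete, a hypothetical extension of the compose file above might look like this (the network name is illustrative):

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis        # startup-order hint: redis is started before web
    networks:
      - backend
  redis:
    image: "redis:alpine"
    networks:
      - backend
networks:
  backend:           # user-defined network; becomes an overlay network in Swarm mode
```

Note that depends_on only controls start order; it does not wait for the dependency to actually be ready - another lesson you learn while debugging.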
In any case (Kubernetes, Swarm, Mesos, RancherOS), your Docker daemon will most definitely go through some changes when you add a private registry, or configure various Docker drivers (network/storage), etc.
At this stage you might even wanna experiment (or go to production with a service or two) using DaaS - Docker as a Service - with ECS, "GCS" or "ACS". If you choose this path you'll most definitely learn another thing or two on how to run containers.
By the end of the day(s) you will have your entire application running on one single docker-machine - on-prem or in the cloud, we don't really care at this point! We just wanna say - YaY!!!
The 3rd step(s) - scaling out :
So at this stage you are most definitely a more pragmatic Docker user! You understand its benefits and how it helps you accomplish your goals, and you are now facing a few more important decisions you need to start making …
A few examples might be:
- Persistence in/out of Docker
if "docker with persistence"
    "A load of storage options ..."
else
    "Easy docker life - no persistence, restart all you like ..., docker rm all you like"
endif
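The "with persistence" branch usually boils down to named volumes or bind mounts; a minimal sketch (the volume name, container name and image are my own, not from the original post):

```shell
# Create a named volume; its data survives `docker rm` of any container using it
docker volume create pgdata

# Attach it so the database files live in the volume, not the container layer
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.6
```

The volume itself is managed by the daemon's storage setup, which is exactly where "a load of storage options" (local, NFS, cloud volume plugins …) comes in.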
- Logging / Monitoring
Hmm, we need to start configuring logging in daemon.json … Docker log driver support.
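For example, setting the default log driver and its rotation options in daemon.json (the keys are real daemon options; the values are just one plausible setup):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Individual containers can still override this per-run with `--log-driver`, which is where the compose-level logging configuration mentioned below comes in.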
We need to start composing our apps with logging configurations. And what do we do with "infra services" which exist before the logging infrastructure is deployed - is it deployed as part of the cluster? Or on bare metal alongside it?
LaaS/MaaS - there are many Logging/Monitoring-as-a-Service offerings out there too.
Is the ease of management worth it?!
Is IOPS a real concern for your application?
DB As A Service starts to sound very appealing :)
Before we even deal with the above: when a single docker-machine doesn't scale up, or at least not enough, the natural cluster around the corner - one which anyone can spin up with no additional software installed - is Docker Swarm. Especially since Docker 1.12.0, it's as simple as:
Creating a swarm:
docker swarm init --advertise-addr "swarm_manager_ip:swarm_manager_port"
Joining a swarm:
docker swarm join --token "swarm_worker_join_token" "swarm_manager_ip:swarm_port"
And almost instantly you have yourself a swarm! You now start treating your containers / container groups as "services". You, the developer, now need to set up a load balancer, SSL termination, and many more setup tasks.
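Once the swarm is up, running a container group as a "service" looks something like this (the service name, image and ports are illustrative, not from the original post):

```shell
# Run the web app as a replicated swarm service, published through the routing mesh
docker service create --name web --replicas 3 --publish 5000:5000 myorg/web:latest

# Scale it later without touching individual containers
docker service scale web=5
```

The swarm schedules the replicas across the worker nodes and keeps the declared replica count, which is exactly the mental shift from "containers" to "services".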
As a Professional Services Engineer, this journey could span hours/days/weeks, depending on the organization and its tools and infrastructure. Does this necessarily mean we need to choose the right path immediately?
The “Final” step(s) - choosing || && experimenting:
At some point in time you've "standardized" enough to want Kubernetes. Considering you understand overlay networks and service discovery and how they work in Docker, you will most probably find it easier to find your way around the various implementations of Kubernetes (there's more than a handful of them - more about that another time).
If you have a hybrid infrastructure you might go with DC/OS with Kubernetes on top of it as the container orchestration. This could be a logical solution to combine the best of both worlds … isn’t it?! I’ll let you be the judge of that ;)
For Docker veterans like yourselves who have gone through this journey (it's been 3+ long years with Docker…), or for companies which have already been running Docker in production since, let's say, its beta versions - well, you can consider any one of the options mentioned in this post (and you won't get fired for it!). Will it necessarily mean we will end up running our containers on Kubernetes? I'm not 100% certain myself, but I now have a very good feeling about it …
Hope to hear your feedback on this … HP