Kubexperience for developers

Kubernetes is much more than just a container orchestration platform. Alongside the Cloud Native Landscape, Kubernetes is the equivalent of the Linux kernel, surrounded by an ecosystem of apps and utilities that enrich it.

This was today's meetup, which you can catch on YouTube Live. The session was in Hebrew ;) if there's a request, we'll be more than happy to set one up in English (let us know via community@tikalk.com).

What’s this “Kubexperience” we’re babbling on about?

As a consulting agency that meets dozens of customers using Kubernetes, we see the need to understand all (or at least the majority of) the features this platform has to offer, since in many cases there are off-the-shelf solutions that can replace writing and owning code that does “infra” stuff.

It has changed the way we experiment, plan, and run our CI/CD and, of course, our production workloads.

In one of my first projects where Kubernetes was mentioned (way back in version 0.2 if I recall correctly), you had to manually write your own replica-set(!). Nowadays, you get all these cool standards out of the box in the form of the “deployment” resource definition.

What is most incredible is that the ecosystem evolved into a platform enabling extensibility — very similar to the way we would modify/extend our Linux.

From Server-Rack via Virtualization & Cloud to a common way of managing workloads

A platform, not just a tool

Whilst there are many “batteries included” among Kubernetes’ standard resources, the platform really shines in its extensions.

OpenSource community Icons https://github.com/kubernetes/community/tree/master/icons

There are many extensions you may find handy, ranging from a “dns-controller” which registers your Services/Ingresses in DNS all the way to “sidecar” applications.

Addons such as external-dns, ingress-nginx, cert-manager and the Prometheus Operator, powered by CRDs

These `extensions` of the Cloud OS are implemented in the form of operators, and there are quite a few: from log-aggregation and monitoring stacks to certificate management and others that enrich your application(s).
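Under the hood, an operator teaches the cluster a new resource type via a Custom Resource Definition (CRD). As a minimal sketch (the group `demo.example.com` and kind `Backup` are made up for illustration, not part of any real operator):

```shell
cat << EOF > backup-crd.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must be <plural>.<group>
  name: backups.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
EOF
# once applied with "kubectl apply -f backup-crd.yaml", the cluster
# accepts "kind: Backup" resources, and the operator watches and acts on them
grep -c "kind:" backup-crd.yaml
```

This is exactly how external-dns, cert-manager and the Prometheus Operator plug themselves into the API: a CRD for the new resource plus a controller loop that reconciles it.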

Microservices / 12Factor apps and Kubernetes

By the author of 12factor app Adam Wiggins https://prezi.com/8uldpq91vm4e/the-twelve-factor-app/

When I was first introduced to the 12factor app principles, it seemed like an ideal suitable only for microservices. It seemed fitting for Heroku’s offering, which was yet another PaaS provider. I was consulting for SAP at the time, a huge company with many technologies such as OpenStack, Docker & Kubernetes. I was surprised to hear the term 12factor just around the time Kubernetes was first released, and it took me about a year to realize how compatible it is with the 12 Factor app principles.

What changed was the way Kubernetes’s resource definitions helped define a consistent process throughout all workloads — because all apps look the same …

On a broader scale, it brought companies to adopt microservice architectures, because the platform started to offer much more than just “runtime”. It’s the standardization of configuration, and you can opt in or out of each standard. As an example, standard Kubernetes Secrets are nothing more than base64 encoding; many companies already have a secrets manager in place (maybe not even running on Kubernetes). Kubernetes is extensible enough to accommodate these kinds of changes (i.e. “opt-in”).
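You can see for yourself that a standard Secret is only base64-encoded, not encrypted (the Secret name `db-pass` and the value below are made up for the demo):

```shell
# encode a value the way Kubernetes stores it in a Secret
encoded=$(echo -n "hunter2" | base64)
echo "$encoded"   # prints: aHVudGVyMg==

# write a Secret manifest using that encoded value
cat << EOF > secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pass
type: Opaque
data:
  password: $encoded
EOF

# anyone with access to the manifest can decode it right back
echo "$encoded" | base64 -d   # prints: hunter2
```

Which is precisely why teams with stricter requirements opt out of plain Secrets and plug in an external secrets manager instead.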

A standard way of managing workloads

Another distinguished example is GitOps (which my colleagues say we’ve been doing for years). It has now matured and has changed the way we all operate, but more on GitOps in another post…

So, what does all this mean for developers? Isn’t this a DevOps thingy?

No, it's definitely not just a DevOps thingy!!!
What it means is that we keep thinking distributed! We’re multi-cloud by design. In its simplest form, our laptop is a mini cloud, considering it can run tools like minikube, k3s, or kind; the same source can propagate to any cloud provider hosting the same k8s version. What may vary is the capacity/volume the cluster can endure.

This kind of control is important for your growth as a developer, and the simplest way to get it is by doing!

For newbies wishing to get their hands dirty, you can follow along with the hands-on part below; we’ll attach a git repo shortly.

On our meetup page we already have the links, and the slides are available here: https://www.slideshare.net/hagzag/kubexperience-intro-session

How? -> By doing!

If you're a total newbie following the how-to, notice a few small basic features such as:

  • A better definition of done — a.k.a. consistency/reproducibility — no more “it works on my machine” jokes — it actually works on any machine, anywhere!

There are great tools out there, like Katacoda, which enable you to experiment with Kubernetes in the browser and will get you through the first level, but what you really need is: kubectl


If you're ready for some “kubectl magic”, make sure you have Docker and kubectl installed, and you can follow along writing a simple service using standard Kubernetes tools.

Simplest use case - application scaling


  • docker 17.8+
  • kubectl 1.14.1+
  • Any k8s cluster, minikube or other -> I’ll be using kind
  1. Start a cluster with kind
kind create cluster --name kubexperience
Creating cluster "kubexperience" ...
✓ Ensuring node image (kindest/node:v1.17.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kubexperience"
You can now use your cluster with:
kubectl cluster-info --context kind-kubexperience
Thanks for using kind! 😊

2. Create a node.js application for our demo

cat << EOF > package.json
{
  "name": "kubexperience-podid",
  "version": "1.0.0",
  "description": "A demo application for the Kubexperience intro session",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "license": "MIT"
}
EOF

3. Create your application. In this use case the app will return the hostname of the OS it runs on: maybe your localhost, the Docker container, or the pod hostname when running inside k8s:

cat << EOF > index.js
var http = require("http");
var os = require("os");

var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Request processed by http-server on: " + os.hostname() + "\n");
});

server.listen(8080);
console.log("Listening on port 8080");
EOF

4. Create a Docker container to host your app:

cat << EOF > ./Dockerfile
FROM node:14.2-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY . /app
EXPOSE 8080
CMD [ "npm", "start" ]
EOF

5. Build it

docker build . -t hagzag/nodejs-http-demo:latest

6. Test it locally

docker run --rm -p 8080:8080 hagzag/nodejs-http-demo:latest
curl localhost:8080
Request processed by http-server on: bba4e0cd9b5a

7. Push it to Docker Hub (or any container registry you have handy):

docker push hagzag/nodejs-http-demo:latest

8. Create your Kubernetes deployment

cat << EOF > ./deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ke-podinfo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ke-podinfo
  template:
    metadata:
      labels:
        app: ke-podinfo
    spec:
      containers:
        - name: ke-podinfo
          image: hagzag/nodejs-http-demo:latest
          ports:
            - containerPort: 8080
EOF

9. Create a small kustomization file which will help us manage this app in different stages/environments:

cat << EOF > ./kustomization.yaml
namePrefix: dev-
commonLabels:
  app: ke-podinfo
resources:
  - deployment.yaml
EOF
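The same base can then serve other environments by overriding just the prefix. As a sketch (the `prod/` layout below is an assumption for illustration, not part of the demo repo):

```shell
# a hypothetical "prod" variant of the kustomization: same base
# deployment, but resources get a "prod-" prefix instead of "dev-"
mkdir -p prod
cat << EOF > prod/kustomization.yaml
namePrefix: prod-
commonLabels:
  app: ke-podinfo
resources:
  - deployment.yaml   # a copy of the base deployment from step 8
EOF
grep namePrefix prod/kustomization.yaml
```

Applying that directory with `kubectl apply -k prod/` would then create `deployment.apps/prod-ke-podinfo` alongside the dev one.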

10. Apply this to our cluster:

# make sure you got the right kubectl context using kind / minikube
# kind export kubeconfig --name kubexperience
kubectl create -k ./

Should yield:

deployment.apps/dev-ke-podinfo created

11. Testing your app now with kubectl is another command away:

# your pod name will differ; find it with "kubectl get po"
kubectl port-forward dev-ke-podinfo-5747b77fdf-pqs5r 8080
curl localhost:8080
Request processed by http-server on: dev-ke-podinfo-5747b77fdf-pqs5r

See the ReplicaSet ID and pod ID in the hostname.

12. Scaling your app

kubectl scale deployment --replicas=8 dev-ke-podinfo
kubectl get po
NAME                              READY   STATUS              RESTARTS   AGE
dev-ke-podinfo-5747b77fdf-nsrnz   0/1     ContainerCreating   0          3s
dev-ke-podinfo-5747b77fdf-p2tjz   0/1     ContainerCreating   0          3s
dev-ke-podinfo-5747b77fdf-pqs5r   1/1     Running             0          29m
dev-ke-podinfo-5747b77fdf-xd5fl   0/1     ContainerCreating   0          3s
The short experiment above already relies on a set of standards, from cluster DNS records to port allocation via our port-forward command. And these are just the basics: we haven’t even started to work with Services, Ingresses, and more complex resources.
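As a taste of that next step, exposing the scaled deployment through a Service is one more small manifest. A sketch (the port choices are assumptions; the selector matches the labels used above):

```shell
cat << EOF > service.yaml
apiVersion: v1
kind: Service
metadata:
  name: ke-podinfo
spec:
  # route traffic to any pod carrying this label
  selector:
    app: ke-podinfo
  ports:
    - port: 80          # the Service's own port
      targetPort: 8080  # the container port our node app listens on
EOF
# add "- service.yaml" to the kustomization resources and re-apply with
# "kubectl apply -k ./"; the Service then load-balances across all 8 pods
grep "kind:" service.yaml
```

Unlike `port-forward`, which tunnels to a single pod, a Service gives all the replicas one stable, DNS-resolvable address inside the cluster.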

If you're a developer and you want to get to know Kubernetes, fill out the contact form here or email us at kubexperience@tikalk.com. The next course starts June 1st, 2020.

Kubexperience for developers was originally published in Everything Full Stack on Medium.
