Life with HELM

Hi all,

It’s been a while… and I know the title gave me away: today I’ll introduce my experience with Helm and what got me helming. Helm has become the first thing I install after I deploy a Kubernetes cluster.

This post, named “Life with HELM”, is a small glimpse of my relatively “short life” with it [~1 year].

This post will cover:

1. Why there was no question I needed some kind of helmer (~1 min)
2. Create Kubernetes Resource Definitions for the “msa-demo-app” and point out some architectural decisions which need helming! (~10 min)
3. Deploy the “msa-demo-app” the native way! (~10 min)
4. Validate the native way! (~2 min)
5. How would Helm make this easier? (~1 min)
6. Cleanup before we start helming our own… (~1 min)
7. Prerequisites (~1 min)

Future parts of the series will cover:

1. Installing Helm (so we are all on the same page) - link
2. Working with existing charts: basically dissecting the chart parts which seem simple yet important when you write your own - link
3. Building your own msa-demo-app chart, following some best practices I came across in Helm’s docs, Bitnami, honestbee and others - still WIP
4. What will my CI/CD workflow look like with all of the above? - still WIP

Please note: if you plan on following this post, I’ve saved you the trouble by designing it as a walkthrough, so make sure you meet the prerequisites at the end of this post (which will also be mentioned in the separate parts of this series when needed).

1. Why there was no question I needed some kind of helmer

This was the simple part …

As a DevOps consultant, I consider myself a Chef, Puppet and Ansible veteran, so I was always going to make templates out of any Deployment/Service/Ingress/Role Resource Definition. That wasn’t even a question; the question was: do I need Helm, or would good old “one of the above” suffice?

Due to its popularity, its vast community and, of course, joining the Cloud Native Computing Foundation (which has its impact), Helm is by far the most talked-about topic after / before Istio, I’m not sure :smiley: So to helm with it… let’s see what it can do for me to improve my Continuous Delivery experience.

In order to do that, let’s deploy our msa-demo-app, which does a “complex” task :smiley: as described below:


Before helm (and after, but different) we needed to manage each of the 4 components:

  1. redis - our key-value store
  2. msa-api - provides an API for storing pings in the pings key
  3. msa-pinger - increments the pings key every n seconds
  4. msa-poller - shows the current pings count

They all need standard Kubernetes Resource Definitions… and of course we need to provide the glue, in the form of labels and selectors, so they work with each other (see the sketch below).
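
As a quick illustration of that label/selector glue, here is a minimal sketch (assuming the resources created later in this walkthrough are already deployed) of how a Service’s selector ties it to the pods its Deployment labels:

kubectl -n msa-demo get svc msa-api -o jsonpath='{.spec.selector}'   # the selector the service uses
kubectl -n msa-demo get po -l run=msa-api                            # the pods that selector matches
kubectl -n msa-demo get endpoints msa-api                            # the endpoints Kubernetes wires up as a result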

In our example we will need the Resource Definitions listed in the next section.

To keep things simple we will use pure kubectl to create all manifests - please follow the steps below:

2. Generate Kubernetes Manifests

Once you’ve done this part your current working directory should include the following Resource Definitions:

  • Namespace -> msa-demo-ns.yml
  • Deployment + Service -> redis.yml
  • Deployment + Service -> msa-api.yml
  • Deployment -> msa-pinger.yml
  • Deployment -> msa-poller.yml

2.1 Create msa-demo namespace (using kubectl --dry-run)

kubectl create namespace msa-demo --dry-run -o yaml > msa-demo-ns.yml

This is the file we will use to create the msa-demo namespace and ensure isolation of our app. You could choose to deploy it to any namespace, e.g. default.
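
For reference, a quick look at the generated file should show something like the following (a hedged sketch of kubectl’s --dry-run output; the exact fields may vary slightly between kubectl versions):

cat msa-demo-ns.yml
# apiVersion: v1
# kind: Namespace
# metadata:
#   creationTimestamp: null
#   name: msa-demo
# spec: {}
# status: {}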

2.2 Create deployment + service for redis (using kubectl --dry-run)

kubectl run redis \
--image=redis \
--port=6379 \
--expose \
--dry-run -o yaml > redis.yml

which yields:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: redis
spec:
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    run: redis
status:
  loadBalancer: {}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: redis
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      run: redis
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: redis
    spec:
      containers:
      - image: redis
        name: redis
        ports:
        - containerPort: 6379
        resources: {}
status: {}

This is pretty straightforward: we want redis:latest with a service exposing redis on port 6379.

2.3 Create deployment + service for msa-api

kubectl run msa-api \
  --image=shelleg/msa-api:config \
  --port=8080 \
  --image-pull-policy=Always \
  --expose \
  --dry-run -o yaml > msa-api.yml

which yields:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: msa-api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: msa-api
status:
  loadBalancer: {}
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: msa-api
  name: msa-api
spec:
  replicas: 1
  selector:
    matchLabels:
      run: msa-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: msa-api
    spec:
      containers:
      - image: shelleg/msa-api:config
        imagePullPolicy: Always
        name: msa-api
        ports:
        - containerPort: 8080
        resources: {}
status: {}

Similarly to how we deployed redis, this is pretty straightforward too. One thing to highlight is that kubectl run is reflected in labels.run: msa-api and selector.run: msa-api, but note (for later) how there is no affiliation between redis and msa-api: there is no “application” awareness (at least not natively).

Please note: when developing, using the “latest” tag or specifying --image-pull-policy=Always forces the docker daemon to pull the image even if one already exists on the underlying host. In our case we are using the config tag, which I might be updating with changes, hence this is set to Always.
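
As a hedged aside (not part of the walkthrough): once you move to a pinned, immutable tag you would likely switch back to IfNotPresent; one way to do that on the already-deployed msa-api (after section 3) is a JSON patch, for example:

kubectl -n msa-demo patch deployment msa-api --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'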

2.4 Create deployment for msa-pinger

kubectl run msa-pinger \
    --image=shelleg/msa-pinger:latest \
    --env="API_URL=msa-api:8080" \
    --env="DEBUG=true" \
    --dry-run -o yaml > msa-pinger.yml

Explained:

2.4.1 Passing env vars

In our use case, instead of --env="API_URL=msa-api:8080" we could use the MSA_API_SERVICE_HOST and MSA_API_SERVICE_PORT environment variables to construct the API_URL variable which the msa-pinger service expects to be set. Considering we know we have a service named msa-api, we could choose to assume that info is already lying around: if msa-api is deployed before msa-pinger or msa-poller, you could have an “easy life” using the environment variables Kubernetes injects for each service. As an example, I am testing an existing deployment like so:

kubectl -n msa-demo exec -it `kubectl -n msa-demo get po | grep msa-pinger | awk '{print $1}'` -- printenv | grep MSA

Which should yield something like:

API_URL=${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
MSA_API_SERVICE_HOST=100.68.245.88
MSA_API_SERVICE_PORT=8080
MSA_API_PORT=tcp://100.68.245.88:8080
MSA_API_PORT_8080_TCP_PORT=8080
MSA_API_PORT_8080_TCP_ADDR=100.68.245.88
MSA_API_PORT_8080_TCP=tcp://100.68.245.88:8080
MSA_API_PORT_8080_TCP_PROTO=tcp

Hence we can use these environment variables in our deployment, which could look something like the following:

kubectl run msa-pinger \
  --image=shelleg/msa-pinger:latest \
  --env="API_URL=\${MSA_API_SERVICE_HOST}:\${MSA_API_SERVICE_PORT}" \
  --env="DEBUG=true" \
  --dry-run -o yaml > msa-pinger.yml

2.4.2 Additional env vars via --env=

--env="DEBUG=true" by default there will be no log unless this environment variable is set so I guess this also shows how to pass an arbitrary environment variable to a pod (via deployment).

So, if we replace our msa-api and 8080 with the environment vars we expect to have present, MSA_API_SERVICE_HOST and MSA_API_SERVICE_PORT, our deployment should look like the following:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: msa-pinger
  name: msa-pinger
spec:
  replicas: 1
  selector:
    matchLabels:
      run: msa-pinger
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: msa-pinger
    spec:
      containers:
      - env:
        - name: API_URL
          value: ${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
        - name: DEBUG
          value: "true"
        image: shelleg/msa-pinger:latest
        name: msa-pinger
        resources: {}
status: {}

Note the - env: above.

I have to say this seemed like a hack to me from the start, and I could have used DNS names instead (well, I did the first time, but I’ll discuss that later on). I’m sure to mention this when we start helming.
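
For completeness, a hedged sketch of that DNS-based alternative, using the msa-api service’s cluster DNS name rather than the injected service variables (the fully-qualified form also keeps working if the consumer ever lands in another namespace):

kubectl run msa-pinger \
  --image=shelleg/msa-pinger:latest \
  --env="API_URL=msa-api.msa-demo.svc.cluster.local:8080" \
  --env="DEBUG=true" \
  --dry-run -o yaml > msa-pinger.yml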

2.5 Create deployment for msa-poller

kubectl run msa-poller \
--image=shelleg/msa-poller:latest \
--env="API_URL=\${MSA_API_SERVICE_HOST}:\${MSA_API_SERVICE_PORT}" \
--dry-run -o yaml > msa-poller.yml

which yields:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: msa-poller
  name: msa-poller
spec:
  replicas: 1
  selector:
    matchLabels:
      run: msa-poller
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: msa-poller
    spec:
      containers:
      - env:
        - name: API_URL
          value: ${MSA_API_SERVICE_HOST}:${MSA_API_SERVICE_PORT}
        image: shelleg/msa-poller:latest
        name: msa-poller
        resources: {}
status: {}

3. Deploy to Kubernetes the ‘native’ way

Considering we now have all we need to deploy our demo-app let’s use kubectl to deploy our manifests like so:

kubectl create -f msa-demo-ns.yml
kubectl create -n msa-demo -f redis.yml
kubectl create -n msa-demo -f msa-api.yml
kubectl create -n msa-demo -f msa-pinger.yml -f msa-poller.yml

This would yield:

namespace/msa-demo created
service/redis created
deployment.apps/redis created
service/msa-api created
deployment.apps/msa-api created
deployment.apps/msa-pinger created
deployment.apps/msa-poller created
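
Before validating, it can help to wait until everything is Running; one (hedged) way is simply to watch the pods:

kubectl -n msa-demo get po -w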

4. Verify our deployment

4.1 validate services

kubectl -n msa-demo get svc

should yield:

NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
msa-api   ClusterIP   100.68.245.88   <none>        8080/TCP   4m
redis     ClusterIP   100.64.28.115   <none>        6379/TCP   4m

4.2 validate pods

Get all pods with label “run” (the default when we ran kubectl run)

kubectl get po -l run --all-namespaces

should yield:

NAMESPACE   NAME                          READY     STATUS    RESTARTS   AGE
msa-demo    msa-api-6cb7c9c6bf-rz6fd      1/1       Running   0          32m
msa-demo    msa-pinger-599f4c5bf9-s8sc7   1/1       Running   0          20m
msa-demo    msa-poller-7cfcb4c8d-wlrr6    1/1       Running   0          15m
msa-demo    redis-685c788858-dw7nl        1/1       Running   0          1h

4.3 validate redis

Test redis is working:

kubectl -n msa-demo exec -it \
`kubectl -n msa-demo get pod | grep redis | awk '{print $1}'` \
-- redis-cli KEYS '*'

should yield:

1) "pings"

Get the value of pings:

kubectl -n msa-demo exec -it \
`kubectl -n msa-demo get pod | grep redis | awk '{print $1}'` \
-- redis-cli GET pings

should yield some number:

"2152"

4.4 validate msa-api

Test msa-api is working:

kubectl -n msa-demo logs \
`kubectl -n msa-demo get pod | grep msa-api | awk '{print $1}'`

should yield:

> msa-api@1.0.0 start /opt/tikal
> node api.js

loading ./config/development.json
Connecting to cache_host: redis://10.110.76.53:6379
Server running on port 8080!
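
If you want to poke the API directly from your workstation, a hedged option is to port-forward the msa-api service and curl it (the /pings route here is an assumption for illustration, not taken from the app’s docs):

kubectl -n msa-demo port-forward svc/msa-api 8080:8080 &
curl http://localhost:8080/pings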

Prerequisites

  • minikube or an existing Kubernetes cluster with administrative privileges
  • helm-cli installed
  • the tiller component is covered in part 1
  • kubectl installed
  • recommended:
    • a default context (to simplify the steps mentioned throughout the series), or
    • set the context to your liking and omit the --namespace flag from the provided command examples.