Kubernetes is declarative: every component is described in the kube-apiserver as a manifest. It makes sense to store this configuration in a version control system like Git. This idea brought us Gitops, where every change is made via a git push. Everything as code is the new normal.

This article describes the advantages of Gitops with ArgoCD.

When first starting with Kubernetes you’ll learn to apply manifests using kubectl. A very popular approach is to use a URL as a manifest source, which allows very complex configurations to be installed with a single command. Everything required for the installation is included in that URL; it’s almost magic.
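As a sketch, such an installation boils down to a single command (the URL below is a placeholder, not a real project manifest, and the actual apply requires a live cluster):

```shell
# Hypothetical example: everything needed lives behind one URL,
# so the whole install is a single command. URL is a placeholder.
MANIFEST_URL="https://example.com/releases/v1.0/install.yaml"
echo "kubectl apply -f ${MANIFEST_URL}"
# kubectl apply -f "${MANIFEST_URL}"   # uncomment on a real cluster
```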

Installation is fine, but what about managing the state of the cluster? How do you keep track of versions etc.?

The problem with static manifests

Sooner or later you’ll realise that static manifests are hard to manage. How do you know that the installation is still in its original state? What if we add stuff that is not in the manifest? A recipe for disaster, right?

Wouldn’t it be nice if the manifest at the URL were checked against the state of the cluster every 5 minutes?

ArgoCD manages Kubernetes manifests

With ArgoCD you can configure a Git repository as the source for your manifests. ArgoCD will notice changes in this Git repository and can be configured to perform actions when that happens. Better yet, a new cluster can use the same Git repository, so configuring it takes minimal effort.
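As a sketch, an ArgoCD Application object pointing at a Git repository could look like this (the repository URL, path and names below are hypothetical; check the ArgoCD docs for the full spec):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app             # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/ops/manifests.git  # placeholder repository
    targetRevision: HEAD
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated: {}          # keep the cluster in sync with Git automatically
```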

Git as a central point of truth for your Kubernetes manifests is rock solid

We appreciate the Gitops workflow for this reason: it’s very easy to create a new cluster that conforms to your configuration, and keeping it in sync with changes happens automatically. Combine this with Helm and you’ll start to see the added value.

But Mike, are there any Gitops challenges?


Glad you asked! There are some challenges when working with Gitops, and most are related to storing secrets in Git. Since Kubernetes Secrets are not encrypted, you’ll need to use inline encryption like SOPS, which requires an additional step in the workflow.
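To illustrate: a Kubernetes Secret only base64-encodes its values, so committing one to Git as-is would expose them. With SOPS you encrypt the file before it enters the repository (the secret value and file name below are made up):

```shell
# base64 is encoding, not encryption -- anyone can decode it:
printf 's3cret' | base64          # prints czNjcmV0
printf 'czNjcmV0' | base64 -d     # prints s3cret again

# So a Secret manifest must be encrypted before a git push, e.g. with SOPS
# (requires a configured key; file name is hypothetical):
# sops --encrypt --in-place mqtt-secret.yaml
```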

Another weak point is that it’s currently hard to define manifests with parameters. Imagine you want to apply a manifest 4 times with 4 different values; this would require defining the values in each manifest. (Flux does this a little better than ArgoCD.)

Flux or ArgoCD?

Another fine Gitops tool is Flux. Flux, however, comes without a web interface and is perhaps a bit stricter in terms of Gitops principles (even installing Flux is done via Gitops). For this reason we recommend ArgoCD; it will get you started with Gitops in no time!

Stay tuned

Next post will contain a hands-on session with ArgoCD.

In a previous post we described how to set up Traefik 2 on Kubernetes; please check it out if you are not familiar with Traefik’s custom CRDs like IngressRoutes.

This is part 2 of the series; it focuses on TCP TLS support using Letsencrypt. We will create a simple TCP service in an isolated namespace and expose it securely via Traefik with TLS.


Introduction

A major feature of Traefik 2 is its support for TCP TLS. This is exciting because Traefik will manage the certificates required for valid TLS connections and offload the secure TLS connection to a plain TCP based service like a NodeJS backend. No more daemon reloads to activate a new certificate, yay! And finally easy gRPC and HTTP2 connections without the complexity of TLS.

Let’s see how we can create a simple service and then add TLS support for this service using Traefik 2.

Preparation

In this example we will use Mosquitto as a simple TCP based service: an MQTT broker that works as a pub/sub system for IoT metrics and such. It’s easy to set up, and although it supports TLS natively, in this case we secure it via Traefik 2 and skip all that hassle.

Step 1) Deploy mosquitto in a namespace

kubectl create namespace mosquitto
kubectl -n mosquitto create deployment mosquitto --image=eclipse-mosquitto

Step 2) Expose the mosquitto deployment

We’ll expose the deployment in the namespace.

kubectl -n mosquitto create service clusterip mosquitto --tcp=1883:1883

Step 3) Create an IngressRouteTCP object for Traefik

Save the contents below to a file named mosquitto-traefik-ingress.yaml and apply it to the correct namespace. It will create an IngressRouteTCP object for Traefik 2 and also apply the TLSOption settings required for normal operation.

Please note that TLSOption objects are bound to a namespace, which is why this example refers to the object by name and namespace.

Make sure you change the HostSNI below to a valid hostname that can be used for TLS. As you can see, we enable this service on an entrypoint named “mqtt”; make sure you specify this entrypoint in your Traefik deployment!

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mosquitto
spec:
  entryPoints:
  - mqtt
  routes:
  - match: HostSNI(`mqtt.canhaz.domain`)
    services:
    - name: mosquitto
      port: 1883
  tls:
    passthrough: false
    options:
      name: default
      namespace: mosquitto

---

apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: default
spec:
  cipherSuites:
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  minVersion: VersionTLS13
  sniStrict: true

Step 4) Test the configuration

In one window, run this command to subscribe to the wildcard “#” topic using TLS. Remember to give Traefik some time to generate the certificates.

mosquitto_sub -h mqtt.canhaz.domain -t \# -V mqttv5 -v -p 8883 --capath /etc/ssl/certs

In another window, run the following command to publish random payloads to the MQTT broker; they should appear in the first window.

while :; do mosquitto_pub -h mqtt.canhaz.domain -p 8883 -t $RANDOM/$RANDOM/$RANDOM \
-V mqttv5  -m $RANDOM --capath /etc/ssl/certs; sleep 2; done

Traefik 2 TLS with Mosquitto

That’s it, you now have a working setup using Traefik 2 and Mosquitto. It’s secured with the latest TLS1.3 ciphers without ever touching a single .csr, .crt or .pem file!
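If you want to double-check what was issued, you can inspect the certificate from any client machine. The commented command below is a sketch (replace the hostname with your own; it requires the DNS name to resolve to your cluster):

```shell
# Inspect the certificate Traefik obtained for the mqtt entrypoint:
# openssl s_client -connect mqtt.canhaz.domain:8883 \
#   -servername mqtt.canhaz.domain </dev/null 2>/dev/null | \
#   openssl x509 -noout -issuer -subject -dates

# TLS 1.3 needs OpenSSL 1.1.1 or newer on the client:
openssl version
```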

You can easily test another service with the same approach; the basics are the same. Just remember that entrypoints in Traefik are attached to ports and can be referenced from the IngressRouteTCP object.
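For example, exposing a hypothetical Redis service over TLS would only need a different entrypoint, hostname and service (all names below are made up; the “redis” entrypoint would have to be defined in the Traefik deployment):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis              # hypothetical service
spec:
  entryPoints:
  - redis                  # entrypoint must exist in the Traefik deployment args
  routes:
  - match: HostSNI(`redis.canhaz.domain`)
    services:
    - name: redis
      port: 6379
  tls:
    passthrough: false
```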

Next post will dive deeper into Traefik with network policies.

Stay tuned..

This document describes how to install and use Traefik 2 as an edge router on Kubernetes combined with Network Policies for enhanced security.

This is part 1 of the series; it focuses on installing and understanding Traefik 2.


Introduction

We love Traefik, it’s an amazing edge router that does so many things right. Gone are the days of static configuration files and manual reloads on changes. Traefik 2 brings many nice features, like HTTP2 and TCP TLS support with Letsencrypt, a new dashboard and a much more extensible configuration model. It is one of the best tools to have as a hosting company. Traefik, you rock!

This document describes a working setup of Traefik 2 on Kubernetes; I will explain the basic configuration options to get you started.

Installation

Installation of Traefik 2 is pretty simple, but the documentation is not very helpful at the moment; it is overwhelming, with options that might misguide you. Don’t worry, this howto describes all steps needed for a working setup. Once you understand the basics, things should be easy to get going.

Step 1) prepare CRD, RBAC and a ServiceAccount for Traefik 2

Ready for Kubernetes bingo! Here we go..

CRD stands for Custom Resource Definition, the Kubernetes mechanism for extending the API with new resource types. Traefik 2 requires CRD objects for configuration, a much better approach than the original Traefik, where you would have to “embed” config options using Ingress annotations. Every CRD has its own scope that allows an admin to set options without interfering with other CRDs.

RBAC stands for Role Based Access Control, a system for describing permissions. In short, RBAC increases security by limiting access based on roles. Traefik 2 needs permissions to read services and pods for discovery.

A ServiceAccount is required to bind the permissions defined in RBAC to Traefik via a deployment.

Save the contents below to a file named traefik2-crd-rbac.yaml and apply it to your cluster (you don’t need to specify a namespace since these are cluster-scoped resources). As an alert admin you should not blindly trust this site, but refer to the source: https://docs.traefik.io/user-guides/crd-acme/.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutetcps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - tlsoptions
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: traefik-ingress

Step 2) Install Traefik 2 as a deployment

Save the contents below to a file named traefik2.yaml and apply it. It will create a namespace and deploy Traefik into it. Deploying Traefik to its own namespace is a good security practice that allows fine-grained Network Policies at a later stage. Note that this deployment has one replica for testing purposes; in production you would have to optimize these settings with affinity rules etc.

Make sure you read the deployment and change the email address!

apiVersion: v1
kind: Namespace
metadata:
  labels:
    ingress-controller: traefik
  name: traefik-ingress

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
  labels:
    app: traefik
  name: traefik
  namespace: traefik-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
      - name: traefik
        image: traefik:v2.0
        imagePullPolicy: Always
        args:
        - --api.debug=true
        - --api.insecure=true
        - --log.level=debug
        - --entrypoints.http.Address=:80
        - --entrypoints.https.Address=:443
        - --entrypoints.traefik.Address=:8080
        - --entrypoints.mqtt.Address=:8883
        - --providers.kubernetescrd
        - --ping=true
        - --certificatesresolvers.default.acme.tlschallenge=true
        - --certificatesresolvers.default.acme.email={{EMAIL}}
        - --certificatesresolvers.default.acme.storage=acme.json
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8080
          name: admin
          protocol: TCP
        - containerPort: 8883
          name: mqtt
          protocol: TCP
        livenessProbe:
          failureThreshold: 2
          httpGet:
            path: /ping
            port: admin
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ping
            port: admin
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 250m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 64Mi
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
      restartPolicy: Always
      serviceAccount: traefik-ingress-controller
      serviceAccountName: traefik-ingress-controller

We try to secure this deployment as much as possible using a securityContext; unfortunately it is not possible to run Traefik as a normal user, AFAIK.

Step 3) Understand what’s happening

We created a dedicated namespace “traefik-ingress” to allow Network Policies at a later stage (make sure to label this namespace, since labels are required for Network Policies). Note that this deployment makes use of the ServiceAccount we created earlier to gain the permissions required for operation. We also define a minimal set of options as arguments; this tells Traefik where to look for configuration.

In this example we create a Traefik 2 deployment in the namespace “traefik-ingress” with the arguments to create several entrypoints.

Let’s look at the arguments defined.

  • --api.debug=true, enable API debug logging, turn off in production obviously
  • --log.level=debug, show debug output, disable in production
  • --api.insecure=true, we want to allow a dashboard without an SSL certificate
  • --entrypoints.http.Address=:80, http entrypoint on port 80
  • --entrypoints.https.Address=:443, https entrypoint on port 443
  • --entrypoints.traefik.Address=:8080, traefik dashboard entrypoint on port 8080
  • --entrypoints.mqtt.Address=:8883, mqtt entrypoint on port 8883
  • --providers.kubernetescrd, look for Kubernetes CRD configuration
  • --ping=true, respond to readiness and liveness probes on the admin interface
  • --certificatesresolvers.default.acme.tlschallenge=true, use the TLS challenge for Letsencrypt validation, make sure incoming port 443 reaches Traefik!
  • --certificatesresolvers.default.acme.email={{EMAIL}}, enter your Letsencrypt email here
  • --certificatesresolvers.default.acme.storage=acme.json, file used to store Letsencrypt certificates

The original documentation was unclear to me on how to change the Traefik dashboard port from 8080 to something else. The dashboard is served on an entrypoint named “traefik”, which by default binds to port 8080. Now you know, spread the word :)
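So, to move the dashboard to another port, you redefine that entrypoint in the deployment arguments (the port number here is just an example):

```yaml
# in the Traefik container args, replace the default dashboard entrypoint:
- --entrypoints.traefik.Address=:9090
```

Remember to update the matching containerPort in the deployment and the admin port in the Service as well.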

Step 4) Configure the Traefik 2 Service

Save the contents below to a file named traefik2-svc.yaml and apply it. This will create the service required for Traefik; as you can see it exposes several ports for HTTP, HTTPS, admin and an mqtt port we will use later to demonstrate TLS options.

apiVersion: v1
kind: Service
metadata:
  annotations:
  name: traefik
  namespace: traefik-ingress
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: mqtt
    port: 8883
    protocol: TCP
    targetPort: 8883
  selector:
    app: traefik
  type: LoadBalancer

Using Traefik 2

You should now see a Traefik 2 pod running on your cluster in the desired namespace. Let’s check the logs and the service.

$ kubectl -n traefik-ingress logs -f -l app=traefik
$ kubectl -n traefik-ingress get svc -o yaml

Before proceeding we can test the Traefik 2 dashboard, which is available on port 8080; since we allowed the insecure API, it’s served over plain HTTP. Go ahead and open http://{{EXTERNAL-IP}}:8080, where {{EXTERNAL-IP}} can be found with:

kubectl -n traefik-ingress get svc traefik

The service ports are used by the IngressRoute objects we are going to define. It might be a good idea to disable the admin service on a public connection! Think about this..
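One option, as a sketch, is to drop the admin port from the public LoadBalancer service and expose the dashboard through a separate internal service instead (the service name below is made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-admin
  namespace: traefik-ingress
spec:
  ports:
  - name: admin
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: traefik
  type: ClusterIP   # reachable from inside the cluster only
```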

Testing a simple HTTP service

Once the dashboard is working we can proceed with the next step. Create a simple Nginx deployment in the default namespace and create an HTTP service for this deployment on port 80, which we will later configure in Traefik 2 using an IngressRoute object.

kubectl create deployment nginx --image=nginx
# create a service for this deployment
kubectl create service clusterip nginx --tcp=80:80

Expose a service via an IngressRoute

To be able to access the nginx deployment, we need to create an IngressRoute object. The IngressRoute is part of the Traefik 2 CRDs, so this will only work on a Kubernetes setup with Traefik 2 installed. A simple example shows the minimal config required for this task. In this case we set up a simple IngressRoute that responds to the hostname “test.k8s” on every plain HTTP request.

Before proceeding, make sure to set up a hostname matching the EXTERNAL-IP in your DNS or local hosts file.

Traefik 2 watches every namespace for IngressRoute objects; creating this object immediately instructs Traefik to take action, no reloads needed!

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroute
spec:
  entryPoints:
    - http
  routes:
    - match: Host(`test.k8s`)
      kind: Rule
      services:
        - name: nginx
          port: 80

Test the IngressRoute

We created an nginx deployment and exposed its service via an IngressRoute; you should now be able to access this service. Let’s see:

curl http://test.k8s

This should show the default Nginx page, indicating a working setup. Every request goes through Traefik 2, and increasing the replica count in Kubernetes automatically updates Traefik’s configuration without any manual action. Now that’s neat!

Take a look at the dashboard for some interesting information on pods and endpoints.

Middlewares, TLS, Network Policies and more..

That’s it, you now have a working setup using Traefik 2 and Nginx. As you can see the IngressRoute object is the most important object to define access to your services. In the next part I will show you the various options to modify requests using Middlewares and secure a namespace using Network Policies. We will also address IngressRouteTCP which allows TLS encrypted TCP streams.

Stay tuned..

This article will guide you through setting up Container Linux (CoreOS) for Kubernetes.

Introduction

Kubernetes can run on almost any host supporting Docker, even on devices like the Raspberry Pi. Personally I like Container Linux because it’s lightweight and practically maintenance free after setup (compared to classic OSes like Ubuntu or Debian). It’s an OS created specifically for container workloads and it has been a huge contributor to the success of Kubernetes with techniques like etcd.


Hardware

Let’s run this cluster on at least 2 nodes. Make sure they each have at least 3G RAM, at least 20G of disk and basic network connectivity, and make sure both nodes can connect to the internet. VMware is nice for this setup, and it’s what I used in this example, but you can also run this on Digital Ocean or any other cloud service that offers Container Linux images.

The great thing about Kubernetes is that it doesn’t care about the hardware; it could be AWS combined with a Raspberry Pi and VMware. In this article we focus on the x64 architecture, since Container Linux doesn’t run on ARM.

Starting Container Linux

There are a few ways to install Container Linux, but at the very least you’ll need to set a password to be able to log in. I run Container Linux booted via iPXE with an ignition file; it allows me to configure basic settings like the hostname and openssh public keys. I think you can also host this file on a webserver and refer to it using boot options when booting from the ISO.

The ignition file sets up authentication, but also some other small settings recommended for Kubernetes.

{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/hostname",
        "mode": 420,
        "contents": {
          "source": "data:,core1"
        }
      },
      {
        "filesystem": "root",
        "group": {},
        "path": "/etc/sysctl.d/local.conf",
        "user": {},
        "contents": {
          "source": "data:,net.netfilter.nf_conntrack_max%3D131072",
          "verification": {}
        },
        "mode": 420
      }
    ]
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "passwordHash": "$6--REDACTED--0 (create a password hash using Linux for example)",
        "sshAuthorizedKeys": [
          "ssh-rsa VL--REDACTED--Z0z="
        ]
      }
    ]
  }
}

Ignition files have many options, and technically one could perform all the steps described in this article with them; but I like to show the required steps so you will better understand the components involved.

Ignition files can be validated via the Container Linux website.
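Before booting with an ignition file, it’s also worth a quick local sanity check that the file is at least valid JSON. A minimal sketch (the tiny demo file below is just for demonstration):

```shell
# write a minimal ignition file for demonstration purposes
cat > demo.ign <<'EOF'
{ "ignition": { "version": "2.2.0" } }
EOF

# a malformed file makes python exit non-zero; valid JSON passes
python3 -m json.tool demo.ign > /dev/null && echo "valid JSON"
```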

Installing Container Linux

The first thing you want to do is install Container Linux on disk. In this example I include the integrated VMware drivers; note that the ignition file is used to set the authentication options.

sudo coreos-install -d /dev/sda -o vmware_raw -i yourcustomignfile.ign -C stable

Reboot and you should see a boot menu appear shortly, indicating that installation was successful. After booting from disk, change the hostname and configure the network. It is very important to understand that both the hostname and the IP address will be bound to the trust certificates used in Kubernetes! (I prefer reserved DHCP leases.)

Both the hostname and the IP address will be used for certificate trust; changing them later is a pain!

I use the following names since I regard the master as a normal node.

  • kube1.domain.fqn
  • kube2.domain.fqn
  • etc.

Prepare the node for Kubernetes

The following steps describe the components required for a Kubernetes node. Perform these steps as root and don’t bother with sudo.

Every node must have the kubelet service running; it is the heart of Kubernetes. It bootstraps the cluster and does all the low level work, including network setup, and it depends on tools we need to install first. Since Kubernetes itself runs in Docker, it’s basically both the chicken and the egg.

All tools will run from /opt, Container Linux respects this path when upgrading.

Prepare the paths and set the release version

# set an ENV for later use, do NOT forget this part ;)
RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)"
CNI_VERSION="v0.7.4"

# directories
mkdir -p /opt/bin /opt/cni/bin /etc/systemd/system/kubelet.service.d

Install CNI, the Container Network Interface

The first thing we need is CNI, the Container Network Interface; it enables the kubelet service to set up and manage the Kubernetes network. Make sure CNI is installed in /opt/cni/bin and the files are executable before proceeding.

curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz

Install low level Kubernetes tools

Next up are tools like crictl, kubelet, kubeadm and kubectl; these tools have no external dependencies, making installation easy.

kubeadm: needed for low level node administration
kubelet: low level kubernetes bootstrapper
kubectl: user interface for Kubernetes
crictl: CLI for CRI-compatible container runtimes

curl -L https://github.com/kubernetes-incubator/cri-tools/releases/download/${RELEASE}/crictl-${RELEASE}-linux-amd64.tar.gz | tar -C /opt/bin -xz

cd /opt/bin
curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl}
chmod +x /opt/bin/{kubeadm,kubelet,kubectl,crictl}

That’s basically it; now all we need to do is install and enable the kubelet service.

Kubelet is installed under /opt on Container Linux, so we need to change the paths in the upstream service file.

curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service

# this is the systemd way of changing a service, it's elegant
mkdir -p /etc/systemd/system/kubelet.service.d
curl -sSL "https://raw.githubusercontent.com/kubernetes/kubernetes/${RELEASE}/build/debs/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Update kubelet service to allow Container Linux volumeplugins
echo "KUBELET_EXTRA_ARGS=--volume-plugin-dir=/var/lib/kubelet/volumeplugins" > /etc/default/kubelet
# Almost done, enable services and reboot the node
systemctl enable docker
systemctl enable kubelet

Your node is ready

After this you can start installing Kubernetes with the kubeadm tool, which will be described in another blog post.

Stay tuned..