Now that Kubernetes is up and running, it’s time to deploy the first workload. First, let’s talk about DNS, reverse proxies, and networking.

The cluster will host a multitude of web-based services which will be accessible either as https://service.domain.tld or https://domain.tld/service, where domain.tld could be one of several domains. Kubernetes provides ingress rules, which define how requests are routed to different workloads. To do this, it uses a reverse proxy (aka edge router) such as NGINX or Traefik.

Traefik is the better choice, I think, because it is aware of Docker and can automatically route requests based on labels/annotations. I have used it for standalone Docker hosts, but I had trouble getting it to work with Cert-Manager and Let’s Encrypt on Rancher. I’m sure it’s just because I don’t understand all of the intricacies of making it work on Kubernetes, so I’ll stick with the NGINX ingress workload that was installed with Rancher until I understand it better.

Before I deploy any workloads or define any ingress rules, I need to set up DNS. I chose to create a domain specifically for the cluster which is independent of any of the domains of the services I’ll be hosting. This gives me the flexibility to host services for different purposes, such as personal and professional, independently of each other. I registered a new domain name with Namecheap and selected Cloudflare to manage DNS.

Cloudflare offers several useful services such as proxying requests, denial-of-service mitigation, an API for DNS updates, and handling SSL encryption. Since I’m hosting services from home on a dynamic IP address, I need to be able to update DNS when my public IP address changes without being tied to a dynamic DNS service like No-IP. I’ve done this using a container which updates Cloudflare DNS if it detects a change in the public IP address.

So, how does this work? I have a public IP address which is assigned to an A record in my new domain (dmz.clusterdomain.tld), and each service’s subdomain is a CNAME to dmz.clusterdomain.tld. With a wildcard CNAME, I don’t even need to do that for each service, if I’m willing to accept the risk of exposing the origin server’s IP address for the convenience.

Let’s look at this from the other direction: https://service.domain.tld resolves to dmz.clusterdomain.tld, which resolves to the public IP address. When making http(s) requests, the browser reaches the public IP, which has ports 80 and 443 forwarded to the kubein node’s private IP address. The ingress rule and the SSL certificate are checked against the original service’s URL (https://service.domain.tld) and the request is sent to the correct workload through the cluster’s internal network.
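To make this concrete, the records involved look roughly like this (a sketch with a placeholder IP from the documentation range; with Cloudflare's proxying enabled, lookups would return a Cloudflare edge address rather than the origin):

dmz.clusterdomain.tld.   A      203.0.113.10            ; kept current by the dynamic DNS container
service.domain.tld.      CNAME  dmz.clusterdomain.tld.
*.domain.tld.            CNAME  dmz.clusterdomain.tld.  ; optional wildcard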

Out of the box, Rancher has two projects: default and system, each with their own namespaces. Projects seem to be a way for Rancher to group namespaces, because I’ve only seen references to namespaces when using the kubectl CLI. I won’t worry too much about separating things into different projects or namespaces yet, but I wanted to note the system project, where cluster-wide services such as cluster management (kube-system namespace), NGINX (ingress-nginx namespace), and Cert-Manager (cert-manager namespace) can be found.

### Dynamic DNS

First, I need to deploy the dynamic DNS workload into the default project.

I need to retrieve my Cloudflare API key and store it as a secret (Resources..Secrets) to be referenced in the workload definition under the Environment Variables section. I create a new secret called “cloudflare-dns-api-key” with a key named api-key and the value set to the API key from Cloudflare.
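Creating the secret in the Rancher UI is equivalent to something like this with kubectl (the token value here is just a placeholder):

$ kubectl create secret generic cloudflare-dns-api-key \
    --namespace default \
    --from-literal=api-key=<cloudflare-api-key>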

Back under Workloads (Resources..Workload), I will deploy a new workload called “cloudflare-ddns” using the Docker image “cupcakearmy/ddns-cloudflare” in the “default” namespace. Under Environment Variables, I use the following keys and values:

  • ZONE - clusterdomain.tld
  • EMAIL - Cloudflare account name
  • DNS_RECORD - clusterdomain.tld

Under Inject Values from Another Resource, I select Secret as the type, “cloudflare-dns-api-key” as the source, “api-key” as the key, and “KEY” as the Prefix or Alias. This makes the value of the secret available to the workload’s environment as “KEY”.

No other special configuration is needed because it won’t be accepting incoming requests or require any persistent storage. Once started, it will check the public IP address every 5 minutes against icanhazip.com and send an API request to Cloudflare if it changes.
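For reference, the workload defined above corresponds roughly to a Deployment like this (a sketch using the names and environment variables from my setup, not an export from Rancher):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflare-ddns
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflare-ddns
  template:
    metadata:
      labels:
        app: cloudflare-ddns
    spec:
      containers:
      - name: cloudflare-ddns
        image: cupcakearmy/ddns-cloudflare
        env:
        - name: ZONE
          value: clusterdomain.tld
        - name: EMAIL
          value: email@domain.tld        # Cloudflare account name
        - name: DNS_RECORD
          value: clusterdomain.tld
        - name: KEY                      # injected from the secret created earlier
          valueFrom:
            secretKeyRef:
              name: cloudflare-dns-api-key
              key: api-key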

### Persistent Storage

On the QNAP, Container Station already created a share (/Container) which provides an area for any Docker containers that need to map internal paths (e.g. /data) to an external filesystem location (aka a volume). There are other options, such as CephFS, for persistent storage across the cluster, but NFS-mounting /Container makes the most sense for my use.

First, I log back onto the manager node with ssh and make sure the NFS service is enabled:

$ ssh rancher@kubemgr
[rancher@kubemgr ~]$ sudo ros service list | grep volume-nfs
disabled volume-nfs
[rancher@kubemgr ~]$ sudo ros service enable volume-nfs
[rancher@kubemgr ~]$ sudo ros service list | grep nfs
enabled  volume-nfs

Now, I can test and make sure that I am able to mount the NFS share (NFS v4 eliminates the need for rpcbind and only requires a single port through the firewall):

[rancher@kubemgr ~]$ mkdir /tmp/nfstest
[rancher@kubemgr ~]$ sudo mount -t nfs4 192.168.xx.xx:/Container /tmp/nfstest 
[rancher@kubemgr ~]$ ls /tmp/nfstest
@Recently-Snapshot      hassio-config           minecraft-data          mosquitto-ssl           unifi-config
[rancher@kubemgr ~]$ sudo umount /tmp/nfstest
[rancher@kubemgr ~]$ sudo rmdir /tmp/nfstest

Now to set up Persistent Storage in Rancher using Cluster..Storage..Persistent volumes and Add Volume:

* Name - qnapnfstest
* Volume Plugin - NFS share
* Capacity - 50GiB (as far as I can tell, this isn't enforced for NFS; it's mainly used to match claims to volumes)
* Path - /Container/test
* Server - 192.168.xx.xx
* Read Only - No
* Access Modes - Many Nodes Read Write

I didn't set any other options such as mount options or node affinity.

This volume will be bound to a workload when it's claimed while adding a persistent volume claim in the workload. The /Container folder is already being backed up to [Backblaze B2](https://www.backblaze.com/) by a QNAP [Hybrid Cloud Backup](https://www.qnap.com/solution/hbs3/en/) job.
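For reference, the volume defined above corresponds roughly to a PersistentVolume manifest like this (a sketch; the server IP is redacted as elsewhere in this post):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: qnapnfstest
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteMany        # "Many Nodes Read-Write" in the Rancher UI
  nfs:
    server: 192.168.xx.xx
    path: /Container/test
    readOnly: false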

### NGINX Ingress and Cert-Manager

The NGINX Ingress workload was installed in its own namespace (ingress-nginx), but I need to change the node affinity so it only runs on the kubein node. This is the node to which I have configured ports 80 (HTTP) and 443 (HTTPS) to be forwarded. The two workloads in ingress-nginx are default-http-backend and nginx-ingress-controller. Under Node Scheduling for both, I chose "Run all pods for this workload on a specific node" and selected kubein.
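As far as I can tell, the "specific node" option just pins the pods to that node, roughly equivalent to adding a node selector on the node's hostname label in each workload's pod spec (a sketch; Rancher may implement it slightly differently):

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: kubein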

### Who am I

I can now deploy a new workload in the default namespace called whoami using the image "containous/whoami". For the Port Mapping portion, I configured a port called "web" on container port 80, TCP protocol, as a ClusterIP (Internal Only) with the Listening port being the same as the container port. Using ClusterIP means it will not be exposed outside the cluster unless I create an ingress rule for it. No volumes, environment variables, or other special configuration are needed.
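The port mapping above corresponds roughly to a ClusterIP Service in front of the whoami pods, something like this (a sketch; Rancher generates its own service names and selector labels, as the ingress YAML further down shows):

apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: whoami          # placeholder; Rancher uses its own workload labels
  ports:
  - name: web
    protocol: TCP
    port: 80
    targetPort: 80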

I can now go to the Load Balancing tab and create a new Ingress to the whoami workload:

* Hostname - whoami.domain.tld
* Path - /
* Target - whoami
* Port - 80

I also created a second ingress rule for domain.tld with a path of /whoami. Now I can visit http://whoami.domain.tld and/or http://domain.tld/whoami to see something like this:

Hostname: whoami-7cd96d46-kpvsd
IP: 127.0.0.1
IP: 10.42.4.2
RemoteAddr: 10.42.1.0:55400
GET / HTTP/1.1
Host: whoami.domain.tld
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:74.0) Gecko/20100101 Firefox/74.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.5


This is fine for a test, but I will want any real service I deploy to be secured over SSL with https. For this, I need SSL certificates from [Let's Encrypt](https://letsencrypt.org/) and [Cert-Manager](https://cert-manager.io/) is going to handle that for me.

### Cert-Manager

Following the instructions for [installing Cert-Manager](https://cert-manager.io/docs/installation/kubernetes), I run this command using kubectl:

$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml

This should install Cert-Manager under the System project in its own cert-manager namespace. Alternatively, I could have installed it using Helm (assuming I had installed Helm on the workstation alongside kubectl):

# Kubernetes 1.15+
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.crds.yaml
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
# Helm v3+
$ helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v0.14.0

Either way, I verified that it was installed correctly.
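In my case, that just meant checking that the pods in the cert-manager namespace came up:

$ kubectl get pods --namespace cert-manager

The cert-manager, cert-manager-cainjector, and cert-manager-webhook pods should all reach the Running state.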

With Cert-Manager installed, I need to create a ClusterIssuer resource to handle issuing certificates for the entire cluster. Otherwise, I would need to create an Issuer for each namespace. Let’s Encrypt uses the ACME issuer with the HTTP01 or DNS01 challenge solver. With Cloudflare, DNS01 should have been possible, but I could not make it work correctly, and HTTP01 will work for any ingress rule as long as both port 80 and port 443 are forwarded to nginx-ingress.

I created two ClusterIssuers: letsencrypt and letsencrypt-staging. The only difference with the staging issuer is that it can be used for testing without being rate limited by the Let’s Encrypt API. I tested with staging until I had it working and then switched to letsencrypt.

First create letsencrypt-http-staging.yml:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: email@domain.tld
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging-account-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
    - http01:
        ingress:
          class: nginx

Then apply it:

$ kubectl create -f letsencrypt-http-staging.yml

I quickly learned that to modify the definition, I needed to delete the resource, edit the same file, and create the resource again. The file for letsencrypt-http.yml looks exactly the same except that the name field and the URL in the server field do not have “staging”.
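In practice, that cycle looks like this:

$ kubectl delete -f letsencrypt-http-staging.yml
$ vi letsencrypt-http-staging.yml        # make the change
$ kubectl create -f letsencrypt-http-staging.yml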

Alright, it’s all coming together. The last step is to go into the ingress rule and complete these steps:

  • Edit the ingress rule to add a host under the SSL/TLS Certificates area with “Use default ingress controller certificate”
  • Add an annotation for “cert-manager.io/cluster-issuer” with the value set to the name of the ClusterIssuer to use (e.g. letsencrypt or letsencrypt-staging)
  • Use the “Edit in YAML” option to add a secretName for storing the certificate information.

The end result should look something like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  name: whoami
  namespace: default
spec:
  rules:
  - host: whoami.domain.tld
    http:
      paths:
      - backend:
          serviceName: ingress-2035efd1aa4150e3f4772fb2a0f14548
          servicePort: 80
        path: /
  tls:
  - hosts:
    - whoami.domain.tld
    secretName: whoami-domain-cert
status:
  loadBalancer:
    ingress:
    - ip: 192.168.xx.xx

Cert-Manager should take care of the rest. It should create a temporary nginx load balancing rule to handle the HTTP01 challenge and make a request to the ACME API to initiate it. If successful, it will store the certificate under the name specified, and it will show as a certificate under “Choose a certificate” and in Resources..Secrets..Certificates. Cert-Manager will even track the expiration date of the certificate and automatically renew it.
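To watch the progress, I can also check the Certificate resource that Cert-Manager creates from the ingress (its name matches the secretName), for example:

$ kubectl get certificate --namespace default
$ kubectl describe certificate whoami-domain-cert --namespace default

Once the HTTP01 challenge completes, the certificate should show as ready and the signed certificate will be stored in the whoami-domain-cert secret.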