Nextcloud is the first “production” service to be deployed. Everything else has been building the foundation for Nextcloud and the services which follow it - Mailcow, OpenPGP, server hardware, Proxmox, NFS and iSCSI storage, RancherOS, Kubernetes, LDAP, and Keycloak SSO.

Nextcloud will utilize all of the patterns that I’ve learned so far with a new wrinkle - it will use PHP-FPM to perform efficiently under load. PHP-FPM (FastCGI Process Manager) runs PHP in a pool of persistent worker processes, which eliminates the overhead of creating a separate process for each request. My experience with PHP has always been compiling it as a module for Apache, but PHP-FPM is actually a server in itself that interfaces with any web server which supports FastCGI.

I am using the fpm-alpine tag on the Nextcloud Docker image. As a reminder, it’s recommended to deploy a specific version tag rather than latest so that upgrades can be planned and tested. I started using GitHub’s Watch feature to turn on notifications to let me know when a new release is published for the Docker images I’m using. I really should be utilizing a private image registry for extra security, but that’s not something I want to tackle just yet.

Nextcloud FPM will listen on port 9000. I will need a web server which can handle both communication with FPM as well as serving static content from a shared directory. I will use nginx for this purpose, but nginx-ingress is not designed to serve static content or talk to FPM, so it will need a dedicated nginx instance. That means:

nginx-ingress:443 -> nginx:80 -> nextcloud-fpm:9000 -> database:3306
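The middle hop of that chain is the interesting one: nginx serves static files itself and forwards PHP requests to FPM over FastCGI. A minimal sketch of that config might look like the following - this is illustrative only, not the config I actually deployed, and the `nextcloud-fpm` service name and web root are assumptions:

```nginx
server {
    listen 80;
    # Web root shared with the Nextcloud FPM container via a common volume
    root /var/www/html;
    index index.php;

    # Hand .php requests to the FPM pool over FastCGI
    location ~ \.php(?:$|/) {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # "nextcloud-fpm" is an assumed k8s service name resolving to the FPM pod
        fastcgi_pass nextcloud-fpm:9000;
    }

    # Everything else is served as static content
    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}
```

The key point is that `fastcgi_pass` speaks the FastCGI protocol to port 9000 rather than proxying HTTP, which is why a plain ingress controller can't fill this role.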


I debated setting up a single, replicated database service for the entire cluster to share versus a separate database instance for each service. I decided that it made more sense to keep the services as decoupled as possible through the use of namespaces. For example, I did this with separate keycloak and ldap namespaces. Keycloak uses LDAP, but so will other services, so LDAP gets its own namespace and can be managed separately. Keycloak is the only service which uses its database, so it gets a dedicated database instance in the same namespace.

To get started, I now follow the same pattern and create a nextcloud namespace with a nextclouddb service using MariaDB. In fact, I will demonstrate the Infrastructure-as-Code nature of Kubernetes by exporting the keycloak services as YAML, modifying them, and importing them to create the Nextcloud resources.

First, the namespace:

$ kubectl get namespace keycloak -o yaml > keycloak-ns.yaml
$ cp keycloak-ns.yaml nextcloud-ns.yaml
$ vi nextcloud-ns.yaml

I edited the resulting file to change the relevant fields in the spec section of the file and removed everything else so that it was just a barebones manifest with apiVersion, kind, metadata, and spec sections.

$ cat nextcloud-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nextcloud
$ kubectl create -f nextcloud-ns.yaml
namespace/nextcloud created

After I did this, I noticed this note in the Rancher documentation on namespaces:

> **Note:** If you create a namespace with `kubectl`, it may be unusable because `kubectl` doesn’t require your new namespace to be scoped within a project that you have access to. If your permissions are restricted to the project level, it is better to create a namespace through Rancher to ensure that you will have permission to access the namespace.

This wasn't really a problem because there were no resource limits placed on the default project and I have full access to the cluster. The namespace showed up in the Rancher GUI as not belonging to any project, so I just assigned it to the default project.

At some point, I will want to take the work I've done to create these services and put it into a Git repository, and I found a nifty script which I can use to export and save k8s manifests:

```bash
#!/usr/bin/env bash

# Export every resource type that has at least one object into <type>.yaml
while read -r line; do
  output=$(kubectl get "$line" --all-namespaces -o yaml 2>/dev/null | grep '^items:')
  if ! grep -q '\[\]' <<< "$output"; then
    echo -e "\n======== $line manifests ========\n"
    kubectl get "$line" --all-namespaces -o yaml >> "$line.yaml"
  fi
done < <(kubectl api-resources -o name)
```
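The trick the script relies on is that `kubectl get` prints `items: []` when a resource type has no objects, so grepping for the literal brackets filters out empty types. A stand-alone illustration of just that check (no cluster required):

```shell
# kubectl prints "items: []" for an empty resource type.
# Apply the same grep test the script uses to both cases:
for output in "items: []" "items:"; do
  if ! grep -q '\[\]' <<< "$output"; then
    echo "has objects - export"    # printed for "items:"
  else
    echo "empty list - skip"       # printed for "items: []"
  fi
done
```

Note that the brackets must be escaped (`'\[\]'`); an unescaped `[]` is an invalid bracket expression and makes grep error out.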

I modified the script to save a single namespace's manifests:

```bash
#!/usr/bin/env bash

# Export each namespaced resource type in namespace $1 into <type>.yaml
while read -r line; do
  output=$(kubectl get "$line" --namespace "$1" -o yaml 2>/dev/null | grep '^items:')
  if ! grep -q '\[\]' <<< "$output"; then
    echo -e "\n======== $line manifests ========\n"
    kubectl get "$line" --namespace "$1" -o yaml >> "$line.yaml"
  fi
done < <(kubectl api-resources -o name --namespaced=true)
```

Following this advice on creating manifests by hand, I use a combination of the "-o yaml" and "--dry-run" options to create a basic manifest which I can then expand using the relevant pieces from the keycloakdb manifests:

* persistentvolumes.yaml
* persistentvolumeclaims.yaml
* secrets.yaml
* deployments.apps.yaml

$ kubectl create deployment nextclouddb --image=mariadb:10.4.12-bionic -o yaml --dry-run > nextclouddb.yaml

Putting it all into a single file, here is the final manifest for nextclouddb.yaml:

```yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    creationTimestamp: null
    labels:
      app: nextclouddb
    name: nextclouddb
    namespace: nextcloud
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nextclouddb
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: nextclouddb
      spec:
        containers:
        - envFrom:
          - secretRef:
              name: nextclouddb-secret
              optional: false
          ports:
          - containerPort: 3306
            name: mariadb
            protocol: TCP
          image: mariadb:10.4.12-bionic
          imagePullPolicy: Always
          name: mariadb
          resources: {}
          securityContext:
            allowPrivilegeEscalation: false
            capabilities: {}
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          stdin: true
          volumeMounts:
          - mountPath: /var/lib/mysql
            name: dbvol
        dnsPolicy: ClusterFirst
        imagePullSecrets:
        - name: dockerhub
        restartPolicy: Always
        schedulerName: default-scheduler
        terminationGracePeriodSeconds: 30
        volumes:
        - name: dbvol
          persistentVolumeClaim:
            claimName: nextclouddb-vol
  status: {}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nextclouddb-vol
    namespace: nextcloud
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 10Gi
    storageClassName: ""
    volumeMode: Filesystem
    volumeName: nextclouddb
  status:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 10Gi
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: nextclouddb
    namespace: nextcloud
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 10Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: nextclouddb-vol
      namespace: nextcloud
    nfs:
      path: /Container/nextclouddb
      server: 192.168.xx.xx
    persistentVolumeReclaimPolicy: Retain
    volumeMode: Filesystem
  status: {}
- apiVersion: v1
  data:
    MYSQL_DATABASE:
    MYSQL_PASSWORD:
    MYSQL_ROOT_PASSWORD:
    MYSQL_USER:
  kind: Secret
  metadata:
    name: nextclouddb-secret
    namespace: nextcloud
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

The secret values are created using base64:

$ echo -n password | base64
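For example, using a placeholder value (not my real credentials) - the `-n` flag matters here, because without it echo appends a newline that gets baked into the encoded secret:

```shell
# Encode a placeholder secret value; -n suppresses the trailing newline.
echo -n nextcloud | base64            # prints: bmV4dGNsb3Vk
# Round-trip to verify the encoding:
echo -n nextcloud | base64 | base64 -d    # prints: nextcloud
```

Alternatively, `kubectl create secret generic` with `--from-literal` and `--dry-run -o yaml` will do the base64 encoding for you and emit a ready-made Secret manifest.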

Now let's try the dry-run to make sure everything is formatted correctly:

$ kubectl create -f nextclouddb.yaml --dry-run
deployment.apps/nextclouddb created (dry run)
persistentvolumeclaim/nextclouddb-vol created (dry run)
persistentvolume/nextclouddb created (dry run)
secret/nextclouddb-secret created (dry run)

Now for real:

$ kubectl create -f nextclouddb.yaml
deployment.apps/nextclouddb created
persistentvolumeclaim/nextclouddb-vol created
persistentvolume/nextclouddb created
secret/nextclouddb-secret created

Checking in the Rancher GUI, I found the nextclouddb workload running in the nextcloud namespace using the nextclouddb-secret, and it successfully mounted and used the directory on the NAS that I specified. There appears to be a fully functioning MariaDB service!

> UPDATE: When finishing the setup of Nextcloud, I was getting an error that the service nextclouddb.nextcloud.cluster.local was not found. I discovered I needed to create an entry under Resources > Service Discovery which points to the nextclouddb workload on port 3306. I assume this is a step done automatically when deploying a new workload through the GUI as opposed to using kubectl. This is another resource type which needs to be added into nextclouddb.yaml.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nextclouddb
  namespace: nextcloud
spec:
  clusterIP: None
  ports:
  - name: mariadb
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    workloadID_nextclouddb: "true"
  sessionAffinity: None
  type: ClusterIP
```

In the next post, I'll get the Nextcloud FPM service up and running.