I've created new worker nodes that are dedicated to running Longhorn
replicas. These nodes are tainted with the
`node-role.kubernetes.io/longhorn` taint, so no regular pods will be
scheduled there by default. Longhorn pods therefore need to be configured
to tolerate that taint and to be scheduled on nodes carrying the
similarly-named label.
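As a rough sketch (the node name is a placeholder and the `NoSchedule`
effect is an assumption, not taken from the actual manifests), the
dedicated nodes end up tainted and labeled along these lines:

```sh
# Placeholder node name; the NoSchedule effect is an assumption.
kubectl taint node worker-longhorn-1 node-role.kubernetes.io/longhorn:NoSchedule
kubectl label node worker-longhorn-1 node-role.kubernetes.io/longhorn=true
```

The Longhorn workloads then carry a matching toleration and a nodeSelector
on that label so that they land only on these nodes.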
Longhorn uses a special Secret resource to configure the backup target.
This Secret includes the credentials and CA certificate for accessing
the MinIO S3 service.
Longhorn must be configured to use this Secret by setting the
`backup-target-credential-secret` setting to
`minio-backups-credentials`.
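A minimal sketch of creating that Secret, with placeholder credentials and
endpoint (the key names follow Longhorn's convention for S3 credential
Secrets; only the Secret name comes from the text above):

```sh
# Placeholder values throughout; only the Secret name is taken from above.
kubectl create secret generic minio-backups-credentials -n longhorn-system \
  --from-literal=AWS_ACCESS_KEY_ID=longhorn \
  --from-literal=AWS_SECRET_ACCESS_KEY=changeme \
  --from-literal=AWS_ENDPOINTS=https://minio.example.com:9000 \
  --from-file=AWS_CERT=minio-ca.crt
```

The `backup-target-credential-secret` setting can then be pointed at this
Secret, e.g. with the same `kubectl get setting | jq | kubectl apply`
pattern shown further down.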
This commit adds resources for deploying the Home Assistant ecosystem
inside Kubernetes. Home Assistant itself and Mosquitto are ordinary Pods,
managed by StatefulSets, that can run on any node. ZWaveJS2MQTT and
Zigbee2MQTT, on the other hand, must run on a specific node (a Raspberry
Pi) to which their respective controllers are attached.
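For illustration only (the StatefulSet name, namespace, and hostname are
all assumptions), the pinning amounts to a nodeSelector on the pod
template, e.g.:

```sh
# Assumed names; the real manifests declare the nodeSelector directly
# rather than patching it in after the fact.
kubectl -n home-assistant patch statefulset zigbee2mqtt --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"raspberrypi"}}}}}'
```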
The Home Assistant UI is exposed externally via an Ingress resource.
The MQTT broker is also exposed externally, using the TCP proxy feature
of *ingress-nginx*. Additionally, the Zigbee2MQTT and ZWaveJS2MQTT
control panels are exposed via Ingress resources, but these are
protected by Authelia.
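The TCP proxying works by mapping an external port to a Service in a
ConfigMap that the controller reads via its `--tcp-services-configmap`
flag; a sketch with assumed names:

```sh
# Namespace, ConfigMap name, and Service name are assumptions.
kubectl create configmap tcp-services -n ingress-nginx \
  --from-literal=1883=home-assistant/mosquitto:1883
```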
Instead of using a static username/password and HTTP Basic
authentication for the Longhorn UI, we can now use Authelia via the
*nginx* auth subrequest functionality.
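With ingress-nginx, that boils down to the `auth-url`/`auth-signin`
annotations on the Ingress; a sketch, with the Ingress name and the
Authelia endpoints as assumptions:

```sh
# Assumed Ingress name, namespace, and Authelia URLs.
kubectl annotate ingress longhorn-ui -n longhorn-system \
  nginx.ingress.kubernetes.io/auth-url=http://authelia.authelia.svc.cluster.local/api/verify \
  nginx.ingress.kubernetes.io/auth-signin=https://auth.example.com
```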
I originally added the `du5t1n.me/storage` label to the x86_64 nodes and
configured Longhorn to run only on nodes with that label because I
thought that was the correct way to control where volume replicas are
stored. It turns out that this was incorrect, as it prevented Longhorn
from running on non-matching nodes entirely. Thus, any machine that was
not so labeled could not access any Longhorn storage volumes.
The correct way to limit where Longhorn stores volume replicas is to
enable the `create-default-disk-labeled-nodes` setting. With this
setting enabled, Longhorn will run on all nodes, but will not create
"disks" on them unless they have the
`node.longhorn.io/create-default-disk` label set to `true`. Nodes that
do not have "disks" will not store volume replicas, but will run the
other Longhorn components and can therefore access Longhorn volumes.
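Marking a node as eligible for a default disk is then just a matter of
adding that label (the node name here is a placeholder):

```sh
kubectl label node worker-longhorn-1 node.longhorn.io/create-default-disk=true
```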
Note that changing the "default settings" ConfigMap has no effect once
Longhorn has been deployed. On an existing installation, the setting has
to be updated explicitly:
```sh
kubectl get setting create-default-disk-labeled-nodes \
    -n longhorn-system -o json \
  | jq '.value="true"' \
  | kubectl apply -f -
```
I was originally going to use GlusterFS to provide persistent storage
for pods, but [Heketi][0], the component that provides the API for
the Kubernetes StorageClass, is in "deep maintenance" status and looks
to be practically dead. That made me wary of relying on it, so I went
looking for guidance on Reddit, which is how I discovered Longhorn.