
10 Commits (master)

Author SHA1 Message Date
Dustin 0a6086eb2a longhorn: Run on dedicated nodes
I've created new worker nodes that are dedicated to running Longhorn
replicas.  These nodes are tainted with the
`node-role.kubernetes.io/longhorn` taint, so no regular pods will be
scheduled there by default.  Longhorn pods thus need to be configured
to tolerate that taint, and to be scheduled on nodes with the
similarly-named label.
2024-11-21 22:59:14 -06:00
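For illustration, the toleration and node selector described above would look roughly like this in a Longhorn component's pod spec; the taint effect and the label key/value are assumptions, since the commit only names the taint key and says the label is "similarly-named".

```yaml
# Sketch only: how a Longhorn pod spec might tolerate the dedicated-node taint
# and pin itself to the matching label.  Effect and label value are assumed.
spec:
  tolerations:
    - key: node-role.kubernetes.io/longhorn
      operator: Exists
      effect: NoSchedule            # assumed; the commit does not state the effect
  nodeSelector:
    node-role.kubernetes.io/longhorn: "true"   # assumed label key/value
```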
Dustin 145fa6286e storage: Add Longhorn backup target secret
Longhorn uses a special Secret resource to configure the backup target.
This secret includes the credentials and CA certificate for accessing
the MinIO S3 service.

Longhorn must be configured to use this Secret by setting the
`backup-target-credential-secret` setting to
`minio-backups-credentials`.
2024-10-13 14:03:49 -05:00
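The Secret that setting points at would be shaped roughly as follows; the key names follow Longhorn's documented convention for S3-compatible backup targets, and all values here are placeholders rather than anything from the repo.

```yaml
# Sketch of the backup-target credential Secret.  Key names are the ones
# Longhorn documents for S3-compatible targets; values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: minio-backups-credentials
  namespace: longhorn-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <minio access key>
  AWS_SECRET_ACCESS_KEY: <minio secret key>
  AWS_ENDPOINTS: https://minio.example.com:9000     # placeholder endpoint
  AWS_CERT: |
    -----BEGIN CERTIFICATE-----
    ...placeholder CA certificate...
    -----END CERTIFICATE-----
```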
Dustin a7eac14d39 home-assistant: Deploy Home Assistant
This commit adds resources for deploying the Home Assistant ecosystem
inside Kubernetes.  Home Assistant itself, as well as Mosquitto, are
just normal Pods, managed by StatefulSets, that can run anywhere.
ZWaveJS2MQTT and Zigbee2MQTT, on the other hand, have to run on a
special node (a Raspberry Pi), where the respective controllers are
attached.

The Home Assistant UI is exposed externally via an Ingress resource.
The MQTT broker is also exposed externally, using the TCP proxy feature
of *ingress-nginx*.  Additionally, the Zigbee2MQTT and ZWaveJS2MQTT
control panels are exposed via Ingress resources, but these are
protected by Authelia.
2023-07-24 17:53:58 -05:00
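The TCP proxying for the MQTT broker works by mapping an exposed port to a Service in a ConfigMap that the *ingress-nginx* controller is pointed at with `--tcp-services-configmap`; a sketch, with the ConfigMap name, namespace, and Service name assumed rather than taken from the repo:

```yaml
# Sketch of an ingress-nginx TCP services ConfigMap.  Entries map an external
# port to "<namespace>/<service>:<port>".  All names here are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services              # whatever --tcp-services-configmap points at
  namespace: ingress-nginx
data:
  "1883": "home-assistant/mosquitto:1883"
```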
Dustin 4952e6f278 storage: Upgrade Longhorn to v1.4.1 2023-04-24 23:21:55 -05:00
Dustin df12690958 storage: Use Authelia for Longhorn UI auth
Instead of using a static username/password and HTTP Basic
authentication for the Longhorn UI, we can now use Authelia via the
*nginx* auth subrequest functionality.
2023-01-13 21:33:14 -06:00
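With the auth subrequest approach, protection is applied through annotations on the Longhorn UI's Ingress; the annotation keys below are standard *ingress-nginx* ones, while the Authelia hostname and verify endpoint are placeholders, not values from the repo.

```yaml
# Sketch of auth-subrequest annotations on the Longhorn UI Ingress.
# Hostname and endpoint paths are placeholders.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://authelia.example.com/api/verify"
    nginx.ingress.kubernetes.io/auth-signin: "https://authelia.example.com/?rd=$scheme://$host$request_uri"
```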
Dustin 6df6e552b7 longhorn: Remove node selector labels
I originally added the `du5t1n.me/storage` label to the x86_64 nodes and
configured Longhorn to only run on nodes with those labels because I
thought that was the correct way to control where volume replicas are
stored.  It turns out that this was incorrect, as it prevented Longhorn
from running on non-matching nodes entirely.  Thus, any machine that was
not so labeled could not access any Longhorn storage volumes.

The correct way to limit where Longhorn stores volume replicas is to
enable the `create-default-disk-labeled-nodes` setting.  With this
setting enabled, Longhorn will run on all nodes, but will not create
"disks" on them unless they have the
`node.longhorn.io/create-default-disk` label set to `true`.  Nodes that
do not have "disks" will not store volume replicas, but will run the
other Longhorn components and can therefore access Longhorn volumes.

Note that changing the "default settings" ConfigMap does not change the
setting once Longhorn has been deployed.  To update the setting on an
existing installation, the setting has to be changed explicitly:

```sh
kubectl get setting -n longhorn-system -o json \
    create-default-disk-labeled-nodes \
    | jq '.value="true"' \
    | kubectl apply -f -
```
2022-10-11 21:58:43 -05:00
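For completeness, the label from the commit message as it would appear on a Node that should hold replica disks (the node name is a placeholder):

```yaml
# Sketch: marking a node as eligible for a default Longhorn disk.
apiVersion: v1
kind: Node
metadata:
  name: worker-1                  # placeholder node name
  labels:
    node.longhorn.io/create-default-disk: "true"
```

The same label can also be applied to an existing node with `kubectl label node`.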
Dustin 5f2aaefc35 storage: Set default storage class
Setting a default storage class allows PersistentVolumeClaims to be declared
without specifying a storage class in each object spec.
2022-08-23 21:21:54 -05:00
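Marking a class as the cluster default is done with the standard `storageclass.kubernetes.io/is-default-class` annotation; a sketch, assuming Longhorn's stock class name and CSI provisioner:

```yaml
# Sketch of a default StorageClass for Longhorn.  Name and provisioner are
# Longhorn's usual defaults, assumed here rather than read from the repo.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
```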
Dustin 76875e3dbf storage: Show how to create admin password secret 2022-08-23 21:21:43 -05:00
Dustin 8f6373fb70 storage: Fix typo in node selector 2022-08-23 21:21:22 -05:00
Dustin 9b86a117ef storage: Add manifest for Longhorn
I was originally going to use GlusterFS to provide persistent storage
for pods, but [Heketi][0], the component that provides the API for
the Kubernetes StorageClass, is in "deep maintenance" status and looks
to be practically dead.  I was a bit afraid to try to use it because of
that, and went looking for guidance on Reddit, which is how I discovered
Longhorn.
2022-07-31 00:57:53 -05:00