We don't need to specify every single host individually.
Domain controllers, for example, are registered in DNS with SRV records.
Kubernetes nodes, of course, can be discovered using the Kubernetes API.
Both of these classes of nodes change frequently, so discovering them
dynamically is convenient.
Instead of routing iSCSI traffic from the Kubernetes network, through
the firewall, to the storage network, nodes now have a second network
adapter connected directly to the storage network. The nodes with
such an adapter are labelled `network.du5t1n.me/storage`, so we can pin
the Jenkins PersistentVolume to them via a node affinity rule.
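A minimal sketch of what the pinned PersistentVolume could look like;
the capacity, target portal, and IQN are placeholders, but the label
key matches the one described above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: '[fd00::1]:3260'  # placeholder address
    iqn: iqn.2000-01.com.synology:placeholder
    lun: 1
    fsType: ext4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            # Only nodes with the storage network adapter carry
            # this label
            - key: network.du5t1n.me/storage
              operator: Exists
```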
Using a volume claim template to define the persistent volume claim for
the Redis pod has two advantages: first, it enables moving to clustered
Redis, should that become necessary, and second, it makes deleting and
recreating the volume easier in the case of data corruption. Simply
scale down the StatefulSet to 0, delete the PVC, and scale the
StatefulSet back up.
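A sketch of the relevant part of the StatefulSet; the storage size is
an assumption:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
          volumeMounts:
            - name: data
              mountPath: /data
  # Each replica gets its own PVC, named e.g. data-redis-0
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
```

Recovery is then just `kubectl scale sts redis --replicas=0`,
`kubectl delete pvc data-redis-0`, and
`kubectl scale sts redis --replicas=1`.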
By default, step-ca issues certificates that are valid for only one day.
This means clients need multiple renewal attempts scheduled throughout
the day; otherwise, missing one could mean having their certificates
expire. This is unnecessary, and not even possible in all cases, so
let's make the default validity period longer and avoid the issue.
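The validity period is controlled by the claims in `ca.json`; claims
set under `authority` apply to all provisioners. Something like this
(the durations here are illustrative, not necessarily the values I
settled on):

```json
{
  "authority": {
    "claims": {
      "defaultTLSCertDuration": "720h",
      "maxTLSCertDuration": "2160h"
    }
  }
}
```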
Since I added an IPv6 ULA prefix to the "main" VLAN (to allow
communicating with the Synology directly), the domain controllers now
have AAAA records. This causes the `sambadc` scrape job to fail because
Blackbox Exporter prefers IPv6 by default, but Kubernetes pods do not
have IPv6 addresses.
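The fix is to force IPv4 in the module configuration. A sketch,
assuming a TCP probe (the option is the same for the other probers):

```yaml
modules:
  sambadc:
    prober: tcp
    tcp:
      # Use A records even when AAAA records exist, and do not
      # fall back to IPv6
      preferred_ip_protocol: ip4
      ip_protocol_fallback: false
```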
Managing the Jenkins volume with Longhorn has become increasingly
problematic. Because of its large size, whenever Longhorn needs to
rebuild/replicate it (which happens often for no apparent reason), it
can take several hours. While the synchronization is happening, the
entire cluster suffers from degraded performance.
Instead of using Longhorn, I've decided to try storing the data directly
on the Synology NAS and expose it to Kubernetes via iSCSI. The Synology
offers many of the same features as Longhorn, including
snapshots/rollbacks and backups. Using the NAS allows the volume to be
available to any Kubernetes node, without keeping multiple copies of
the data.
In order to expose the iSCSI service on the NAS to the Kubernetes nodes,
I had to make the storage VLAN routable. I kept it as IPv6-only,
though, as an extra precaution against unauthorized access. The
firewall only allows nodes on the Kubernetes network to access the NAS
via iSCSI.
I originally tried proxying the iSCSI connection via the VM hosts;
however, this failed because of how iSCSI target discovery works. The
provided "target host" is really only used to identify available LUNs;
follow-up communication is done with the IP address returned by the
discovery process. Since the NAS would return its IP address, which
differed from the proxy address, the connection would fail. Thus, I
resorted to reconfiguring the storage network and connecting directly
to the NAS.
To migrate the contents of the volume, I temporarily created a PVC with
a different name and bound it to the iSCSI PersistentVolume. Using a
pod with both the original PVC and the new PVC mounted, I used `rsync`
to copy the data. Once the copy completed, I deleted the Pod and both
PVCs, then created a new PVC with the original name (i.e. `jenkins`),
bound to the iSCSI PV. While doing this, Longhorn, for some reason,
kept recreating the PVC whenever I deleted it, no matter how I
requested the deletion: whether I deleted the PV, the PVC, or the
Volume, via either the Kubernetes API or the Longhorn UI, it was
recreated almost immediately. Fortunately, there was enough of a delay
between the deletion and the recreation that I was able to create the
new PVC manually. Once I did that, Longhorn seemed to give up.
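The copy itself was done with a one-off pod along these lines (the
temporary PVC name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-migrate
spec:
  restartPolicy: Never
  containers:
    - name: rsync
      image: registry.fedoraproject.org/fedora
      command:
        - sh
        - -c
        # -aHAX: preserve permissions, hard links, ACLs, and xattrs
        - dnf install -y rsync && rsync -aHAX /old/ /new/
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: jenkins          # the original Longhorn PVC
    - name: new
      persistentVolumeClaim:
        claimName: jenkins-iscsi    # placeholder name for the new PVC
```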
Kitchen v0.5 brings a few changes that affect the deployment:
* The Bored Board is now backed by MQTT
* The pool temperature is now displayed in the weather pane
* The container image is now based on Fedora and includes its own time
zone database and root CA bundle
* The websocket server prevents the process from stopping correctly
unless the graceful shutdown feature of `uvicorn` is disabled
[fleetlock] is an implementation of the Zincati FleetLock reboot
coordination protocol. It only works for machines that are Kubernetes
nodes, but it does enable safe rolling updates for those machines.
Specifically, when a node acquires a lock (backed by a Kubernetes
Lease), fleetlock cordons the node and evicts pods from it. After the
node has rebooted into the new version of Fedora CoreOS, fleetlock
uncordons the node and releases the lock.
[fleetlock]: https://github.com/poseidon/fleetlock
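For reference, the protocol itself is tiny: Zincati POSTs a JSON body
like the one below (with a `fleet-lock-protocol: true` header) to the
server's `/v1/pre-reboot` endpoint to acquire the lock, and to
`/v1/steady-state` to release it. The ID here is an example machine ID:

```json
{
  "client_params": {
    "group": "default",
    "id": "c988d2509fdf5cdcbed39037c56df7db"
  }
}
```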
Vaultwarden has started prompting for the master password occasionally
when syncing the vault. Thus, we need to make sure the password is
available in the _sync_ container, by mounting the secret and providing
the `PINENTRY_PASSWORD_FILE` environment variable.
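Something like this in the pod spec (the secret name and mount path
are assumptions):

```yaml
containers:
  - name: sync
    env:
      # Points pinentry at the mounted master password file
      - name: PINENTRY_PASSWORD_FILE
        value: /run/secrets/vaultwarden/master-password
    volumeMounts:
      - name: master-password
        mountPath: /run/secrets/vaultwarden
        readOnly: true
volumes:
  - name: master-password
    secret:
      secretName: vaultwarden-master-password
```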
Just having the alert name and group name in the ntfy notification is
not enough to indicate what the problem is, as some alerts can
generate notifications for many reasons. In the email notifications
Alertmanager sends by default, the values (but not the keys) of all
labels are included in the subject, so we will reproduce that here.
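Roughly, the title template becomes something like this; the
surrounding config keys are from memory of the bridge's example
configuration and may not match exactly:

```yaml
notification:
  templates:
    # Render every label *value* (but no keys) into the title
    title: '{{ range $k, $v := .Labels }}{{ $v }} {{ end }}'
```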
I don't like having alerts sent by e-mail. Since I don't get e-mail
notifications on my watch, I often do not see alerts for quite some
time. They are also much harder to read in an e-mail client (Fastmail
web and K-9 Mail both display them poorly). I would much rather have
them delivered via _ntfy_, just like all the rest of the ephemeral
notifications I receive.
Fortunately, it is easy enough to integrate Alertmanager and _ntfy_
using the webhook notifier in Alertmanager. Since _ntfy_ does not
natively support the Alertmanager webhook API, though, a bridge is
necessary to translate from one data format to the other. There are a
few options for this bridge, but I chose
[alexbakker/alertmanager-ntfy][0] because it looked the most complete
while also having the simplest configuration format. Sadly, it does not
expose any Prometheus metrics itself, and since it's deployed in the
_victoria-metrics_ namespace, it needs to be explicitly excluded from
the VMAgent scrape configuration.
[0]: https://github.com/alexbakker/alertmanager-ntfy
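On the Alertmanager side, the integration is just a webhook receiver
pointing at the bridge (the Service name, port, and path here are
assumptions):

```yaml
route:
  receiver: ntfy
receivers:
  - name: ntfy
    webhook_configs:
      - url: http://alertmanager-ntfy.victoria-metrics.svc:8000/hook
```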
Although most libraries support Ed25519 signatures for X.509
certificates, Firefox does not. This means that any certificate signed
by DCH CA R3 cannot be verified by the browser and thus will always
present a certificate error.
I want to migrate internal services that do not need certificates
that are trusted by default (i.e. they are only accessed
programmatically or only I use them in the browser) back to using an
internal CA instead of the public *pyrocufflink.net* wildcard
certificate. Certificates for applications like Frigate and UniFi
Network, however, need to be signed by a CA that the browser will
trust, so an Ed25519 certificate is inappropriate for them. Thus, I've
decided to migrate back to DCH CA R2, which does not use Ed25519 and
can therefore be trusted by Firefox, etc.
The *hlcforms* application handles form submissions for the Hatch
Learning Center website. It has various features for Tabitha that are
only accessible internally, but the form submission handler itself of
course needs to be accessible anonymously.
A recent version of *Authelia* added a dark theme. Setting the `theme`
option to `auto` enables it when the user agent has the "prefers dark
mode" hint enabled.
Patroni, a component of the *postgres operator*, exports metrics about
the PostgreSQL database servers it manages. Notably, it provides
information about the current transaction log location for each server.
This allows us to monitor and alert on the health of database replicas.
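A sketch of the scrape job, assuming the Zalando operator's
`application=spilo` pod label and Patroni's default REST API port:

```yaml
- job_name: patroni
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only the operator-managed database pods
    - source_labels: [__meta_kubernetes_pod_label_application]
      regex: spilo
      action: keep
    # Patroni's REST API (including /metrics) listens on port 8008
    - source_labels: [__meta_kubernetes_pod_ip]
      target_label: __address__
      replacement: '$1:8008'
```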
The *promtail* job scrapes metrics from all the hosts running Promtail.
The static targets are Fedora CoreOS nodes that are not part of the
Kubernetes cluster.
The relabeling rules ensure that both the static targets and the
targets discovered via the Kubernetes Node API use the FQDN of the host
as the value of the *instance* label.
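Roughly like this; the first rule strips the port from the static
targets, and the second uses the node name (which is already the FQDN
on this cluster) for discovered targets:

```yaml
relabel_configs:
  # Static targets are listed by FQDN; drop the port
  - source_labels: [__address__]
    regex: '(.+):\d+'
    target_label: instance
  # Node-discovered targets: use the node name. This label is empty
  # for static targets, so the rule is skipped for them.
  - source_labels: [__meta_kubernetes_node_name]
    regex: '(.+)'
    target_label: instance
```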
Running Promtail in a pod controlled by a DaemonSet allows it to access
the Kubernetes API via a ServiceAccount token. Since it needs the API
to discover the Pods running on the current node and find their log
files, this makes the authentication process a lot simpler.
I discovered today that if anonymous Grafana users have Viewer
permission, they can use the Datasource API to make arbitrary queries
to any backend, even if they cannot access the Explore page directly.
This is documented ([issue #48313][0]) as expected behavior.
I don't really mind giving anonymous access to the Victoria Metrics
datasource, but I definitely don't want anonymous users to be able to
make Loki queries and view log data. Since Grafana Datasource
Permissions is limited to Grafana Enterprise and not available in
the open source version of Grafana, the official recommendation from
upstream is to use a separate Organization for the Loki datasource.
Unfortunately, this would preclude having dashboards that have graphs
from both data sources. Although I don't have any of those right now, I
like the idea and may build some eventually.
Fortunately, I discovered the `send_user_header` Grafana configuration
option. With this enabled, Grafana will send an `X-Grafana-User` header
with the username of the user on whose behalf it is making a request to
the backend. If the user is not logged in, it does not send the header.
Thus, we can detect the presence of this header on the backend and
refuse to serve query requests if it is missing.
[0]: https://github.com/grafana/grafana/issues/48313
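Enabling it is one setting; with the Helm chart, for example, it maps
into `grafana.ini` like this:

```yaml
grafana.ini:
  dataproxy:
    # Send X-Grafana-User to backends for logged-in users
    send_user_header: true
```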
Usually, Grafana data sources are configured using its web GUI. When
setting up a data source that requires TLS client authentication, the
client certificate and private key have to be pasted into the form.
For certificates that renew frequently, this method would require
frequent manual effort. Fortunately, Grafana supports defining data
sources via its "provisioning" mechanism, reading the configuration
from YAML files on the filesystem.
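A sketch of the provisioned Loki data source; the URL and file paths
are placeholders:

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: https://loki.example.com
    jsonData:
      tlsAuth: true
    secureJsonData:
      # $__file{} reads the secret material from files on disk
      # instead of embedding it in this file
      tlsClientCert: $__file{/etc/grafana/certs/tls.crt}
      tlsClientKey: $__file{/etc/grafana/certs/tls.key}
```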
The Loki CA is used to issue client certificates for Grafana Loki. This
_cert-manager_ ClusterIssuer will allow applications running in
Kubernetes (e.g. Grafana) to request a Certificate that they can use to
access the Loki HTTP API.
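A hypothetical Certificate for Grafana, assuming the ClusterIssuer is
named `loki-ca`:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: grafana-loki-client
  namespace: grafana
spec:
  secretName: grafana-loki-client-tls
  commonName: grafana
  usages:
    - client auth
  issuerRef:
    name: loki-ca
    kind: ClusterIssuer
```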
I never ended up using _Step CA_ for anything, since I was initially
focused on the SSH CA feature and I was unhappy with how it worked
(which led me to write _SSHCA_). I didn't think about it much until I
was working on deploying Grafana Loki. For that project, I wanted to
use a certificate signed by a private CA instead of the wildcard
certificate for _pyrocufflink.blue_. So, I created *DCH CA R3* for that
purpose. Then, for some reason, I used the exact same procedure to
fetch the certificate from Kubernetes as I had set up for the
_pyrocufflink.blue_ wildcard certificate, as used by Frigate. This of
course defeated the purpose, since I could have just as easily used
the wildcard certificate in that case.
When I discovered that Grafana Loki expects to be deployed behind a
reverse proxy in order to implement access control, I took the
opportunity to reevaluate the certificate issuance process. Since a
reverse proxy is required to implement the access control I want (anyone
can push logs but only authenticated users can query them), it made
sense to choose one with native support for requesting certificates via
ACME. This would eliminate the need for `fetchcert` and the
corresponding Kubernetes API token. Thus, I ended up deciding to
redeploy _Step CA_ with the new _DCH CA R3_ for this purpose.
Grafana Loki is hosted on a VM named *loki0.pyrocufflink.blue*. It runs
Fedora CoreOS, so in addition to scraping Loki itself, we need to scrape
_collectd_ and _Zincati_ as well.
Apparently, I never bothered to check that the Kitchen HUD server was
actually fetching data from Victoria Metrics when I updated it before; I
only verified that the Unauthorized errors in the `vmselect` log
went away. They did, but only because the Kitchen server was now
failing to contact `vmselect` at all.
I did not realize the batteries on the garage door tilt sensors had
died. Adding alerts for various sensor batteries should help keep me
better informed.
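A hypothetical rule to start with; the metric name and threshold are
assumptions and depend on how the sensors' battery levels are
exported:

```yaml
groups:
  - name: sensor-batteries
    rules:
      - alert: SensorBatteryLow
        # Metric name is a placeholder; adjust to match the exporter
        expr: sensor_battery_percent < 20
        for: 1h
        annotations:
          summary: '{{ $labels.entity }} battery is low'
```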