This hacky work-around is no longer necessary, as I've figured out why
the players don't (always) get rediscovered when the server restarts.
It turns out, Avahi on the firewall was caching responses to the mDNS PTR
requests Music Assistant makes. Rather than forward the requests to the
other VLANs, it would respond with its cached information, but in a way
that Music Assistant didn't understand. Setting `cache-entries-max` to
`0` in `avahi-daemon.conf` on the firewall resolved the issue.
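For reference, the change amounts to something like this; `cache-entries-max` lives in the `[server]` section, and the reflector settings shown are an assumption based on the firewall forwarding mDNS between the VLANs:

```ini
# /etc/avahi/avahi-daemon.conf on the firewall (sketch)
[server]
cache-entries-max=0

[reflector]
# assumed: the firewall reflects mDNS between the VLANs
enable-reflector=yes
```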
This reverts commit 42a7964991.
I haven't fully determined why, but when the Music Assistant server
restarts, it marks the _shairport-sync_ players as offline and will not
allow playback to them. The only way I have found to work around this is
to restart the players after the server restarts. As that's pretty
cumbersome and annoying, I naturally want to automate it, so I've
created this rudimentary synchronization technique using _ntfy_: each
player listens for notifications on a specific topic, and upon receiving
one, tells _shairport-sync_ to exit. With the `Restart=` property
configured on the _shairport-sync.service_ unit, _systemd_ will restart
the service, which causes Music Assistant to discover the player again.
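A minimal sketch of the player side, assuming a self-hosted ntfy server and topic name (both placeholders) and using `systemctl kill` as one way of telling _shairport-sync_ to exit:

```sh
#!/bin/sh
# Run as a long-lived service on each player. Each message received on
# the topic means the Music Assistant server has restarted, so stop
# shairport-sync; with Restart= configured, systemd brings it back up
# and it re-announces itself for Music Assistant to discover.
ntfy subscribe ntfy.example.com/music-assistant-restarted \
    'systemctl kill shairport-sync.service'
```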
_Music Assistant_ is pretty straightforward to deploy, despite
upstream's apparent opinion otherwise. It just needs a small persistent
volume for its media index and customization. It does need to use the
host network namespace, though, in order to receive multicast
announcements from e.g. AirPlay players, as it doesn't have any way of
statically configuring them.
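Roughly, the interesting parts of the Deployment look like this; the image reference, mount path, and claim name are assumptions:

```yaml
spec:
  template:
    spec:
      hostNetwork: true          # receive mDNS/AirPlay multicast announcements
      containers:
        - name: music-assistant
          image: ghcr.io/music-assistant/server:latest  # assumed image
          volumeMounts:
            - name: data
              mountPath: /data   # media index and customization (assumed path)
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: music-assistant-data
```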
Jenkins needs to be able to patch the Deployment to trigger a restart
after it builds a new container image for _dch-webhooks_.
Note that this manifest must be applied on its own **without
Kustomize**. Kustomize seems to think the `dch-webhooks` in
`resourceNames` refers to the ConfigMap it manages and "helpfully"
renames it with the name suffix hash. It's _not_ the ConfigMap, but
there doesn't seem to be any way to tell Kustomize that.
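The Role in question boils down to something like this (the metadata name is illustrative); the `resourceNames` entry is what Kustomize mangles:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: jenkins-restart-dch-webhooks  # illustrative name
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    resourceNames: ["dch-webhooks"]   # the Deployment, not the ConfigMap
    verbs: ["get", "patch"]
```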
Without a node affinity rule, Kubernetes applies equal weight to the
"big" x86_64 nodes and the "small" aarch64 ones. Since we would really
rather Piper and Whisper _not_ run on a Raspberry Pi, we need the rule
to express this.
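A sketch of such a rule, keyed on the standard architecture label (the weight is arbitrary):

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]   # prefer the "big" x86_64 nodes
```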
As it turns out, although Home Assistant itself works perfectly fine on
a Raspberry Pi, Piper and Whisper do not. They are _much_ too slow to
respond to voice commands.
This reverts commit 32666aa628.
With the introduction of the two new Raspberry Pi nodes, which I intend
to use for anything that can run on aarch64, I'm eliminating the
`du5t1n.me/machine=raspberrypi` taint. It no longer makes sense, as the
only node that still has it is the Zigbee/ZWave controller, and having
dedicated taints for those roles is much clearer.
As it turns out, it's not possible to reuse a YAML anchor. At least in
Rust's `serde_yaml`, only the final definition of an anchor is used:
every alias, even one that appears before that final definition,
resolves to it. Thus, each application that refers to its own URL in its
match criteria needs a unique anchor.
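To illustrate (hostnames and structure are invented for the example), each entry now carries its own anchor so its alias resolves to its own URL:

```yaml
# Before, both entries used a single &url anchor and every *url alias
# resolved to whichever definition came last. Now, one anchor per app:
- url: &grafana_url https://grafana.example.com
  match:
    host: *grafana_url
- url: &miniflux_url https://miniflux.example.com
  match:
    host: *miniflux_url
```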
_Firefly III_ and _phpipam_ don't export any Prometheus metrics, so we
have to scrape them via the Blackbox Exporter.
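A sketch of the corresponding scrape job, using the usual blackbox-exporter relabeling pattern (hostnames and the exporter address are placeholders):

```yaml
- job_name: blackbox-http
  metrics_path: /probe
  params:
    module: [http_2xx]
  static_configs:
    - targets:
        - https://firefly.example.com
        - https://phpipam.example.com
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: blackbox-exporter:9115  # assumed exporter address
```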
Paperless-ngx only exposes metrics via Flower, but since Flower runs in
the same container as the main application, we can assume that if Flower
is unavailable, the application is as well.
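An alert on the Flower target therefore doubles as an alert for Paperless-ngx itself; roughly (the job label is an assumption):

```yaml
- alert: PaperlessNgxDown
  expr: up{job="paperless-flower"} == 0
  for: 5m
  annotations:
    summary: Flower metrics unreachable; Paperless-ngx is presumably down too
```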
The Kubernetes root CA certificate is stored in a ConfigMap named
`kube-root-ca.crt` in every namespace. The _host-provisioner_ needs to
be able to read this ConfigMap in order to prepare control plane nodes,
as it is used by HAProxy to check the health of the API servers running
on each node.
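The grant can stay narrow, along these lines (the Role name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-kube-root-ca  # illustrative name
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kube-root-ca.crt"]
    verbs: ["get"]
```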
We don't want to pull public container images that are already present
on the node. Doing so prevents pods from starting if there is any
connectivity issue with the upstream registry.
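In practice this just means setting the pull policy explicitly, e.g.:

```yaml
containers:
  - name: example                           # placeholder
    image: docker.io/library/nginx:1.27     # placeholder image
    imagePullPolicy: IfNotPresent           # only pull if the image isn't on the node
```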