Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_. Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request. Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them. Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate. The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection in order to trigger the
reload: the notification runs a script that creates an empty file in the
server's state directory; systemd watches for that file to appear and,
when it does, starts another service unit that performs the actual
reload and then removes the trigger file.
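For illustration, the moving parts might look something like this; the
domain, script, unit, and path names here are all hypothetical, not
necessarily what the role actually uses:

```apache
# Apache configuration fragment (names hypothetical)
MDomain www.example.net
MDNotifyCmd /usr/local/bin/md-notify
```

The systemd side pairs a path unit with a oneshot service of the same
name:

```ini
# md-reload.path: watch for the trigger file the notify script creates
[Path]
PathExists=/run/httpd-md/reload-needed

[Install]
WantedBy=multi-user.target

# md-reload.service: reload Apache, then clear the trigger file
[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl reload httpd.service
ExecStartPost=/usr/bin/rm -f /run/httpd-md/reload-needed
```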
Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files. Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
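A drop-in fragment for the control plane nodes might look roughly like
this (the node names are hypothetical, and the bind port is moved off
6443 so the proxy can coexist with a local API server):

```
# hypothetical drop-in, e.g. conf.d/k8s-api.cfg
frontend k8s-api
    bind *:8443
    mode tcp
    default_backend k8s-api

backend k8s-api
    mode tcp
    balance roundrobin
    server cp1 cp1.pyrocufflink.blue:6443 check
    server cp2 cp2.pyrocufflink.blue:6443 check
    server cp3 cp3.pyrocufflink.blue:6443 check
```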
Control plane nodes will now run _keepalived_, to provide a "floating"
IP address that is assigned to one of the nodes at a time. This
address (172.30.0.169) is now the target of the DNS A record for
_kubernetes.pyrocufflink.blue_, so clients will always communicate with
the server that currently holds the floating address, whichever that may
be.
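The _keepalived_ side of this is small; a sketch, with the interface
name, router ID, and priority chosen arbitrarily:

```
# sketch only; values other than the address are arbitrary
vrrp_instance k8s_api {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        172.30.0.169
    }
}
```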
I was originally inspired by the official Kubernetes [High Availability
Considerations][0] document when designing this. At first, I planned to
deploy _keepalived_ and HAProxy as DaemonSets on the control plane
nodes, but this ended up being somewhat problematic whenever all of the
control plane nodes would go down at once, as the _keepalived_ and
HAProxy pods would not get scheduled, and thus no clients could
communicate with the API servers.
[0]: 9d7cfab6fe/docs/ha-considerations.md
The man page for _containers-certs.d(5)_ says that subdirectories of
`/etc/containers/certs.d` should be named `host:port`, however, this is
a bit misleading. It seems, instead, that the directory name must match
the registry name exactly as it appears in the image reference: for a
server that supports HTTPS on port 443, where the port is omitted from
the image name, it must also be omitted from the `certs.d` subdirectory
name.
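For example (registry name hypothetical): for images pulled as
`registry.example.net/myapp:latest` from a registry serving HTTPS on
port 443, the CA certificate belongs at:

```
/etc/containers/certs.d/registry.example.net/ca.crt
# not .../certs.d/registry.example.net:443/ca.crt
```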
One major weakness with Ansible's "lookup" plugins is that they are
evaluated _every single time they are used_, even indirectly. This
means, for example, that a shell command could be run many times,
potentially returning different values each time, or that an expensive
calculation could be repeated even though it always produces the same
result. Ansible does not have a built-in way
to cache the result of a `lookup` or `query` call, so I created this
one. It's inspired by [ansible-cached-lookup][0], which didn't actually
work and is apparently unmaintained. Instead of using a hard-coded
file-based caching system, however, my plugin uses Ansible's
configuration and plugin infrastructure to store values with any
available cache plugin.
Although looking up the _pyrocufflink.net_ wildcard certificate with the
Kubernetes API isn't particularly expensive by itself right now, I can
envision several other uses that may be. Having this plugin available
could speed up future playbooks.
[0]: https://pypi.org/project/ansible-cached-lookup
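The intent is that a wrapped lookup is evaluated once, with the result
stored via whatever cache plugin is configured. A purely hypothetical
invocation (the actual plugin name and argument convention may differ):

```yaml
# illustrative only; not the plugin's real interface
wildcard_cert: "{{ lookup('cached', 'pipe', 'fetch-wildcard-cert') }}"
```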
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system. As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-cert.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
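The redirect itself is just a drop-in for _registries.conf_; a sketch,
with a hypothetical hostname for the caching proxy:

```toml
# /etc/containers/registries.conf.d/docker-io-mirror.conf
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
# hypothetical proxy hostname
location = "registry-cache.pyrocufflink.blue/docker.io"
```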
Docker Hub's rate limits are so low now that they've started to affect
my home lab. Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit. It will also allow
containers to start even when access to the Internet is down, as long
as their images have been cached recently.
The _nginx_ access log files are absolutely spammed with requests from
Restic and WAL-G, to the point where they fill the log volume on
_chromie_ every day. They're not particularly useful anyway; I've never
looked at them, and any information they contain can be obtained in
another way, if necessary, for troubleshooting.
I've become rather frustrated with Grafana Loki lately. It has several
bugs that affect my usage, including issues with counting and
aggregation, completely broken retention and cleanup, spamming itself
with bogus error log messages, and more. Now that VictoriaLogs has
first-class support in Grafana and support for alerts, it seems like a
good time to try it out. It's under very active development, with bugs
getting fixed extremely quickly, and new features added constantly.
Indeed, as I was experimenting with it, I thought, "it would be nice if
the web UI could decode ANSI escapes for terminal colors," and just a
few days later, that feature was added! Native support for syslog is
also a huge benefit, as it will allow me to collect logs directly from
network devices, without first collecting them into a file on the Unifi
controller.
This new role deploys VictoriaLogs in a manner very similar to how I
have Loki set up, as a systemd-managed Podman container. As it has no
built-in authentication or authorization, we rely on Caddy to handle
that. As with Loki, mTLS is used to prevent anonymous access to
querying the logs; however, authentication via Authelia is also an
option for human+browser usage. I'm re-using the same certificate
authority as with Loki to simplify Grafana configuration. Eventually, I
would like to have a more robust PKI, probably using OpenBao, at which
point I will (hopefully) have decided which log database I will be
using, and can use a proper CA for it.
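One way to express a systemd-managed Podman container (not necessarily
exactly what the role does) is a Quadlet unit; the image tag, flags,
and paths below are assumptions:

```ini
# /etc/containers/systemd/victoria-logs.container (sketch)
[Unit]
Description=VictoriaLogs log database

[Container]
Image=docker.io/victoriametrics/victoria-logs:latest
Exec=-storageDataPath=/victoria-logs-data
Volume=/var/lib/victoria-logs:/victoria-logs-data:Z
# bound to loopback only; Caddy fronts all external access
PublishPort=127.0.0.1:9428:9428

[Install]
WantedBy=multi-user.target
```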
Although I'm sure it will never be used, we might as well set the logout
URL to the correct value. When the link is clicked, the browser will
navigate to the Authelia logout page, which will invalidate all SSO
sessions.
Frigate has evolved a lot over the past year or so since v0.13.
Notably, some of the configuration options have been renamed, and
_events_ have become _alerts_ and _detections_. There's also now
support for authentication, though we don't need it because we're using
Authelia.
We're trying to sell the Hustler lawn mower, so we plan to set it out
at the end of the driveway for passers-by to see. I've temporarily
installed one of the Annke cameras in the kitchen, pointed out the
front window, to monitor it.
The linuxserver.io Unifi container stored Unifi server and device logs
under `/var/lib/unifi/logs`, while the new container stores them under
`/var/log/unifi`.
I continually struggle with the clocks on machines (physical and
virtual, even the Roku devices!) getting out of sync. I have been
putting off
fixing this because I wanted to set up a Windows-compatible NTP server
(i.e. on the domain controllers, with Kerberos signing), but there's
really no reason to wait for that to fix the clocks on all the
non-Windows machines, especially since there are exactly 0 Windows
machines on the network right now.
The *chrony* role and corresponding `chrony.yml` playbook are generic,
configured via the `chrony_pools`, `chrony_servers`, and `chrony_allow`
variables. The values for these variables will configure the firewall
to act as an NTP server, synchronizing with the NTP pool on the
Internet, while all other machines will synchronize with it. This
allows machines on networks without Internet access to keep their clocks
in sync.
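As a sketch, the split might look like this in group variables (the
firewall's hostname and the network range are hypothetical):

```yaml
# firewall: sync with the public pool, serve the LAN
chrony_pools:
  - pool.ntp.org
chrony_allow:
  - 172.30.0.0/16

# everyone else: sync with the firewall
chrony_servers:
  - firewall.pyrocufflink.blue
```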
The `root_authorized_keys` variable was originally defined only for the
*pyrocufflink* group. This used to effectively be "all" machines, since
everything was a member of the AD domain. Now that we're moving away
from that deployment model, we still want to have the break-glass
option, so we need to define the authorized keys for the _all_ group.
This was the last group that had an entire file encrypted with Ansible
Vault. Now that the Synapse server is long gone, rather than convert it
to having individually-encrypted values, we can get rid of it entirely.
While having a password set for _root_ provides a convenient way of
accessing a machine even if it is not available via SSH, using a static
password in this way is quite insecure and not worth the risk. I may
try to come up with a better way to set a unique password for each
machine eventually, but for now, having this password here is too
dangerous to keep.
This role can ensure PostgreSQL users and databases are created for
applications that are not themselves managed by Ansible. Notably, we
need to do this for anything deployed in Kubernetes that uses the
central database server.
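The role's input might take a shape like this (the variable names here
are illustrative, not necessarily the role's actual interface):

```yaml
# illustrative variable names
managed_databases:
  - name: myapp
    owner: myapp
managed_db_users:
  - name: myapp
    password: "{{ myapp_db_password }}"
```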
The *k8s-iot-net-ctrl* group is for the Raspberry Pi that has the Zigbee
and Z-Wave controllers connected to it. This node runs the Zigbee2MQTT
and ZWaveJS2MQTT servers as Kubernetes pods.
Need to expose Victoria Metrics to the Internet so the `vmagent` process
on the VPS can push the metrics it has scraped from its Blackbox
exporter. Authelia needs to allow access to the `/insert/` paths, of
course.
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location. This
system will monitor our public web sites. This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not necessarily visible from
within the network.
In the spirit of replacing bloated tools full of unnecessary
functionality with smaller, more focused alternatives, we can use `doas`
instead of
`sudo`. Originally, it was a BSD tool, but the Linux port supports PAM,
so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
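The `doas` configuration ends up far shorter than a typical _sudoers_
file; a minimal sketch:

```
# /etc/doas.conf (hypothetical rules)
permit dustin as root
permit :wheel as root
```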
The Samba AD domain performs two important functions: centralized user
identity mapping via LDAP, and centralized authentication via
Kerberos/GSSAPI. Unfortunately, Samba, on both domain controllers and
members, is quite frustrating. The client, _winbind_, frequently just
stops working and needs to have its cache flushed in order to resolve
user IDs again. It also takes quite a lot of memory, something rather
precious on Raspberry Pis. The DC is also somewhat flaky at times, and
cumbersome to upgrade. In short, I really would like to get rid of as
much of it as possible.
For most use cases, OIDC can replace Kerberos. For SSH specifically, we
can use SSH certificates (which are issued in exchange for OIDC tokens).
Unfortunately, user and group accounts still need ID numbers assigned,
which is what _winbind_ does. In reality, there's only one user that's
necessary: _dustin_. It doesn't make sense to bring along all the
baggage of Samba just to map that one account. Instead, it's a lot
simpler and more robust to create it statically.
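Creating the account statically is a couple of Ansible tasks; the
UID/GID value below is hypothetical, but it should be fixed so it is
identical on every host:

```yaml
- name: Create the dustin group with a stable GID
  ansible.builtin.group:
    name: dustin
    gid: 10000  # hypothetical; must match winbind's old mapping

- name: Create the dustin user with a stable UID
  ansible.builtin.user:
    name: dustin
    uid: 10000
    group: dustin
```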
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible. I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want. I like the automated updates,
but that can be accomplished with _dnf-automatic_. I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines. None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, which mostly resulted in my manually
managing configuration on nodes individually. Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely. Altogether, I just don't think FCOS fits with my
model of managing systems.
This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux. It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible. This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.
Unfortunately, WAL-G does not natively handle this. In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives. Thus, we
have to include the version number in the target path (S3 prefix)
manually. We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process. As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.
To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G. Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`. This template
can include a `@PGVERSION@` specifier. The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`. This script
is called both by Ansible in a handler after generating the template
configuration, and also as a post-upgrade action by the
`postgresql-upgrade` script.
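The substitution itself is trivial; a sketch of the idea (the paths and
the exact version-detection command are assumptions):

```sh
#!/bin/sh
# determine the major version of the *installed* PostgreSQL server
pgver=$(rpm -q --qf '%{version}' postgresql-server | cut -d. -f1)
# render the final configuration from the template
sed "s/@PGVERSION@/${pgver}/g" /etc/wal-g.yml.in > /etc/wal-g.yml
```

In the template, the specifier appears in the target path, e.g.
`WALG_S3_PREFIX: s3://backups/postgresql/@PGVERSION@` (bucket name
hypothetical).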
I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version. Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored. Thus, we have to use the installed server version,
rather than the data directory version. This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file. This could result in new WAL archives/backups
being uploaded to the old target location. These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files. This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed. The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
_loki1.pyrocufflink.blue_ replaces _loki0.pyrocufflink.blue_. The
former runs Fedora Linux and is managed by Ansible, while the latter ran
Fedora CoreOS and was managed by Ignition and _cfg_.
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help. Eventually, I'll probably remove the
non-canonical name entirely.
The current Grafana Loki server, *loki0.pyrocufflink.blue*, runs Fedora
CoreOS and is managed by Ignition and *cfg*. Since I have declared
*cfg* a failed experiment, I'm going to re-deploy Loki on a new VM
running Fedora Linux and managed by Ansible.
The *loki* role installs Podman and defines a systemd-managed container
to run Grafana Loki.
MinIO/S3 clients generate a _lot_ of requests. It's also not
particularly useful to have these logged in Loki anyway. As such, we'll
stop routing them to syslog/journal.
Having access logs is somewhat useful for troubleshooting, but really
only for live requests (i.e. what's happening right now). We therefore
keep the access logs around in a file, but only for one day, so as not
to fill up the filesystem with logs we'll never see.
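A one-day retention policy is easy to express with _logrotate_; a
sketch, with a hypothetical log path:

```
# hypothetical path and reload command
/var/log/nginx/*.log {
    daily
    rotate 1
    missingok
    notifempty
    postrotate
        systemctl reload nginx
    endscript
}
```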
_wal-g_ can send StatsD metrics when it completes an upload/backup/etc.
task. Using the `statsd_exporter`, we can capture these metrics and
make them available to Victoria Metrics.
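Assuming WAL-G's StatsD support is enabled via its
`WALG_STATSD_ADDRESS` setting, pointing it at a local `statsd_exporter`
(which listens on port 9125 by default) is a one-liner in `wal-g.yml`:

```yaml
# assumes WAL-G's StatsD address setting; check the WAL-G docs
WALG_STATSD_ADDRESS: 127.0.0.1:9125
```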
Nextcloud writes JSON-structured logs to
`/var/lib/nextcloud/data/nextcloud.log`. These logs contain errors,
etc. from the Nextcloud server, which are useful for troubleshooting.
Having them in Loki will allow us to view them in Grafana as well as
generate alerts for certain events.