Need to expose Victoria Metrics to the Internet so the `vmagent` process
on the VPS can push the metrics it has scraped from its Blackbox
exporter. Authelia needs to allow access to the `/insert/` paths, of
course.
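As a sketch, the Authelia rule could look something like this (the host name shown is a placeholder, and `bypass` assumes `vmagent` authenticates to Victoria Metrics itself rather than through Authelia):

```yaml
# Authelia access_control sketch: let remote-write requests reach /insert/
access_control:
  rules:
    - domain: metrics.pyrocufflink.net   # placeholder public name
      resources:
        - '^/insert/.*$'
      policy: bypass
```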
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location. This
system will monitor our public web sites. This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not necessarily visible from
within the network.
Like the _blackbox-exporter_ role, the _vmagent_ role now deploys
`vmagent` as a container. This simplifies the process considerably,
eliminating the download/transfer step.
While refactoring this role, I also changed how the trusted CA
certificates are handled. Rather than copy files, the role now expects
a `vmagent_ca_certs` variable. This variable is a mapping of
certificate name (file name without extension) to PEM contents. This
allows certificates to be defined using normal host/group variables.
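For example (the name and contents here are purely illustrative):

```yaml
# group_vars sketch: certificate name (file name without extension) -> PEM contents
vmagent_ca_certs:
  pyrocufflink-ca: |
    -----BEGIN CERTIFICATE-----
    ...PEM data elided...
    -----END CERTIFICATE-----
```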
Instead of downloading the `blackbox_exporter` binary from GitHub and
copying it to the managed node, the _blackbox-exporter_ role now
installs _podman_ and configures a systemd container unit (Quadlet) to
run it in a container. This simplifies the deployment considerably, and
will make updating easier (just run the playbook with `-e
blackbox_exporter_pull_image=true`).
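A rough sketch of the kind of Quadlet unit the role installs (image reference, paths, and port are illustrative):

```yaml
- name: Install Quadlet unit for blackbox_exporter
  ansible.builtin.copy:
    dest: /etc/containers/systemd/blackbox-exporter.container
    content: |
      [Container]
      Image=quay.io/prometheus/blackbox-exporter:latest
      Exec=--config.file=/etc/blackbox_exporter/blackbox.yml
      Volume=/etc/blackbox_exporter:/etc/blackbox_exporter:ro,Z
      PublishPort=9115:9115

      [Install]
      WantedBy=multi-user.target
# (a systemd daemon-reload is still needed before the generated service appears)
```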
Since the canonical location for Anaconda kickstart scripts is now
Gitea, we need to allow hosts to access them from there.
Also allowing access from the _pyrocufflink.red_ network for e.g.
installation testing.
I want to use Gitea as the canonical source for Anaconda kickstart
scripts. There are certain situations, however, where they cannot be
accessed via HTTPS, such as on a Raspberry Pi without an RTC, since it
cannot validate the certificate without the correct time. Thus, the
web server must not force an HTTPS redirect for these, but serve them
directly.
Jellyfin is one of those stupid programs that thinks it needs to mutate
its own config. At startup, it apparently reads `system.xml` and then
writes it back out. When it does this, it trims the final newline from
the file. Then, the next time Ansible runs, the template rewrites the
file with the trailing newline, and thus determines that the file has
changed and restarts the service. This cycle has been going on for a
while and is rather annoying.
The systemd unit configuration installed by Fedora's _kubeadm_ package
does not pass the `--config` argument to the kubelet service. Without
this argument, the kubelet will not read the configuration file
generated by `kubeadm` from the `kubelet-config` ConfigMap. Thus,
various features will not work correctly, including server TLS
bootstrap.
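A drop-in along these lines addresses it, modeled on the one kubeadm ships upstream (the exact file name and contents used here may differ from the role's):

```yaml
- name: Make the kubelet read the kubeadm-generated configuration
  ansible.builtin.copy:
    dest: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    content: |
      [Service]
      Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
      Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
      EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
      ExecStart=
      ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS
```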
In order to manage servers that are not members of the
_pyrocufflink.blue_ AD domain, Jenkins needs a user certificate signed
by the SSH CA. Unfortunately, there is not really a good way to get a
certificate issued on demand in a non-interactive way, as SSHCA relies
on OIDC ID tokens which are issued by Authelia, and Authelia requires
browser-based interactive login and consent. Until I can come up with a
better option, I've manually signed a certificate for Jenkins to use.
The Jenkins SSH Credentials plugin does not support certificates
directly, so in order to use one, we have to explicitly configure `ssh`
to load it via the `CertificateFile` option.
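Roughly, that amounts to something like this (paths, host pattern, and key names are examples only):

```yaml
- name: Point ssh at the CA-signed certificate for the Jenkins user
  ansible.builtin.copy:
    dest: /var/lib/jenkins/.ssh/config
    owner: jenkins
    group: jenkins
    mode: "0600"
    content: |
      # The private key is still supplied by the SSH Credentials plugin;
      # ssh just needs to know where to find the matching certificate.
      Host *.pyrocufflink.black
          IdentityFile ~/.ssh/id_ed25519
          CertificateFile ~/.ssh/id_ed25519-cert.pub
```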
Now that we have multiple domains (_pyrocufflink.blue_ for AD domain
members and _pyrocufflink.black_ for the new machines), we need a way to
specify the domain for new machines when they are created. Thus, the
`newvm.sh` script accepts either an FQDN or a `--domain` argument. The
DHCP server will register the DNS name in the zone containing the
machine's domain name.
In the spirit of replacing bloated tools full of unnecessary
functionality with smaller, more focused alternatives, we can use `doas`
instead of `sudo`. `doas` was originally a BSD tool, but the Linux port
supports PAM, so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
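The moving parts look roughly like this (the group name, PAM stack, and key file location are assumptions, not necessarily what the role does):

```yaml
- name: Allow wheel members to use doas
  ansible.builtin.copy:
    dest: /etc/doas.conf
    mode: "0600"
    content: |
      permit :wheel

- name: Authenticate doas via the forwarded SSH agent
  ansible.builtin.copy:
    dest: /etc/pam.d/doas
    content: |
      # pam_ssh_agent_auth checks the agent's keys against the listed file
      auth       sufficient   pam_ssh_agent_auth.so file=/etc/security/authorized_keys
      auth       include      system-auth
      account    include      system-auth
      session    include      system-auth
```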
The Samba AD domain performs two important functions: centralized user
identity mapping via LDAP, and centralized authentication via
Kerberos/GSSAPI. Unfortunately, Samba, on both domain controllers and
members, is quite frustrating. The client, _winbind_, frequently just
stops working and needs to have its cache flushed in order to resolve
user IDs again. It also takes quite a lot of memory, something rather
precious on Raspberry Pis. The DC is also somewhat flaky at times, and
cumbersome to upgrade. In short, I really would like to get rid of as
much of it as possible.
For most use cases, OIDC can replace Kerberos. For SSH specifically, we
can use SSH certificates (which are issued in exchange for OIDC tokens).
Unfortunately, user and group accounts still need ID numbers assigned,
which is what _winbind_ does. In reality, there's only one user that's
necessary: _dustin_. It doesn't make sense to bring along all the
baggage of Samba just to map that one account. Instead, it's a lot
simpler and more robust to create it statically.
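Creating it statically is just a couple of tasks; the ID numbers below are placeholders, the important part is that they are fixed and match the ownership already on disk:

```yaml
- name: Create the dustin group with a fixed GID
  ansible.builtin.group:
    name: dustin
    gid: 10000        # placeholder; must match existing file ownership

- name: Create the dustin user with a fixed UID
  ansible.builtin.user:
    name: dustin
    uid: 10000        # placeholder; must match existing file ownership
    group: dustin
    groups: wheel
    append: true
```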
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible. I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want. I like the automated updates,
but that can be accomplished with _dnf-automatic_. I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines. None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, which mostly resulted in my manually
managing configuration on nodes individually. Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely. Altogether, I just don't think FCOS fits with my
model of managing systems.
This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux. It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible. This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.
Unfortunately, WAL-G does not natively handle this. In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives. Thus, we
have to include the version number in the target path (S3 prefix)
manually. We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process. As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.
To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G. Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`. This template
can include a `@PGVERSION@` specifier. The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`. This script
is called both by Ansible in a handler after generating the template
configuration, and also as a post-upgrade action by the
`postgresql-upgrade` script.
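For illustration, the template might render something like the following (bucket layout and other settings are examples); `wal-g-config` then produces the final `wal-g.yml` from it:

```yaml
# wal-g.yml.in (sketch): @PGVERSION@ is replaced by wal-g-config with the
# installed PostgreSQL major version, e.g. yielding .../postgresql/16
WALG_S3_PREFIX: s3://backups/postgresql/@PGVERSION@
AWS_ENDPOINT: https://minio.example.test
WALG_COMPRESSION_METHOD: brotli
```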
I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version. Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored. Thus, we have to use the installed server version,
rather than the data directory version. This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file. This could result in new WAL archives/backups
being uploaded to the old target location. These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files. This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed. The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
The `postgresql-upgrade` script will now run any executables located in
the `/etc/postgresql/post-upgrade.d` directory. This will allow making
arbitrary changes to the system after a PostgreSQL major version
upgrade. Notably, we will use this capability to change the WAL-G
configuration to upload WAL archives and backups to the correct
version-specific location.
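As an example, a hook that re-renders the WAL-G configuration could be installed like this (the file name is arbitrary, and `wal-g-config` is assumed to be on the PATH):

```yaml
- name: Install post-upgrade hook to regenerate wal-g.yml
  ansible.builtin.copy:
    dest: /etc/postgresql/post-upgrade.d/50-wal-g-config
    mode: "0755"
    content: |
      #!/bin/sh
      # Re-render wal-g.yml for the newly-installed PostgreSQL version
      exec wal-g-config
```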
There's a bit of a dependency loop between the _postgresql-server_ role
and other roles that supplement it, like _wal-g-pg_ and
_postgresql-cert_. The latter roles need PostgreSQL installed, but when
those roles are used, the server cannot be started until they have been
applied. To resolve this situation, I've broken out the initial
installation steps from the _postgresql-server_ role into
_postgresql-server-base_. Roles that need PostgreSQL installed, but
need to be applied before the server can start, can depend on this role.
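For example, a supplemental role can declare the dependency in its `meta/main.yml`:

```yaml
# roles/wal-g-pg/meta/main.yml (sketch)
dependencies:
  - role: postgresql-server-base
```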
The `postgresql-upgrade.sh` script arranges to run `pg_upgrade` after a
major PostgreSQL version update. It's scheduled by a systemd unit,
_postgresql-upgrade.service_, which runs only after an OS update.
Tasks that must run as the _postgres_ user need to explicitly enable
`become`, in case it is not already enabled at the playbook level. This
can happen, for example, when the playbook is running directly as root.
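That is, such tasks look something like this (the command itself is just an example; the point is the explicit `become`/`become_user`):

```yaml
- name: Run a query as the postgres user
  ansible.builtin.command: psql -c 'SELECT version()'
  become: true
  become_user: postgres
```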
Now that we have the serial terminal server managing `picocom` processes
for each serial port, and those `picocom` processes are configured to
log console output to files, we can configure Promtail to scrape these
log files and send them to Loki.
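A sketch of the corresponding Promtail scrape configuration (the log directory and label names are assumptions):

```yaml
scrape_configs:
  - job_name: console
    static_configs:
      - targets:
          - localhost
        labels:
          job: console
          __path__: /var/log/console/*.log   # where picocom writes its output (assumed)
```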
Using `tmux`, we can spawn a bunch of `picocom` processes for the serial
ports connected to other servers' console ports. The
_serial-terminal-server_ service manages the `tmux` server process,
while the individual _serial-terminal-server-window@.service_ units
create a window in the `tmux` session.
The serial terminal server runs as a dedicated user. The SSH server is
configured to force this user to connect to the `tmux` session. This
should help ensure the serial consoles are accessible, even if the
Active Directory server is unavailable.
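The forced command amounts to something like this (the user name and session name are assumptions; the blockinfile wrapper is only to keep the sketch self-contained):

```yaml
- name: Force the console user into the tmux session
  ansible.builtin.blockinfile:
    path: /etc/ssh/sshd_config
    block: |
      Match User console
          ForceCommand /usr/bin/tmux attach-session -t console
          PasswordAuthentication no
```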
WAL-G slows down significantly when too many backups are kept. We need
to periodically clean up old backups to maintain a reasonable level of
performance, and also keep from wasting space with useless old backups.
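A sketch of a periodic cleanup job (the retention count, schedule, and config path are arbitrary choices here):

```yaml
- name: Prune old WAL-G backups weekly
  ansible.builtin.cron:
    name: wal-g-prune
    user: postgres
    special_time: weekly
    # keep the last 7 full backups; --confirm is required, otherwise wal-g
    # only performs a dry run
    job: wal-g --config /etc/wal-g.yml delete retain FULL 7 --confirm
```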
_loki1.pyrocufflink.blue_ replaces _loki0.pyrocufflink.blue_. The
former runs Fedora Linux and is managed by Ansible, while the latter ran
Fedora CoreOS and was managed by Ignition and _cfg_.
This udev rule will automatically re-add disks to the RAID array when
they are connected. `mdadm --udev-rules` is supposed to be able to
generate such a rule based on the `POLICY` definitions in
`/etc/mdadm.conf`, but I was not able to get that to work; it always
printed an empty rule file, no matter what I put in `mdadm.conf`.
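The hand-written rule boils down to something like this (hedged; the actual match conditions may be narrower):

```yaml
- name: Re-add RAID member disks when they are attached
  ansible.builtin.copy:
    dest: /etc/udev/rules.d/65-md-re-add.rules
    content: |
      # When a block device carrying RAID metadata appears, hand it to mdadm,
      # which re-adds it to its array per the POLICY lines in /etc/mdadm.conf
      SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", RUN+="/usr/sbin/mdadm --incremental $env{DEVNAME}"
```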
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help. Eventually, I'll probably remove the
non-canonical name entirely.
Although listening on only an IPv6 socket works fine for the HTTP
front-end, it results in HAProxy logging client requests as IPv4-mapped
IPv6 addresses. When reading the logs by eye, this is fine, but it
breaks Loki's `ip` filter.
When troubleshooting configuration or connection issues, it will be
helpful to have the value of the HTTP Host header present in log
messages emitted by HAProxy. This will make it easier to reason about
HAProxy's routing decisions.
For TLS connections, of course, we don't have access to the Host header,
but we can use the value of the TLS SNI field. Note that the requisite
`content set-var` directive MUST come before the `content accept`;
HAProxy stops processing all `tcp-request content ...` directives once
it has encountered a decision.
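The relevant fragment of the TLS front-end looks roughly like this (the variable name and log format are illustrative, and the blockinfile wrapper is only to keep the sketch self-contained; in the role this belongs in the haproxy.cfg template):

```yaml
- name: Log SNI on the TLS front-end (sketch)
  ansible.builtin.blockinfile:
    path: /etc/haproxy/haproxy.cfg
    marker: "# {mark} sni logging sketch"
    block: |
      frontend https
          mode tcp
          tcp-request inspect-delay 5s
          # set-var MUST come before accept: content processing stops at the decision
          tcp-request content set-var(sess.sni) req.ssl_sni
          tcp-request content accept if { req.ssl_hello_type 1 }
          log-format "%ci:%cp [%t] %ft %b/%s sni=%[var(sess.sni)]"
```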
HAProxy can export stats in Prometheus format, but this requires
special configuration of a dedicated front-end. To support this, the
_haproxy_ Ansible role now has a pair of variables,
`haproxy_enable_stats` and `haproxy_stats_port`, which control whether
or not the stats front-end is enabled, and if so, what port it listens
on. Note that on Fedora with the default SELinux policy, the port must
be labelled either `http_port_t` or `http_cache_port_t`.
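Enabling it for a host might look like this (the port number is just an example):

```yaml
# group_vars (sketch):
#   haproxy_enable_stats: true
#   haproxy_stats_port: 8404
#
# and, with SELinux enforcing, label the port so HAProxy may bind it:
- name: Label the HAProxy stats port
  community.general.seport:
    ports: "8404"
    proto: tcp
    setype: http_port_t
```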
Jellyfin can expose metrics in Prometheus format, but this functionality
is disabled by default. To enable it, we must set `EnableMetrics` in
the configuration file. This commit adds a template configuration file
that uses the `jellyfin_enable_metrics` Ansible variable to control this
value.
Frigate exports useful statistics natively, but in a custom JSON format.
There is a [feature request][0] to add support for Prometheus format,
but it's mostly being ignored. A community member has created a
standalone process that converts the JSON format into Prometheus format,
though, which we can use.
[0]: https://github.com/blakeblackshear/frigate/issues/2266
The current Grafana Loki server, *loki0.pyrocufflink.blue*, runs Fedora
CoreOS and is managed by Ignition and *cfg*. Since I have declared
*cfg* a failed experiment, I'm going to re-deploy Loki on a new VM
running Fedora Linux and managed by Ansible.
The *loki* role installs Podman and defines a systemd-managed container
to run Grafana Loki.
MinIO/S3 clients generate a _lot_ of requests. It's also not
particularly useful to have these stored in Loki anyway. As such, we'll
stop routing them to syslog/journal.
Having access logs is somewhat useful for troubleshooting, but really
only for live requests (i.e. what's happening right now). We therefore
keep the access logs around in a file, but only for one day, so as not
to fill up the filesystem with logs we'll never see.
There may be cases where we want either error logs or access logs to be
sent to syslog, but not both. To support these, there are now two
variables: `nginx_access_log_syslog` and `nginx_error_log_syslog`.
Both use the value of the `nginx_log_syslog` variable by default, so
existing users of the _nginx_ role will continue to work as before.
If _nginx_ is configured to send error/access log messages to syslog, it
may not make sense to _also_ send them to log files. The
`nginx_error_log_file` and `nginx_access_log_file` variables are now
available to control whether/where to send log messages. Setting either
of these to a falsy value will disable logging to a file. A non-empty
string value is interpreted as the path to a log file. By default, the
existing behavior of logging to `/var/log/nginx/error.log` and
`/var/log/nginx/access.log` is preserved.
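For example, a host that should log only to syslog might set (values are illustrative):

```yaml
# host_vars sketch: send both logs to syslog and skip the files entirely
nginx_log_syslog: true
nginx_error_log_file: false
nginx_access_log_file: false
# or keep a short-lived local access log at a non-default path:
# nginx_access_log_file: /var/log/nginx/recent-access.log
```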
_wal-g_ can send StatsD metrics when it completes an upload/backup/etc.
task. Using the `statsd_exporter`, we can capture these metrics and
make them available to Victoria Metrics.
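In `wal-g.yml` terms this is just one more setting; the address below assumes a `statsd_exporter` listening locally on its default StatsD port:

```yaml
# addition to wal-g.yml (sketch)
WALG_STATSD_ADDRESS: 127.0.0.1:9125
```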
The *statsd exporter* is a Prometheus exporter that converts statistics
from StatsD format into Prometheus metrics. It is generally useful as a
bridge between processes that emit event-based statistics and
Prometheus, turning those events into counters and gauges.