The `postgresql-upgrade.sh` script arranges to run `pg_upgrade` after a
major PostgreSQL version update. It's scheduled by a systemd unit,
_postgresql-upgrade.service_, which runs only after an OS update.
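A minimal sketch of what that unit might look like, assuming
`ConditionNeedsUpdate=` is the mechanism used to detect an OS update
(the script path is illustrative):

```ini
# postgresql-upgrade.service (sketch)
[Unit]
Description=Run pg_upgrade after a major PostgreSQL version update
# Skip unless /var is out of date relative to /usr, i.e. after an OS update
ConditionNeedsUpdate=/var
Before=postgresql.service

[Service]
Type=oneshot
ExecStart=/usr/local/libexec/postgresql-upgrade.sh

[Install]
WantedBy=multi-user.target
```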
Tasks that must run as the _postgres_ user need to explicitly enable
`become`, in case it is not already enabled at the playbook level. This
can happen, for example, when the playbook is running directly as root.
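For example, a hypothetical task:

```yaml
- name: create the application database
  community.postgresql.postgresql_db:
    name: myapp
  become: true          # explicit, in case the play runs directly as root
  become_user: postgres
```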
Now that we have the serial terminal server managing `picocom` processes
for each serial port, and those `picocom` processes are configured to
log console output to files, we can configure Promtail to scrape these
log files and send them to Loki.
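A sketch of the Promtail scrape job, assuming the `picocom` logs land
under `/var/log/consoles`:

```yaml
scrape_configs:
  - job_name: serial-consoles
    static_configs:
      - targets:
          - localhost
        labels:
          job: console
          __path__: /var/log/consoles/*.log
```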
Using `tmux`, we can spawn a bunch of `picocom` processes for the serial
ports connected to other servers' console ports. The
_serial-terminal-server_ service manages the `tmux` server process,
while the individual _serial-terminal-server-window@.service_ units
create a window in the `tmux` session.
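Roughly, each template instance adds a window to the shared session,
something like this sketch (device naming and log paths are
illustrative):

```ini
# serial-terminal-server-window@.service (sketch)
[Unit]
Description=Serial console window for %I
Requires=serial-terminal-server.service
After=serial-terminal-server.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Open a tmux window attached to the port, logging all output to a file
ExecStart=/usr/bin/tmux new-window -t console -n %I \
    'picocom --logfile /var/log/consoles/%I.log /dev/%I'
```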
The serial terminal server runs as a dedicated user. The SSH server is
configured to force this user to connect to the `tmux` session. This
should help ensure the serial consoles are accessible, even if the
Active Directory server is unavailable.
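The forced command amounts to a couple of lines in `sshd_config`
(assuming the dedicated user is named `console`):

```
Match User console
    ForceCommand /usr/bin/tmux attach-session -t console
```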
WAL-G slows down significantly when too many backups are kept. We need
to periodically clean up old backups to maintain a reasonable level of
performance, and also keep from wasting space with useless old backups.
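WAL-G has a `delete retain` subcommand for exactly this; for example,
keeping only the seven most recent full backups (the count is
illustrative):

```sh
wal-g delete retain FULL 7 --confirm
```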
_loki1.pyrocufflink.blue_ replaces _loki0.pyrocufflink.blue_. The
former runs Fedora Linux and is managed by Ansible, while the latter ran
Fedora CoreOS and was managed by Ignition and _cfg_.
This udev rule will automatically re-add disks to the RAID array when
they are connected. `mdadm --udev-rules` is supposed to be able to
generate such a rule based on the `POLICY` definitions in
`/etc/mdadm.conf`, but I was not able to get that to work; it always
printed an empty rule file, no matter what I put in `mdadm.conf`.
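The hand-written rule boils down to something like this sketch (the
match keys are illustrative):

```
# Incrementally re-add RAID member disks as they appear
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", \
    RUN+="/usr/sbin/mdadm --incremental $env{DEVNAME}"
```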
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
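Generating those metadata files is just a matter of running the
`fdroid` tool over a directory of APKs, roughly (the APK name is
hypothetical):

```sh
fdroid init                      # one-time: creates config and signing key
cp status.apk repo/
fdroid update --create-metadata  # regenerate the repository index
```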
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
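Serving the repository is then just static file hosting; a sketch,
assuming TLS is terminated at the reverse proxy and the document root
is `/srv/fdroid`:

```nginx
server {
    listen 80;
    server_name apps.du5t1n.xyz;
    # The pre-generated index and APKs are plain static files
    root /srv/fdroid;
}
```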
Bitwarden has not worked correctly for clients using the non-canonical
domain name (i.e. _bitwarden.pyrocufflink.blue_) for quite some time.
This still trips me up occasionally, though, so hopefully adding a
server-side redirect will help. Eventually, I'll probably remove the
non-canonical name entirely.
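If the redirect lives in HAProxy, it could look roughly like this (the
canonical name below is a placeholder):

```
http-request redirect prefix https://bitwarden.example.net code 301 \
    if { hdr(host) -i bitwarden.pyrocufflink.blue }
```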
Although listening on only an IPv6 socket works fine for the HTTP
front-end, it results in HAProxy logging client requests as IPv4-mapped
IPv6 addresses. For a human reading the logs, this is fine, but it
breaks Loki's `ip` filter.
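Binding separate IPv4 and IPv6 sockets avoids the mapped addresses; a
sketch:

```
frontend http
    # Two sockets instead of one v4v6 socket, so IPv4 clients are
    # logged as plain IPv4 addresses rather than ::ffff:x.x.x.x
    bind 0.0.0.0:80
    bind :::80 v6only
```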
When troubleshooting configuration or connection issues, it will be
helpful to have the value of the HTTP Host header present in log
messages emitted by HAProxy. This will make it easier to reason about
HAProxy's routing decisions.
For TLS connections, of course, we don't have access to the Host header,
but we can use the value of the TLS SNI field. Note that the requisite
`content set-var` directive MUST come before the `content accept`;
HAProxy stops processing all `tcp-request content ...` directives once
it has encountered a decision.
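A sketch of the ordering in a TLS passthrough front-end:

```
frontend tls
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    # set-var MUST precede accept: once a decision is reached,
    # no further tcp-request content rules are evaluated
    tcp-request content set-var(sess.sni) req.ssl_sni
    tcp-request content accept if { req.ssl_hello_type 1 }
    log-format "%ci:%cp [%t] %ft %b/%s sni=%[var(sess.sni)]"
```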
HAProxy can export stats in Prometheus format, but this requires
special configuration of a dedicated front-end. To support this, the
_haproxy_ Ansible role now has a pair of variables,
`haproxy_enable_stats` and `haproxy_stats_port`, which control whether
or not the stats front-end is enabled, and if so, what port it listens
on. Note that on Fedora with the default SELinux policy, the port must
be labelled either `http_port_t` or `http_cache_port_t`.
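The generated front-end is short; a sketch of the role's template:

```
{% if haproxy_enable_stats %}
frontend stats
    bind :{{ haproxy_stats_port }}
    mode http
    http-request use-service prometheus-exporter if { path /metrics }
{% endif %}
```

If the chosen port isn't already labelled, `semanage port -a -t
http_port_t -p tcp <port>` can apply the label.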
Jellyfin can expose metrics in Prometheus format, but this functionality
is disabled by default. To enable it, we must set `EnableMetrics` in
the configuration file. This commit adds a template configuration file
that uses the `jellyfin_enable_metrics` Ansible variable to control this
value.
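The relevant fragment of the template might look like this (in
Jellyfin's `system.xml`):

```xml
<EnableMetrics>{{ 'true' if jellyfin_enable_metrics else 'false' }}</EnableMetrics>
```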
Frigate exports useful statistics natively, but in a custom JSON format.
There is a [feature request][0] to add support for Prometheus format,
but it's mostly being ignored. A community member has, however, created
a standalone process that converts the JSON output into Prometheus
format, which we can use.
[0]: https://github.com/blakeblackshear/frigate/issues/2266
The current Grafana Loki server, *loki0.pyrocufflink.blue*, runs Fedora
CoreOS and is managed by Ignition and *cfg*. Since I have declared
*cfg* a failed experiment, I'm going to re-deploy Loki on a new VM
running Fedora Linux and managed by Ansible.
The *loki* role installs Podman and defines a systemd-managed container
to run Grafana Loki.
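A sketch of the container definition, assuming Quadlet (image tag,
paths, and arguments are illustrative):

```ini
# /etc/containers/systemd/loki.container
[Container]
Image=docker.io/grafana/loki:latest
Volume=/etc/loki:/etc/loki:Z
Volume=/var/lib/loki:/loki:Z
PublishPort=3100:3100
Exec=-config.file=/etc/loki/loki.yaml

[Install]
WantedBy=multi-user.target
```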
MinIO/S3 clients generate a _lot_ of requests, and it's not
particularly useful to have these stored in Loki anyway. As such, we'll
stop routing them to syslog/journal.
Having access logs is somewhat useful for troubleshooting, but really
only for live requests (i.e. what's happening right now). We therefore
keep the access logs around in a file, but only for one day, so as not
to fill up the filesystem with logs we'll never see.
There may be cases where we want either error logs or access logs to be
sent to syslog, but not both. To support these, there are now two
variables: `nginx_access_log_syslog` and `nginx_error_log_syslog`.
Both use the value of the `nginx_log_syslog` variable by default, so
existing users of the _nginx_ role will continue to work as before.
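For example, to keep error logs going to syslog while dropping the
chatty access logs:

```yaml
nginx_error_log_syslog: true
nginx_access_log_syslog: false
```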
If _nginx_ is configured to send error/access log messages to syslog, it
may not make sense to _also_ send them to log files. The
`nginx_error_log_file` and `nginx_access_log_file` variables are now
available to control whether/where to send log messages. Setting either
of these to a falsy value will disable logging to a file. A non-empty
string value is interpreted as the path to a log file. By default, the
existing behavior of logging to `/var/log/nginx/error.log` and
`/var/log/nginx/access.log` is preserved.
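For example:

```yaml
nginx_access_log_file: false                     # syslog only
nginx_error_log_file: /var/log/nginx/error.log   # keep the default file
```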
_wal-g_ can send StatsD metrics when it completes an upload/backup/etc.
task. Using the `statsd_exporter`, we can capture these metrics and
make them available to Victoria Metrics.
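WAL-G is pointed at the exporter through its StatsD address setting;
something like the following, where the port is `statsd_exporter`'s
default:

```sh
WALG_STATSD_ADDRESS=localhost:9125
```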
The *statsd exporter* is a Prometheus exporter that converts statistics
from StatsD format into Prometheus metrics. It is generally useful as a
bridge for processes that emit event-based statistics, turning those
events into Prometheus counters and gauges.
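A sketch of a mapping rule that groups WAL-G's statistics under a
common prefix (the metric names are illustrative):

```yaml
mappings:
  - match: "walg.*.*"
    name: "walg_${2}"
    labels:
      operation: "$1"
```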
The [Memories] app for Nextcloud provides a better user interface and
more features than the built-in Photos app. The latter seems to be
somewhat broken recently (timeline stops in June 2024, even though there
are more recent photos available), so we're trying out Memories (and
Recognize for facial recognition).
[Memories]: https://memories.gallery
Nextcloud 28+ uses JavaScript modules (`.mjs` files). These need to be
served from the filesystem like other static files, so the *mod_rewrite*
configuration needs to be updated as such.
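Concretely, that means adding `mjs` to the list of extensions the
rewrite rules pass through to the filesystem; an abbreviated sketch:

```apache
RewriteCond %{REQUEST_FILENAME} !\.(css|js|mjs|svg|gif|png|ico|map|woff2?)$
```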
When a VM uses a serial port for its default console, kernel messages
(e.g. panics) are lost if no console client is connected at the time.
This is a major disadvantage when compared to a graphical console, which
usually at least keeps a "screenshot" of the console when the kernel
crashes.
While researching the available console device types to determine how
best to implement a tool that would log the output from the serial
console at all times while still allowing interactive connections to
it, I discovered that _libvirt_ actually already has this exact
functionality built-in:
https://libvirt.org/formatdomain.html#consoles-serial-parallel-channel-devices
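The chardev `<log>` element attaches a log file to the device,
capturing output even when no client is connected (the path is
illustrative):

```xml
<serial type='pty'>
  <target port='0'/>
  <log file='/var/log/libvirt/consoles/guest0.log' append='on'/>
</serial>
```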
Nextcloud writes JSON-structured logs to
`/var/lib/nextcloud/data/nextcloud.log`. These logs contain errors,
etc. from the Nextcloud server, which are useful for troubleshooting.
Having them in Loki will allow us to view them in Grafana as well as
generate alerts for certain events.
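A sketch of the Promtail scrape job, parsing the JSON so the log level
becomes a label:

```yaml
- job_name: nextcloud
  static_configs:
    - targets:
        - localhost
      labels:
        job: nextcloud
        __path__: /var/lib/nextcloud/data/nextcloud.log
  pipeline_stages:
    - json:
        expressions:
          level: level
    - labels:
        level:
```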
_nginx_ access logs are typically either very small or very large. For
small log files, it's fast enough to decompress them on the fly if
necessary. For large files, they may take up so much space in
uncompressed form that the log volume fills too quickly. In either
case, compressing the files as soon as they are rotated is a good
option, especially since their contents should already be sent to Loki.
_WAL-G_ and _restic_ both generate a lot of HTTP traffic, which fills up
the log volume pretty quickly. Let's reduce the number of days logs are
kept on the filesystem. Logs are shipped to Loki anyway, so there's
not much need to keep them locally for very long.
The default `logrotate` configuration for _nginx_ may not be appropriate
for high-volume servers. The `nginx_keep_num_logs` variable is now
available to control how many days of logs are kept.
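Taken together, the role's `logrotate` configuration for _nginx_ ends
up looking roughly like this sketch:

```
/var/log/nginx/*.log {
    daily
    rotate {{ nginx_keep_num_logs }}
    compress
    # no delaycompress: rotated logs are compressed immediately
    missingok
    notifempty
    sharedscripts
    postrotate
        kill -USR1 $(cat /run/nginx.pid 2>/dev/null) 2>/dev/null || true
    endscript
}
```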
Invoice Ninja needs to be accessible from the Internet in order to
receive webhooks from Stripe. Additionally, Apple Pay requires
contacting Invoice Ninja for domain verification.
Gitea and Vaultwarden both have SQLite databases. We'll need to add
some logic to ensure these are in a consistent state before beginning
the backup. Fortunately, neither of them is a very busy database, so
the likelihood of an issue is pretty low. It's definitely more
important to get backups going again sooner, and we can deal with that
later.
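When we do get to it, the consistency step can probably use SQLite's
online backup via the CLI, e.g. (the path is illustrative):

```sh
# Snapshot the live database into a file restic can safely read
sqlite3 /var/lib/gitea/data/gitea.db ".backup /var/lib/gitea/data/gitea.db.bak"
```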
Since `restic` needs to run as root in order to back up files regardless
of their permissions, we need to restrict it to doing only that. Using
systemd sandbox features, especially the capability bounding set, we can
remove all of _root_'s powers except the ability to read all files.
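The relevant part of the service unit looks something like this sketch:

```ini
[Service]
# root retains only the ability to read any file
CapabilityBoundingSet=CAP_DAC_READ_SEARCH
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=read-only
PrivateDevices=yes
CacheDirectory=restic
```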
The `restic.yml` playbook applies the _restic_ role to hosts in the
_restic_ group. The _restic_ role installs `restic` and creates a
systemd timer and service unit to run `restic backup` every day.
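The playbook itself is presumably little more than:

```yaml
- hosts: restic
  roles:
    - restic
```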
Restic doesn't really have a configuration file; all its settings are
controlled either by environment variables or command-line options. Some
options, such as the list of files to include in or exclude from
backups, take paths to files containing the values. We can make use of
these to provide some configurability via Ansible variables. The
`restic_env` variable is a map of environment variables and values to
set for `restic`. The `restic_include` and `restic_exclude` variables
are lists of paths/patterns to include and exclude, respectively.
Finally, the `restic_password` variable contains the password to decrypt
the repository contents. The password is written to a file and exposed
to the _restic-backup.service_ unit using [systemd credentials][0].
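For example (all values illustrative):

```yaml
restic_env:
  RESTIC_REPOSITORY: s3:https://s3.example.com/backups
restic_include:
  - /etc
  - /home
  - /var/lib
restic_exclude:
  - /var/lib/mysql
restic_password: '{{ vault_restic_password }}'
```

Inside the unit, the password can then be consumed with something like
`RESTIC_PASSWORD_FILE=%d/restic-password`, where `%d` expands to the
service's credentials directory.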
When using S3 or a compatible service for repository storage, Restic of
course needs authentication credentials. These can be set using the
`restic_aws_credentials` variable. If this variable is defined, it
should be a map containing the `aws_access_key_id` and
`aws_secret_access_key` keys, which will be written to an AWS shared
credentials file. This file is then exposed to the
_restic-backup.service_ unit using [systemd credentials][0].
[0]: https://systemd.io/CREDENTIALS/
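For example (illustrative values):

```yaml
restic_aws_credentials:
  aws_access_key_id: AKIAEXAMPLE
  aws_secret_access_key: '{{ vault_restic_s3_secret }}'
```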