*logs0.pyrocufflink.blue* was replaced by *loki0.pyrocufflink.blue* ages
ago, so I'm not sure how I hadn't updated the autostart list yet.
*unifi3.pyrocufflink.blue* replaced *unifi2.pyrocufflink.blue* recently,
when I was testing *Luci*/etcd.
Frigate uses the GitHub API to check for new releases. It then
populates the `update.frigate_server` entity in Home Assistant via MQTT
with the information it retrieved. If it is unable to access the GitHub
API, the Home Assistant entity will be marked as "unavailable," which
triggers an alert notification from Home Assistant. Thus, we need to
allow Frigate to access GitHub if we want to use that entity as an
indicator of whether or not Frigate is connected to the MQTT broker.
I don't want to allow access to the GitHub API for everything on the
Frigate server, just Frigate itself. To do that, I've assigned a unique
username and password for Frigate. Only requests with the proper
`Proxy-Authorization` header will be allowed access. By providing the
credentials only to the Frigate container, we can ensure no other
process has access.
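
As a rough sketch, the credentials can be supplied through the standard
proxy environment variables in the Frigate container's Quadlet unit;
most HTTP clients turn the userinfo part of the proxy URL into a
`Proxy-Authorization` header. The proxy hostname, port, and credentials
here are placeholders:

```ini
[Container]
# Hypothetical values; only this container ever sees the credentials
Environment=HTTPS_PROXY=http://frigate:CHANGEME@proxy.pyrocufflink.blue:3128
Environment=NO_PROXY=.pyrocufflink.blue
```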
I think I did this mostly as an exercise; there's no particular reason
to disallow access to the GitHub API, since it's mostly read-only and
can't really be used to exfiltrate any data (probably?).
Restoring the SELinux label of a mount point is really only necessary
for a brand-new filesystem, which will have no label at all. In other
cases, changing the context is probably neither necessary nor desirable,
as the existing data is potentially labelled correctly already.
Changing the label on only the root directory should be sufficient to
ensure applications run correctly with newly-provisioned filesystems,
which contain only that one directory anyway, without impacting too
much for existing filesystems.
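
In Ansible terms, relabeling just the mount point root might look
something like this (task shape is illustrative; `data_volumes` is the
variable described later for `datavol.yml`):

```yaml
# Restore the context of the mount point itself, but not its
# contents (note: no -R flag)
- name: Restore SELinux context on the mount point
  ansible.builtin.command: restorecon -v {{ item.mountpoint }}
  loop: '{{ data_volumes }}'
  register: restorecon_out
  changed_when: restorecon_out.stdout != ''
```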
*nvr2.pyrocufflink.blue* originally ran Fedora CoreOS. Since I'm tired
of the tedium and difficulty involved in making configuration changes to
FCOS machines, I am migrating it to Fedora Linux, managed by Ansible.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
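
A minimal sketch of the kind of Caddyfile this produces; the hostnames,
CA URL, and Frigate port are assumptions, not the actual values:

```
nvr2.pyrocufflink.blue {
	tls {
		ca https://ca.pyrocufflink.blue/acme/acme/directory
	}
	forward_auth authelia.pyrocufflink.blue {
		uri /api/verify?rd=https://authelia.pyrocufflink.blue
	}
	reverse_proxy localhost:5000
}
```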
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR, and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
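
Installation is then presumably just a matter of enabling the COPR and
installing the package (the COPR owner/project name here is a
placeholder):

```yaml
- name: Enable the COPR providing the Coral driver modules
  community.general.copr:
    name: someone/gasket-dkms   # hypothetical owner/project
    state: enabled

- name: Install the gasket/apex modules via DKMS
  ansible.builtin.dnf:
    name: gasket-dkms
    state: present
```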
For systems using UEFI with SecureBoot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
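
The pause step might look something like this (prompt wording assumed;
DKMS 3.x keeps its generated MOK at `/var/lib/dkms/mok.pub`):

```yaml
- name: Wait for interactive MOK enrollment
  ansible.builtin.pause:
    prompt: >-
      Run `mokutil --import /var/lib/dkms/mok.pub`, reboot, and complete
      the enrollment in the MOK manager, then press Enter to continue
```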
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.
The Frigate NVR servers, prod & test, need to be able to access Fedora
COPR (for the *gasket-dkms* package) and GitHub Container Registry (for
Frigate itself).
Although the `newvm.sh` script had a variable to configure the value
specified for the `--network` argument to `virt-install`, it didn't
expose a way to set it. We need this ability so we can e.g. create VMs
on non-default networks like `camera` or `mgmt`.
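
The change presumably boils down to a flag that feeds the existing
variable, along these lines (option letter and default are
assumptions):

```sh
#!/bin/sh
# -n selects the libvirt network (e.g. camera, mgmt); default unchanged
network=default
while getopts 'n:' opt; do
	case "$opt" in
	n) network=$OPTARG ;;
	*) exit 2 ;;
	esac
done
shift $((OPTIND - 1))
exec virt-install --name "$1" --network "network=$network" # ...other options as before
```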
* Switch to Quadlet-style `.container` for systemd unit
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
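
Roughly what the Quadlet unit might look like after these changes
(image tag, paths, and the environment file are illustrative):

```ini
# frigate.container -- a sketch, not the actual unit
[Unit]
Description=Frigate NVR

[Container]
Image=ghcr.io/blakeblackshear/frigate:0.13.2
# secrets (e.g. the MQTT password) come in via the environment
EnvironmentFile=/etc/frigate/frigate.env
Volume=/etc/frigate/config.yml:/config/config.yml:Z
AddDevice=/dev/apex_0

[Install]
WantedBy=multi-user.target
```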
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells. Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, since a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use the same
mirror all the time.
The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
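
The repository rewrite can be done with something like the following
(file and section names vary per repository; these are illustrative):

```yaml
- name: Use a single mirror for the fedora repo
  community.general.ini_file:
    path: /etc/yum.repos.d/fedora.repo
    section: fedora
    option: baseurl
    value: https://dl.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/

- name: Drop the metalink setting
  community.general.ini_file:
    path: /etc/yum.repos.d/fedora.repo
    section: fedora
    option: metalink
    state: absent
```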
The UniFi Network server needs to be able to access the
_linuxserver.io_/GitHub and Docker Hub OCI image registries for the
UniFi Network and Caddy container images, respectively.
Although it's rare, sometimes Samba crashes or fails to start. When
this happens, restarting it is almost always enough to get it working
again. Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
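
A systemd drop-in is enough for this; e.g. deployed by Ansible
(drop-in name and restart delay assumed):

```yaml
- name: Always restart samba.service if it stops
  ansible.builtin.copy:
    dest: /etc/systemd/system/samba.service.d/restart.conf
    content: |
      [Service]
      Restart=always
      RestartSec=5
  notify: reload systemd   # handler name assumed
```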
*k8s-amd64-n0*, *k8s-amd64-n1*, and *k8s-amd64-n2* have been replaced by
*k8s-amd64-n4*, *k8s-amd64-n5*, *k8s-amd64-n6*, respectively. *db0* is
the new database server, which needs to be up before anything in
Kubernetes starts, since a lot of applications running there depend on
it.
This script captures the steps taken to migrate from the PostgreSQL
server in the Kubernetes cluster, managed by _postgres operator_, to the
dedicated server on _db0.pyrocufflink.blue_. The data were restored
from the backups created by _wal-e_, and then the new server was
promoted to primary. Finally, I cleaned up the roles and databases that
are no longer needed.
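
Once the restored server had caught up on WAL, the promotion itself is
a one-liner; something like:

```sh
# Promote the caught-up standby on db0 to primary
sudo -u postgres psql -c 'SELECT pg_promote();'
```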
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus. It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from the `pg_stat_activity`, et al. views. It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.
Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.
[0]: https://github.com/prometheus-community/postgres_exporter/
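
The role's moving parts are presumably along these lines (task shapes
assumed):

```yaml
- name: Create a matching OS user for peer authentication
  ansible.builtin.user:
    name: postgres_exporter
    system: true

- name: Create the PostgreSQL role
  community.postgresql.postgresql_user:
    name: postgres_exporter
  become: true
  become_user: postgres

- name: Grant the pg_monitor role
  community.postgresql.postgresql_membership:
    groups: pg_monitor
    target_roles: postgres_exporter
  become: true
  become_user: postgres
```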
By default, WAL-G tries to connect to the PostgreSQL server via TCP
socket on the loopback interface. Our HBA configuration requires
certificate authentication for TCP sockets, so we need to configure
WAL-G to use the UNIX socket.
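
Since WAL-G uses libpq, pointing it at the UNIX socket is just a matter
of setting `PGHOST` to the socket directory in its environment (file
name assumed):

```sh
# /etc/postgresql/walg.env (name assumed)
PGHOST=/var/run/postgresql
```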
All data have been migrated from the PostgreSQL server in Kubernetes and
the three applications that used it (Firefly-III, Authelia, and Home
Assistant) have been updated to point to the new server.
To avoid commingling the backups from the old server with those from the
new server, we're reconfiguring WAL-G to push and pull from a new S3
prefix.
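
i.e. something like (bucket and prefix are placeholders):

```sh
# Fresh prefix so db0's backups don't mix with the old cluster's
WALG_S3_PREFIX=s3://example-backups/db0
```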
WAL archives are not much good without a base backup onto which they
can be applied. Thus, we need to schedule WAL-G to create and upload a
backup periodically.
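
For example, a daily (cadence assumed) cron job for the *postgres*
user:

```yaml
- name: Schedule periodic WAL-G base backups
  ansible.builtin.cron:
    name: wal-g base backup
    user: postgres
    special_time: daily
    # assumes WAL-G finds its config/environment as set up by the
    # wal-g role
    job: wal-g backup-push /var/lib/pgsql/data
```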
The `deploy.sh` script ensures the execution environment is correct by
configuring the Ansible Vault secret, unlocking the `rbw` vault, and
requesting an SSH client certificate. It then runs the specified
end-to-end deployment script from the `deploy` directory.
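
In outline, it is something like this; paths are assumptions and the
certificate step depends on the local SSH CA tooling:

```sh
#!/bin/sh
set -e
# Sketch of deploy.sh's flow, not the actual script
export ANSIBLE_VAULT_PASSWORD_FILE=$HOME/.config/ansible/vault-password
rbw unlock                # make Bitwarden-managed secrets available
# ...request an SSH client certificate here (site-specific tooling)...
exec "$(dirname "$0")/deploy/$1"
```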
*db0.pyrocufflink.blue* will be the primary server in the new PostgreSQL
database cluster. We're starting with Fedora 39 so we can have
PostgreSQL 15, to match the version managed by the Postgres Operator in
the Kubernetes cluster right now.
I've actually had this playbook for a _long_ time, just never bothered
to commit it. It's useful for the very first time Ansible is run for a
managed node to configure all the basic stuff.
For the longest time, whenever I needed to create a new virtual machine,
I just used `Ctrl+R` to find the last `virt-install` command I had run
and tweaked it for the new machine. At some point, my `~/.zsh_history`
overflowed, though, so the command I had been running got lost. To
avoid this silliness in the future, I've created a script that runs
`virt-install` for me. As a bonus, it has some configuration flags for
the parameters that often vary between machines. For most machines,
though, the script can be run as simply as `newvm.sh name`.
I am going to use the *postgresql* group for the dedicated database
servers. The configuration for those machines will be quite a bit
different than for the one existing machine that is a member of that
group already: the Nextcloud server. Rather than undefine/override all
the group-level settings at the host level, I have removed the Nextcloud
server from the *postgresql* group, and updated the `nextcloud.yml`
playbook to apply the *postgresql-server* role itself.
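
The relevant part of `nextcloud.yml` presumably now reads something
like (group name assumed):

```yaml
- hosts: nextcloud
  roles:
    - postgresql-server   # applied here directly, instead of via the group
    - nextcloud
```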
Eventually, I want to move the Nextcloud database to the central
database servers. At that point, I will remove the *postgresql-server*
role from the `nextcloud.yml` playbook.
The `datavol.yml` playbook can provision one or more data volumes on
a managed node, using the definitions in the `data_volumes` variable.
This variable must contain a list of dictionaries with the following
keys (an example follows the list):
* `dev`: The block device where the data volume is stored (e.g.
`/dev/vdb`)
* `fstype`: The type of filesystem to create on the block device
* `mountpoint`: The location in the filesystem hierarchy where the
volume is mounted
* `opts`: (Optional) options to pass to the `mkfs` program when
formatting the device
* `mountopts`: (Optional) additional options to pass to the `mount`
program when mounting the filesystem
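
For example (values illustrative):

```yaml
data_volumes:
  - dev: /dev/vdb
    fstype: xfs
    mountpoint: /var/lib/pgsql
    mountopts: noatime
```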
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`. Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
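
Loading the policy module is presumably just (module file name/path
assumed):

```yaml
- name: Install the SELinux policy module for wal-g
  ansible.builtin.command: semodule -i /usr/share/selinux/packages/wal-g.pp
  # not idempotent as written; sketch only
```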
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`. It fetches the initial certificate and copies
it to the PostgreSQL configuration directory. It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
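
The hook might look roughly like this; `$RENEWED_LINEAGE` is set by
certbot, while the destination paths are assumptions:

```sh
#!/bin/sh
set -e
# Copy the renewed certificate and key where PostgreSQL expects them,
# then reload the server to pick them up
install -o postgres -g postgres -m 0600 \
    "$RENEWED_LINEAGE/fullchain.pem" /etc/postgresql/server.crt
install -o postgres -g postgres -m 0600 \
    "$RENEWED_LINEAGE/privkey.pem" /etc/postgresql/server.key
systemctl reload postgresql
```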
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role. The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory. By default, the
variable is set to `/etc/postgresql`. For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`. Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.
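
Disabling the split therefore looks like:

```yaml
# Point the config dir back at PGDATA for tools that require them together
postgresql_config_dir: '{{ pgdata_dir }}'
```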
Another significant improvement is to how the `postgresql.conf` file
is generated. Any setting can be set now, using the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file. Note that configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
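
For example (values illustrative; note the nested quotes on the string
value):

```yaml
postgresql_config:
  listen_addresses: "'*'"            # string values keep their single quotes
  shared_buffers: 2GB                # numbers and units pass through as-is
  log_min_duration_statement: 250ms
```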
To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
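
A standby's variables might therefore look like this; only
`postgresql_standby` is confirmed above, and the restore-command
variable name is a guess:

```yaml
postgresql_standby: true
# hypothetical variable name for the initdb replacement
postgresql_restore_command: wal-g backup-fetch {{ pgdata_dir }} LATEST
```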
Since the `samba-dc.yml` playbook executes on a single host at a time,
if the fact cache is not current, only the facts for the current host
will be available. This can cause some tasks, especially the
configuration of the trusted SSH host keys for `sysvolsync`, to have
incorrect data. To avoid this, we need to explicitly gather facts for
all of the domain controllers before starting to configure any of them.
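
A fact-gathering play at the top of the playbook takes care of this
(group and role names assumed):

```yaml
- name: Gather facts for all domain controllers
  hosts: samba-dc
  gather_facts: true

# ...then the existing configuration play, still one host at a time
- name: Configure domain controllers
  hosts: samba-dc
  serial: 1
  roles:
    - samba-dc
```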
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service. What we really want is to send the signal to the main PID
_inside_
the container. We can achieve this by using `podman kill` instead of
`kill`.
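
In the unit, that means something like (container name assumed):

```ini
[Service]
ExecReload=/usr/bin/podman kill --signal=SIGHUP frigate
```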
*k8s-amd64-n0.pyrocufflink.blue*, *k8s-amd64-n1.pyrocufflink.blue*, and
*k8s-amd64-n2.pyrocufflink.blue*, which ran Fedora Linux, have been
replaced by *k8s-amd64-n4.pyrocufflink.blue*,
*k8s-amd64-n5.pyrocufflink.blue*, and *k8s-amd64-n6.pyrocufflink.blue*,
respectively. The new machines run Fedora CoreOS, and are thus not
managed by the Ansible configuration policy.
To improve the performance of persistent volumes accessed directly from
the Synology by Kubernetes pods, I've decided to expose the storage
network to the Kubernetes worker node VMs. This way, iSCSI traffic does
not have to go through the firewall.
I chose not to use the physical interfaces that are already directly
connected to the storage network for this for two reasons: 1) I like
the physical separation of concerns and 2) it would add complexity to
the setup by introducing a bridge on top of the existing bond.
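
Attaching the existing worker VMs is then a one-time operation per node
(bridge and domain names assumed):

```sh
# Add a virtio NIC on the storage bridge, persisted in the domain XML
virsh attach-interface k8s-amd64-n4 bridge br-storage \
    --model virtio --persistent
```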
Nextcloud usually (always?) wants the `occ upgrade` command to be run
after an update. If the *nextcloud* package gets updated along with
the rest of the OS, Nextcloud will be down until I manually run that
command hours/days later.
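
Assuming the fix is to run the command automatically whenever the
package is updated, the task might look like (paths and user assumed):

```yaml
- name: Run pending Nextcloud migrations
  ansible.builtin.command: php occ upgrade
  args:
    chdir: /usr/share/nextcloud   # path assumed
  become: true
  become_user: apache
```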
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
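
With the firewalld module, making the rule permanent is just (service
name assumed to be plain HTTP for the HTTP-01 challenge):

```yaml
- name: Allow inbound HTTP for ACME challenges
  ansible.posix.firewalld:
    service: http
    state: enabled
    permanent: true
    immediate: true
```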