Modern versions of Podman use Netavark, which needs to write various
files on the host file system (even when the container uses the
host's network namespace).
If the `minio_address` variable is specified, it will be passed with the
`--address` argument to `minio server`. This allows controlling the
socket the server binds to and listens on.
The `minio_browser_redirect_url` can be specified to populate the
similarly-named environment variable, which configures how MinIO serves
the web UI.
The `minio_domain` variable sets the `MINIO_DOMAIN` environment
variable, which enables DNS names (subdomains) for buckets, i.e.
`{bucket_name}.{MINIO_DOMAIN}`.
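For illustration, the three variables might be set together something like this (the hostnames and port here are placeholders, not the real deployment values):
```yaml
# Placeholder values for illustration only
minio_address: ':9000'
minio_browser_redirect_url: https://minio.example.com/console/
minio_domain: minio.example.com
```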
`wal-g` needs to connect to the PostgreSQL database system, so it should
run as the _postgres_ user, who has permission to connect, rather than
_root_, who does not.
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
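A rough sketch of what that might look like (the key names are illustrative; the role's template defines the exact structure):
```yaml
# Illustrative structure; the role's actual keys may differ
gitea_smtp:
  host: smtp.example.com
  port: 587
  user: gitea@example.com
  password: '{{ vault_gitea_smtp_password }}'
  from: 'Gitea <gitea@example.com>'
```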
Gitea complains if the `WORK_DIR` setting is not set. It tries to set
it itself, but fails because the configuration is read-only. The value
it uses is incorrect anyway (`/usr/local/bin`, since that's where the
`gitea` executable is).
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than typing the rules out by hand.
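As a sketch of the idea (the variable name and keys here are illustrative, not necessarily what the role ended up using), the site data could look something like this, with both the HTTP and HTTPS frontend rules templated from the same list so they cannot drift apart:
```yaml
# Illustrative data model; actual variable and key names may differ
proxied_sites:
  - name: nextcloud
    hostnames:
      - cloud.example.com
    backend: nextcloud.example.internal:443
  - name: gitea
    hostnames:
      - git.example.com
    backend: git.example.internal:443
```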
_haproxy0.pyrocufflink.blue_ is a Fedora Linux VM that runs HAProxy to
provide reverse proxy, exposing web sites and applications to the
Internet. It has a static MAC address because it will need a static IP
address, at least initially, in order for DNAT to work.
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition to
hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
because when using TLS-passthrough mode, the original source IP address
is lost. Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.
The `migrate-all.sh` script is used to migrate one or more VMs (default:
all) from one VM host to the other on demand.
The `shutdown-vmhost.sh` script prepares a VM host to shut down by
evicting Kubernetes Pods from the Nodes running on that host and then
shutting them down, followed by migrating the rest of the running VMs to
the other host.
Originally, the VM hosts were in a separate inventory so they would
not be managed with the rest of the servers. It used to be that one
server was running all the VMs, while the other was asleep. That's
no longer the case; both are always running and each has about half
of the VMs. Since they're both always online, they can be managed
normally now.
Sometimes, the mail server for *hatch.name* is extremely slow. While
there isn't much I can do about it for external senders, I can at least
ensure that email messages sent by internal services like Authelia are
always delivered quickly by rewriting the recipient address to my
actual email address, bypassing the *hatch.name* exchange entirely.
The *postfix* role will now generate configuration and a lookup table
for [canonical address mapping][0] of email recipients. To configure
the mapping, set the `postfix_recipient_canonical_map` variable to a
dictionary mapping source addresses to target addresses, e.g.:
```yaml
postfix_recipient_canonical_map:
  my.bad.email@fake.test: my.real.email@example.com
```
[0]: https://www.postfix.org/ADDRESS_REWRITING_README.html#canonical
*logs0.pyrocufflink.blue* was replaced by *loki0.pyrocufflink.blue*
ages ago, so I'm not sure how I hadn't updated the autostart list
yet.
*unifi3.pyrocufflink.blue* replaced *unifi2.p.b* recently, when I was
testing *Luci*/etcd.
Frigate uses the Github API to check for new releases. It then
populates the `update.frigate_server` entity in Home Assistant via MQTT
with the information it retrieved. If it is unable to access the Github
API, the Home Assistant entity will be marked as "unavailable," which
triggers an alert notification from Home Assistant. Thus, we need to
allow Frigate to access Github if we want to use that entity as an
indicator of whether or not Frigate is connected to the MQTT broker.
I don't want to allow everything on the Frigate server to access the
Github API, just Frigate itself. To do that, I've assigned a unique
username and password for Frigate. Only requests with the proper
`Proxy-Authorization` header will be allowed access. By providing the
credentials only to the Frigate container, we can ensure no other process
has access.
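In practice, that just means the standard proxy environment variables, with the credentials embedded, are set only inside the container; something along these lines (the proxy host, port, and variable names are assumptions for illustration):
```yaml
# Illustrative only: proxy host/port and variable names are assumptions
frigate_container_env:
  HTTPS_PROXY: 'http://frigate:{{ frigate_proxy_password }}@proxy.example.internal:3128'
  NO_PROXY: 'localhost,127.0.0.1,.pyrocufflink.blue'
```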
I think I did this mostly as an exercise; there's no particular reason
to disallow access to the Github API, since it's mostly read-only and
can't really be used to exfiltrate any data (probably?).
If winbind is unable to communicate with any domain controller, the
`pam_winbind.so` module will time out. In _auth_ and _account_ context,
this was not an issue, at least for local users, because other modules
terminated the stack before `pam_winbind.so` was called. In _session_
context, though, nothing terminated the stack at all, so
`pam_winbind.so` was called unconditionally. This prevented even _root_
from logging in on the console. This made troubleshooting difficult,
especially for the VM hosts, when the domain controllers were down.
Restoring the SELinux label of a mount point is really only necessary
for a brand-new filesystem, which will have no label at all. In other
cases, changing the context is probably neither necessary nor desirable,
as the existing data is potentially labelled correctly already.
Changing the label on only the root directory should be sufficient to
ensure applications run correctly with newly-provisioned filesystems,
which only have the one directory anyway, without having much impact
on existing filesystems.
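A minimal sketch of what that amounts to (the variable name is illustrative): run `restorecon` non-recursively so only the mount point itself is relabeled:
```yaml
# Sketch only: relabel just the mount point, not its contents
- name: Restore SELinux context on the filesystem root
  ansible.builtin.command: restorecon -v {{ mount_point }}
```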
*nvr2.pyrocufflink.blue* originally ran Fedora CoreOS. Since I'm tired
of the tedium and difficulty involved in making configuration changes to
FCOS machines, I am migrating it to Fedora Linux, managed by Ansible.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR, and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
For systems using UEFI with SecureBoot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
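The pause is essentially just this (the prompt wording here is illustrative, not the role's exact text):
```yaml
# The exact wording in the role may differ
- name: Wait for an operator before enrolling the MOK
  ansible.builtin.pause:
    prompt: >-
      A new Machine Owner Key will be enrolled on the next boot, which
      requires confirming the enrollment at the console.  Press Enter to
      continue when someone is ready to attend the reboot.
```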
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.
The Frigate NVR servers, prod & test, need to be able to access Fedora
COPR (for the *gasket-dkms* package) and Github Container Registry (for
Frigate itself).
Although the `newvm.sh` script had a variable to configure the value
specified for the `--network` argument to `virt-install`, it didn't
expose a way to set it. We need this ability so we can e.g. create VMs
on non-default networks like `camera` or `mgmt`.
* Switch to Quadlet-style `.container` for systemd unit
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
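Overriding `frigate_config` wholesale might look something like this (a minimal sketch following Frigate's configuration schema; the camera names and URLs are placeholders):
```yaml
# Placeholder values; real camera names and URLs differ
frigate_config:
  mqtt:
    host: mqtt.example.internal
  cameras:
    driveway:
      ffmpeg:
        inputs:
          - path: rtsp://camera.example.internal:8554/stream
            roles:
              - detect
```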
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells. Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, since a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use the same
mirror all the time.
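The general idea, sketched as an Ansible task (not necessarily how the role actually implements it; it may edit the stock repo files instead):
```yaml
# Sketch only: pin the Fedora repo to a single mirror behind the proxy
- name: Use a fixed baseurl instead of the metalink
  ansible.builtin.yum_repository:
    name: fedora
    description: Fedora $releasever - $basearch
    baseurl: https://dl.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/
    gpgcheck: true
```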
The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
The UniFi Network server needs to be able to access the
_linuxserver.io_/GitHub and Docker Hub OCI image registries for the
UniFi Network and Caddy container images, respectively.
Although it's rare, sometimes Samba crashes or fails to start. When
this happens, restarting it is almost always enough to get it working
again. Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
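A minimal sketch of how that can be done with a drop-in (the role's actual tasks and handler names may differ):
```yaml
# Sketch: systemd drop-in so samba.service restarts whenever it stops
- name: Create drop-in directory for samba.service
  ansible.builtin.file:
    path: /etc/systemd/system/samba.service.d
    state: directory
    mode: '0755'

- name: Always restart samba.service
  ansible.builtin.copy:
    dest: /etc/systemd/system/samba.service.d/restart.conf
    content: |
      [Service]
      Restart=always
      RestartSec=5s
    mode: '0644'
  notify: reload systemd    # assumed handler name
```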
*k8s-amd64-n0*, *k8s-amd64-n1*, and *k8s-amd64-n2* have been replaced by
*k8s-amd64-n4*, *k8s-amd64-n5*, and *k8s-amd64-n6*, respectively. *db0* is
the new database server, which needs to be up before anything in
Kubernetes starts, since a lot of applications running there depend on
it.
This script captures the steps taken to migrate from the PostgreSQL
server in the Kubernetes cluster, managed by _postgres operator_, to the
dedicated server on _db0.pyrocufflink.blue_. The data were restored
from the backups created by _wal-e_, and then the new server was
promoted to primary. Finally, I cleaned up the roles and databases that
are no longer needed.
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus. It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from the `pg_stat_activity`, et al. views. It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.
Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.
[0]: https://github.com/prometheus-community/postgres_exporter/
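Sketched as Ansible tasks (the module arguments and role name are illustrative; the role may do this differently):
```yaml
# Illustrative tasks for the exporter's OS user and database role
- name: Create an OS user for peer authentication
  ansible.builtin.user:
    name: postgres_exporter
    system: true

- name: Create the matching PostgreSQL role
  become: true
  become_user: postgres
  community.postgresql.postgresql_user:
    name: postgres_exporter

- name: Grant pg_monitor to the exporter role
  become: true
  become_user: postgres
  community.postgresql.postgresql_membership:
    groups: pg_monitor
    target_roles: postgres_exporter
```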
By default, WAL-G tries to connect to the PostgreSQL server via TCP
socket on the loopback interface. Our HBA configuration requires
certificate authentication for TCP sockets, so we need to configure
WAL-G to use the UNIX socket.
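Since WAL-G uses libpq, pointing it at the socket directory is enough; for example (the variable name the role uses for this environment is an assumption):
```yaml
# PGHOST set to a directory makes libpq use the UNIX socket there
walg_environment:
  PGHOST: /var/run/postgresql
```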
All data have been migrated from the PostgreSQL server in Kubernetes and
the three applications that used it (Firefly-III, Authelia, and Home
Assistant) have been updated to point to the new server.
To avoid commingling the backups from the old server with those from the
new server, we're reconfiguring WAL-G to push and pull from a new S3
prefix.
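Concretely, that's just a new `WALG_S3_PREFIX`; something like this (the bucket and prefix names are placeholders):
```yaml
# Placeholder bucket/prefix
walg_environment:
  WALG_S3_PREFIX: s3://example-postgres-backups/db0
```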
WAL archives are not much good without a base backup onto which they
can be applied. Thus, we need to schedule WAL-G to create and upload a
backup periodically.
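One way to schedule it, as a sketch (the role might use a systemd timer instead; the schedule and data directory are assumptions):
```yaml
# Sketch: weekly base backup as the postgres user
- name: Schedule a weekly WAL-G base backup
  ansible.builtin.cron:
    name: wal-g base backup
    user: postgres
    weekday: '0'
    hour: '2'
    minute: '0'
    job: wal-g backup-push /var/lib/pgsql/data
```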
The `deploy.sh` script ensures the execution environment is correct by
configuring the Ansible Vault secret, unlocking the `rbw` vault, and
requesting an SSH client certificate. It then runs the specified
end-to-end deployment script from the `deploy` directory.
*db0.pyrocufflink.blue* will be the primary server in the new PostgreSQL
database cluster. We're starting with Fedora 39 so we can have
PostgreSQL 15, to match the version managed by the Postgres Operator in
the Kubernetes cluster right now.