It turns out that having the exporter connect to the _template1_ database is
not a great idea. PostgreSQL does not allow creating a new database if
the template database is currently being accessed by any clients. Since
_template1_ is the default choice, the `createdb` command will probably
fail.
It doesn't specifically matter which database the exporter connects to,
since it reads most (all?) of its data from the PostgreSQL catalog,
which isn't database-specific.
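For example, pointing the exporter at the built-in _postgres_ database avoids
the conflict entirely. A sketch of what the connection string might look
like, as an Ansible variable (the variable name and DSN details are
illustrative, not the role's actual values):

```yaml
# Illustrative only: the point is simply to connect to a database other than
# template1, e.g. the built-in "postgres" database.
postgres_exporter_data_source_name: >-
  postgresql://postgres_exporter@db0.pyrocufflink.blue:5432/postgres?sslmode=verify-full
```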
Moving the Nextcloud database to the central PostgreSQL server will
allow it to take advantage of the monitoring and backups in place there.
For backups specifically, this will make it easier to switch from BURP
to Restic, since now only the contents of the filesystem need to be backed
up.
The PostgreSQL server on _db0_ requires certificate authentication for
all clients. The certificate for Nextcloud is stored in a Secret in
Kubernetes, so we need to use the _nextcloud-db-cert_ role to install
the script to fetch it. Nextcloud configuration doesn't expose the
parameters for selecting the certificate and private key files, but
fortunately, they can be encoded in the value provided to the `host`
parameter, though it makes for a rather cumbersome value.
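To illustrate, the resulting value carries the TLS parameters alongside the
hostname, something like this (the exact separator and file paths are
assumptions, shown only to give a sense of the shape):

```yaml
# Hypothetical example: the TLS parameters ride along in the host value,
# which Nextcloud passes through to the PostgreSQL driver.
nextcloud_db_host: >-
  db0.pyrocufflink.blue;sslmode=verify-full;sslcert=/etc/nextcloud/db.crt;sslkey=/etc/nextcloud/db.key;sslrootcert=/etc/nextcloud/db-ca.crt
```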
Currently, the certificate authority that issues certificates for
PostgreSQL clients is hosted in Kubernetes and managed by
_cert-manager_. Certificates it issues are stored in Kubernetes Secret
resources, making them easy to consume by applications running in the
cluster, but not for anything outside. Since Nextcloud runs on its own
VM, we need a way to get the certificate out of the Secret and into a
file on that machine. To that end, I've written the
`nextcloud-fetch-cert.py` script. This script uses a Kubernetes Service
Account token to authenticate to the Kubernetes API and download the
contents of the Secret. It runs periodically, triggered by a systemd
timer unit, to ensure the certificate is always up-to-date.
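At its core, the script is just an authenticated read of the Secret over the
Kubernetes API. Expressed as an Ansible task purely for illustration (the
URL, namespace, Secret name, and token path are all assumptions), the
request looks roughly like this:

```yaml
# Illustrative sketch only; the real logic lives in nextcloud-fetch-cert.py.
# The Secret data comes back base64-encoded and still has to be decoded and
# written out to files.
- name: Fetch the Nextcloud database client certificate Secret
  ansible.builtin.uri:
    url: https://kubernetes.pyrocufflink.blue:6443/api/v1/namespaces/nextcloud/secrets/nextcloud-db-cert
    headers:
      Authorization: "Bearer {{ lookup('file', '/etc/nextcloud/kubernetes-token') }}"
    return_content: true
  register: nextcloud_db_cert_secret
```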
The obvious drawback to this approach is the requirement for a static
token. Since there's not really a way to "renew" Service Account
tokens, the token has to be issued with a fairly long duration; otherwise,
the certificate could expire while the token is also expired, leaving the
machine unable to fetch a new one. This somewhat negates the advantage
of using certificates for authentication, since now the machine needs a
static, pre-defined secret.
At some point, I may deploy another instance of _step-ca_ to manage the
PostgreSQL client CA. Clients can then use e.g. `certbot` or `step ca
certificate` to obtain their certificates. I chose not to implement
this yet, though, for a couple of reasons. First, I need to move the
Nextcloud database very soon, so we can switch to using `restic` for backups
without having to deal with the database. Second, I am still
considering moving Nextcloud into Kubernetes eventually, where it will
be able to get the Secret directly; since Nextcloud is the only client
outside the cluster, it may not be worth setting up _step-ca_ in that
case.
The _nextcloud_ role originally handled setting up the PostgreSQL
database and assumed that it was running on the same server as Nextcloud
itself. I have factored out those tasks into their own role,
_nextcloud-db_, which can be applied to a separate host.
I have also introduced some new variables (`nextcloud_db_host`,
`nextcloud_db_name`, `nextcloud_db_user`, and `nextcloud_db_password`),
which can be used to specify how to connect to the database, if it is
hosted remotely. Since these variables are used by both the _nextcloud_
and _nextcloud-db_ roles, they are actually defined in a separate role,
_nextcloud-base_, upon which both depend.
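For a remotely-hosted database, the variables might be set along these lines
(the values here are examples only):

```yaml
# Example values; the password would normally come from Ansible Vault.
nextcloud_db_host: db0.pyrocufflink.blue
nextcloud_db_name: nextcloud
nextcloud_db_user: nextcloud
nextcloud_db_password: "{{ vault_nextcloud_db_password }}"
```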
When HAProxy binds to the IPv6 socket, it can handle both IPv6 and IPv4
clients. IPv4 clients are handled as IPv4-mapped IPv6 addresses, which
some backends (i.e. Apache) cannot support. To avoid this, we configure
HAProxy to bind to the IPv4 and IPv6 sockets separately, so that IPv4
addresses are handled as IPv4 addresses.
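Conceptually, the frontend ends up with one `bind` per address family, with
the IPv6 socket restricted to IPv6 only. Sketched as a hypothetical role
variable (the actual template and variable names may differ):

```yaml
# Hypothetical sketch: one bind per address family, with the IPv6 socket
# marked v6only so IPv4 clients are not presented as IPv4-mapped addresses.
haproxy_frontend_binds:
  - "0.0.0.0:443"
  - ":::443 v6only"
```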
Expose a virtual host on a separate TCP port that uses the PROXY
protocol. This way, HAProxy can pass the original client IP address to
Jellyfin without terminating the TLS connection.
In order to enable authentication using LDAP over TLS in Jellyfin, we
need to expose the CA certificate that issues the LDAP server
certificates to the container.
*chromie.pyrocufflink.blue* will replace *burp1.pyrocufflink.blue* as
the backup server. It is running on the hardware that was originally
*nvr1.pyrocufflink.blue*: a 1U Jetway server with an Intel Celeron N3160
CPU and 4 GB of RAM.
The `raid-array.yml` playbook can create Linux *md* software RAID arrays
using the `mdadm` command. Two variables are required: `md_name` and
`raid_disks`. The former is a string name for the array. The latter is
an array of paths of block devices to add to the array.
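For example, a two-disk array might be described like this (values are
illustrative):

```yaml
md_name: backups
raid_disks:
  - /dev/sda
  - /dev/sdb
```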
This playbook uses the *minio-nginx* and *minio-backups-cert* roles to
deploy MinIO with nginx.
The S3 API server is *s3.backups.pyrocufflink.blue*, and buckets can be
accessed as subdomains of this name.
The Admin Console is *minio.backups.pyrocufflink.blue*.
Certificates are issued by DCH CA via ACME using `certbot`.
The MinIO server for backups has special requirements for HTTPS. I want
to use subdomains for bucket names, so the certificate must have a
wildcard name, which requires using the DNS-01 challenge. Fortunately,
it is actually pretty easy to use `nsupdate` with GSS-TSIG
authentication to automate DNS record creation, and by default, all
domain-member machines can create any records. Thus, using the `manual`
auth plugin for `certbot` and a script to run `nsupdate`, obtaining the
wildcard certificate is fairly straightforward.
The biggest issue I encountered while developing this feature was
caching of NXDOMAIN responses. There doesn't seem to be a way to change
the TTL of the SOA record of the Active Directory DNS domain, which
defaults to 3600, meaning NXDOMAIN responses are always cached for an
hour. When adding a record using `nsupdate -g`, the tool always
performs a SOA lookup of the new name to find the target zone for it. Since
the name does not exist yet, the domain controller responds with
NXDOMAIN, which gets cached by the main DNS server. Thus, even after
adding the record, the ACME server will not be able to resolve the
name for up to an hour. We can avoid this by explicitly setting the
target zone. That would not work in a multi-domain forest, but
fortunately, we do not have to worry about that.
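Roughly, the update the hook sends looks like the following, shown here as a
hypothetical Ansible task for illustration; the zone name, record name, and
TTL are assumptions, and a Kerberos ticket (e.g. from the machine keytab) is
presumed to already be available:

```yaml
# Hypothetical sketch of the DNS-01 hook; the real implementation is a
# script run by certbot's manual auth plugin.  Naming the zone explicitly
# skips the SOA lookup that would otherwise cache an NXDOMAIN response.
- name: Publish the ACME DNS-01 challenge record
  ansible.builtin.shell:
    cmd: |
      nsupdate -g <<'EOF'
      zone pyrocufflink.blue.
      update add _acme-challenge.{{ certbot_domain }}. 60 IN TXT "{{ certbot_validation }}"
      send
      EOF
```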
This role borrows some logic from the *postgresql-cert* role.
Eventually, I probably want to combine some of the steps from both of
these roles, possibly replacing the old *certbot* role.
The *minio-nginx* role configures nginx to proxy for MinIO. It uses the
"subdomain" pattern, as described in [Configure NGINX Proxy for MinIO
Server][0]; the S3 API and the console UI are accessible through
different domain names.
[0]: https://min.io/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
Modern versions of Podman use Netavark, which needs to write various
files on the host file system (even when the container uses the
host's network namespace).
If the `minio_address` variable is specified, it will be passed with the
`--address` argument to `minio server`. This allows controlling the
socket the server binds to and listens on.
The `minio_browser_redirect_url` variable can be specified to populate the
similarly-named environment variable, which configures how MinIO serves
the web UI.
The `minio_domain` variable sets the `MINIO_DOMAIN` environment
variable, which enables DNS names (subdomains) for buckets, i.e.
`{bucket_name}.{MINIO_DOMAIN}`.
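Put together, the backup server's deployment might define something like
this (the listen address is an assumption):

```yaml
minio_address: "127.0.0.1:9000"
minio_browser_redirect_url: https://minio.backups.pyrocufflink.blue
minio_domain: s3.backups.pyrocufflink.blue
```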
`wal-g` needs to connect to the PostgreSQL database system, so it should
run as the _postgres_ user, who has permission to connect, rather than
_root_, who does not.
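Any task or unit that invokes `wal-g` therefore needs to drop privileges to
that user. A sketch of what a backup task might look like (the data
directory path is Fedora's default and the task itself is illustrative):

```yaml
# Illustrative sketch: run wal-g as the postgres user, not root.
- name: Push a base backup with wal-g
  become: true
  become_user: postgres
  ansible.builtin.command: wal-g backup-push /var/lib/pgsql/data
```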
Gitea needs SMTP configuration in order to send e-mail notifications
about e.g. pull requests. The `gitea_smtp` variable can be defined to
enable this feature.
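The exact structure the role expects isn't reproduced here, but the idea is
a single dictionary carrying the mailer settings, along these lines (keys
and values are hypothetical):

```yaml
# Hypothetical structure; the actual keys depend on the role's template.
gitea_smtp:
  host: smtp.pyrocufflink.blue
  port: 587
  user: gitea
  password: "{{ vault_gitea_smtp_password }}"
  from: gitea@pyrocufflink.net
```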
Gitea complains if the `WORK_DIR` setting is not set. It tries to set
it itself, but fails because the configuration is read-only. The value
it uses is incorrect anyway (`/usr/local/bin`, since that's where the
`gitea` executable is).
I've already made a couple of mistakes keeping the HTTP and HTTPS rules
in sync. Let's define the sites declaratively and derive the HAProxy
rules from the data, rather than typing the rules out manually.
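Something along these lines, where each site is described once and both the
HTTP and HTTPS rules are generated from it (the structure and names here are
hypothetical):

```yaml
# Hypothetical data model: each entry drives both the HTTP and HTTPS
# frontend rules and the corresponding backend definition.
dch_proxy_sites:
  - name: nextcloud
    hostnames:
      - cloud.pyrocufflink.net
    backend: nextcloud.pyrocufflink.blue:443
```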
_haproxy0.pyrocufflink.blue_ is a Fedora Linux VM that runs HAProxy to
provide reverse proxy, exposing web sites and applications to the
Internet. It has a static MAC address because it will need a static IP
address, at least initially, in order for DNAT to work.
The *dch-proxy* role has not been used for quite some time. The web
server has been handling the reverse proxy functionality, in addition to
hosting websites. The drawback to using Apache as the reverse proxy,
though, is that it operates in TLS-terminating mode, so it needs to have
the correct certificate for every site and application it proxies for.
This is becoming cumbersome, especially now that there are several sites
that do not use the _pyrocufflink.net_ wildcard certificate. Notably,
Tabitha's _hatchlearningcenter.org_ is problematic because although the
main site is hosted by the web server, the Invoice Ninja client portal
is hosted in Kubernetes.
Switching back to HAProxy to provide the reverse proxy functionality
will eliminate the need to have the server certificate both on the
backend and on the reverse proxy, as it can operate in TLS-passthrough
mode. The main reason I stopped using HAProxy in the first place was
because when using TLS-passthrough mode, the original source IP address
is lost. Fortunately, HAProxy and Apache can both be configured to use
the PROXY protocol, which provides a mechanism for communicating the
original IP address while still passing through the TLS connection
unmodified. This is particularly important for Nextcloud because of its
built-in intrusion prevention; without knowing the actual source IP
address, it blocks _everyone_, since all connections appear to come from
the reverse proxy's IP address.
Combining TLS-passthrough mode with the PROXY protocol resolves both the
certificate management issue and the source IP address issue.
I've cleaned up the _dch-proxy_ role quite a bit in this commit.
Notably, I consolidated all the backend and frontend definitions into a
single file; it didn't really make sense to have them all separate,
since they were managed by the same role and referred to each other. Of
course, I had to update the backends to match the currently-deployed
applications as well.
The `migrate-all.sh` script is used to migrate one or more VMs (default:
all) from one VM host to the other on demand.
The `shutdown-vmhost.sh` script prepares a VM host to shut down by
evicting Kubernetes Pods from the Nodes running on that host and then
shutting them down, followed by migrating the rest of the running VMs to
the other host.
Originally, the VM hosts were in a separate inventory so they would
not be managed with the rest of the servers. It used to be that one
server was running all the VMs, while the other was asleep. That's
no longer the case; both are always running and each has about half
of the VMs. Since they're both always online, they can be managed
normally now.
Sometimes, the mail server for *hatch.name* is extremely slow. While
there isn't much I can do about it for external senders, I can at least
ensure that email messages sent by internal services like Authelia are
always delivered quickly by rewriting the recipient address to my
actual email address, bypassing the *hatch.name* exchange entirely.
The *postfix* role will now generate configuration and a lookup table
for [canonical address mapping][0] of email recipients. To configure
the mapping, the `postfix_recipient_canonical_map` variable must be a
dictionary mapping source addresses to target addresses, e.g.:
```yaml
postfix_recipient_canonical_map:
  my.bad.email@fake.test: my.real.email@example.com
```
[0]: https://www.postfix.org/ADDRESS_REWRITING_README.html#canonical
*logs0.pyrocufflink.blue* was replaced by *loki0.pyrocufflink.blue* ages
ago, so I'm not sure how I hadn't updated the autostart list yet.
*unifi3.pyrocufflink.blue* replaced *unifi2.p.b* recently, when I was
testing *Luci*/etcd.
Frigate uses the Github API to check for new releases. It then
populates the `update.frigate_server` entity in Home Assistant via MQTT
with the information it retrieved. If it is unable to access the Github
API, the Home Assistant entity will be marked as "unavailable," which
triggers an alert notification from Home Assistant. Thus, we need to
allow Frigate to access Github if we want to use that entity as an
indicator of whether or not Frigate is connected to the MQTT broker.
I don't want to allow access to the Github API to everything on the
Frigate server, just Frigate itself. To do that, I've assigned a unique
username and password for Frigate. Only requests with the proper
`Proxy-Authorization` header will be allowed access. By providing the
credentials only to the Frigate container, we can ensure no other process
has access.
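In practice, that just means the proxy URL, with the credentials embedded,
is set only in the Frigate container's environment, so only Frigate ends up
sending the `Proxy-Authorization` header. A hypothetical sketch (variable
names, port, and proxy host are assumptions):

```yaml
# Hypothetical sketch: the proxy credentials are injected only into the
# Frigate container's environment, so no other process on the host has them.
frigate_container_env:
  HTTPS_PROXY: "http://frigate:{{ vault_frigate_proxy_password }}@proxy.pyrocufflink.blue:3128"
  NO_PROXY: localhost,127.0.0.1,.pyrocufflink.blue
```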
I think I did this mostly as an exercise; there's no particular reason
to disallow access to the Github API, since it's mostly read-only and
can't really be used to exfiltrate any data (probably?).
If winbind is unable to communicate with any domain controller, the
`pam_winbind.so` module will time out. In _auth_ and _account_ context,
this was not an issue, at least for local users, because other modules
terminated the stack before `pam_winbind.so` was called. In _session_
context, though, nothing terminated the stack at all, so
`pam_winbind.so` was called unconditionally. This prevented even _root_
from logging in on the console. This made troubleshooting difficult,
especially for the VM hosts, when the domain controllers were down.
Restoring the SELinux label of a mount point is really only necessary
for a brand new filesystem, which will have no label at all. In other
cases, changing the context is probably neither necessary nor desirable,
as the existing data is potentially labelled correctly already.
Changing the label on only the root directory should be sufficient to
ensure applications run correctly with newly-provisioned filesystems, since
those only have the one directory anyway, without impacting existing
filesystems too much.
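In other words, the role now relabels just the mount point itself instead of
recursing into it; roughly (the task and variable name are a sketch):

```yaml
# Sketch: relabel only the filesystem root, not its contents.  A brand new
# filesystem has nothing below the root yet, and existing data is likely
# labelled correctly already.
- name: Restore the SELinux context of the mount point
  ansible.builtin.command: restorecon -v {{ mount_point }}
```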
*nvr2.pyrocufflink.blue* originally ran Fedora CoreOS. Since I'm tired
of the tedium and difficulty involved in making configuration changes to
FCOS machines, I am migrating it to Fedora Linux, managed by Ansible.
Deploying Caddy as a reverse proxy for Frigate enables HTTPS with a
certificate issued by the internal CA (via ACME) and authentication via
Authelia.
Separating the installation and base configuration of Caddy into its
own role will allow us to reuse that part for other applications that
use Caddy for similar reasons.
The *gasket-dkms* package provides the `gasket` and `apex` kernel
modules, which are needed for the Google Coral Edge TPU. Since these
are out-of-tree modules, they are not allowed in Fedora proper, so they
are provided in a COPR, and have to be rebuilt for every kernel version.
The DKMS framework handles automatically building the modules whenever
the kernel updates.
For systems using UEFI with SecureBoot enabled, kernel modules must be
signed by a key trusted by the platform. For locally-built modules, we
can use the Machine Owner Key (MOK). Unfortunately, enrolling a new MOK
requires rebooting and manual intervention during the boot process.
Therefore, the *gasket-dkms* role has a `pause` step to ensure someone
is paying attention and able to handle the key enrollment interactively.
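The pause itself is nothing fancy; something like this (the prompt wording
is illustrative):

```yaml
# Illustrative: stop and wait for a human, since enrolling the MOK has to be
# confirmed interactively at the next boot.
- name: Wait for confirmation before enrolling the Machine Owner Key
  ansible.builtin.pause:
    prompt: >-
      Enrolling a new Machine Owner Key requires confirming the request at
      the next boot.  Press Enter to continue.
```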
Eventually, I'd like to have an RPM package with these modules
pre-built, so production servers do not need the kernel development
tools (`perl`, `gcc`, headers, etc.). It will be tricky, though, to
make sure the modules get rebuilt for every kernel version as Fedora
releases them.