The latest version of the *ansible* container runs processes as the
unprivileged *jenkins* user, provides its own "sleep forever" default
command, and sets the correct LANG environment variable. Since it runs
processes as *jenkins*, we need to override HOME and set it to the
WORKSPACE to ensure Jenkins has a writable path for arbitrary files.
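A minimal sketch of that override, assuming a scripted pipeline; the playbook name is a placeholder:

```groovy
// Point HOME at the Jenkins workspace so the unprivileged jenkins user
// always has a writable home directory (playbook name is illustrative)
withEnv(["HOME=${env.WORKSPACE}"]) {
    sh 'ansible-playbook site.yml'
}
```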
Gitea package names (e.g. for OCI images) can contain `/` characters.
These are encoded as %2F in request paths. Apache needs to forward
these sequences to the Gitea server without decoding them.
Unfortunately, the `AllowEncodedSlashes` setting, which controls this
behavior, is a per-virtualhost setting that is *not* inherited from the
main server configuration, and therefore must be explicitly set inside
the `VirtualHost` block. This means Gitea needs its own virtual host
definition, and cannot rely on the default virtual host.
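A minimal sketch of such a virtual host, assuming Gitea listens on its default port; the hostname is a placeholder and TLS directives are omitted:

```apache
<VirtualHost *:443>
    ServerName git.example.com
    # Must be set inside the VirtualHost block; NoDecode forwards %2F
    # sequences to Gitea without decoding them
    AllowEncodedSlashes NoDecode
    # nocanon keeps mod_proxy from re-encoding the request path
    ProxyPass / http://127.0.0.1:3000/ nocanon
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```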
Hopefully this will fix the following warning from Ansible:
> [WARNING]: An error occurred while calling
> ansible.utils.display.initialize_locale (unsupported locale setting).
> This may result in incorrectly calculated text widths that can cause
> Display to print incorrect line lengths
I don't know why I didn't think of this before! There's no reason the
`ssh_known_hosts` file has to be copied to `/etc/ssh` before running
`ansible-playbook`. In fact, keys just end up
getting copied from `/etc/ssh/ssh_known_hosts` into `~/.ssh/known_hosts`
anyway. So let's just make it so that step isn't necessary: copy the
host key database directly to `~/.ssh` and avoid the trouble.
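Something along these lines in the pipeline's shell step should suffice; the source path of the host key database is a placeholder:

```sh
install -d -m 0700 "${HOME}/.ssh"
install -m 0644 ssh_known_hosts "${HOME}/.ssh/known_hosts"
```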
We'll use the `podTemplate` block to define an ephemeral agent running in
a Kubernetes pod as the node for this pipeline. This takes the place of
the Docker container we used previously.
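A rough sketch of the shape, assuming the Kubernetes plugin's scripted syntax; the image reference and steps are placeholders:

```groovy
podTemplate(containers: [
    // The image's own "sleep forever" default command keeps the
    // container alive while pipeline steps run inside it
    containerTemplate(name: 'ansible', image: 'registry.example.com/ansible:latest'),
]) {
    node(POD_LABEL) {
        container('ansible') {
            checkout scm
            sh 'ansible-playbook site.yml'   // placeholder playbook
        }
    }
}
```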
I moved the metrics Pi from the red network to the blue network. I
started to get uncomfortable with the firewall changes that were
required to host a service on the red network. I think it makes the
most sense to define the red network as egress only.
The only major change that affects the configuration policy is the
introduction of the `webhook.ALLOWED_HOST_LIST` setting. For some dumb
reason, the default value of this setting *denies* access to machines on
the local network. This makes no sense; why do they expect you to host
your CI or whatever on a *public* network? Of course, the only reason
given is "for security reasons."
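In `app.ini` terms, the change amounts to something like this; the value shown here is illustrative:

```ini
[webhook]
; "private" permits webhook targets on RFC 1918 (local network) addresses
ALLOWED_HOST_LIST = private
```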
This work-around is no longer necessary as the default Fedora policy now
covers the Samba DC daemon. It never really worked correctly, anyway,
because Samba doesn't start `winbindd` fast enough for the
`/run/samba/winbindd` directory to be created before systemd spawns the
`restorecon` process, so the service would usually fail to start the
first time after a reboot.
Sometimes, Frigate crashes in situations that should be recoverable or
temporary. For example, it will fail to start if the MQTT server is
unreachable initially, and does not attempt to connect more than once.
To avoid having to manually restart the service once the MQTT server is
ready, we can configure the systemd unit to enable automatic restarts.
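A minimal drop-in along these lines should do it; the file name and timings are illustrative:

```ini
# /etc/systemd/system/frigate.service.d/restart.conf
[Service]
Restart=on-failure
RestartSec=30
```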
If the *vaultwarden* service terminates unexpectedly, e.g. due to a
power loss, `podman` may not successfully remove the container. We
therefore need to try to delete it before starting it again, or `podman`
will exit with an error because the container already exists.
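A sketch of the corresponding unit change, assuming the container is named *vaultwarden*; the leading `-` tells systemd to ignore the expected failure when no stale container exists:

```ini
[Service]
ExecStartPre=-/usr/bin/podman rm --force vaultwarden
```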
When I added the *systemd-networkd* configuration for the Kubernetes
network interface on the VM hosts, I only added the `.netdev`
configuration and forgot the `.network` part. Without the latter,
*systemd-networkd* creates the interface, but does not configure or
activate it, so it is not able to handle traffic for the VMs attached to
the bridge.
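A minimal `.network` sketch for such a bridge; the interface name is a placeholder, and the host itself needs no address on it:

```ini
[Match]
Name=kube-br0

[Network]
# Bring the interface up even without carrier and assign no addresses;
# it only needs to forward traffic for the attached VMs
ConfigureWithoutCarrier=yes
LinkLocalAddressing=no
```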
Both *zwavejs2mqtt* and *zigbee2mqtt* have various bugs that can cause
them to crash in the face of errors that should be recoverable.
Specifically, the processes do not always handle network errors well;
especially during first startup, they tend to crash instead of retrying.
Thus, we'll move the retry logic into systemd.
The *zwavejs2mqtt* and *zigbee2mqtt* services need to wait until the
system clock is fully synchronized before starting. If the system clock
is wrong, they may fail to validate the MQTT server certificate.
The *time-sync.target* unit is not started until after services that
sync the clock, e.g. using NTP. Notably, the *chrony-wait.service* unit
delays *time-sync.target* until `chronyc waitsync` returns.
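A minimal drop-in for either service might look like this (unit and file names are illustrative); *chrony-wait.service* also has to be enabled for the target to actually mean "clock synchronized":

```ini
# zigbee2mqtt.service.d/time-sync.conf
[Unit]
Wants=time-sync.target
After=time-sync.target
```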
The *vlan99* interface needs to be created and activated by
`systemd-networkd` before `dnsmasq` can start and bind to it. Ordering
the *dnsmasq.service* unit after *network.target* and
*network-online.target* should ensure that this is the case.
*libvirt*'s native autostart functionality does not work well for
machines that migrate between hosts. Machines lose their auto-start
flag when they are migrated, and the flag is not restored if they are
migrated back. This makes the feature pretty useless for us.
To work around this limitation, I've added a script, run at boot, that
starts the machines listed in `/etc/vm-autostart`, if they exist.
Entries in that file can also insert a delay between starting two machines,
which may be useful to allow services to fully start on one machine
before starting another that may depend on them.
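A sketch of how such a script could work; the line format shown here (one machine name per line, with `sleep N` lines for delays) is an assumption, not the actual format:

```sh
#!/bin/sh
while read -r name arg; do
    case "${name}" in
        ''|'#'*) ;;                        # skip blank lines and comments
        sleep)   sleep "${arg}" ;;         # delay before the next machine
        *)       virsh start "${name}" || true ;;
    esac
done < /etc/vm-autostart
```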
I've moved handling of DNS to the border firewall instead of a dedicated
virtual machine. Originally, the VM was necessary because the UniFi
Security Gateway sucked and could not (easily) handle the complex
configuration I wanted to use. Since moving to the new firewall, this
is no longer a problem.
Having DNS on a VM is problematic when full-network outages occur, like
the one that happened on 16 August 2022. When everything starts back
up, DNS is unavailable. libvirt VM autostart does not work for machines
that have been migrated between hosts (the auto-start flag is not
migrated, and libvirt "forgets" that the VM was supposed to autostart if
it is migrated away and back). I plan to script a solution for this at
some point, but I still think it makes more sense for the firewall to
handle it. It will certainly make it come up quicker regardless.
If `/` is mounted read-only, as is usually the case, the Proton VPN
watchdog cannot update the `remote_addrs` configuration file. It needs
to be stored in a directory that is guaranteed to be writable.
The *netboot/basementhud* Ansible role configures two network block
devices for the basement HUD machine:
* The immutable root filesystem
* An ephemeral swap device
The *netboot/jenkins-agent* Ansible role configures three NBD exports:
* A single, shared, read-only export containing the Jenkins agent root
filesystem, as a SquashFS filesystem
* For each defined agent host, a writable data volume for Jenkins
workspaces
* For each defined agent host, a writable data volume for Docker
Agent hosts must have some kind of unique value to identify their
persistent data volumes. Raspberry Pi devices, for example, can use the
SoC serial number.
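On a Raspberry Pi, for example, the serial number can be read from `/proc/cpuinfo`:

```sh
# Prints the SoC serial number, e.g. 00000000abcdef12
awk '/^Serial/ { print $3 }' /proc/cpuinfo
```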
The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
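A sketch of the corresponding tasks; the task names and mode are illustrative:

```yaml
- name: Create the pxeadmins group
  ansible.builtin.group:
    name: pxeadmins
    state: present

- name: Give pxeadmins write access to the TFTP root
  ansible.builtin.file:
    path: /var/lib/tftpboot
    state: directory
    group: pxeadmins
    mode: "2775"
```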
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the port and accept
incoming connections, and a template service unit that systemd
instantiates for each incoming connection, passing it the accepted
socket.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
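A sketch of the socket unit, assuming the instantiated template is called `nbd@.service`; the keep-alive timings are illustrative:

```ini
# nbd.socket
[Unit]
Description=NBD server socket

[Socket]
ListenStream=10809
# Spawn one nbd@.service instance per connection
Accept=yes
# Aggressive keep-alives so connections from vanished clients are
# noticed and their server processes reaped quickly
KeepAlive=yes
KeepAliveTimeSec=60
KeepAliveIntervalSec=10
KeepAliveProbes=3

[Install]
WantedBy=sockets.target
```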
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
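An export drop-in from another role might look roughly like this; the path and export name are illustrative, and assume the main config includes a `conf.d`-style directory:

```ini
# /etc/nbd-server/conf.d/jenkins-agent.conf
[jenkins-agent]
exportname = /srv/nbd/jenkins-agent.squashfs
readonly = true
```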
The `selinux_permissive` module fails on hosts that do not have SELinux
activated. We must skip running this task on those machines to avoid
fatal errors.
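A sketch of the guard, assuming the standard `ansible_selinux` fact has been gathered; the domain name is a placeholder:

```yaml
- name: Make the example domain permissive
  community.general.selinux_permissive:
    name: foo_t          # placeholder domain
    permissive: true
  when: ansible_selinux.status == "enabled"
```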
Frigate needs to be able to connect to the MQTT broker immediately upon
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* will help ensure Frigate starts when the system
boots.
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
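The symlink part might be handled with something like this, assuming the stub resolver listener is in use:

```yaml
- name: Point /etc/resolv.conf at systemd-resolved
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true
```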
In order for Jenkins to apply configuration policy on machines that are
not members of the *pyrocufflink.blue* domain, it needs to use an SSH
private key for authentication.
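One way to wire that up is via inventory variables; the user name and key path here are placeholders (the real key would come from a Jenkins credential):

```yaml
# Placeholder values; the key file is provided at runtime by Jenkins
ansible_user: ansible
ansible_ssh_private_key_file: /path/to/deploy-key
```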
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The firewall hardware is too slow to run the *prometheus_speedtest*
program. It always showed *way* lower speeds than were actually
available. I've moved the service to the Kubernetes cluster and it
works a lot better there.
The *metricspi* hosts several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
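Each application gets a location block along these lines; the prefix and upstream port are illustrative:

```nginx
location /vmalert/ {
    proxy_pass http://127.0.0.1:8880/;
}
```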