The *pxe* role configures the TFTP and NBD stages of PXE network
booting. The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB. The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.
The *pxe* role also creates a user group called *pxeadmins*. Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
The *tftp* role installs the *tftp-server* package. There is
practically no configuration for the TFTP server. It "just works" out
of the box, as long as its target directory exists.
The *nbd-server* role configures a machine as a Network Block Device
(NBD) server, using the reference `nbd-server` implementation. It
configures a systemd socket unit to listen on the NBD port and accept
incoming connections, and a template service unit that systemd
instantiates for each accepted connection.
The reference `nbd-server` is actually not very good. It does not clean
up closed connections reliably, especially if the client disconnects
unexpectedly. Fortunately, systemd provides the necessary tools to work
around these bugs. Specifically, spawning one process per connection
allows processes to be killed externally. Further, since systemd
creates the listening socket, it can control the keep-alive interval.
By setting this to a rather low value, we can clean up server processes
for disconnected clients more quickly.
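A minimal sketch of this arrangement (the unit names, the keep-alive
value, and the use of `nbd-server`'s inetd mode are assumptions):

```ini
# nbd.socket: accept each connection, spawning one service instance per client
[Socket]
ListenStream=10809
Accept=yes
# Detect dead clients quickly so their server processes can be reaped
KeepAlive=yes
KeepAliveTimeSec=60

[Install]
WantedBy=sockets.target

# nbd@.service: one nbd-server process per accepted connection
[Service]
# Port 0 requests inetd mode: serve the connection passed on stdin
ExecStart=/usr/sbin/nbd-server 0
StandardInput=socket
```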
Configuration of the server itself is minimal; most of the configuration
is done on a per-export basis using drop-in configuration files. Other
Ansible roles should create these configuration files to configure
application-specific exports. Nothing needs to be reloaded or restarted
for changes to take effect; the next incoming connection will spawn a
new process, which will use the latest configuration file automatically.
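A role might install a drop-in like this sketch (the paths and the
`includedir` arrangement are assumptions):

```ini
# /etc/nbd-server/conf.d/fedora-root.conf: hypothetical per-export drop-in,
# picked up via an includedir directive in the main configuration
[fedora-root]
exportname = /srv/nbd/fedora-root.img
readonly = true
```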
Frigate needs to be able to connect to the MQTT broker immediately upon
startup or it will crash. Ordering the *frigate.service* unit after
*network-online.target* helps ensure Frigate starts successfully when
the system boots.
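A drop-in along these lines should do it:

```ini
# frigate.service drop-in
[Unit]
Wants=network-online.target
After=network-online.target
```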
The *systemd-resolved* role/playbook ensures the *systemd-resolved*
service is enabled and running, and ensures that the `/etc/resolv.conf`
file is a symlink to the appropriate managed configuration file.
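A sketch of the symlink task (the target is systemd-resolved's standard
stub resolver file):

```yaml
- name: Point /etc/resolv.conf at systemd-resolved
  file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true
```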
The `-external.url` and `-external.alert.source` command line arguments
and their corresponding environment variables can be used to configure
the "Source" links associated with alerts created by `vmalert`.
The *metricspi* machine hosts several Victoria Metrics-adjacent applications.
These each expose their own HTTP interface that can be used for
debugging or introspecting state. To make these accessible on the
network, the *victoria-metrics-nginx* role now configures `proxy_pass`
directives for them in its nginx configuration.
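The proxied locations look something like this (paths illustrative; the
backend ports are each tool's defaults):

```nginx
location /vmalert/ {
    proxy_pass http://127.0.0.1:8880/;
}
location /alertmanager/ {
    proxy_pass http://127.0.0.1:9093/;
}
```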
The *scrape-collectd* role generates the
`/etc/prometheus/scrape-collectd.yml` file. This file can be read by
Prometheus/Victoria Metrics/vmagent to identify the hosts running
*collectd* with the *write_prometheus* plugin, using the
`file_sd_configs` scrape configuration option.
All hosts in the *collectd-prometheus* group are listed as scrape
targets.
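The generated file uses the standard file-based service discovery
format. A sketch of it (hostnames illustrative; 9103 is
*write_prometheus*'s default port):

```yaml
# /etc/prometheus/scrape-collectd.yml
- targets:
    - host1.pyrocufflink.blue:9103
    - host2.pyrocufflink.blue:9103
```

The scrape configuration then references it:

```yaml
scrape_configs:
  - job_name: collectd
    file_sd_configs:
      - files:
          - /etc/prometheus/scrape-collectd.yml
```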
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with an NVMe SSD. It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.
*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana. I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
The `grafana_ldap_root_ca_cert` variable sets the path to the root
CA certificate (bundle) Grafana uses to validate the certificate
presented by the configured LDAP server. By default, Grafana uses the
system root CA trust store, but this variable can be used in situations
where this is not suitable.
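For example (the path is hypothetical):

```yaml
grafana_ldap_root_ca_cert: /etc/pki/tls/certs/pyrocufflink-ca.pem
```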
`vmalert` is a component of Victoria Metrics. It handles alerting and
recording rules, periodically executing queries and dispatching alerts
or writing aggregated data back to the TSDB.
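Rule files use the standard Prometheus format; a trivial alerting rule
might look like:

```yaml
groups:
  - name: hosts
    rules:
      - alert: HostDown
        expr: up == 0
        for: 5m
```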
The Prometheus *blackbox_exporter* is a tool that can perform arbitrary,
generic ICMP, TCP, or HTTP "probes" against external services. This is
useful for applications that do not export their own metrics, and for
evaluating the health of protocol-level operations (e.g. TLS
certificate expiration).
The *blackbox-exporter* Ansible role installs and configures the
Blackbox Exporter on the target system. It fetches the specified binary
release from GitHub and copies it to the remote machine. It also
creates a systemd unit and configures the Blackbox Exporter's "modules"
from the `blackbox_modules` Ansible variable.
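Assuming the variable mirrors the exporter's native `modules` mapping,
it might look like:

```yaml
blackbox_modules:
  http_2xx:
    prober: http
    timeout: 5s
  icmp:
    prober: icmp
```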
Some hosts may not need this plugin, or may not have it installed.
Notably, it is not needed or used on my systems based on Buildroot,
since the only current use case for it is to keep track of the Fedora
version.
There are a few minor differences between the way Fedora and Buildroot
package *nginx*:
* Fedora uses a user named *nginx* while Buildroot uses *www-data*
* Buildroot uses a Debian-like configuration layout (with
`sites-enabled` and `modules-enabled` directories)
This commit adjusts the *nginx* Ansible role to compensate for these
differences, eschewing Buildroot's configuration layout for the one used
by Fedora/Red Hat.
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server. It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node. Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
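For example (8428 is the single-node server's default listen port):

```yaml
scrape_configs:
  - job_name: victoria-metrics
    static_configs:
      - targets: ['127.0.0.1:8428']
```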
Tasks that configure the SELinux policy obviously only make sense if the
host uses SELinux. Similarly, if the host does not use FirewallD,
configuring firewall rules doesn't work.
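Such tasks can be guarded with a condition like this sketch (the
boolean shown is illustrative):

```yaml
- name: Allow collectd to make network connections
  seboolean:
    name: collectd_tcp_network_connect
    state: true
    persistent: true
  when: ansible_selinux.status | default('') == 'enabled'
```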
The `/etc/collectd.d` directory is created by the RPM package on
machines running a Red Hat-based Linux distribution, but it may not
always be present on other machines.
In addition to ignoring particular types of filesystems, e.g. OverlayFS,
we can also ignore filesystems by their mount point. This could be
useful, for example, for bind-mounted directories, such as those used on
Kubernetes nodes.
By default, the *df* plugin for collectd, which monitors filesystem
usage, collects data about all mounted filesystems. It can be
configured to ignore some filesystems, either by mount point, device, or
filesystem type. We use this capability to avoid collecting data
about OverlayFS mounts, because by definition, they do not represent a
real filesystem, but one or more other mounted filesystems. Collecting
data about these just creates useless metrics, especially on machines
that run containers.
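The rendered plugin block looks something like this (the mount point is
an illustrative bind-mount example):

```
<Plugin df>
  FSType "overlay"
  MountPoint "/var/lib/kubelet"
  IgnoreSelected true
</Plugin>
```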
Some machines, such as the nodes in the Kubernetes cluster, do not use
*firewalld*. For these machines, we need to skip the `firewalld` tasks,
as they will fail. The `host_uses_firewalld` variable can be set to
`False` for these machines to do so.
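For example, for the Kubernetes nodes:

```yaml
# group_vars for the Kubernetes nodes (file location hypothetical)
host_uses_firewalld: false
```

The tasks themselves can then be guarded with
`when: host_uses_firewalld | default(True)`.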
*nvr1.pyrocufflink.blue* is the new video recording server. It is a
1U rack-mounted physical machine based on the [Jetway
JBC150F596-3160-B][0] barebone system. It replaces
*nvr0.pyrocufflink.blue* in this role.
[0]: https://www.jetwaycomputer.com/JBC150F596.html
Podman 4 puts lock files in the configuration directory for [some stupid
reason][0]. There are so many issues here!
* It is now impossible to run `podman` as root with a read-only `/etc`.
* Why does it need the lock file at all when using `--network=host`?
Luckily, we can work around it fairly easily by mounting a tmpfs
filesystem over the directory it wants to put the lock file in. This
pretty much defeats the purpose of having a lock file, but it's likely
not needed anyway.
[0]: 836fa4c493
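In a systemd unit, the workaround can be expressed like this (the
directory is an assumption about where Podman 4 keeps the lock file):

```ini
[Service]
# Empty tmpfs over the lock directory; discarded when the unit stops
TemporaryFileSystem=/etc/containers/networks
```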
The *sensors* plugin for collectd reads temperature information from the
I²C/SMBus using *lm_sensors*. Naturally, it is only useful on physical
machines, so it is not installed or enabled by default.
Instead of a simple list of disabled plugins, hosts and host groups can
now control whether plugins are enabled or disabled using the
`collectd_plugins` map. The map keys are plugin names, and the values
are booleans indicating if the plugin is enabled.
Using this mechanism, some plugins can be disabled by default (e.g. the
*md* plugin), and enabling them per host or per host group is simpler.
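For example:

```yaml
collectd_plugins:
  sensors: true   # physical machine; disabled by default
  md: false       # no software RAID on this host
```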
Mosquitto can save retained messages, persistent clients, etc. to the
filesystem and restore them at startup. This allows state to be
maintained even after the process restarts.
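The corresponding `mosquitto.conf` directives (location illustrative):

```
persistence true
persistence_location /var/lib/mosquitto/
```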
The KDC service, as managed by Samba, continuously logs to two files
that need to be rotated. The upstream configuration for logrotate only
manages one of these files, and does not correctly signal the service
after rotating, as it expects the service to be managed by systemd
instead of Samba. As such, we need to adjust the configuration to
handle both files and send SIGHUP directly to the process.
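A sketch of the adjusted configuration (the log file and process names
are assumptions):

```
/var/log/krb5kdc.log /var/log/kadmind.log {
    missingok
    notifempty
    postrotate
        /usr/bin/pkill -HUP -x krb5kdc 2>/dev/null || true
    endscript
}
```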
Promoting the new site I have been working on at *dustin.hatch.is* to my
main domain, *dustin.hatch.name*. The new site is just static content,
generated and uploaded by a Jenkins job.
Finally have a certificate for *dustin.hatch.name* now, too!
This resolves two issues with fetching the Proton VPN server list:
1. If a connection error occurs when fetching the list, it will be
ignored, just as with HTTP errors
2. If any errors are encountered when fetching the list, and a valid
cache was loaded, its contents are returned, regardless of the
timestamp of the cache file.
To handle the RSVP form on *dustinandtabitha.com*, we are going to use
*formsubmit*. It runs on the same machine that hosts the website, so
there's no dealing with CORS. The */submit/rsvp* path, which is proxied
to the backend, is the RSVP form's target.
*formsubmit* is a simple, customizable HTML form submission handler. I
designed it for Tabitha to use to collect information from forms on her
websites. Notably, we will use it for the RSVP page on our wedding
invitation site.
The state history database is entirely too big. It takes over an hour
to create a backup of it, which usually causes BURP to time out. The
data it stores isn't particularly interesting anyway. Instead of trying
to back it up and ultimately not getting any backup at all, we'll just
skip it altogether to ensure we have a consistent backup of everything
else that is actually important.
Uploading large files can take a very long time. If the process takes
longer than the configured timeout in Apache, it will be aborted and the
client will receive an HTTP 504 Gateway Timeout error. Increasing the
timeout will help alleviate this for files up to a certain size.
Notably, it now lets me upload Signal backups without errors.
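For example (the value is illustrative):

```apache
# Allow up to ten minutes for large uploads
Timeout 600
ProxyTimeout 600
```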
Nextcloud thinks it needs to run the upgrade/migration tool if the
version number in its configuration file does not match the running
version. It then updates the config file with the correct version. The
next time the configuration policy is applied, however, the version will
revert back to whatever is set in the template. This will re-trigger
the upgrade notification.
To avoid this problem, we now set the version in the configuration file
dynamically. Nextcloud writes its version number in a constant in
`version.php`.
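A sketch of how the version can be read at runtime (the installation
path is an assumption):

```yaml
- name: Read the installed Nextcloud version
  command: php -r 'include "/var/www/nextcloud/version.php"; echo implode(".", $OC_Version);'
  register: nextcloud_version
  changed_when: false
```

The template can then reference `nextcloud_version.stdout` instead of a
hard-coded version.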
Nextcloud uses double backslashes in its fully-qualified path names.
Although single backslashes work, the application will replace them,
leading to a constant conflict between itself and the Ansible template.
The first time a container is launched after pulling a new image, it can
take several minutes for the container to actually start. Podman has to
set up the overlay filesystems, which is very slow on a Raspberry Pi.
With the default start timeout, systemd may end up killing the process
before the container is completely set up. Thus, we need to increase
the timeout to ensure there is plenty of time for Podman to work.
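A drop-in for the container's service unit takes care of it (the value
is illustrative):

```ini
[Service]
TimeoutStartSec=15min
```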
Processes running in containers only have access to a limited set of
devices, based on their SELinux type label. The USB serial devices
exposed by the Zwave and Zigbee adapters are not labelled correctly by
default to allow them to be used in containers.
Using `chcon` to change the type label of the device before starting the
container works, but is a bit kludgy. It would probably be
better to use a SELinux file context rule and/or a udev rule to ensure
the label is set correctly when the device node is created.
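The kludge, and a possibly better persistent alternative, look like
this (device paths assumed):

```sh
# Current workaround, run before starting the container
chcon -t container_file_t /dev/ttyUSB0 /dev/ttyACM0

# Possibly better: a persistent file context rule
semanage fcontext -a -t container_file_t '/dev/ttyUSB[0-9]'
```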
Although Home Assistant itself will start fine if the network is not yet
available, some integrations will not. Notably, the Matrix integration
will fail to load if it cannot contact the homeserver when it is first
initialized. To avoid this problem, we can just delay starting Home
Assistant until the network is available.
Before the `burp` tool gained the `-Q` option, the only way to disable
the progress counter was through the configuration file. Because any
output from automatic backups (except, of course, catastrophic failures)
would end up being e-mailed by cron, the progress counter had to be
disabled globally. This meant that on-demand runs on a terminal could
not have a progress counter, which was pretty disappointing.
Now that `burp` has `-Q`, this is no longer the case. Scheduled backups
can run with `-Q`, but ad-hoc runs can omit it to get a progress
counter.
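For example, in the cron job (schedule illustrative; `-a t` requests a
timed backup):

```
*/20 * * * * root /usr/sbin/burp -a t -Q
```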
Send logs to the systemd journal for easier viewing and disable logging
to a file. Also, the `samba_dc_log_level` variable can control the log
level (0-10, 0 being off, 10 being insane debugging).
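The corresponding `smb.conf` settings (level illustrative):

```
logging = systemd
log level = 1
```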
Docker is effectively deprecated by Fedora/Red Hat. It is a pain in the
ass to work with anyway. Podman integrates better with systemd, and is
in general more aligned with how I prefer to deploy and manage
applications.
I am following the same pattern here that I have used for Home
Assistant, ZWaveJS2MQTT, etc. The systemd service starts the container
with `podman`, passing the necessary arguments for UID/GID mapping, etc.
Note that, by default, Vaultwarden expects to be able to bind to port
80; since the container is unprivileged, we have to configure it (or
rather, its embedded HTTP server [Rocket](https://rocket.rs)) to listen
on a different port. We also configure it to listen only on the
loopback, since it is being proxied by Apache to the outside network.
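The relevant settings are passed to the container as environment
variables, e.g. via `podman run -e` (the port is illustrative):

```sh
ROCKET_ADDRESS=127.0.0.1
ROCKET_PORT=8080
```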
To migrate the data from the Docker volume, we just have to copy the
files and fix their ownership.
The *bitwarden_rs* project was recently renamed to *Vaultwarden*, so I
took this opportunity to update the name in most places within the
*bitwarden_rs* role.
Sometimes I need to configure a machine to be a domain member without
actually adding it to the domain. Now I can by running
`ansible-playbook` with `--skip-tags domain-join`.
I honestly don't remember why the `use rfc2307` setting was only enabled
on the first DC. All DCs seem to need this setting in order to use the
UID/GID numbers from the directory, instead of using auto-generated
numbers.
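The setting in question, now present in every DC's `smb.conf`:

```
[global]
    idmap_ldb:use rfc2307 = yes
```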
If the remote address configuration for strongSwan is not valid when the
Proton VPN watchdog starts, it will now regenerate it immediately. This
can happen, for example, if the Internet has been down for a while, and
the watchdog has iterated through all of the servers in the cache.
Restarting the service will now force it to reconfigure the tunnel and
bring the VPN back up.
The `collectd-version` script uses the *collectd* UNIX socket to send
custom values to *collectd* to track the OS version. Since these values
obviously cannot change while the system is running, the values are
specified with a very long interval. This avoids having to continuously
insert the values, either with a long-running process or by repeatedly
running a script. The values only need to be inserted once when
*collectd* starts.
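The script speaks the plain-text protocol of the *unixsock* plugin;
conceptually, it sends something like this (the identifier, type, and
interval are illustrative):

```
PUTVAL "myhost/os/os_version" interval=86400 N:37
```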
All values sent to *collectd* must have an associated type. The type
defines the acceptable range of values. Types are defined in a simple
text file database. *collectd* loads all of the databases specified by
`TypesDB` directives in its configuration file. When configuring a
custom types database, the default database needs to be specified
explicitly; it will not be loaded automatically if there are any
`TypesDB` directives in the configuration.
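For example (the custom type and paths are illustrative):

```
# collectd.conf: the default database must now be listed explicitly
TypesDB "/usr/share/collectd/types.db"
TypesDB "/etc/collectd.d/custom-types.db"
```

An entry in the custom database is a type name followed by a data
source specification:

```
os_version  value:GAUGE:0:U
```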
The *unixsock* plugin for *collectd* provides a socket-based interface
that other software can use to communicate with *collectd*. Notably,
this can be used to publish custom values, query existing values, and
flush caches.
The socket is created at `/run/collectd/socket`. The `/run/collectd`
directory is managed by systemd; it will be created automatically when
the service starts and cleaned up when it stops.
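The plugin configuration is short (group and permissions illustrative):

```
<Plugin unixsock>
  SocketFile "/run/collectd/socket"
  SocketGroup "collectd"
  SocketPerms "0660"
</Plugin>
```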
The *collectd-prometheus* role now has a
`collectd_prometheus_allow_outside` variable. This variable controls
whether or not external hosts are allowed to scrape data from *collectd*.
When set to `false` (the default), *collectd* is configured to listen
on the loopback interface only, and the TCP port is not opened in the
firewall.
Synapse supports exporting metrics in Prometheus format. It can do this
either as part of the main server, or in a separate listener. I chose
to use a separate listener so that the metrics are not exposed
publicly.
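A sketch of the listener in `homeserver.yaml` (the port is
illustrative):

```yaml
enable_metrics: true
listeners:
  - port: 9092
    type: metrics
    bind_addresses: ['127.0.0.1']
```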
The *processes* plugin for collectd can be configured to monitor
additional information about specific processes. By specifying one or
more `Process` or `ProcessMatch` directives in the plugin configuration,
collectd will start monitoring the listed processes in detail.
The `collectd_processes` Ansible variable can contain a list of
processes to monitor. Each item must at least have a `name` property,
and may also have a `regex` property. If the latter is present, a
`ProcessMatch` directive will be emitted instead of a `Process`
directive.
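For example:

```yaml
collectd_processes:
  # Emits: Process "httpd"
  - name: httpd
  # Emits: ProcessMatch "gunicorn" "gunicorn: .*"
  - name: gunicorn
    regex: 'gunicorn: .*'
```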
The *base* role will now set the password for the *root* user, if the
`root_password_hash` variable is defined. This ensures that there is a
way to log into machines directly, even if other authentication
mechanisms like Active Directory are unavailable.
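For example, in a vault-encrypted variables file (value truncated):

```yaml
# Generate the hash with e.g. `openssl passwd -6`
root_password_hash: "$6$..."
```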
The *serial-console* Ansible role enables and starts a systemd service
unit to activate a console getty on the specified serial console device
(by default: ttyS0). This is particularly useful for virtual machines,
allowing one to control them in the absence of a graphical VM management
tool.
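The role effectively does the equivalent of this task (the variable
name is an assumption):

```yaml
- name: Start a getty on the serial console
  systemd:
    name: "serial-getty@{{ serial_console_device | default('ttyS0') }}.service"
    enabled: true
    state: started
```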
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem. This dramatically simplifies
the configuration necessary for shared storage.
I don't really see any reason why the shared storage configuration needs
to be managed by a separate role. The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now. As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt. It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
*systemd-networkd* is (currently) my preferred way to manage network
interfaces on machines running Fedora. The *systemd-networkd* role
provides a generic way to configure network links, devices, and
interfaces, using Ansible variables to generate network unit
configuration files.
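A generated unit might look like this (contents illustrative):

```ini
# /etc/systemd/network/50-lan.network
[Match]
Name=eth0

[Network]
DHCP=yes
```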
The `collectd_df` variable can be used to configure the *df* plugin for
collectd. It should contain a map of key-value pairs that correspond
exactly to the plugin's configuration options.
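For example:

```yaml
collectd_df:
  FSType: overlay
  IgnoreSelected: true
```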
*nvr0.pyrocufflink.blue* hosts Frigate. It is deployed on a separate
subnet, for two reasons:
* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
communicating with Frigate, since it does not have any kind of
authentication or access control
Frigate is an NVR that uses machine learning to detect objects on camera
in real time. It integrates with Home Assistant to expose sensors which
can be used for automation, etc.
The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
The production deployment of *dnsmasq* for Home Assistant has deviated
from how the *hass-dhcp* role configures it. Bringing the role back in
sync with how things really are.
ZwaveJS2Mqtt includes a very powerful web-based UI for configuring and
controlling the Z-Wave network. This functionality is no longer
available within Home Assistant itself, so being able to access the
ZwaveJS2Mqtt UI is crucial to operating the network.
I wanted to make the UI available at */zwave/*, which requires using
*mod_rewrite* to conditionally proxy requests based on the `Connection`
HTTP header, since the UI passes both HTTP and WebSocket requests to the
same paths. *mod_rewrite* configuration is not inherited from the main
server configuration to virtual hosts, so the
`RewriteRule`/`RewriteCond` directives have to be specified within the
`<VirtualHost>` block. This means that the Home Assistant proxy
configuration has to be within its own virtual host, and the
Zwavejs2Mqtt configuration has to be there as well.
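The resulting configuration looks roughly like this (8091 is
ZwaveJS2Mqtt's default port):

```apache
RewriteEngine on
RewriteCond %{HTTP:Connection} upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/zwave/(.*)$ ws://127.0.0.1:8091/$1 [P,L]
ProxyPass /zwave/ http://127.0.0.1:8091/
```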
*hass2.pyrocufflink.blue* is a Raspberry Pi Compute Module 4-based
system, currently mounted in a Waveshare CM4 Mini Base Board (A). With
an NVMe SSD for primary storage, it runs significantly faster than a
standard Raspberry Pi 4, and blows the old Raspberry Pi 3-based Home
Assistant deployment out of the water. It has a Zooz 700 series Z-Wave
Plus S2 USB stick and a ConBee II Zigbee USB stick attached to its USB
2.0 ports. It runs a customized Fedora Minimal distribution.
Zigbee2MQTT is very similar to ZwaveJS2Mqtt: it is a daemon process that
communicates with the Zigbee radio and integrates with Home Assistant
using MQTT. Naturally, I decided to deploy it in the same way as
ZwaveJS2Mqtt, using a systemd unit to run it in a container with Podman.
Mosquitto 2.x included two significant changes from 1.6:
* There is no longer a "default" listener; all listeners are configured
in the same way
* The daemon drops privileges *before* reading TLS certificates and
private keys
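The upshot in `mosquitto.conf` (paths illustrative):

```
listener 1883 127.0.0.1
listener 8883
certfile /etc/mosquitto/certs/server.crt
# The key must be readable by the mosquitto user, since privileges are
# dropped before the TLS files are read
keyfile /etc/mosquitto/certs/server.key
```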