Previously, _node-474c83.k8s.pyrocufflink.black_ was tainted
`du5t1n.me/machine=raspberrypi`, which prevented arbitrary pods from
being scheduled on it. Now that there are two more Raspberry Pi nodes
in the cluster, and arbitrary pods _should_ be scheduled on them, this
taint no longer makes sense. Instead, specific taints for the node's
roles are clearer.
The _chmod777.sh_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
With the transition to modular _libvirt_ daemons, the SELinux policy is
a bit more granular. Unfortunately, the new policy has a funny [bug]: it
assumes directories named `storage` under `/run/libvirt` must be for
_virtstoraged_ and labels them as such, which prevents _virtnetworkd_
from managing a virtual network named `storage`.
To work around this, we need to give `/run/libvirt/network` a special
label so that its children do not match the file transition pattern for
_virtstoraged_ and thus keep their `virtnetworkd_var_run_t` label.
[bug]: https://bugzilla.redhat.com/show_bug.cgi?id=2362040
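A rough sketch of how this might be expressed in the configuration
policy, assuming the _community.general_ collection; the file context
pattern and the type shown are placeholders for whatever label keeps
the name-based transition from matching:

```yaml
- name: give /run/libvirt/network its own SELinux file context
  community.general.sefcontext:
    target: '/run/libvirt/network(/.*)?'
    setype: virtnetworkd_var_run_t  # placeholder; the real policy may use a different type
    state: present

- name: relabel the existing directory
  ansible.builtin.command: restorecon -RF /run/libvirt/network
```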
If the _libvirt_ daemon has not fully started by the time `vm-autostart`
runs, we want it to fail and try again shortly. To allow this, we first
attempt to connect to the _libvirt_ socket, and if that fails, stop
immediately and try again in a second. This way, the first few VMs
don't get skipped with the assumption that they're missing, just because
the daemon wasn't ready yet.
_libvirt_ has gone full Polkit, which doesn't work with systemd dynamic
users. So, we have to run `vm-autostart` as root (with no special
OS-level privileges) in order for Polkit to authorize the connection to
the daemon socket.
Apparently, it's not guaranteed that _libosinfo_ always supports even
the version of Fedora it's installed on: there's no _fedora42_ in
_libosinfo-1.12.0-2.fc42_ 🤦🏻‍♂️.
Fortunately, it makes almost no difference what OS variant is selected
at install time, and we probably want the latest features anyway. Thus,
we can just use _fedora-rawhide_ instead of any particular version and
not worry about it.
The Anaconda runtime is _way_ bigger in Fedora 41, presumably
because of the new web UI. Even though we use text-only automated
installs, we still need enough space for the whole thing to fit in RAM.
I never remember to update this list when I add/remove VMs.
* _bw0_ has been decommissioned; Vaultwarden now runs in Kubernetes
* _unifi3_ has been replaced by _unifi-nuptials_
* _logs-dusk_ runs Victoria Logs, which will eventually replace Loki
* _node-refrain_ has been replaced by _node-direction_
* _k8s-ctrl0_ has been replaced by _ctrl-crave_ and _ctrl-sycamore_
Just a few days before its third birthday 🎂
There are now three Kubernetes control plane nodes:
* _ctrl-2ed8d3.k8s.pyrocufflink.black_ (Raspberry Pi CM4)
* _ctrl-crave.k8s.pyrocufflink.black_ (virtual machine)
* _ctrl-sycamore.k8s.pyrocufflink.black_ (virtual machine)
This fixes an `Unable to look up a name or access an attribute in
template string` error when applying the `datavol.yml` playbook for a
machine that does not define any LVM logical volumes.
I don't know what the deal is, but restarting the _victoria-logs_
container makes it lose inbound network connectivity. The firewall
rules that forward the ports into the container's network namespace
seem to get lost, but I can't figure out why. To fix it, I have to
flush the netfilter rules (`nft flush ruleset`) and then restart
_firewalld_ and _victoria-logs_ to recreate them. This is rather
cumbersome, and since Victoria Logs runs on a dedicated VM, there's
really not much advantage to isolating the container's network.
The Linux [netconsole][0] protocol is a very simple plain-text UDP
stream, with no real metadata to speak of. Although it's not really
syslog, Victoria Logs is able to ingest the raw data into the `_msg`
field, and uses the time of arrival as the `_time` field.
_netconsole_ is somewhat useful for debugging machines that do not have
any other console (no monitor, no serial port), like the Raspberry Pi
CM4 modules in the DeskPi Super 6c cluster. Unfortunately, its
implementation in the kernel is so simple, even the source address isn't
particularly useful as an identifier, and since Victoria Logs doesn't
track that anyway, we might as well just dump all the messages into a
single stream.
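As a sketch, pointing _netconsole_ at the log server only takes a
module option; the target address and port here are hypothetical, and
the parameter format comes from the kernel documentation linked above:

```yaml
- name: load netconsole at boot
  ansible.builtin.copy:
    dest: /etc/modules-load.d/netconsole.conf
    content: |
      netconsole

- name: send kernel messages to the Victoria Logs syslog listener
  ansible.builtin.copy:
    dest: /etc/modprobe.d/netconsole.conf
    content: |
      options netconsole netconsole=@/,514@172.30.0.104/
```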
It's not really discussed in the Victoria Logs documentation, but any
time multiple syslog listeners are configured with different
properties, _all_ of the listeners _must_ specify _all_ of those
properties. The defaults will
_not_ be used for any stream; the value provided for one stream will be
used for all the others unless they specify one themselves. Thus, in
order to use the default stream fields for the "regular" syslog
listener, we have to explicitly set them.
[0]: https://www.kernel.org/doc/html/latest/networking/netconsole.html
The Raspberry Pi CM4 nodes on the DeskPi Super 6c cluster board are
members of the _cm4-k8s-node_ group. This group is a child of
_k8s-node_; it overrides the data volume configuration and node labels
for the CM4 hardware.
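A rough sketch of the relationship in inventory form; the host shown
here is illustrative:

```yaml
k8s-node:
  children:
    cm4-k8s-node:
      hosts:
        node-474c83.k8s.pyrocufflink.black:
```

Group variables for _cm4-k8s-node_ (e.g. a `logical_volumes` definition
like the one described below) then apply only to the CM4 nodes.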
The `datavol.yml` playbook can now create LVM volume groups and logical
volumes. This will be useful for physical hosts with static storage.
LVM LVs and VGs are defined using the `logical_volumes` Ansible
variable, which contains a mapping of VG names to their properties.
Each VG must have two properties: `pvs`, which is a list of LVM physical
volumes to add to the VG, and `lvs`, a list of LVs and their properties,
including `name` and `size`. For example:
```yaml
logical_volumes:
  kubernetes:
    pvs:
      - /dev/nvme0n1
    lvs:
      - name: containers
        size: 64G
      - name: kubelet
        size: 32G
```
The _apps.du5t1n.xyz_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_. Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request. Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them. Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate. The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection in order to trigger the
reload: the notification command runs a script that creates an empty
file in the server's state directory; systemd watches for that file to
appear, then starts another service unit that performs the actual
reload and removes the trigger file.
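The moving parts look roughly like this; the `MDomain`/`MDNotifyCmd`
directives are real _mod_md_ configuration, but the file paths, unit
names, and notification script are hypothetical:

```yaml
- name: configure the managed domain and notification hook
  ansible.builtin.copy:
    dest: /etc/httpd/conf.d/00-md.conf
    content: |
      MDomain apps.du5t1n.xyz
      MDNotifyCmd /usr/local/libexec/httpd-md-notify

- name: watch for the reload trigger file
  ansible.builtin.copy:
    dest: /etc/systemd/system/httpd-md-reload.path
    content: |
      [Path]
      PathExists=/run/httpd/md-reload-needed
      Unit=httpd-md-reload.service

      [Install]
      WantedBy=multi-user.target

- name: reload Apache and clean up the trigger file
  ansible.builtin.copy:
    dest: /etc/systemd/system/httpd-md-reload.service
    content: |
      [Service]
      Type=oneshot
      ExecStart=/usr/bin/systemctl reload httpd.service
      ExecStartPost=/usr/bin/rm -f /run/httpd/md-reload-needed
```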
Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
We don't want the iSCSI and NFS client tools to be installed on control
plane nodes. Let's move this task to the _k8s-worker_ role so it will
only apply to worker nodes.
Since the _haproxy_ role relies on other roles to provide drop-in
configuration files for actual proxy configuration, we cannot start the
service in the base role. If there are any issues with the drop-in
files that are added later, the service will not be able to start,
causing the playbook to fail and thus never be able to update the broken
configuration. The dependent roles need to be responsible for starting
the service once they have put their configuration files in place.
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files. Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
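As a rough sketch, the drop-in the control plane role installs might
look something like this; the listen port, file name, and exact server
list are illustrative:

```yaml
- name: install API server proxy configuration
  ansible.builtin.copy:
    dest: /etc/haproxy/conf.d/k8s-apiserver.cfg
    content: |
      frontend k8s-apiserver
          bind :8443
          mode tcp
          default_backend k8s-apiserver

      backend k8s-apiserver
          mode tcp
          option tcp-check
          server ctrl-crave ctrl-crave.k8s.pyrocufflink.black:6443 check
          server ctrl-sycamore ctrl-sycamore.k8s.pyrocufflink.black:6443 check
          server ctrl-2ed8d3 ctrl-2ed8d3.k8s.pyrocufflink.black:6443 check
```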
Control plane nodes will now run _keepalived_, to provide a "floating"
IP address that is assigned to one of the nodes at a time. This
address (172.30.0.169) is now the target of the DNS A record for
_kubernetes.pyrocufflink.blue_, so clients will always communicate with
the server that currently holds the floating address, whichever that may
be.
I was originally inspired by the official Kubernetes [High Availability
Considerations][0] document when designing this. At first, I planned to
deploy _keepalived_ and HAProxy as DaemonSets on the control plane
nodes, but this ended up being somewhat problematic whenever all of the
control plane nodes went down at once: the _keepalived_ and HAProxy
pods would not get scheduled, so no clients could communicate with the
API servers.
[0]: 9d7cfab6fe/docs/ha-considerations.md
[keepalived][0] is a free implementation of the Virtual Router
Redundancy Protocol (VRRP), which is a simple method for automatically
assigning an IP address to one of several potential hosts based on
certain criteria. It is particularly useful in conjunction with a load
balancer like HAProxy, to provide layer 3 redundancy in addition to
layer 7. We will use it for both the reverse proxy for the public
websites and the Kubernetes API server.
[0]: https://www.keepalived.org/
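A minimal sketch of the VRRP configuration for the API server address
mentioned above; the interface, router ID, and priority are
illustrative:

```yaml
- name: configure the floating address for the API server
  ansible.builtin.copy:
    dest: /etc/keepalived/keepalived.conf
    content: |
      vrrp_instance kubernetes {
          state BACKUP
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              172.30.0.169
          }
      }
```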
The `kubernetes.yml` playbook now applies the _kubelet_ role to hosts in
the _k8s-controller_ group. This will prepare them to join the cluster
as control plane nodes, but will not actually add them to the cluster.
I didn't realize this playbook wasn't even in the Git repository when I
added it to `site.yml`.
This playbook manages the `scrape-collectd` ConfigMap, which is used by
Victoria Metrics to identify the hosts it should scrape to retrieve
metrics from _collectd_.
Machines that are not part of the Kubernetes cluster need to be
explicitly listed in this ConfigMap in order for Victoria Metrics to
scrape collectd metrics from them.
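Roughly, the ConfigMap looks something like this; the key name,
namespace, and host names shown here are illustrative, and the port
assumes _collectd_'s _write_prometheus_ plugin on its default 9103:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scrape-collectd
  namespace: victoria-metrics
data:
  collectd.yml: |
    - targets:
        - logs-dusk.pyrocufflink.blue:9103
        - unifi-nuptials.pyrocufflink.blue:9103
```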
The man page for _containers-certs.d(5)_ says that subdirectories of
`/etc/containers/certs.d` should be named `host:port`; however, this is
a bit misleading. It seems the directory name must instead match the
registry name exactly as it appears in image references, so for a
server that serves HTTPS on port 443, where the port is omitted from
the image name, it must also be omitted from the `certs.d` subdirectory
name.
`/etc/containers/registries.conf.d` is distinct from
`/etc/containers/registries.d`. The latter contains YAML files relating
to image signatures, while the former contains TOML files relating to
registry locations.
It turns out _nginx_ has built-in default values for `access_log` and
`error_log`, even when they are omitted from the configuration file. To
actually disable writing logs to a file, we need to explicitly specify
`off`.
If `virt-install` fails before the VM starts for the first time, the
`virsh event` process running in the background will never terminate and
therefore the main process will `wait` forever. We can avoid this by
killing the background process if `virt-install` fails.
Although the `newvm.sh` script had a `--vcpus` argument, its value was
never being used.
The `--cpu host` argument for `virt-install` is deprecated in favor of
the more explicit CPU modes (e.g. `--cpu host-passthrough`).
VMs don't really need graphical consoles; serial terminals are good
enough, or even better given that they are logged. For the few cases
where a graphical console is actually necessary, the `newvm.sh` script
can add one with the `--graphics` argument.
Since the kickstart scripts are now generated from templates by Jenkins,
we need to fetch the final rendered artifacts from the PXE server,
rather than the source files from Gitea.
One major weakness with Ansible's "lookup" plugins is that they are
evaluated _every single time they are used_, even indirectly. This
means, for example, that a shell command could be run many times,
potentially returning different values each time, or that an expensive
calculation could be repeated even though it always produces the same
result. Ansible does not have a built-in way
to cache the result of a `lookup` or `query` call, so I created this
one. It's inspired by [ansible-cached-lookup][0], which didn't actually
work and is apparently unmaintained. Instead of using a hard-coded
file-based caching system, however, my plugin uses Ansible's
configuration and plugin infrastructure to store values with any
available cache plugin.
Although looking up the _pyrocufflink.net_ wildcard certificate with the
Kubernetes API isn't particularly expensive by itself right now, I can
envision several other uses that may be. Having this plugin available
could speed up future playbooks.
[0]: https://pypi.org/project/ansible-cached-lookup
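Usage might look something like this; the plugin name, its calling
convention, and the Secret being fetched here are illustrative
assumptions rather than the plugin's actual interface:

```yaml
pyrocufflink_net_cert: >-
  {{ lookup('cached', 'kubernetes.core.k8s',
            kind='Secret',
            namespace='default',
            resource_name='pyrocufflink-net-tls') }}
```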
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system. As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
Now that we're serving kickstart files from the PXE server, we need to
have a correctly-configured HTTPD server, with valid HTTPS certificates,
running there.
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-certs.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
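The redirection itself is just a small _registries.conf.d_ fragment;
the file name and the proxy's host name below are illustrative:

```yaml
- name: redirect Docker Hub pulls to the caching proxy
  ansible.builtin.copy:
    dest: /etc/containers/registries.conf.d/docker-io.conf
    content: |
      [[registry]]
      prefix = "docker.io"
      location = "docker.io"

      [[registry.mirror]]
      location = "registry-mirror.pyrocufflink.blue"
```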
Docker Hub's rate limits are so low now that they've started to affect
my home lab. Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit. It will also help
prevent containers from starting if access to the Internet is down, as
long as their images have been cached recently.
The *lego-nginx* role automates obtaining certificates for *nginx* via
ACME using `lego`. It generates a shell script with the appropriate
arguments for `lego run`, runs it once to obtain a certificate
initially, then schedules it to run periodically via a systemd timer
unit. Using `lego`'s "hook" capability, the script signals the `nginx`
server process to reload. This uses `doas` for now, but could be
adapted easily to use `sudo`, if the need ever arises.
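The renewal flavor of the generated script might look roughly like
this; the paths, account address, domain, challenge type, and even the
exact hook flag are illustrative assumptions:

```yaml
- name: install lego wrapper script
  ansible.builtin.copy:
    dest: /usr/local/libexec/lego-nginx
    mode: "0755"
    content: |
      #!/bin/sh
      # Renew the certificate, then tell nginx to pick it up via doas
      exec lego --accept-tos \
        --path /var/lib/lego \
        --email hostmaster@example.com \
        --domains example.du5t1n.me \
        --http --http.webroot /var/lib/lego/webroot \
        renew --days 30 \
        --renew-hook 'doas /usr/sbin/nginx -s reload'
```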