Commit Graph

1137 Commits (c35c7b8520ddd1823af11723fdcbc079c7f022e4)

Author SHA1 Message Date
Dustin c35c7b8520 r/apache: log errors to syslog by default
Logging to syslog will allow messages to be aggregated in the central
server (Loki now, Victoria Logs eventually), so I don't have to SSH into
the web server to check for errors.
2025-08-04 09:49:19 -05:00
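A change like this usually amounts to a single Apache directive; a minimal sketch (the facility name here is an assumption, not necessarily what the role uses):

```apache
# Send error-log messages to syslog instead of a local file.
# local1 is an assumed facility; any syslog facility works here.
ErrorLog "syslog:local1"
```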
Dustin 84a8a0d4af websites: dustin.hatch.n: Switch to mod_md for cert
The _dustin.hatch.name_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module.  This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
2025-08-04 09:49:19 -05:00
Dustin 71b1363c58 r/vmhost: Install nmap-ncat
While clients can use `virt-ssh-helper` to communicate with `libvirtd`,
they need `nc` in order to forward SPICE graphics communication.
2025-07-31 10:19:11 -05:00
Dustin 9e7b9420f4 k8s-iot-net-ctrl: Add node role taints
Previously, _node-474c83.k8s.pyrocufflink.black_ was tainted
`du5t1n.me/machine=raspberrypi`, which prevented arbitrary pods from
being scheduled on it.  Now that there are two more Raspberry Pi nodes
in the cluster, and arbitrary pods _should_ be scheduled on them, this
taint no longer makes sense.  Instead, having specific taints for the
node's roles is more clear.
2025-07-29 21:44:29 -05:00
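A role-based taint on a node spec might look like the following fragment (the key, value, and effect shown are illustrative assumptions, not the actual values from the cluster):

```yaml
# Hypothetical role taint replacing the old machine-type taint
spec:
  taints:
  - key: du5t1n.me/role
    value: iot-net-ctrl
    effect: NoSchedule
```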
Dustin 7f8e39ebd4 websites: chmod777.sh: Switch to mod_md for cert
The _chmod777.sh_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module.  This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
2025-07-28 18:53:58 -05:00
Dustin 2b12ce769c remote-blackbox: Scrape Invoice Ninja 2025-07-28 18:28:30 -05:00
Dustin 3270011fee r/vmhost: Work around libvirt SELinux policy bug
With the transition to modular _libvirt_ daemons, the SELinux policy is
a bit more granular.  Unfortunately, the new policy has a funny [bug]: it
assumes directories named `storage` under `/run/libvirt` must be for
_virtstoraged_ and labels them as such, which prevents _virtnetworkd_
from managing a virtual network named `storage`.

To work around this, we need to give `/run/libvirt/network` a special
label so that its children do not match the file transition pattern for
_virtstoraged_ and thus keep their `virtnetworkd_var_run_t` label.

[bug]: https://bugzilla.redhat.com/show_bug.cgi?id=2362040
2025-07-28 18:23:24 -05:00
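As an Ansible sketch of that workaround, using `community.general.sefcontext` (the exact SELinux type is an assumption, not necessarily what the role applies):

```yaml
# Give the parent directory its own label so that child directories
# named `storage` no longer match virtstoraged's file transition rule
- name: Label /run/libvirt/network
  community.general.sefcontext:
    target: /run/libvirt/network
    setype: virtnetworkd_var_run_t
    state: present

- name: Apply the new file context
  ansible.builtin.command: restorecon -RF /run/libvirt/network
```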
Dustin 2ee86f6344 r/vmhost: Retry vm-autostart if libvirt is down
If the _libvirt_ daemon has not fully started by the time `vm-autostart`
runs, we want it to fail and try again shortly.  To allow this, we first
attempt to connect to the _libvirt_ socket, and if that fails, stop
immediately and try again in a second.  This way, the first few VMs
don't get skipped with the assumption that they're missing, just because
the daemon wasn't ready yet.
2025-07-28 18:20:50 -05:00
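On the systemd side, the retry behavior described above could be expressed with a unit fragment like this (the unit type and script path are assumptions):

```ini
[Service]
Type=exec
ExecStart=/usr/local/libexec/vm-autostart
# If the script exits nonzero because libvirt isn't up yet,
# try again after one second instead of skipping VMs
Restart=on-failure
RestartSec=1
```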
Dustin 4df047cf76 r/vmhost: Disable DynamicUsers for vm-autostart
_libvirt_ has gone full Polkit, which doesn't work with systemd dynamic
users.  So, we have to run `vm-autostart` as root (with no special
OS-level privileges) in order for Polkit to authorize the connection to
the daemon socket.
2025-07-28 18:18:35 -05:00
Dustin a63ee2bff5 newvm: Use fedora-rawhide OS variant
Apparently, it's not guaranteed that _libosinfo_ always supports even
the version of Fedora it's installed on: there's no _fedora42_ in
_libosinfo-1.12.0-2.fc42_ 🤦🏻‍♂️.

Fortunately, it makes almost no difference what OS variant is selected
at install time, and we probably want the latest features anyway.  Thus,
we can just use _fedora-rawhide_ instead of any particular version and
not worry about it.
2025-07-28 18:15:45 -05:00
Dustin 4804b1357b newvm: Adjust min memory for Fedora 41+
The Anaconda runtime is _way_ bigger in Fedora 41, presumably
because of the new web UI.  Even though we use text-only automated
installs, we still need enough space for the whole thing to fit in RAM.
2025-07-28 18:14:02 -05:00
Dustin 0ef65e4e5d vm-hosts: Update vm_autostart list
I never remember to update this list when I add/remove VMs.

* _bw0_ has been decommissioned; Vaultwarden now runs in Kubernetes
* _unifi3_ has been replaced by _unifi-nuptials_
* _logs-dusk_ runs Victoria Logs, which will eventually replace Loki
* _node-refrain_ has been replaced by _node-direction_
* _k8s-ctrl0_ has been replaced by _ctrl-crave_ and _ctrl-sycamore_
2025-07-28 18:12:09 -05:00
Dustin e6ac6ae202 hosts: Decommission k8s-ctrl0
Just a few days before its third birthday 🎂

There are now three Kubernetes control plane nodes:

* _ctrl-2ed8d3.k8s.pyrocufflink.black_ (Raspberry Pi CM4)
* _ctrl-crave.k8s.pyrocufflink.black_ (virtual machine)
* _ctrl-sycamore.k8s.pyrocufflink.black_ (virtual machine)
2025-07-28 17:52:11 -05:00
Dustin e1c157ce87 raspberry-pi: Add collectd sensors, thermal plugins
All the Raspberry Pi machines should have the _sensors_ and _thermal_
plugins enabled so we can monitor their CPU etc. temperatures.
2025-07-28 17:50:39 -05:00
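In `collectd.conf` terms, enabling these plugins is just two lines:

```
LoadPlugin sensors
LoadPlugin thermal
```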
Dustin bf33c2ab7c datavol: Handle undefined logical_volumes
This fixes an `Unable to look up a name or access an attribute in
template string` error when applying the `datavol.yml` playbook for a
machine that does not define any LVM logical volumes.
2025-07-28 16:51:04 -05:00
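The usual fix for this class of error is Jinja's `default()` filter; a sketch of the guard (the loop shape is an assumption):

```yaml
# default({}) yields an empty mapping, so the loop simply runs zero
# times on hosts that define no LVM configuration at all
loop: "{{ logical_volumes | default({}) | dict2items }}"
```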
Dustin 59d17bf3f4 r/v-l: Use the host network
I don't know what the deal is, but restarting the _victoria-logs_
container makes it lose inbound network connectivity.  It appears that
the firewall rules that forward the ports to the container's namespace
seem to get lost, but I can't figure out why.  To fix it, I have to
flush the netfilter rules (`nft flush ruleset`) and then restart
_firewalld_ and _victoria-logs_ to recreate them.  This is rather
cumbersome, and since Victoria Logs runs on a dedicated VM, there's
really not much advantage to isolating the container's network.
2025-07-27 17:47:31 -05:00
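If the container is run via a Podman Quadlet unit, the switch might look like this fragment (the image name is an assumption):

```ini
[Container]
Image=docker.io/victoriametrics/victoria-logs:latest
# Share the host's network namespace instead of forwarding ports
Network=host
```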
Dustin b2d35ac881 victoria-logs: Listen for Linux netconsole logs
The Linux [netconsole][0] protocol is a very simple plain-text UDP
stream, with no real metadata to speak of.  Although it's not really
syslog, Victoria Logs is able to ingest the raw data into the `_msg`
field, and uses the time of arrival as the `_time` field.

_netconsole_ is somewhat useful for debugging machines that do not have
any other console (no monitor, no serial port), like the Raspberry Pi
CM4 modules in the DeskPi Super 6c cluster.  Unfortunately, its
implementation in the kernel is so simple, even the source address isn't
particularly useful as an identifier, and since Victoria Logs doesn't
track that anyway, we might as well just dump all the messages into a
single stream.

It's not really discussed in the Victoria Logs documentation, but when
multiple syslog listeners are configured with different properties,
_all_ of the listeners _must_ specify _all_ of those properties.  The defaults will
_not_ be used for any stream; the value provided for one stream will be
used for all the others unless they specify one themselves.  Thus, in
order to use the default stream fields for the "regular" syslog
listener, we have to explicitly set them.

[0]: https://www.kernel.org/doc/html/latest/networking/netconsole.html
2025-07-27 17:47:31 -05:00
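The kernel documents the parameter as `netconsole=[+][src-port]@[src-ip]/[<dev>],[tgt-port]@<tgt-ip>/[tgt-macaddr]`; a sketch with made-up addresses:

```
netconsole=6665@192.0.2.20/eth0,514@192.0.2.10/00:11:22:33:44:55
```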
Dustin fad63d5973 inventory: Ignore errors connecting to libvirt
If one of the VM hosts is offline, we still want to be able to generate
the inventory from the other host.
2025-07-27 17:47:31 -05:00
Dustin 53c0107651 hosts: Add CM4 k8s cluster nodes
These three machines are Raspberry Pi CM4 nodes on the DeskPi Super 6c
cluster board.  The worker nodes have a 256 GB NVMe SSD attached.
2025-07-27 17:47:24 -05:00
Dustin c67e5f4e0c cm4-k8s-node: Add group
The Raspberry Pi CM4 nodes on the DeskPi Super 6c cluster board are
members of the _cm4-k8s-node_ group.  This group is a child of
_k8s-node_ which overrides the data volume configuration and node
labels.
2025-07-27 17:45:46 -05:00
Dustin 93553c7630 datavol: Add support for LVM
The `datavol.yml` playbook can now create LVM volume groups and logical
volumes.  This will be useful for physical hosts with static storage.

LVM LVs and VGs are defined using the `logical_volumes` Ansible
variable, which contains a mapping of VG names to their properties.
Each VG must have two properties: `pvs`, which is a list of LVM physical
volumes to add to the VG, and `lvs`, a list of LVs and their properties,
including `name` and `size`.  For example:

```yaml
logical_volumes:
  kubernetes:
    pvs:
    - /dev/nvme0n1
    lvs:
    - name: containers
      size: 64G
    - name: kubelet
      size: 32G
```
2025-07-27 12:37:23 -05:00
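A sketch of tasks consuming that structure (the module choice is an assumption; the playbook may do this differently):

```yaml
- name: Create volume groups
  community.general.lvg:
    vg: "{{ item.key }}"
    pvs: "{{ item.value.pvs }}"
  loop: "{{ logical_volumes | dict2items }}"

- name: Create logical volumes
  community.general.lvol:
    vg: "{{ item.0.key }}"
    lv: "{{ item.1.name }}"
    size: "{{ item.1.size }}"
  loop: "{{ logical_volumes | dict2items | subelements('value.lvs') }}"
```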
Dustin dc924aa70b web/hlc: Remove obsolete form submit paths
Tabitha doesn't have any forms on her website any more.
2025-07-23 11:42:33 -05:00
Dustin 7034d5fec0 websites/tabitha: Redirect to HLC, use mod_md cert
Tabitha has effectively decommissioned her _tabitha.biz_ website.  She
wants it to redirect to the Hatch Learning Center site instead.
2025-07-23 11:40:25 -05:00
Dustin 48f47b8905 websites: apps.d.x: Switch to mod_md for cert
The _apps.du5t1n.xyz_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module.  This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
2025-07-23 10:07:16 -05:00
Dustin 0eb6220672 r/mod_md: Configure Apache for ACME certificates
Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_.  Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request.  Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them.  Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate.  The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection to trigger the reload:
the notification command runs a script that creates an empty file in the
server's state directory; systemd watches for that file to be created,
then starts another service unit that performs the actual reload and
removes the trigger file.

Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
2025-07-23 10:07:16 -05:00
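In Apache terms, the pieces described above reduce to a few directives; a minimal sketch (the domain and script path are placeholders, not the actual values):

```apache
MDomain www.example.com
MDCertificateAgreement accepted
# Runs after a certificate is obtained; the script drops the trigger
# file that systemd watches for
MDNotifyCmd /usr/local/libexec/httpd-md-notify
```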
Dustin 9690234203 r/k8s-worker: Install iSCSI/NFS client tools
We don't want the iSCSI and NFS client tools to be installed on control
plane nodes.  Let's move this task to the _k8s-worker_ role so it will
only apply to worker nodes.
2025-07-22 16:21:49 -05:00
Dustin fb9f46cc47 r/haproxy: Do not start service
Since the _haproxy_ role relies on other roles to provide drop-in
configuration files for actual proxy configuration, we cannot start the
service in the base role.  If there are any issues with the drop-in
files that are added later, the service will not be able to start,
causing the playbook to fail and thus never be able to update the broken
configuration.  The dependent roles need to be responsible for starting
the service once they have put their configuration files in place.
2025-07-22 16:21:49 -05:00
Dustin c7374c8cca r/k8s-controller: Deploy HAProxy
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files.  Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
2025-07-22 16:21:49 -05:00
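A drop-in for the API server proxy might look roughly like this (addresses and server names are illustrative placeholders):

```haproxy
frontend k8s-api
    mode tcp
    bind :6443
    default_backend k8s-api

backend k8s-api
    mode tcp
    server ctrl1 192.0.2.11:6443 check
    server ctrl2 192.0.2.12:6443 check
```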
Dustin 381ffe7112 kubernetes: Configure keepalived on control plane
Control plane nodes will now run _keepalived_, to provide a "floating"
IP address that is assigned to one of the nodes at a time.  This
address (172.30.0.169) is now the target of the DNS A record for
_kubernetes.pyrocufflink.blue_, so clients will always communicate with
the server that currently holds the floating address, whichever that may
be.

I was originally inspired by the official Kubernetes [High Availability
Considerations][0] document when designing this.  At first, I planned to
deploy _keepalived_ and HAProxy as DaemonSets on the control plane
nodes, but this ended up being somewhat problematic whenever all of the
control plane nodes went down at once: the _keepalived_ and HAProxy pods
would not get scheduled, and thus no clients could communicate with the
API servers.

[0]: 9d7cfab6fe/docs/ha-considerations.md
2025-07-22 16:21:49 -05:00
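A minimal `keepalived.conf` sketch of the floating address (the interface, router ID, and priority are assumptions; only the address comes from the commit):

```
vrrp_instance k8s_api {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        172.30.0.169
    }
}
```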
Dustin f62b11bb9d r/keepalived: Deploy keepalived
[keepalived][0] is a free implementation of the Virtual Router
Redundancy Protocol (VRRP), which is a simple method for automatically
assigning an IP address to one of several potential hosts based on
certain criteria.  It is particularly useful in conjunction with a load
balancer like HAProxy, to provide layer 3 redundancy in addition to
layer 7.  We will use it for both the reverse proxy for the public
websites and the Kubernetes API server.

[0]: https://www.keepalived.org/
2025-07-22 16:21:49 -05:00
Dustin 0e6cc4882d Add k8s-test group
This group is used for temporary machines while testing Kubernetes node
deployment changes.
2025-07-22 16:21:49 -05:00
Dustin 0e168e0294 kubernetes: Prepare k8s control plane nodes
The `kubernetes.yml` playbook now applies the _kubelet_ role to hosts in
the _k8s-controller_ group.  This will prepare them to join the cluster
as control plane nodes, but will not actually add them to the cluster.
2025-07-22 15:28:42 -05:00
Dustin 2d36d1fc8f websites: Remove darkchestofwonders.us
This website has been moved to Kubernetes since some time ago.  We don't
need to configure the web server to host it anymore.
2025-07-22 13:10:30 -05:00
Dustin b2213416d0 scrape-collectd-configmap: Add PB
I didn't realize this playbook wasn't even in the Git repository when I
added it to `site.yml`.

This playbook manages the `scrape-collectd` ConfigMap, which is used by
Victoria Metrics to identify the hosts it should scrape to retrieve
metrics from _collectd_.
2025-07-20 21:27:54 -05:00
Dustin a5b47eb661 hosts: Add vm-hosts to collectd group
Now that the VM hosts are not members of the AD domain, they need to be
added to the _collectd_ group directly.
2025-07-18 12:47:55 -05:00
Dustin 506ddad2dc site: Apply scrape-collectd-configmap PB
Machines that are not part of the Kubernetes cluster need to be
explicitly listed in this ConfigMap in order for Victoria Metrics to
scrape collectd metrics from them.
2025-07-18 12:46:22 -05:00
Dustin f7546791cc kubelet: Fix CA cert for Docker Hub proxy
The man page for _containers-certs.d(5)_ says that subdirectories of
`/etc/containers/certs.d` should be named `host:port`, however, this is
a bit misleading.  It seems, instead, that the directory name must match
name of the registry server as specified, so in the case of a server
that supports HTTPS on port 443, where the port would be omitted from
the image name, it must also be omitted from the `certs.d` subdirectory
name.
2025-07-16 16:05:19 -05:00
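In other words, for a hypothetical registry reached as `registry.example.com` (no port in the image name), the layout would be:

```
/etc/containers/certs.d/
└── registry.example.com/   # not registry.example.com:443
    └── ca.crt
```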
Dustin ba3f61fb08 r/containers-image: Fix registries.conf path
`/etc/containers/registries.conf.d` is distinct from
`/etc/containers/registries.d`.  The latter contains YAML files relating
to image signatures, while the former contains TOML files relating to
registry locations.
2025-07-14 16:21:58 -05:00
Dustin 1bf6ae6d3c kubernetes: Disable become for delegated task
We do not need "become" for the Kubernetes resource manipulation task
that runs on the control machine.
2025-07-14 16:19:33 -05:00
Dustin e65bcc25ba r/k8s-worker: Fix typo in variable name
This typographical error was causing the "join" tasks to be executed
every time.
2025-07-14 16:18:35 -05:00
Dustin 61a4f64bbb r/nginx: Fix disabling access/error log files
It turns out _nginx_ has a built-in default value for `access_log` and
`error_log`, even if they are omitted from the configuration file.  To
actually disable writing logs to a file, we need to explicitly specify
`off`.
2025-07-14 16:11:35 -05:00
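For the access log, the explicit form is a one-liner:

```nginx
access_log off;
```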
Dustin b4f5b419e1 newvm: Stop waiting for VM events if install fails
If `virt-install` fails before the VM starts for the first time, the
`virsh event` process running in the background will never terminate and
therefore the main process will `wait` forever.  We can avoid this by
killing the background process if `virt-install` fails.
2025-07-14 15:57:12 -05:00
Dustin 51e8cae618 newvm: Fix vCPU count/CPU model
Although the `newvm.sh` script had a `--vcpus` argument, its value was
never being used.

The `--cpu host` argument for `virt-install` is deprecated in favor of
an explicit mode such as `--cpu host-passthrough`.
2025-07-14 15:57:12 -05:00
Dustin 04718fa6d0 newvm: Avoid adding graphics adapter by default
VMs don't really need graphical consoles; serial terminals are good
enough, or even better given that they are logged.  For the few cases
where a graphical console is actually necessary, the `newvm.sh` script
can add one with the `--graphics` argument.
2025-07-14 15:57:12 -05:00
Dustin 0824e6bea0 newvm: Update default kickstart location
Since the kickstart scripts are now generated from templates by Jenkins,
we need to fetch the final rendered artifacts from the PXE server,
rather than the source files from Gitea.
2025-07-14 15:57:12 -05:00
Dustin 7823a2ceaf ci: Add Jenkins pipeline for pxe.yml 2025-07-13 16:10:20 -05:00
Dustin b9a046c7f4 plugins: Add lookup cache plugin
One major weakness with Ansible's "lookup" plugins is that they are
evaluated _every single time they are used_, even indirectly.  This
means, for example, a shell command could be run many times, potentially
resulting in different values, or executing a complex calculation that
always provides the same result.  Ansible does not have a built-in way
to cache the result of a `lookup` or `query` call, so I created this
one.  It's inspired by [ansible-cached-lookup][0], which didn't actually
work and is apparently unmaintained.  Instead of using a hard-coded
file-based caching system, however, my plugin uses Ansible's
configuration and plugin infrastructure to store values with any
available cache plugin.

Although looking up the _pyrocufflink.net_ wildcard certificate with the
Kubernetes API isn't particularly expensive by itself right now, I can
envision several other uses that may be.  Having this plugin available
could speed up future playbooks.

[0]: https://pypi.org/project/ansible-cached-lookup
2025-07-13 16:02:57 -05:00
Dustin 906819dd1c r/apache: Use variables for HTTPS cert/key content
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system.  As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
2025-07-13 16:02:57 -05:00
Dustin f08f147931 r/pxe: Depend on apache role
Now that we're serving kickstart files from the PXE server, we need to
have a correctly-configured HTTPD server, with valid HTTPS certificates,
running there.
2025-07-13 16:02:57 -05:00
Dustin 6667066826 kubelet: Configure cri-o container registries
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-certs.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
2025-07-12 16:45:47 -05:00
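A `registries.conf.d` drop-in for that redirect might look like this (the mirror hostname is a placeholder, not the actual internal proxy):

```toml
[[registry]]
prefix = "docker.io"
location = "docker.io"

# Pulls are attempted from the mirror first, falling back to Docker Hub
[[registry.mirror]]
location = "mirror.example.internal"
```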