After some initial testing, I decided that the HTTP API approach to
managing the reboot lock was not going to work. I originally implemented
it this way so that the reboot process on the nodes could stay the same
as it had always been, adding only a systemd unit that contacts the
server to obtain the lock and drain the node. Unfortunately, this does
not actually work in practice because there is no way to ensure that the
new unit runs _first_ during the shutdown process. In fact, systemd
practically _insists_ on stopping all running containers before any
other units. The only solution, therefore, is to obtain the reboot lock
and drain the node before initiating the actual shutdown procedure.

I briefly considered installing a script on each node to handle all of
this, and configuring _dnf-automatic_ to run that. I decided against
that, though, as I would prefer to have as much of the node
configuration managed by Kubernetes as possible; I don't want to have
to maintain that script with Ansible.

I decided that the best way to resolve these issues was to rewrite the
coordinator as a daemon that runs on every node. It waits for a
sentinel file to appear (`/run/reboot-needed` by default), and then
tries to obtain the reboot lock, drain the node, and reboot the machine.
All of the logic is contained in the daemon and deployed by Kubernetes;
the only change that has to be deployed by Ansible is configuring
_dnf-automatic_ to run `touch /run/reboot-needed` instead of `shutdown
-r +5`.
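
For reference, the dnf-automatic side amounts to something like the
following in `/etc/dnf/automatic.conf`. This is a sketch rather than the
exact configuration: it assumes a dnf-automatic new enough to support
the `reboot_command` option.

```ini
[commands]
# Apply updates automatically, then signal the reboot daemon by touching
# the sentinel file instead of rebooting directly.
apply_updates = yes
reboot = when-needed
reboot_command = "touch /run/reboot-needed"
```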
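
To make the design concrete, here is a minimal sketch of the daemon's
main loop in Go. It assumes the `github.com/fsnotify/fsnotify` library
for inotify-based watching; `rebootSequence` and the stubs it calls are
hypothetical placeholders, not the actual implementation.

```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

const sentinel = "/run/reboot-needed"

func main() {
	// If the sentinel already exists (say, the daemon restarted after
	// dnf-automatic created it), act on it immediately.
	if _, err := os.Stat(sentinel); err == nil {
		rebootSequence()
	}

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	go func() {
		for err := range watcher.Errors {
			log.Println("watch error:", err)
		}
	}()

	// Watch the parent directory: an inotify watch cannot be placed on
	// a file that does not exist yet, but creations in /run are reported.
	if err := watcher.Add(filepath.Dir(sentinel)); err != nil {
		log.Fatal(err)
	}

	for event := range watcher.Events {
		if event.Name == sentinel && event.Op&fsnotify.Create != 0 {
			rebootSequence()
		}
	}
}

// rebootSequence serializes reboots across the cluster, then drains and
// reboots this node.
func rebootSequence() {
	acquireLock() // block until this node holds the cluster-wide lock
	drainNode()   // cordon the node and evict pods, like `kubectl drain`
	reboot()      // e.g. `systemctl reboot` or the equivalent D-Bus call
}

func acquireLock() { /* sketched separately below */ }
func drainNode()   { /* omitted */ }
func reboot()      { /* omitted */ }
```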

This implementation is heavily inspired by [kured](https://kured.dev).
Both rely on a sentinel file to trigger the reboot, but Kured detects
the file by naive polling, which forces an awkward trade-off: poll
frequently and waste CPU, or poll infrequently and introduce a long
delay between the sentinel appearing and the reboot actually starting.
Kured also implements the reboot lock without using a Lease, so its
behavior when multiple nodes race to reboot at the same time is harder
to reason about; a Lease inherits its mutual-exclusion guarantees
directly from the API server.
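
For contrast, here is roughly what a Lease-based lock looks like with
client-go; this fragment fleshes out the `acquireLock` step above. The
Lease name, namespace, and retry interval are invented for illustration.
Creation doubles as acquisition: the API server guarantees that when
several nodes race to create the same Lease, exactly one `Create` call
succeeds.

```go
package main

import (
	"context"
	"time"

	coordv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// acquireLock blocks until this node owns the Lease. The holder deletes
// the Lease after it has rebooted and uncordoned itself, releasing the
// lock for the next node.
func acquireLock(ctx context.Context, c kubernetes.Interface, node string) error {
	leases := c.CoordinationV1().Leases("kube-system")
	for {
		lease := &coordv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: "reboot-lock"},
			Spec:       coordv1.LeaseSpec{HolderIdentity: &node},
		}
		if _, err := leases.Create(ctx, lease, metav1.CreateOptions{}); err == nil {
			return nil // this node now holds the reboot lock
		} else if !apierrors.IsAlreadyExists(err) {
			return err
		}
		// Another node holds the lock; wait a bit and try again.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(30 * time.Second):
		}
	}
}
```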