Since transitioning to externalIPs for TCP services, it is no longer
possible to use the HTTP-01 ACME challenge to issue certificates for
services hosted in the cluster, because the ingress controller does not
listen on those addresses. Thus, we have to switch to using the DNS-01
challenge. I had avoided using it before because of the complexity of
managing dynamic DNS records with the Samba AD server, but this was
actually pretty easy to work around. I created a new DNS zone on the
firewall specifically for ACME challenges. Names in the AD-managed zone
have CNAME records for their corresponding `_acme-challenge` labels
pointing to this new zone. The new zone has dynamic updates enabled,
which _cert-manager_ supports using the RFC2136 plugin.
For now, this is only enabled for _rabbitmq.pyrocufflink.blue_. I will
transition the other names soon.
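
A minimal sketch of the _cert-manager_ issuer configuration, assuming
an ACME issuer using the RFC2136 DNS-01 solver (the names, nameserver
address, and TSIG key details here are placeholders, not the actual
values):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt                      # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          rfc2136:
            nameserver: 172.30.0.1:53    # the firewall's DNS server (placeholder)
            tsigKeyName: acme-update     # placeholder TSIG key name
            tsigAlgorithm: HMACSHA256
            tsigSecretSecretRef:
              name: tsig-secret
              key: tsig-secret-key
```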
Since the IP address assigned to the ingress controller is now managed
by keepalived and known to Kubernetes, the network policy needs to allow
access to it by pod namespace rather than IP address. It seems that the
former takes precedence over the latter: even though the IP address was
explicitly allowed, traffic was still blocked because it was destined
for a Kubernetes Service that was not allowed.
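
A sketch of the namespace-based rule, assuming the controllers run in a
namespace named _ingress-nginx_ (the policy name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-to-ingress-nginx
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # Select the controller pods by namespace instead of an
        # ipBlock rule for the virtual IP
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```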
Home Assistant supports unauthenticated access for certain clients using
its `trusted_networks` auth provider. With this configuration, we allow
the desk panel to automatically sign in as the _kiosk_ user, but all
other clients must authenticate normally.
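
A sketch of the relevant `configuration.yaml` snippet (the panel's
address and the user ID are placeholders):

```yaml
homeassistant:
  auth_providers:
    - type: trusted_networks
      trusted_networks:
        - 172.30.0.50/32                 # the desk panel (placeholder)
      trusted_users:
        172.30.0.50: 0123456789abcdef    # the kiosk user's ID (placeholder)
      allow_bypass_login: true
    # All other clients still sign in normally
    - type: homeassistant
```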
The new machines have names in the _pyrocufflink.black_ zone. We need
to trust the SSH CA to sign host keys for these names in order to
connect to the machines and manage them with Ansible.
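
One way to arrange this is a `@cert-authority` entry in the system-wide
known hosts file, deployed with Ansible; a sketch, with a placeholder
key:

```yaml
- name: Trust the SSH CA for pyrocufflink.black host certificates
  ansible.builtin.lineinfile:
    path: /etc/ssh/ssh_known_hosts
    line: '@cert-authority *.pyrocufflink.black ssh-ed25519 AAAA... sshca'
    create: true
```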
Since _ingress-nginx_ no longer runs in the host network namespace,
traffic will now appear to come from the pods' internal IP addresses.
Similarly, the network policy for Invoice Ninja needs to be updated to
allow traffic _to_ the ingress controllers' new addresses.
Clients outside the cluster can now communicate with RabbitMQ directly
on port 5671 by using its dedicated external IP address. This address
is automatically assigned to the node where RabbitMQ is running by
`keepalived`.
Clients outside the cluster can now communicate with Mosquitto directly
on port 8883 by using its dedicated external IP address. This address
is automatically assigned to the node where Mosquitto is running by
`keepalived`.
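
Both brokers follow the same pattern: a plain Service with an
`externalIPs` entry matching the address `keepalived` manages. A sketch
for Mosquitto, with a placeholder address and labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  selector:
    app: mosquitto
  ports:
    - name: mqtts
      port: 8883
  externalIPs:
    - 172.30.0.172    # the virtual IP keepalived assigns (placeholder)
```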
Now that we have `keepalived` managing the "virtual" IP address for the
ingress controller, we can change _ingress-nginx_ to run as a Deployment
rather than a DaemonSet. It no longer needs to use the host network
namespace, as `kube-proxy` will route all traffic sent to the configured
external IP address to the controller pods. Using the _Local_ external
traffic policy disables source NAT, so nginx sees incoming traffic with
clients' real source addresses.
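
A sketch of the controller Service; the address is a placeholder, and
the Service type is an assumption (most Kubernetes versions only accept
`externalTrafficPolicy` on NodePort or LoadBalancer Services):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort              # assumption; see note above
  externalTrafficPolicy: Local
  externalIPs:
    - 172.30.0.170            # the VIP keepalived manages (placeholder)
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: https
```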
Running `keepalived` as a DaemonSet will allow managing floating
"virtual" IP addresses for Kubernetes services with configured external
IP addresses. The main services we want to expose outside the cluster
are _ingress-nginx_, Mosquitto, and RabbitMQ. The `keepalived` cluster
will negotiate using the VRRP protocol to determine which node should
have each external address. Using the process tracking feature of
`keepalived`, we can steer traffic directly to the node where the target
service is running.
I've created new worker nodes that are dedicated to running Longhorn
replicas. These nodes are tainted with the
`node-role.kubernetes.io/longhorn` taint, so no regular pods will be
scheduled there by default. Longhorn pods thus need to be configured
to tolerate that taint, and to be scheduled on nodes with the
similarly-named label.
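
Concretely, that means scheduling settings along these lines (the taint
effect and label value are assumptions):

```yaml
tolerations:
  - key: node-role.kubernetes.io/longhorn
    operator: Exists
    effect: NoSchedule
nodeSelector:
  node-role.kubernetes.io/longhorn: "true"
```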
This will make it easier to "blow away" the RabbitMQ data volume on the
occasions when it gets into a weird state. Simply scale the StatefulSet
down to 0 replicas, delete the PVC, then scale back up. Kubernetes will
handle creating a new PVC automatically.
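
This works because the claim comes from the StatefulSet's
`volumeClaimTemplates`: once the old PVC is gone, scaling back up
creates a fresh one. An excerpt, with illustrative names and size:

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        storageClassName: longhorn   # assumption
        resources:
          requests:
            storage: 8Gi
```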
Nextcloud uses a _client-side_ (JavaScript) redirect to navigate the
browser to its `index.php`. The page it serves with this redirect is
static and will often load successfully, even if there is a problem with
the application. This causes the Blackbox exporter to record the site
as "up," even when it it definitely is not. To avoid this, we can
scrape the `index.php` page explicitly, ensuring that the application is
loaded.
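
A sketch of the Blackbox exporter probe job, with placeholder hostname,
module name, and exporter address:

```yaml
- job_name: nextcloud
  metrics_path: /probe
  params:
    module: [http_2xx]                  # placeholder module name
  static_configs:
    - targets:
        - https://nextcloud.pyrocufflink.blue/index.php
  relabel_configs:
    # Standard Blackbox pattern: probe the target via the exporter
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: blackbox-exporter:9115   # placeholder exporter address
```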
The _fleetlock_ server drains all pods from a node before allocating the
reboot lock to that node. Unfortunately, it doesn't actually wait for
those pods to be completely evicted. If some pods take too long to shut
down, they may get stuck in `Terminating` state once the machine starts
rebooting. As a result, those pods cannot be replaced on another node
while the original one is offline, which pretty much defeats the
purpose of using Fleetlock in the first place.
It seems upstream has abandoned this project, as there is an open [Pull
Request][0] to fix this issue that has so far been ignored.
Fortunately, building a new container image containing the patch is easy
enough, so we can run our own patched build.
[0]: https://github.com/poseidon/fleetlock/pull/271
Just like I did with the RAID-1 array in the old BURP server, I will
keep one member active and one in the fireproof safe, swapping them each
month. We can use the same metrics queries we used with the BURP
server to alert when it is time to swap them.
The ephemeral Jenkins worker nodes that run in AWS don't have collectd,
promtail, or Zincati. We don't need to get three alerts every time a
worker starts up to handle an ARM build job, so we drop these
discovered targets from those scrape jobs.
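
A sketch of the relabeling rule for each affected scrape job; the meta
label used to identify the AWS workers is an assumption (an EC2
instance tag, if the targets come from EC2 service discovery):

```yaml
relabel_configs:
  # Drop any discovered target that is an ephemeral Jenkins worker
  - source_labels: [__meta_ec2_tag_jenkins_worker]
    regex: "true"
    action: drop
```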