This DaemonSet runs Fluent Bit on all nodes in the cluster. The
ConfigMap that contains the pipeline configuration is actually managed
by Ansible, so that it can remain in sync with the configuration used by
Fluent Bit on non-Kubernetes nodes.
I've noticed that from time to time, the container storage volume seems
to accumulate "dangling" containers. These are paths under
`/var/lib/containers/storage/overlay` that have a bunch of content in
their `diff` sub-directory, but nothing else, and do not seem to be
mounted into any running containers. I have not identified what causes
this, nor a simple and reliable way to clean them up. Fortunately,
wiping the entire container storage graph with `crio wipe` seems to work
well enough.
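As an illustration, a heuristic like the following could list candidate dangling layers. This is my own sketch, not a supported CRI-O command — the assumption that a layer in use also contains entries such as `link` is mine — so treat its output as a hint only.

```shell
# List overlay layer directories whose only entry is a "diff"
# sub-directory. Heuristic only: layers in use normally also contain
# other entries (e.g. "link"), so a bare "diff" suggests a dangling layer.
list_dangling_layers() {
    storage=${1:-/var/lib/containers/storage/overlay}
    for layer in "$storage"/*/; do
        [ -d "${layer}diff" ] || continue
        # Count entries other than "diff"; zero means nothing else is there.
        if [ "$(ls -A "$layer" | grep -cv '^diff$')" -eq 0 ]; then
            echo "${layer%/}"
        fi
    done
}
```
Running `list_dangling_layers` as root on a node prints one path per suspect layer.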
The `crio-clean.sh` script takes care of safely wiping the container
storage graph on a given node. It first drains the node and then stops
any containers that are still running. Next, it uses `crio wipe` to
clean the entire storage graph. Finally, it reboots the node, allowing
Kubernetes to reschedule the pods that were stopped.
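In rough outline, the steps described above might look like the function below. The exact flags, the use of `ssh`, and stopping `kubelet`/`crio` before the wipe are my assumptions, not a copy of the actual script.

```shell
# Sketch of the crio-clean.sh steps (assumed flags; not the real script).
crio_clean() {
    node=$1
    # Drain the node so Kubernetes reschedules its pods elsewhere.
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
    # Stop leftover containers, wipe the storage graph, and reboot.
    ssh "root@$node" '
        crictl rm -fa
        systemctl stop kubelet crio
        crio wipe -f
        systemctl reboot
    '
}
```
After the node comes back up, a `kubectl uncordon` lets pods schedule onto it again.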
PiKVM exports some rudimentary metrics, but requires authentication to
scrape them. At the very least, this will provide alerting in case the
PiKVM systems go offline.
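For reference, a Prometheus scrape job of roughly this shape should do it. The host name and credential file are placeholders, and the metrics path is the one PiKVM's documentation describes, so double-check it against the installed release.

```yaml
scrape_configs:
  - job_name: pikvm
    scheme: https
    metrics_path: /api/export/prometheus/metrics
    basic_auth:
      username: admin                              # placeholder
      password_file: /etc/prometheus/pikvm-password
    static_configs:
      - targets: ["pikvm.example.com"]             # placeholder host
```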
`updatecheck` is a little utility I wrote that queries Fedora Bodhi for
updates and sends an HTTP request when one is found. I am specifically
going to use it to trigger rebuilding the _gasket-driver_ RPM whenever
a new _kernel_ is published.
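For illustration, the Bodhi query boils down to something like this; the function name is made up, and filtering on `status=stable` is my assumption about which updates are interesting.

```shell
# Build a Bodhi API query URL for updates to a given package.
bodhi_query_url() {
    echo "https://bodhi.fedoraproject.org/updates/?packages=$1&status=stable"
}
# e.g.: curl -s "$(bodhi_query_url kernel)" | jq '.updates[].title'
```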
Instead of allocating a volume for each individual Buildroot-based
project, I think it will be easier to reuse the same one for all of
them. It's not like we can really run more than one job at a time,
anyway.
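A single claim along these lines would do; the name and size are placeholders. Since only one job runs at a time, `ReadWriteOnce` is sufficient.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: buildroot-workspace   # placeholder name
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 50Gi           # placeholder size
```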
Now that Headlamp supports PKCE, we can use the same OIDC client for it
as for the Kubernetes API server/`kubectl`. The only difference is the
callback redirect URL.
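Concretely, that might look like the following Helm values for Headlamp, reusing the API server's client. The issuer and client ID are placeholders, and the exact value keys may differ between chart versions.

```yaml
config:
  oidc:
    issuerURL: https://sso.example.com   # placeholder issuer
    clientID: kubernetes                 # same client the API server uses
    scopes: "openid,profile,email"
# Headlamp's own callback URL still has to be registered as an
# additional redirect URI on that client.
```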
The latest version of the _OpenId Connect Authentication Plugin_ for
Jenkins has several changes. Apparently, one of them is that it
defaults to using the `client_secret_basic` token authorization method,
instead of `client_secret_post` as it did previously.
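If the old behavior is needed, something like this JCasC fragment should pin it back; the key name is from the plugin's configuration-as-code schema as I understand it, so verify it against the installed plugin version.

```yaml
jenkins:
  securityRealm:
    oic:
      tokenAuthMethod: client_secret_post
```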
At some point, Firefly III added an `ALLOW_WEBHOOKS` option. It's set
to `false` by default, but it didn't seem to have any effect on
_running_ webhooks, only on visiting the webhooks configuration page.
Now, that seems to have changed, and the setting needs to be enabled in
order for the webhooks to run.
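In a Kubernetes Deployment, that just means setting the variable explicitly on the container:

```yaml
env:
  - name: ALLOW_WEBHOOKS
    value: "true"
```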
I'm not sure why `disableNameSuffixHash` was set on the ConfigMap
generator. It shouldn't be: without it, Kustomize appends a hash of the
contents to the ConfigMap's name, ensuring the Pod is restarted whenever
the contents of the ConfigMap change.
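With the option removed, the generator is just a plain entry; names here are placeholders. Kustomize appends a content hash to the generated name and rewrites references to it, so a content change renames the ConfigMap and rolls the Pod.

```yaml
configMapGenerator:
  - name: app-config      # placeholder; rendered as app-config-<hash>
    files:
      - config.yaml
    # no options.disableNameSuffixHash, so the hash suffix is kept
```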
This network policy blocks all outbound communication except to the
designated internal services. This will help prevent data exfiltration
in the unlikely event that Firefly were compromised.
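A policy of this shape is what I mean; the labels, ports, and the database service are placeholders for the real internal dependencies.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: firefly-egress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: firefly-iii       # placeholder label
  policyTypes: [Egress]
  egress:
    - to:                                        # DNS
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
    - to:                                        # internal database
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: postgresql # placeholder label
```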