This CronJob schedules a periodic run of `restic forget`, which deletes
snapshots according to the specified retention policy (keep 14 daily, 4
weekly, and 12 monthly snapshots).
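A minimal sketch of such a CronJob, assuming an official `restic/restic`
image and a Secret holding the repository credentials (the name,
schedule, and Secret are illustrative; only the retention flags come
from the text above):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restic-forget          # name is an assumption
spec:
  schedule: "0 3 * * *"        # illustrative; the real job keeps the old timer's schedule
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: forget
              image: restic/restic
              args:
                - forget
                - --keep-daily=14
                - --keep-weekly=4
                - --keep-monthly=12
                - --prune
              envFrom:
                - secretRef:
                    # hypothetical Secret providing RESTIC_REPOSITORY,
                    # RESTIC_PASSWORD, etc.
                    name: restic-credentials
```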
This task used to run on my workstation, scheduled by a systemd timer
unit. I've kept the same schedule and retention period as before. Now,
instead of relying on my PC to be on and awake, the cleanup will occur
more regularly. There's also the added benefit of getting the logs into
Loki.
Occasionally, some documents have odd rendering errors that prevent the
archival process from completing. I'm less concerned about the archived
document itself than about simply having centralized storage for
paperwork, so enabling this "continue on soft render error" feature is
appropriate. As far as I can tell, it has no visible effect on the
documents that could not be imported at all without it.
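Assuming this refers to Paperless-ngx's OCRmyPDF-based consumer (the
feature name matches OCRmyPDF's `continue_on_soft_render_error`
option), it can be passed through the OCR user-args setting; a sketch
of the container env fragment:

```yaml
# Assumption: Paperless-ngx forwards this JSON to OCRmyPDF as extra
# keyword arguments during document consumption.
env:
  - name: PAPERLESS_OCR_USER_ARGS
    value: '{"continue_on_soft_render_error": true}'
```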
*unifi3.pyrocufflink.blue* has been replaced by
*unifi-nuptials.host.pyrocufflink.black*. The former was the last
Fedora CoreOS machine in use, so the entire Zincati scrape job is no
longer needed.
This is a custom-built application for managing purchase receipts. It
integrates with Firefly III to fill some of the gaps that `xactmon`
cannot handle, such as restaurant bills with tips, gas station
purchases, purchases with the HSA debit card, refunds, and deposits.
Photos of receipts can be taken directly within the application, using
the browser's media capture API (`getUserMedia`), or uploaded as
existing files. Each photo is
associated with transaction data, including date, vendor, amount, and
general notes. These data are also synchronized with Firefly whenever
possible.
By default, the _pyrocufflink_ Ansible inventory plugin ignores VMs
whose names begin with `test-`. This keeps Jenkins from trying, and
failing, to apply policy to machines that it should not be managing.
The host
provisioner job, though, should apply policy to those machines, so we
need to disable that filter.
The *dch-webhooks* user is used by the *dch-webhooks* server to publish
host information when a new machine triggers its _POST /host/online_
webhook. It therefore needs to be able to write to the
_host-provisioner_ queue (via the default exchange).
The *host-provisioner* user is used by the corresponding consumer to
receive the host information and initiate the provisioning process.
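These grants could be expressed with `rabbitmqctl` roughly as follows
(vhost and regexes are illustrative; note that for the default
exchange, RabbitMQ checks the write permission against the queue name
used as the routing key):

```sh
# dch-webhooks: publish-only access to the host-provisioner queue
# via the default exchange (no configure or read permission)
rabbitmqctl set_permissions -p / dch-webhooks '' '^host-provisioner$' ''

# host-provisioner: declare and consume from its own queue
rabbitmqctl set_permissions -p / host-provisioner \
    '^host-provisioner$' '' '^host-provisioner$'
```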
The *dch-webhooks* server now has a _POST /host/online_ hook that can
be triggered by a new machine when it first comes online. This hook
starts an automatic provisioning process by creating a Kubernetes Job
to run Ansible and publishing information about the host to provision
via AMQP. Thus, the server now needs access to the Kubernetes API in
order to create the Job and access to RabbitMQ in order to publish the
task parameters.
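The Kubernetes side of that access might look like the following
namespaced RBAC sketch (names and namespace are assumptions; the
RabbitMQ credentials are granted separately on the broker):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dch-webhooks-jobs
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dch-webhooks-jobs
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dch-webhooks-jobs
subjects:
  - kind: ServiceAccount
    name: dch-webhooks   # assumed ServiceAccount for the server Pod
```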
The contents of the DCH Root CA will not change, so it does not make
sense to enable the hash suffix feature for this ConfigMap. Without it,
the ConfigMap name is predictable and can be used outside of a Kustomize
project.
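For illustration, a `kustomization.yaml` fragment along these lines
(ConfigMap and file names are assumptions; `disableNameSuffixHash` is
the standard Kustomize option):

```yaml
configMapGenerator:
  - name: dch-root-ca        # name stays stable, no content-hash suffix
    files:
      - ca.crt
    options:
      disableNameSuffixHash: true
```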
The `pg_stat_archiver_failed_count` metric is a counter, so once a WAL
archival has failed, it will increase and never return to `0`. To
ensure the alert is resolved once the WAL archival process recovers, we
need to use the `increase` function to turn it into a gauge. Finally,
we aggregate that gauge with `max_over_time` to keep the alert from
flapping if the WAL archive occurs less frequently than the scrape
interval.
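Sketched as a Prometheus alerting rule (the alert name, windows, and
`for` duration are illustrative, not taken from the actual rule):

```yaml
groups:
  - name: postgresql
    rules:
      - alert: WALArchivalFailing
        # increase() turns the ever-growing counter into a per-window
        # gauge that returns to 0 after recovery; max_over_time() over a
        # subquery holds the alert up across sparse archive cycles so it
        # does not flap.
        expr: max_over_time(increase(pg_stat_archiver_failed_count[10m])[1h:]) > 0
        for: 5m
```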
We're using the Alpine variant of the Vaultwarden container images,
since the default variant is significantly larger and we do not need any
of the extra stuff it includes.
[ARA Records Ansible][0] is a results storage system for Ansible. It
provides a convenient UI for tracking Ansible playbooks and tasks. The
data are populated by an Ansible callback plugin.
ARA is a fairly simple Python+Django application. It needs a database
to store Ansible results, so we've connected it to the main PostgreSQL
database and configured it to connect and authenticate using mTLS.
Rather than mess with managing and distributing a static password for
ARA clients, I've configured Authelia to allow anonymous access to
post data to the ARA API from within the private network or the
Kubernetes cluster. Access to the web UI does require authentication.
[0]: https://ara.recordsansible.org/
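For clients to record results, the ARA callback plugin has to be on
Ansible's callback path and the API endpoint set in the environment; a
sketch using ARA's setup helper (the server URL is an assumption):

```sh
# The ara.setup module ships with the ARA client package and prints
# the installed callback plugin directory.
export ANSIBLE_CALLBACK_PLUGINS="$(python3 -m ara.setup.callback_plugins)"
export ARA_API_CLIENT=http
export ARA_API_SERVER=https://ara.example.org   # hypothetical endpoint

ansible-playbook site.yml   # results stream to ARA as the play runs
```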