Deploying _democratic-csi_ to manage PersistentVolumeClaim resources,
mapping them to iSCSI volumes on the Synology.
Eventually, all Longhorn-managed PVCs will be replaced with Synology
iSCSI volumes. Getting rid of Longhorn should free up a lot of
resources and remove a point of failure from the cluster.
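On the consumer side, this amounts to a StorageClass that points PVCs at
the democratic-csi provisioner. A minimal sketch; the provisioner name,
class name, and parameters here are assumptions, not the actual values:

```sh
# Hypothetical StorageClass backed by democratic-csi's iSCSI driver;
# the provisioner name and parameters are assumptions.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: synology-iscsi
provisioner: org.democratic-csi.iscsi
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  fsType: ext4
EOF
```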
We don't want to pull public container images that already exist
locally. Doing so prevents pods from starting if there is any
connectivity issue with the upstream registry.
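One way to keep upstream connectivity out of the pod-start path is to
make sure the kubelet only pulls when an image is missing from the node.
A sketch, with a hypothetical deployment and container name:

```sh
# Strategic merge patch: containers are merged by name, so only the
# imagePullPolicy of the named container is changed.
kubectl patch deployment myapp -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"myapp","imagePullPolicy":"IfNotPresent"}]}}}}'
```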
This is a custom-built application for managing purchase receipts. It
integrates with Firefly III to fill some of the gaps that `xactmon`
cannot handle, such as restaurant bills with tips, gas station
purchases, purchases with the HSA debit card, refunds, and deposits.
Photos of receipts can be taken directly within the application using
the `getUserMedia` Web API, or uploaded as existing files. Each photo is
associated with transaction data, including date, vendor, amount, and
general notes. These data are also synchronized with Firefly whenever
possible.
Vaultwarden requires basically no configuration anymore. Older versions
needed some environment variables for configuring the WebSocket server,
but as of 1.31, WebSockets are handled by the same server as HTTP, so
even that is not necessary now. The only other setting that could
potentially be useful is `ADMIN_TOKEN`. For added
security, we can leave it unset, which disables the administration
console; we can set it later if/when we actually need that feature.
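If/when we do enable the admin console, Vaultwarden can hash the token
itself, so the plaintext value never needs to appear in the
configuration:

```sh
# The `hash` subcommand is built into the Vaultwarden binary; it prints
# an Argon2 PHC string suitable for use as ADMIN_TOKEN.
podman run --rm -it docker.io/vaultwarden/server:latest /vaultwarden hash
```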
Migrating data from the old server was pretty simple. The database is
small, and even the attachments and site icons don't take up much
space. All in all, there was only about 20 MB to move, so the copy took
just a few seconds.
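The copy itself was nothing fancy; something along these lines, where
the host name, paths, and pod name are all assumptions:

```sh
# Sketch: pull the data directory from the old server, then restore it
# into the pod's persistent volume.
rsync -a old-server:/var/lib/vaultwarden/ ./vaultwarden-data/
kubectl cp ./vaultwarden-data vaultwarden/vaultwarden-0:/data
```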
Aside from moving the Vaultwarden server itself, we will also need to
adjust the HAProxy configuration to proxy requests to the Kubernetes
ingress controller.
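In HAProxy terms, that is just another backend. A hypothetical stanza,
with an assumed ingress host name and port:

```sh
# Sketch only; the real configuration is managed elsewhere.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
backend k8s-ingress
    mode http
    server ingress0 ingress.pyrocufflink.blue:443 ssl verify none
EOF
```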
I never ended up using _Step CA_ for anything, since I was initially
focused on the SSH CA feature and I was unhappy with how it worked
(which led me to write _SSHCA_). I didn't think about it much until I
was working on deploying Grafana Loki. For that project, I wanted to
use a certificate signed by a private CA instead of the wildcard
certificate for _pyrocufflink.blue_. So, I created *DCH CA R3* for that
purpose. Then, for some reason, I used the exact same procedure to
fetch the certificate from Kubernetes as I had set up for the
_pyrocufflink.blue_ wildcard certificate, as used by Frigate. This of
course defeated the purpose, since I could have just as easily used
the wildcard certificate in that case.
When I discovered that Grafana Loki expects to be deployed behind a
reverse proxy in order to implement access control, I took the
opportunity to reevaluate the certificate issuance process. Since a
reverse proxy is required to implement the access control I want (anyone
can push logs but only authenticated users can query them), it made
sense to choose one with native support for requesting certificates via
ACME. This would eliminate the need for `fetchcert` and the
corresponding Kubernetes API token. Thus, I ended up deciding to
redeploy _Step CA_ with the new _DCH CA R3_ for this purpose.
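Enabling ACME on Step CA is a single provisioner addition; clients then
request certificates from that provisioner's directory URL:

```sh
# Add an ACME provisioner named "acme" to the CA configuration.
step ca provisioner add acme --type ACME
# Clients use the directory at https://<ca host>/acme/acme/directory
# (the path includes the provisioner name).
```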
Now that Victoria Metrics is hosted in Kubernetes, it only makes sense
to host Grafana there as well. I chose to use a single-instance
deployment for simplicity; I don't really need high availability for
Grafana. Its configuration does not change enough to worry about the
downtime associated with restarting it. Migrating the existing data
from SQLite to PostgreSQL, while possible, is just not worth the hassle.
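Since the data stay in SQLite on a PersistentVolume, the one thing the
deployment must guarantee is that two pods never touch the database at
once. A minimal sketch, assuming the image tag and labels, using the
Recreate strategy:

```sh
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  strategy:
    type: Recreate  # never overlap two pods on the same SQLite file
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: docker.io/grafana/grafana:latest
        ports:
        - containerPort: 3000
EOF
```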
Invoice Ninja is a small business management tool. Tabitha wants to
use it for HLC.
I am a bit concerned about the code quality of this application, and
definitely alarmed at the data it sends upstream, so I have tried to be
extra careful with it. All privileges are revoked, including access to
the Internet.
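Blocking Internet access boils down to a default-deny egress
NetworkPolicy. A sketch, with an assumed namespace name; the real policy
would still need to allow DNS and any in-cluster dependencies:

```sh
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress
  namespace: invoiceninja
spec:
  podSelector: {}   # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress: []        # no egress rules at all, so all egress is denied
EOF
```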
Since *mtrcs0.pyrocufflink.blue* (the Metrics Pi) seems to be dying,
I decided to move monitoring and alerting into Kubernetes.
I was originally planning to have a single, dedicated virtual machine
for Victoria Metrics and Grafana, similar to how the Metrics Pi was set
up, but running Fedora CoreOS instead of a custom Buildroot-based OS.
While I was working on the Ignition configuration for the VM, it
occurred to me that monitoring would be interrupted frequently, since
FCOS updates weekly and all updates require a reboot. I would rather
not have that many gaps in the data. Ultimately I decided that
deploying a cluster with Kubernetes would probably be more robust and
reliable, as updates can be performed without any downtime at all.
I chose not to use the Victoria Metrics Operator, but rather to handle
the resource definitions myself. Victoria Metrics components are not
particularly difficult to deploy, so the overhead of running the
operator and using its custom resources would not be worth the minor
convenience it provides.
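To illustrate how little there is to it: a single-node Victoria Metrics
server is one container plus its flags. A hedged sketch, not our actual
manifest (persistent storage is left out here):

```sh
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: victoria-metrics
spec:
  serviceName: victoria-metrics
  replicas: 1
  selector:
    matchLabels:
      app: victoria-metrics
  template:
    metadata:
      labels:
        app: victoria-metrics
    spec:
      containers:
      - name: victoria-metrics
        image: docker.io/victoriametrics/victoria-metrics:latest
        args:
        - -storageDataPath=/storage
        - -retentionPeriod=12
        ports:
        - containerPort: 8428
EOF
```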
[sshca] is a simple web service I wrote to automatically create signed
SSH certificates for hosts' public keys. It authenticates hosts by
their machine UUID, which it can find using the libvirt API.
[sshca]: https://git.pyrocufflink.net/dustin/sshca
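The correlation works because the UUID a guest presents in its DMI data
is the same one libvirt assigns to the domain:

```sh
# Inside the guest: the machine UUID, as exposed via DMI.
cat /sys/class/dmi/id/product_uuid
# On the hypervisor: the same UUID, as libvirt reports it (the domain
# name here is hypothetical).
virsh domuuid somevm
```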
The `argocd` command needs to have its own OIDC client configuration,
since it works like a "public" client. To log in, run
```sh
argocd login argocd.pyrocufflink.blue --sso
```
[Argo CD] is a Kubernetes-native GitOps/continuous deployment manager.
It monitors the state of Kubernetes resources, such as Pods,
Deployments, ConfigMaps, Secrets, and Custom Resources, and synchronizes
them with their canonical definitions from a Git repository.
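Each application's canonical definition is declared with an Application
resource. A hedged example, where the repository URL and paths are
placeholders:

```sh
kubectl apply -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.pyrocufflink.net/dustin/example.git
    targetRevision: HEAD
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated: {}
EOF
```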
*Argo CD* consists of various components, including a Repository
Service, an Application Controller, a Notification Controller, and an
API server/Web UI. It also has some optional components, such as a
bundled Dex server for authentication/authorization, and an
ApplicationSet controller, which we will not be using.
[Argo CD]: https://argo-cd.readthedocs.io/