Commit Graph

324 Commits (965cfa98c0f8d302561ed5978d77b3e295951b22)

Author SHA1 Message Date
Dustin 965cfa98c0 fixup! wip: unifi: restore data dir 2025-08-08 14:52:05 -05:00
Dustin abdd12b1d1 wip: unifi: restore data dir 2025-08-07 18:13:44 -05:00
Dustin 423f28ea53 remote-blackbox: Do not follow HTTP redirects
There are a couple of websites we scrape that simply redirect to another
name (e.g. _pyrocufflink.net_ → _dustin.hatch.name_, _tabitha.biz_ →
_hatchlearningcenter.org_).  For these, we want to track the
availability of the first step, not the last, especially with regard to
their certificate lifetimes.
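
For reference, this amounts to an HTTP probe module along these lines (module name and accepted status codes are illustrative):

```yaml
modules:
  http_noredirect:              # hypothetical module name
    prober: http
    http:
      follow_redirects: false   # probe the redirecting site itself
      valid_status_codes: [200, 301, 302, 308]   # count the redirect as "up"
```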
2025-08-07 11:55:31 -05:00
Dustin 0e15c6a635 needproxy: Add logs.p.b to NO_PROXY
`fluent-bit` has a bug ([#3619], [#3907], [#6759]) in its handling of
the `NO_PROXY` environment variable.  Instead of matching a domain and
all its subdomains, like it claims to do in its [documentation][0], it
only does an exact string match on the full host name.  To work around
this, we need to explicitly list `logs.pyrocufflink.blue` in the
`no_proxy` value; this will not have any impact on other consumers of
this variable, but will make `fluent-bit` work as expected, connecting
directly to Victoria Logs instead of through the proxy.

[0]: https://docs.fluentbit.io/manual/administration/http-proxy#no_proxy
[#3619]: https://github.com/fluent/fluent-bit/issues/3619
[#3907]: https://github.com/fluent/fluent-bit/issues/3907
[#6759]: https://github.com/fluent/fluent-bit/issues/6759
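
Roughly, the resulting environment looks like this (the layout is illustrative; only the extra entry matters):

```yaml
# Proxy environment for fluent-bit (sketch)
no_proxy: .pyrocufflink.blue,logs.pyrocufflink.blue   # exact name added only for fluent-bit
NO_PROXY: .pyrocufflink.blue,logs.pyrocufflink.blue
```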
2025-08-06 10:46:03 -05:00
Dustin dcef009353 fluent-bit: send md alerts to ntfy
For machines that have Linux MD RAID arrays, I want to receive
notifications about the status of the arrays immediately via _ntfy_.  I
had this before with `journal2ntfy`, but I never got around to setting
it up for the current generation of machines (_nvr2_, _chromie_).  Now
that we have `fluent-bit` deployed, we can use its pipeline capabilities
to select the subset of messages for which we want immediate alerts and
send them directly to _ntfy_.  We use a Lua function to transform the
log record into a body compatible with _ntfy_'s JSON publish request;
`fluent-bit` doesn't have any other way to set array values, as needed
for the `tags` member.
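
Roughly, the pipeline looks like this (the input filter, match tag, Lua script path, and function name are assumptions):

```yaml
pipeline:
  inputs:
    - name: systemd                    # a dedicated input just for kernel messages
      tag: mdraid
      systemd_filter: _TRANSPORT=kernel
  filters:
    - name: grep                       # keep only MD RAID messages
      match: mdraid
      regex: MESSAGE md/raid
    - name: lua                        # build the ntfy JSON publish body
      match: mdraid
      script: /etc/fluent-bit/ntfy.lua # hypothetical path
      call: to_ntfy                    # hypothetical function name
  outputs:
    - name: http                       # POST each record to ntfy
      match: mdraid
      host: ntfy.example.org           # placeholder
      port: 443
      tls: on
      uri: /
      format: json
```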
2025-08-05 10:28:20 -05:00
Dustin 0fe296f7f3 fluent-bit: Deploy log collector for Victoria Logs
[fluent-bit][0] is a generic, highly-configurable log collector.  It was
apparently initially developed for fluentd, but it has so many output
capabilities that it works with many different log aggregation systems,
including Victoria Logs.

Although Victoria Logs supports the Loki input format, and therefore
_Promtail_ would work, I want to try to avoid depending on third-party
repositories.  _fluent-bit_ is packaged by Fedora, so there shouldn't be
any dependency issues, etc.

[0]: https://fluentbit.io
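
The output side is just fluent-bit's generic HTTP plugin pointed at Victoria Logs' JSON-lines ingestion endpoint; a minimal sketch (port and endpoint are assumed defaults):

```yaml
pipeline:
  inputs:
    - name: systemd                   # ship the local journal
      tag: journal
  outputs:
    - name: http
      match: journal
      host: logs.pyrocufflink.blue    # placeholder for the log server
      port: 9428                      # assumed default Victoria Logs port
      uri: /insert/jsonline
      format: json_lines
      json_date_format: iso8601
```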
2025-08-05 07:14:08 -05:00
Dustin 9e7b9420f4 k8s-iot-net-ctrl: Add node role taints
Previously, _node-474c83.k8s.pyrocufflink.black_ was tainted
`du5t1n.me/machine=raspberrypi`, which prevented arbitrary pods from
being scheduled on it.  Now that there are two more Raspberry Pi nodes
in the cluster, and arbitrary pods _should_ be scheduled on them, this
taint no longer makes sense.  Instead, having specific taints for the
node's roles is more clear.
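
For example, a role taint can be registered by the kubelet itself; a sketch of the relevant KubeletConfiguration fragment (key and value are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerWithTaints:
  - key: du5t1n.me/role          # hypothetical taint key
    value: iot-net-ctrl
    effect: NoSchedule
```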
2025-07-29 21:44:29 -05:00
Dustin 2b12ce769c remote-blackbox: Scrape Invoice Ninja 2025-07-28 18:28:30 -05:00
Dustin 0ef65e4e5d vm-hosts: Update vm_autostart list
I never remember to update this list when I add/remove VMs.

* _bw0_ has been decommissioned; Vaultwarden now runs in Kubernetes
* _unifi3_ has been replaced by _unifi-nuptials_
* _logs-dusk_ runs Victoria Logs, which will eventually replace Loki
* _node-refrain_ has been replaced by _node-direction_
* _k8s-ctrl0_ has been replaced by _ctrl-crave_ and _ctrl-sycamore_
2025-07-28 18:12:09 -05:00
Dustin e1c157ce87 raspberry-pi: Add collectd sensors, thermal plugins
All the Raspberry Pi machines should have the _sensors_ and _thermal_
plugins enabled so we can monitor their CPU etc. temperatures.
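
Enabling them is just a matter of loading the plugins; a sketch of how the role might drop that in (path and handler name are assumptions):

```yaml
- name: Enable CPU temperature monitoring plugins
  ansible.builtin.copy:
    dest: /etc/collectd.d/temperature.conf   # assumed include directory
    content: |
      LoadPlugin sensors
      LoadPlugin thermal
  notify: restart collectd                   # hypothetical handler
```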
2025-07-28 17:50:39 -05:00
Dustin b2d35ac881 victoria-logs: Listen for Linux netconsole logs
The Linux [netconsole][0] protocol is a very simple plain-text UDP
stream, with no real metadata to speak of.  Although it's not really
syslog, Victoria Logs is able to ingest the raw data into the `_msg`
field, and uses the time of arrival as the `_time` field.

_netconsole_ is somewhat useful for debugging machines that do not have
any other console (no monitor, no serial port), like the Raspberry Pi
CM4 modules in the DeskPi Super 6c cluster.  Unfortunately, its
implementation in the kernel is so simple, even the source address isn't
particularly useful as an identifier, and since Victoria Logs doesn't
track that anyway, we might as well just dump all the messages into a
single stream.

It's not really discussed in the Victoria Logs documentation, but any
time multiple syslog listeners are configured with different properties,
_all_ of the listeners _must_ specify _all_ of those properties.  The
defaults are _not_ applied per listener; the value provided for one
listener will be used for all the others unless they specify their own.
Thus, in order to use the default stream fields for the "regular" syslog
listener, we have to set them explicitly.

[0]: https://www.kernel.org/doc/html/latest/networking/netconsole.html
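
On the sending side, netconsole is just a module parameter; a sketch of how the senders might be configured (addresses, ports, interface, and MAC are placeholders):

```yaml
- name: Point netconsole at the Victoria Logs listener
  ansible.builtin.copy:
    dest: /etc/modprobe.d/netconsole.conf
    content: >
      options netconsole
      netconsole=6665@192.0.2.10/eth0,5514@192.0.2.20/00:11:22:33:44:55

- name: Load netconsole at boot
  ansible.builtin.copy:
    dest: /etc/modules-load.d/netconsole.conf
    content: netconsole
```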
2025-07-27 17:47:31 -05:00
Dustin c67e5f4e0c cm4-k8s-node: Add group
The Raspberry Pi CM4 nodes on the DeskPi Super 6c cluster board are
members of the _cm4-k8s-node_ group.  This group is a child of
_k8s-node_ and overrides the data volume configuration and node
labels.
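
In inventory terms, roughly (host names are placeholders):

```yaml
k8s-node:
  children:
    cm4-k8s-node:
      hosts:
        cm4-node0.k8s.pyrocufflink.black:
        cm4-node1.k8s.pyrocufflink.black:
```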
2025-07-27 17:45:46 -05:00
Dustin 0eb6220672 r/mod_md: Configure Apache for ACME certificates
Apache supports fetching server certificates via ACME (e.g. from Let's
Encrypt) using a new module called _mod_md_.  Configuring the module is
fairly straightforward, mostly consisting of `MDomain` directives that
indicate what certificates to request.  Unfortunately, there is one
rather annoying quirk: the certificates it obtains are not immediately
available to use, and the server must be reloaded in order to start
using them.  Fortunately, the module provides a notification mechanism
via the `MDNotifyCmd` directive, which will run the specified command
after obtaining a certificate.  The command is executed with the
privileges of the web server, which does not have permission to reload
itself, so we have to build in some indirection in order to trigger the
reload: the notification runs a script that creates an empty file in the
server's state directory; systemd is watching for that file to be
created, then starts another service unit to trigger the actual reload,
then removes the trigger file.

Website roles, etc. that want to switch to using _mod_md_ to manage
their certificates should depend on this role and add an `MDomain`
directive to their Apache configuration file fragments.
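
A sketch of the Apache side (domain, e-mail address, and script path are placeholders); the notify script only touches a file that a systemd path unit watches, as described above:

```yaml
- name: Configure mod_md
  ansible.builtin.copy:
    dest: /etc/httpd/conf.d/md.conf
    content: |
      MDCertificateAgreement accepted
      MDContactEmail hostmaster@example.org
      MDomain www.example.org
      MDNotifyCmd /usr/local/libexec/mod-md-notify
```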
2025-07-23 10:07:16 -05:00
Dustin c7374c8cca r/k8s-controller: Deploy HAProxy
The _haproxy_ role only installs HAProxy and provides some basic global
configuration; it expects another role to depend on it and provide
concrete proxy configuration with drop-in configuration files.  Thus, we
need a role specifically for the Kubernetes control plane nodes to
provide the configuration to proxy for the API server.
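
Roughly what the drop-in looks like (bind port, drop-in path, and backend addresses are assumptions):

```yaml
- name: Proxy the Kubernetes API servers
  ansible.builtin.copy:
    dest: /etc/haproxy/conf.d/k8s-apiserver.cfg   # assumed drop-in directory
    content: |
      frontend k8s-apiserver
          bind *:8443
          mode tcp
          default_backend k8s-apiserver

      backend k8s-apiserver
          mode tcp
          option tcp-check
          server ctrl-crave 192.0.2.11:6443 check
          server ctrl-sycamore 192.0.2.12:6443 check
  notify: reload haproxy                          # hypothetical handler
```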
2025-07-22 16:21:49 -05:00
Dustin 381ffe7112 kubernetes: Configure keepalived on control plane
Control plane nodes will now run _keepalived_, to provide a "floating"
IP address that is assigned to one of the nodes at a time.  This
address (172.30.0.169) is now the target of the DNS A record for
_kubernetes.pyrocufflink.blue_, so clients will always communicate with
the server that currently holds the floating address, whichever that may
be.

I was originally inspired by the official Kubernetes [High Availability
Considerations][0] document when designing this.  At first, I planned to
deploy _keepalived_ and HAProxy as DaemonSets on the control plane
nodes, but this ended up being somewhat problematic whenever all of the
control plane nodes would go down at once, as the _keepalived_ and
HAProxy pods would not get scheduled, and thus no clients could
communicate with the API servers.

[0]: 9d7cfab6fe/docs/ha-considerations.md
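
The keepalived configuration itself is tiny; a sketch (interface, router ID, and priority are placeholders):

```yaml
- name: Configure the floating API address
  ansible.builtin.copy:
    dest: /etc/keepalived/keepalived.conf
    content: |
      vrrp_instance k8s_api {
          state BACKUP
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              172.30.0.169
          }
      }
  notify: restart keepalived        # hypothetical handler
```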
2025-07-22 16:21:49 -05:00
Dustin 0e6cc4882d Add k8s-test group
This group is used for temporary machines while testing Kubernetes node
deployment changes.
2025-07-22 16:21:49 -05:00
Dustin f7546791cc kubelet: Fix CA cert for Docker Hub proxy
The man page for _containers-certs.d(5)_ says that subdirectories of
`/etc/containers/certs.d` should be named `host:port`, however, this is
a bit misleading.  It seems instead that the directory name must match
the registry name exactly as it appears in image references, so in the
case of a server that supports HTTPS on port 443, where the port is
omitted from the image name, it must also be omitted from the `certs.d`
subdirectory name.
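
Concretely (registry host name is a placeholder), the CA certificate has to
live under the bare host name:

```yaml
- name: Install the registry CA certificate
  ansible.builtin.copy:
    src: registry-ca.crt
    dest: /etc/containers/certs.d/registry.example.org/ca.crt   # not registry.example.org:443/
```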
2025-07-16 16:05:19 -05:00
Dustin b9a046c7f4 plugins: Add lookup cache plugin
One major weakness with Ansible's "lookup" plugins is that they are
evaluated _every single time they are used_, even indirectly.  This
means, for example, a shell command could be run many times, potentially
resulting in different values, or executing a complex calculation that
always provides the same result.  Ansible does not have a built-in way
to cache the result of a `lookup` or `query` call, so I created this
one.  It's inspired by [ansible-cached-lookup][0], which didn't actually
work and is apparently unmaintained.  Instead of using a hard-coded
file-based caching system, however, my plugin uses Ansible's
configuration and plugin infrastructure to store values with any
available cache plugin.

Although looking up the _pyrocufflink.net_ wildcard certificate with the
Kubernetes API isn't particularly expensive by itself right now, I can
envision several other uses that may be.  Having this plugin available
could speed up future playbooks.

[0]: https://pypi.org/project/ansible-cached-lookup
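
Hypothetical usage (the plugin name and argument style shown here are illustrative, not necessarily the plugin's actual interface):

```yaml
wildcard_cert_secret: >-
  {{ lookup('cached', 'kubernetes.core.k8s',
            kind='Secret', namespace='default',
            resource_name='wildcard-pyrocufflink-net') }}
```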
2025-07-13 16:02:57 -05:00
Dustin 906819dd1c r/apache: Use variables for HTTPS cert/key content
Using files for certificates and private keys is less than ideal.
The only way to "share" a certificate between multiple hosts is with
symbolic links, which means the configuration policy has to be prepared
for each managed system.  As we're moving toward a much more dynamic
environment, this becomes problematic; the host-provisioner will never
be able to copy a certificate to a new host that was just created.
Further, I have never really liked the idea of storing certificates and
private keys in Git anyway, even if it is in a submodule with limited
access.
2025-07-13 16:02:57 -05:00
Dustin 6667066826 kubelet: Configure cri-o container registries
The _containers-image_ role configures _containers-registries.conf(5)_ and
_containers-certs.d(5)_, which are used by CRI-O (and `podman`).
Specifically, we'll use these to redirect requests for images on Docker
Hub (docker.io) to the internal caching proxy.
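
The interesting part is a registries.conf drop-in along these lines (proxy host name and path are placeholders):

```yaml
- name: Redirect Docker Hub pulls through the caching proxy
  ansible.builtin.copy:
    dest: /etc/containers/registries.conf.d/docker-hub-proxy.conf
    content: |
      [[registry]]
      prefix = "docker.io"
      location = "registry.example.org/docker.io"
```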
2025-07-12 16:45:47 -05:00
Dustin f8f3dd5f83 docker-proxy: Deploy a proxy/cache for Docker Hub
Docker Hub's rate limits are so low now that they've started to affect
my home lab.  Deploying a caching proxy and directing all pull requests
through it should prevent exceeding the limit.  It will also help
prevent containers from starting if access to the Internet is down, as
long as their images have been cached recently.
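
One way to do this is the CNCF Distribution registry in pull-through cache mode (not necessarily what this role actually deploys); the relevant configuration is tiny:

```yaml
# Distribution config.yml fragment (sketch)
proxy:
  remoteurl: https://registry-1.docker.io
  # username/password are optional; authenticated pulls get a higher rate limit
```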
2025-07-12 16:45:47 -05:00
Dustin 6447ff5f4b v-l: Add data volume for logs storage 2025-07-12 16:08:40 -05:00
Dustin 87d90a617d minio-backups: Disable nginx access logs entirely
The _nginx_ access log files are absolutely spammed with requests from
Restic and WAL-G, to the point where they fill the log volume on
_chromie_ every day.  They're not particularly useful anyway; I've never
looked at them, and any information they contain can be obtained in
another way, if necessary, for troubleshooting.
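
The change amounts to a single directive; a sketch of the drop-in (path and handler name are assumptions):

```yaml
- name: Disable nginx access logging
  ansible.builtin.copy:
    dest: /etc/nginx/conf.d/no-access-log.conf
    content: |
      access_log off;
  notify: reload nginx          # hypothetical handler
```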
2025-07-03 11:15:40 -05:00
Dustin d4d3f0ef81 r/victoria-logs: Deploy VictoriaLogs
I've become rather frustrated with Grafana Loki lately.  It has several
bugs that affect my usage, including issues with counting and
aggregation, completely broken retention and cleanup, spamming itself
with bogus error log messages, and more.  Now that VictoriaLogs has
first-class support in Grafana and support for alerts, it seems like a
good time to try it out.  It's under very active development, with bugs
getting fixed extremely quickly, and new features added constantly.
Indeed, as I was experimenting with it, I thought, "it would be nice if
the web UI could decode ANSI escapes for terminal colors," and just a
few days later, that feature was added!  Native support for syslog is
also a huge benefit, as it will allow me to collect logs directly from
network devices, without first collecting them into a file on the Unifi
controller.

This new role deploys VictoriaLogs in a manner very similar to how I
have Loki set up, as a systemd-managed Podman container.  As it has no
built-in authentication or authorization, we rely on Caddy to handle
that.  As with Loki, mTLS is used to prevent anonymous access to
querying the logs, however, authentication via Authelia is also an
option for human+browser usage.  I'm re-using the same certificate
authority as with Loki to simplify Grafana configuration.  Eventually, I
would like to have a more robust PKI, probably using OpenBao, at which
point I will (hopefully) have decided which log database I will be
using, and can use a proper CA for it.
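
The Caddy side is roughly this (upstream port, CA path, and include directory are assumptions; the Authelia route is omitted):

```yaml
- name: Proxy Victoria Logs with mTLS
  ansible.builtin.copy:
    dest: /etc/caddy/Caddyfile.d/victoria-logs.caddyfile   # assumed include directory
    content: |
      logs.pyrocufflink.blue {
          tls {
              client_auth {
                  mode require_and_verify
                  trusted_ca_cert_file /etc/caddy/loki-ca.crt
              }
          }
          reverse_proxy 127.0.0.1:9428
      }
```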
2025-05-30 21:19:05 -05:00
Dustin 1768678213 frigate: Set logout URL
Although I'm sure it will never be used, we might as well set the logout
URL to the correct value.  When the link is clicked, the browser will
navigate to the Authelia logout page, which will invalidate all SSO
sessions.
2025-04-21 08:28:49 -05:00
Dustin 113ffa2b96 r/frigate: Update to v0.15
Frigate has evolved a lot over the past year or so since v0.13.
Notably, some of the configuration options have been renamed, and
_events_ have become _alerts_ and _detections_.  There's also now
support for authentication, though we don't need it because we're using
Authelia.
2025-04-20 16:23:04 -05:00
Dustin 1b94530b1f frigate: Add front yard camera
We're trying to sell the Hustler lawn mower, so we plan to set it out
at the end of the driveway for passers-by to see.  I've temporarily
installed one of the Annke cameras in the kitchen, pointed out the
front window, to monitor it.
2025-04-20 14:10:27 -05:00
Dustin 6df0cc39da unifi: Back up with Restic
The Unifi Network data will now be backed up by Restic.
2025-03-29 09:36:37 -05:00
Dustin cdd64b6309 unifi: Fix Promtail log scrape paths
The linuxserver.io Unifi container stored Unifi server and device logs
under `/var/lib/unifi/logs`, while the new container stores them under
`/var/log/unifi`.
2025-03-29 09:28:48 -05:00
Dustin db5d1fb91a unifi: Switch from nginx to Caddy
Mostly for built-in ACME support.
2025-03-16 17:17:00 -05:00
Dustin c300dc1b6c chrony: Add role/PB for chrony
I continually struggle with machines' (physical and virtual, even the
Roku devices!) clocks getting out of sync.  I have been putting off
fixing this because I wanted to set up a Windows-compatible NTP server
(i.e. on the domain controllers, with Kerberos signing), but there's
really no reason to wait for that to fix the clocks on all the
non-Windows machines, especially since there are exactly 0 Windows
machines on the network right now.

The *chrony* role and corresponding `chrony.yml` playbook are generic,
configured via the `chrony_pools`, `chrony_servers`, and `chrony_allow`
variables.  The values for these variables will configure the firewall
to act as an NTP server, synchronizing with the NTP pool on the
Internet, while all other machines will synchronize with it.  This
allows machines on networks without Internet access to keep their clocks
in sync.
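
For example (all values are illustrative):

```yaml
# Firewall: act as the NTP server for the LAN
chrony_pools:
  - pool.ntp.org
chrony_allow:
  - 172.30.0.0/16

# Everything else: synchronize with the firewall
chrony_servers:
  - firewall.pyrocufflink.blue     # placeholder host name
```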
2025-03-16 16:37:19 -05:00
Dustin e4a4944fbc postgresql: Add receipts/user DB
The Receipt application needs a PostgreSQL database on the central
server.
2025-03-16 14:47:30 -05:00
Dustin e9d6020563 all: Set root authorized keys
The `root_authorized_keys` variable was originally defined only for the
*pyrocufflink* group.  This used to effectively be "all" machines, since
everything was a member of the AD domain.  Now that we're moving away
from that deployment model, we still want to have the break-glass
option, so we need to define the authorized keys for the _all_ group.
2025-02-08 15:29:57 -06:00
Dustin d916545e29 synapse: Remove group variables
This was the last group that had an entire file encrypted with Ansible
Vault.  Now that the Synapse server is long gone, rather than convert it
to having individually-encrypted values, we can get rid of it entirely.
2025-02-08 15:29:57 -06:00
Dustin 8bd0722422 pyrocufflink: Remove root password
While having a password set for _root_ provides a convenient way of
accessing a machine even if it is not available via SSH, using a static
password in this way is quite insecure and not worth the risk.  I may
try to come up with a better way to set a unique password for each
machine eventually, but for now, having this password here is too
dangerous to keep.
2025-02-08 15:29:57 -06:00
Dustin 164d86d646 r/postgresql-data: Manage users and databases
This role can ensure PostgreSQL users and databases are created for
applications that are not themselves managed by Ansible.  Notably, we
need to do this for anything deployed in Kubernetes that uses the
central database server.
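
Internally, this presumably boils down to tasks like these (database name and password variable are illustrative):

```yaml
- name: Create application role
  community.postgresql.postgresql_user:
    name: receipts
    password: "{{ receipts_db_password }}"   # hypothetical variable
- name: Create application database
  community.postgresql.postgresql_db:
    name: receipts
    owner: receipts
```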
2025-02-01 17:36:58 -06:00
Dustin f705e98fab hosts: Add k8s-iot-net-ctrl group
The *k8s-iot-net-ctrl* group is for the Raspberry Pi that has the Zigbee
and Z-Wave controllers connected to it.  This node runs the Zigbee2MQTT
and ZWaveJS2MQTT servers as Kubernetes pods.
2025-01-31 19:49:51 -06:00
Dustin 272e89d65a Merge remote-tracking branch 'refs/remotes/origin/master' 2025-01-28 17:34:37 -06:00
Dustin 33f315334e users: Configure sudo on some machines
`doas` is not available on Alma Linux, so we still have to use `sudo` on
the VPS.
2025-01-26 13:08:59 -06:00
Dustin 304cacb95b dch-proxy: Proxy Victoria Metrics
Need to expose Victoria Metrics to the Internet so the `vmagent` process
on the VPS can push the metrics it has scraped from its Blackbox
exporter.  Authelia needs to allow access to the `/insert/` paths, of
course.
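
The Authelia rule is roughly (the public domain name is a placeholder):

```yaml
access_control:
  rules:
    - domain: metrics.pyrocufflink.net   # placeholder
      resources:
        - '^/insert/.*$'
      policy: bypass
```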
2025-01-26 13:08:59 -06:00
Dustin ad0bd7d4a5 remote-blackbox: Add group
The _remote-blackbox_ group defines a system that runs
_blackbox-exporter_ and _vmagent_ in a remote (cloud) location.  This
system will monitor our public web sites.  This will give a better idea
of their availability from the perspective of a user on the Internet,
which can be affected by factors that are not necessarily visible from
within the network.
2025-01-26 13:08:59 -06:00
Dustin 3ebf91c524 dch-proxy: Update Vaultwarden backend
Vaultwarden is now hosted in Kubernetes.  The old
_bw0.pyrocufflink.blue_ will be decommissioned.
2025-01-10 20:03:35 -06:00
Dustin d2e8b9237f Enable doas become plugin for non AD members
The new servers that are not members of the AD domain use `doas` instead
of `sudo`.
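
In group variables, that amounts to (sketch):

```yaml
ansible_become_method: community.general.doas
```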
2024-11-25 22:01:40 -06:00
Dustin d993d59bee Deploy new Kubernetes nodes
The *stor-* nodes are dedicated to Longhorn replicas.  The other nodes
handle general workloads.
2024-11-24 10:33:21 -06:00
Dustin 7a5f01f8a3 r/doas: Configure sudo alternative
In the spirit of replacing bloated tools with unnecessary functionality
with smaller, more focused alternatives, we can use `doas` instead of
`sudo`.  Originally, it was a BSD tool, but the Linux port supports PAM,
so we can still use `pam_ssh_agent_auth` for passwordless
authentication.
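
The resulting configuration is a one-liner (the rule shown is illustrative):

```yaml
- name: Configure doas
  ansible.builtin.copy:
    dest: /etc/doas.conf
    mode: "0600"
    content: |
      permit keepenv :wheel as root
```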
2024-11-24 10:33:21 -06:00
Dustin c95a96a33c users: Manage static user accounts
The Samba AD domain performs two important functions: centralized user
identity mapping via LDAP, and centralized authentication via
Kerberos/GSSAPI.  Unfortunately, Samba, on both domain controllers and
members, is quite frustrating.  The client, _winbind_, frequently just
stops working and needs to have its cache flushed in order to resolve
user IDs again.  It also takes quite a lot of memory, something rather
precious on Raspberry Pis.  The DC is also somewhat flaky at times, and
cumbersome to upgrade.  In short, I really would like to get rid of as
much of it as possible.

For most use cases, OIDC can replace Kerberos.  For SSH specifically, we
can use SSH certificates (which are issued in exchange for OIDC tokens).
Unfortunately, user and group accounts still need ID numbers assigned,
which is what _winbind_ does.  In reality, there's only one user that's
necessary: _dustin_.  It doesn't make sense to bring along all the
baggage of Samba just to map that one account.  Instead, it's a lot
simpler and more robust to create it statically.
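
Creating it statically is trivial by comparison (UID/GID values are assumptions):

```yaml
- name: Create static group
  ansible.builtin.group:
    name: dustin
    gid: 1000            # assumed; must be consistent across machines
- name: Create static user account
  ansible.builtin.user:
    name: dustin
    uid: 1000            # assumed; must be consistent across machines
    group: dustin
    groups: wheel
    append: true
```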
2024-11-24 10:33:21 -06:00
Dustin 0f600b9e6e kubernetes: Manage worker nodes
So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible.  I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want.  I like the automated updates,
but that can be accomplished with _dnf-automatic_.  I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never did come up with a good way to manage application-level
configuration on FCOS machines.  None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, which mostly resulted in my manually
managing configuration on nodes individually.  Managing OS-level
configuration is also rather cumbersome, since it requires redeploying
the machine entirely.  Altogether, I just don't think FCOS fits with my
model of managing systems.

This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux.  It
also adds two new deploy scripts, `k8s-worker.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
2024-11-24 10:33:21 -06:00
Dustin 164f3b5e0f r/wal-g-pg: Handle versioned storage locations
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible.  This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.

Unfortunately, WAL-G does not natively handle this.  In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives.  Thus, we
have to include the version number in the target path (S3 prefix)
manually.  We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process.  As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.

To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G.  Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`.  This template
can include a `@PGVERSION@` specifier.  The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`.  This script
is called both by Ansible in a handler after generating the template
configuration, and also as a post-upgrade action by the
`postgresql-upgrade` script.

I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version.  Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored.  Thus, we have to use the installed server version,
rather than the data directory version.  This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file.  This could result in new WAL archives/backups
being uploaded to the old target location.  These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files.  This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed.  The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
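
For illustration, the template might look like this (bucket layout and endpoint are placeholders); `wal-g-config` then just substitutes `@PGVERSION@` with the installed server version:

```yaml
# wal-g.yml.in (sketch)
WALG_S3_PREFIX: s3://postgresql/backups/@PGVERSION@
AWS_ENDPOINT: https://minio.example.org   # placeholder
WALG_COMPRESSION_METHOD: lz4
```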
2024-11-17 10:27:31 -06:00
Dustin c1dc52ac29 Merge branch 'loki' 2024-11-05 07:01:13 -06:00
Dustin 39d9985fbd r/loki-caddy: Caddy reverse proxy for Loki
Caddy handles TLS termination for Loki, automatically requesting and
renewing its certificate via ACME.
2024-11-05 06:54:27 -06:00