Commit Graph

893 Commits

8dfb2e3e4f r/frigate: Clean up Frigate role
* Switch to Quadlet-style `.container` for systemd unit
* Update to new image tag naming scheme (not arch-specific)
* Use environment variables for secrets
* Allow the entire `frigate_config` variable to be overridden
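
As a sketch, the Quadlet unit the role deploys might look something
like this; the image tag, paths, and file names are illustrative, not
taken from the actual role:

```yaml
- name: Install Frigate Quadlet unit
  copy:
    dest: /etc/containers/systemd/frigate.container
    content: |
      [Container]
      Image=ghcr.io/blakeblackshear/frigate:stable
      EnvironmentFile=/etc/frigate/secrets.env
      Volume=/etc/frigate/config.yml:/config/config.yml:Z

      [Install]
      WantedBy=multi-user.target
```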
2024-08-12 18:47:04 -05:00
7b61a7da7e r/useproxy: Configure system-wide proxy
The *useproxy* role configures the `http_proxy` et al. environment
variables for systemd services and interactive shells.  Additionally, it
configures Yum repositories to use a single mirror via the `baseurl`
setting, rather than a list of mirrors via `metalink`, since a) the
proxy only allows access to _dl.fedoraproject.org_ and b) the proxy
caches RPM files, which is only effective if all clients use the same
mirror all the time.

The `useproxy.yml` playbook applies this role to servers in the
*needproxy* group.
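
A minimal sketch of the systemd half of this, assuming a hypothetical
proxy URL and drop-in path:

```yaml
- name: Configure proxy for systemd services
  copy:
    dest: /etc/systemd/system.conf.d/proxy.conf
    content: |
      [Manager]
      DefaultEnvironment="http_proxy=http://gw1.pyrocufflink.blue:3128" "https_proxy=http://gw1.pyrocufflink.blue:3128"
```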
2024-08-12 18:47:04 -05:00
f51e0fe2a9 r/samba-dc: Enable auto-restart for samba.service
Setting `RestartSec` is not enough to enable auto-restart; the `Restart`
option is also required, as it defaults to `no`.
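
The complete pair of options, sketched as a unit drop-in (the
`RestartSec` value is illustrative):

```yaml
- name: Enable auto-restart for samba.service
  copy:
    dest: /etc/systemd/system/samba.service.d/restart.conf
    content: |
      [Service]
      # Restart defaults to "no"; "always" matches the goal of
      # restarting whenever the service stops for any reason
      Restart=always
      RestartSec=10
```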
2024-08-09 08:11:39 -05:00
3214d4b9b2 gw1/squid: Allow UniFi controller to OCI registries
The UniFi Network server needs to be able to access the
_linuxserver.io_/GitHub and Docker Hub OCI image registries for the
UniFi Network and Caddy container images, respectively.
2024-07-31 18:41:13 -05:00
805a900f8a gw1/squid: Allow Invoice Ninja to Stripe API
HLC uses Invoice Ninja's Stripe integration to process credit card
payments from parents.
2024-07-14 15:45:36 -05:00
ed22f6311c r/samba-dc: Auto restart samba
Although it's rare, sometimes Samba crashes or fails to start.  When
this happens, restarting it is almost always enough to get it working
again.  Since all sorts of authentication problems can occur if one of
the domain controllers is down, it's probably best to just have systemd
automatically restart _samba.service_ if it ever stops for any reason.
2024-07-03 10:30:20 -05:00
96bc8c2c09 vm-hosts: Update autostart list
*k8s-amd64-n0*, *k8s-amd64-n1*, and *k8s-amd64-n2* have been replaced by
*k8s-amd64-n4*, *k8s-amd64-n5*, and *k8s-amd64-n6*, respectively.  *db0* is
the new database server, which needs to be up before anything in
Kubernetes starts, since a lot of applications running there depend on
it.
2024-07-03 08:52:15 -05:00
afb7030e44 migration: Add PostgreSQL server migration script
This script captures the steps taken to migrate from the PostgreSQL
server in the Kubernetes cluster, managed by the _Postgres Operator_, to the
dedicated server on _db0.pyrocufflink.blue_.  The data were restored
from the backups created by _wal-e_, and then the new server was
promoted to primary.  Finally, I cleaned up the roles and databases that
are no longer needed.
2024-07-02 20:45:12 -05:00
4f202c55e4 r/postgres-exporter: Deploy postgres-exporter
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus.  It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from the `pg_stat_activity`, et al. views.  It needs the `pg_monitor`
role in order to be allowed to read the relevant metrics.

Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.

[0]: https://github.com/prometheus-community/postgres_exporter/
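
A sketch of the two pieces described above; the user/role name is an
assumption, and the `CREATE ROLE` task is not idempotent as written:

```yaml
- name: Create OS user for peer authentication
  user:
    name: postgres_exporter
    system: true

- name: Create matching PostgreSQL role with pg_monitor
  become: true
  become_user: postgres
  # IN ROLE pg_monitor grants read access to the statistics views
  command: psql -c "CREATE ROLE postgres_exporter LOGIN IN ROLE pg_monitor"
```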
2024-07-02 20:44:29 -05:00
3f5550ee6c postgresql: wal-g: Set PGHOST
By default, WAL-G tries to connect to the PostgreSQL server via TCP
socket on the loopback interface.  Our HBA configuration requires
certificate authentication for TCP sockets, so we need to configure
WAL-G to use the UNIX socket.
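
The relevant setting, assuming WAL-G reads a YAML configuration file
under `/etc/postgresql` (the file name is a guess; the socket directory
is the Fedora default):

```yaml
- name: Point WAL-G at the local UNIX socket
  lineinfile:
    path: /etc/postgresql/walg.yaml
    line: "PGHOST: /var/run/postgresql"
    create: true
```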
2024-07-02 20:44:29 -05:00
6caf28259e hosts: db0: Promote to primary
All data have been migrated from the PostgreSQL server in Kubernetes and
the three applications that used it (Firefly-III, Authelia, and Home
Assistant) have been updated to point to the new server.

To avoid commingling the backups from the old server with those from the
new server, we're reconfiguring WAL-G to push and pull from a new S3
prefix.
2024-07-02 20:44:29 -05:00
090ebb0c1b r/wal-g-pg: Schedule daily backups
WAL archives are not much good without a base backup onto which they
can be applied.  Thus, we need to schedule WAL-G to create and upload a
backup periodically.
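
An illustrative service/timer pair; the unit names, data directory, and
schedule details are assumptions:

```yaml
- name: Install base backup service
  copy:
    dest: /etc/systemd/system/wal-g-backup.service
    content: |
      [Service]
      Type=oneshot
      User=postgres
      ExecStart=/usr/bin/wal-g backup-push /var/lib/pgsql/data

- name: Install daily backup timer
  copy:
    dest: /etc/systemd/system/wal-g-backup.timer
    content: |
      [Timer]
      OnCalendar=daily
      RandomizedDelaySec=1h

      [Install]
      WantedBy=timers.target
```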
2024-07-02 20:44:29 -05:00
b83c6de28a gw1/squid: Add more URLs for Fedora/CoreOS updates
After adding these, *unifi2.pyrocufflink.blue* (FCOS) was finally able
to update successfully.
2024-07-02 20:44:29 -05:00
dfc1a36ee5 deploy.sh: Wrapper for deployment scripts
The `deploy.sh` script ensures the execution environment is correct by
configuring the Ansible Vault secret, unlocking the `rbw` vault, and
requesting an SSH client certificate.  It then runs the specified
end-to-end deployment script from the `deploy` directory.
2024-07-02 20:44:29 -05:00
2ce211b5ea hosts: Add db0.p.b
*db0.pyrocufflink.blue* will be the primary server in the new PostgreSQL
database cluster.  We're starting with Fedora 39 so we can have
PostgreSQL 15, to match the version managed by the Postgres Operator in
the Kubernetes cluster right now.
2024-07-02 20:44:29 -05:00
d8472c64a2 wait-for-host: PB to wait for a host to come up
This playbook just waits for a machine to become available.  It's useful
when running Ansible immediately after creating a new machine.
2024-07-02 20:44:29 -05:00
5958904fa3 bootstrap: PB to bootstrap a new machine
I've actually had this playbook for a _long_ time, just never bothered
to commit it.  It's useful the very first time Ansible runs against a
managed node, to configure all the basic stuff.
2024-07-02 20:44:29 -05:00
056548e3e0 newvm: Add script to run virt-install
For the longest time, whenever I needed to create a new virtual machine,
I just used `Ctrl+R` to find the last `virt-install` command I had run
and tweaked it for the new machine.  At some point, my `~/.zsh_history`
overflowed, though, so the command I had been running got lost.  To
avoid this silliness in the future, I've created a script that runs
`virt-install` for me.  As a bonus, it has some configuration flags for
the parameters that often vary between machines.  For most machines,
though, the script can be run as simply as `newvm.sh name`.
2024-07-02 20:44:29 -05:00
208fadd2ba postgresql: Configure for dedicated DB servers
I am going to use the *postgresql* group for the dedicated database
servers.  The configuration for those machines will be quite a bit
different than for the one existing machine that is a member of that
group already: the Nextcloud server.  Rather than undefine/override all
the group-level settings at the host level, I have removed the Nextcloud
server from the *postgresql* group, and updated the `nextcloud.yml`
playbook to apply the *postgresql-server* role itself.

Eventually, I want to move the Nextcloud database to the central
database servers.  At that point, I will remove the *postgresql-server*
role from the `nextcloud.yml` playbook.
2024-07-02 20:44:29 -05:00
54ad68b5bb datavol: Playbook to provision a data volume
The `datavol.yml` playbook can provision one or more data volumes on
a managed node, using the definitions in the `data_volumes` variable.
This variable must contain a list of dictionaries with the following
keys (see the example after this list):

* `dev`: The block device where the data volume is stored (e.g.
  `/dev/vdb`)
* `fstype`: The type of filesystem to create on the block device
* `mountpoint`: The location in the filesystem hierarchy where the
  volume is mounted
* `opts`: (Optional) options to pass to the `mkfs` program when
  formatting the device
* `mountopts`: (Optional) additional options to pass to the `mount`
  program when mounting the filesystem
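
For example, a single volume might be defined like this (all values
illustrative):

```yaml
data_volumes:
  - dev: /dev/vdb
    fstype: ext4
    mountpoint: /var/lib/pgsql
    opts: -m 0
    mountopts: noatime
```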
2024-07-02 20:44:29 -05:00
edffaf258b r/wal-g-pg: Deploy WAL-G for PostgreSQL
This role installs `wal-g` from the DCH Yum repository, and creates a
configuration file for it in `/etc/postgresql`.  Additionally, it
installs a custom SELinux policy module that allows `wal-g` to run in
the `postgresql_t` domain (i.e. when spawned by the PostgreSQL server).
2024-07-02 20:44:29 -05:00
99c309240c r/postgresql-cert: ACME certificates using certbot
This role can be used to get a server certificate for PostgreSQL from an
ACME CA using `certbot`.  It fetches the initial certificate and copies
it to the PostgreSQL configuration directory.  It also sets up a
post-renewal hook script that copies updated certificates and reloads
the server.
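
A sketch of such a hook, using certbot's standard deploy-hook directory
(destination paths are assumptions):

```yaml
- name: Install post-renewal hook
  copy:
    dest: /etc/letsencrypt/renewal-hooks/deploy/postgresql
    mode: "0755"
    content: |
      #!/bin/sh
      # certbot sets $RENEWED_LINEAGE for deploy hooks
      install -o postgres -g postgres -m 0600 \
          "$RENEWED_LINEAGE/fullchain.pem" /etc/postgresql/server.crt
      install -o postgres -g postgres -m 0600 \
          "$RENEWED_LINEAGE/privkey.pem" /etc/postgresql/server.key
      systemctl reload postgresql
```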
2024-07-02 20:44:29 -05:00
9e742dc217 roles/postgresql-server: Rewrite role
This rewrite brings a lot of improvements and new functionality to the
*postgresql-server* role.  The most noticeable change is the
introduction of the `postgresql_config_dir` variable, which can be used
to specify a different location for the PostgreSQL server configuration
files, separate from the data storage directory.  By default, the
variable is set to `/etc/postgresql`.  For some situations, it may be
necessary to disable this functionality, which can be accomplished by
setting the value of `postgresql_config_dir` to the same path as
`pgdata_dir`.  Note also that the `postgresql-setup` tool, and the
corresponding `postgresql-check-db-dir` script, which are included in
the Fedora/Red Hat distribution of PostgreSQL, do not support having
separate configuration and data directories, so their use has to be
disabled.

Another significant improvement is how the `postgresql.conf` file is
generated.  Any setting can now be set via the `postgresql_config`
variable; any key in this dictionary will be written to the
configuration file.  Note that configuration file syntax requires
single quotes around string values, so these have to be included in the
YAML value.
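
For illustration (the values here are arbitrary, not defaults):

```yaml
postgresql_config:
  max_connections: 200
  # string settings need embedded single quotes
  listen_addresses: "'*'"
  log_line_prefix: "'%m [%p] '"
```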

To support deploying standby servers, the role now supports running a
command to restore from a backup instead of running `initdb`.
Additionally, the `postgresql_standby` variable can be set to `true`
to create the `recovery.signal` file, configuring the server as a
standby.
2024-07-02 20:44:29 -05:00
93eeaaaed4 gw1: Allow access to DCH yum repo via proxy
Allows installing _sshca-cli-systemd_ from Kickstart.
2024-06-26 18:39:25 -05:00
c25a88bb4d create-dc: Add PB for creating new DCs
The `create-dc.yml` playbook is just a wrapper for all the other
playbooks that need to be run when creating a new domain controller.
2024-06-23 10:43:15 -05:00
0af8a28f1a vmhost: Run on a single host at a time
This will help avoid complete outages in case of a bad configuration.
2024-06-23 10:43:15 -05:00
24a0dfa750 samba-dc: Gather facts for all DCs
Since the `samba-dc.yml` playbook executes on a single host at a time,
if the fact cache is not current, only the facts for the current host
will be available.  This causes some tasks, especially the
configuration of the trusted SSH host keys for `sysvolsync`, to use
incorrect data.  To avoid this, we need to explicitly gather facts for
all of the domain controllers before starting to configure any of them.
2024-06-23 10:43:15 -05:00
b5eac25f14 r/minio: Fix ExecReload command
Sending SIGHUP to the main PID (i.e. conmon) ends up stopping the
service.  What we really want is to send the signal to the main PID _inside_
the container.  We can achieve this by using `podman kill` instead of
`kill`.
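
The fix, sketched as a unit drop-in; the container name is an
assumption:

```yaml
- name: Reload minio inside the container
  copy:
    dest: /etc/systemd/system/minio.service.d/reload.conf
    content: |
      [Service]
      # clear the old ExecReload before replacing it
      ExecReload=
      ExecReload=/usr/bin/podman kill --signal HUP minio
```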
2024-06-23 10:43:15 -05:00
332ef18600 hosts: Decommission old Kubernetes workers
*k8s-amd64-n0.pyrocufflink.blue*, *k8s-amd64-n1.pyrocufflink.blue*, and
*k8s-amd64-n2.pyrocufflink.blue*, which ran Fedora Linux, have been
replaced by *k8s-amd64-n4.pyrocufflink.blue*,
*k8s-amd64-n5.pyrocufflink.blue*, and *k8s-amd64-n6.pyrocufflink.blue*,
respectively.  The new machines run Fedora CoreOS, and are thus not
managed by the Ansible configuration policy.
2024-06-23 10:43:15 -05:00
7201f7ed5c vm-hosts: Expose storage VLAN to VMs
To improve the performance of persistent volumes accessed directly from
the Synology by Kubernetes pods, I've decided to expose the storage
network to the Kubernetes worker node VMs.  This way, iSCSI traffic does
not have to go through the firewall.

I chose not to reuse the physical interfaces that are already directly
connected to the storage network, for two reasons: 1) I like the
physical separation of concerns, and 2) it would add complexity to the
setup by introducing a bridge on top of the existing bond.
2024-06-23 10:43:15 -05:00
6520b86958 k8s-controller: Do not reboot after auto-updates
I don't want the Kubernetes control plane servers rebooting themselves
randomly; I need to coordinate that with other goings-on on the network.
2024-06-23 10:43:15 -05:00
f0445ebe53 nextcloud: Do not auto-update Nextcloud
Nextcloud usually (always?) wants the `occ upgrade` command to be run
after an update.  If the *nextcloud* package gets updated along with
the rest of the OS, Nextcloud will be down until I manually run that
command hours/days later.
2024-06-23 10:43:15 -05:00
0464041cf8 r/dnf-automatic: Allow excluding packages
Some hosts may have packages that we do not want to have updated
automatically.  For those, we can set `dnf_automatic_exclude`.
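
An example host variable; the list format and package name are
illustrative (cf. the Nextcloud commit above):

```yaml
dnf_automatic_exclude:
  - nextcloud
```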
2024-06-23 10:43:15 -05:00
24bf145a34 all: Do not auto-update on weekends
I don't want machines updating themselves, rebooting, and potentially
breaking stuff over the weekend.
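
One way to express "weekdays only", assuming the schedule is applied as
a drop-in on *dnf-automatic-install.timer* (the time of day is
illustrative):

```yaml
- name: Run automatic updates on weekdays only
  copy:
    dest: /etc/systemd/system/dnf-automatic-install.timer.d/weekdays.conf
    content: |
      [Timer]
      OnCalendar=
      OnCalendar=Mon..Fri 06:00
```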
2024-06-21 22:08:03 -05:00
7579429a0d r/samba-cert: Save firewall configuration
Without making the firewall changes permanent, when a server tries to
renew its certificate after rebooting, it will fail as the ACME server
cannot connect to the HTTP port.
2024-06-20 19:42:13 -05:00
88c45e22b6 vm-hosts: Update VM autostart for new DCs 2024-06-20 18:49:04 -05:00
4bdd00d339 gw1: Do not reboot after dnf automatic updates
We don't want the firewall rebooting itself after kernel updates.
Instead, I will reboot it manually at the next appropriate time.
2024-06-13 08:10:55 -05:00
8400024249 cloud0: Exclude Nextcloud trash from backups
Files in the Nextcloud trash bin do not need to be backed up.  They are
often large (e.g. my Signal backups), and presumably, they are not
needed anyway; why would they be in the trash otherwise?
2024-06-12 19:04:46 -05:00
7b24babfab r/collectd-version: Auto-restart service
Sometimes, the `collectd-version` script crashes or fails to start at
boot.  Configuring systemd to automatically restart it will ensure that
it's always running, so machines' versions are consistently inventoried.
2024-06-12 19:03:11 -05:00
afcd2f2f05 hosts: Replace domain controllers
New AD DC servers run Fedora 40.  Their LDAP server certificates are
issued by *step-ca* via ACME, signed by *dch-ca r2*.

I've changed the naming convention for domain controllers again.  I
found the random sequence of characters to be too difficult to remember
and identify.  Using a short random word (chosen from the EFF word list
used by Diceware) should be a lot nicer.  These names are chosen by the
`create-dc.sh` script.
2024-06-12 19:01:37 -05:00
eb9db2d729 create-dc: Add script to provision DC VMs
Since I don't like to update Samba Active Directory Domain Controller
servers in-place (it's never worked as well as you would think it
should), I want the process for replacing them to be as automated as
possible.  To that end, I've written `create-dc.sh`, which handles the
whole process of creating and configuring a new ADDC VM.  The only
things it doesn't do are transfer the FSMO roles and demote existing DC
servers.
2024-06-12 19:00:43 -05:00
292ab4585c all: promtail: Update trusted CA certificate
Loki uses a certificate signed by *dch-ca r2* now (actually has for
quite some time...)
2024-06-12 18:57:01 -05:00
092dcee508 fixup-dch-root-ca-r2 2024-06-12 18:56:41 -05:00
1babedaf55 gw1: squid: Cache RPMs and installer images
Installing Fedora on a bunch of machines, simultaneously or in rapid
succession, can be painfully slow, as several large files need to be
downloaded.  To speed this up, we download those files via the proxy and
cache them on the proxy server.

As a side-effect, the proxy needs to allow access to the Kickstart
"server" (i.e. my workstation, at least for now), since Anaconda will
use the configured proxy for everything it downloads.
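
An illustrative `squid.conf` fragment; the size limits and patterns are
guesses, not the actual configuration:

```yaml
- name: Cache RPMs and installer images
  blockinfile:
    path: /etc/squid/squid.conf
    block: |
      maximum_object_size 4 GB
      cache_dir ufs /var/spool/squid 65536 16 256
      refresh_pattern -i \.(rpm|iso|img)$ 10080 90% 129600
```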
2024-06-12 18:54:29 -05:00
9365fd2dd5 gw1: squid: Allow access to FCOS update servers
*unifi2.pyrocufflink.blue*, which is connected to the management
network, can only access the Internet via the proxy.  In order for
Zincati/`rpm-ostree` to automatically update the machine, the proxy
needs to allow access to the FCOS update servers.
2024-06-12 18:52:54 -05:00
74e4a4d898 r/squid: Let squid initialize cache dirs
The `squid.service` systemd unit now correctly initializes the
configured cache directories, so we do not need to do it explicitly
before starting the server.
2024-06-12 18:43:23 -05:00
f54858ee5e r/dch-selinux: Install from dch-yum repository
The *dch-selinux* package is now published to the same Yum repository as
other packages (e.g. *sshca-cli*, etc.), rather than its own repository.
2024-06-12 18:42:22 -05:00
9fdd2243a6 pyrocufflink: Trust DCH Root CA R2
Now that the domain controller servers use certificates issued by
*step-ca*, client applications need to trust that root CA certificate.
2024-06-12 18:40:17 -05:00
ffe972d79b r/samba-cert: Obtain LDAP/TLS cert via ACME
The *samba-cert* role configures `lego` and HAProxy to obtain an X.509
certificate via the ACME HTTP-01 challenge.  HAProxy is necessary
because LDAP server certificates need to have the apex domain in their
SAN field, and the ACME server may contact *any* domain controller
server with an A record for that name.  HAProxy will forward the
challenge request on to the first available host on port 5000, where
`lego` is listening to provide validation.

Issuing certificates this way has a couple of advantages:

1. No need for the wildcard certificate for the *pyrocufflink.blue*
   domain any more
2. Renewals are automatic and handled by the server itself, rather
   than by Ansible via a scheduled Jenkins job

Item (2) is particularly interesting because it avoids the bi-monthly
issue where replacing the LDAP server certificate and restarting Samba
causes the Jenkins job to fail.

Naturally, for this to work correctly, all LDAP client applications
need to trust the certificates issued by the ACME server, in this case
*DCH Root CA R2*.
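
A rough sketch of the forwarding described above; port 5000 and the
apex-domain lookup are from the message, while the file name, server
count, and resolvers name are illustrative:

```yaml
- name: Configure ACME challenge forwarding
  copy:
    dest: /etc/haproxy/conf.d/acme.cfg
    content: |
      frontend http
          bind :80
          use_backend acme if { path_beg /.well-known/acme-challenge/ }

      backend acme
          # resolve the apex domain's A records at runtime and use
          # the first available host
          server-template dc 3 pyrocufflink.blue:5000 check resolvers dch init-addr none
```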
2024-06-12 18:33:24 -05:00
7b6e0bd100 r/haproxy: Support configuring resolvers
HAProxy uses a special configuration block, `resolvers`, to specify
how it should look up names in DNS.  This configuration is used for
e.g. dynamically discovering backend servers via DNS A or SRV records.
Since resolvers are global, they need to be specified in the global
configuration file, rather than a per-application drop-in.

We will use this functionality for the ACME HTTP-01 challenge solver
for Samba AD domain controllers.
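
A minimal `resolvers` block in the global configuration; the resolver
name and nameserver address are placeholders:

```yaml
- name: Configure HAProxy resolvers
  blockinfile:
    path: /etc/haproxy/haproxy.cfg
    block: |
      resolvers dch
          nameserver dc0 172.30.0.1:53
          resolve_retries 3
          hold valid 10s
```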
2024-06-12 18:29:56 -05:00