Commit Graph

136 Commits (61844e8a95aa3d7243b7565ae02273bfc1fb3e3e)

Dustin 4776303db2 k8s-node: Deploy NFS client
Longhorn's new RWX (ReadWriteMany) mode requires the NFS client
utilities to be installed on the host machine.
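Longhorn exports RWX volumes over NFS, so on Fedora the host-side
requirement boils down to something like this (package name assumed):

```sh
# nfs-utils provides mount.nfs and the supporting daemons
sudo dnf install -y nfs-utils
```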
2023-06-08 10:06:02 -05:00
Dustin bf4d57b5cb frigate: Configure journal2ntfy for MD RAID
The Frigate server has a RAID array that it uses to store video
recordings.  Since there have been a few occasions where the array has
suddenly stopped functioning, probably because of the cheap SATA
controller, it will be nice to get an alert as soon as the kernel
detects the problem, so as to minimize data loss.
2023-06-08 10:05:36 -05:00
Dustin 87e8ec2ed4 synapse: Back up data using BURP
Most of the Synapse server's state is in its SQLite database.  It also
has a `media_store` directory that needs to be backed up, though.

In order to back up the SQLite database while the server is running, the
database must be in "WAL mode."  Synapse leaves the database in the
default "rollback journal" mode, which disallows multiple processes
from accessing the database, even for read-only operations.
To change the journal mode:

```sh
sudo systemctl stop synapse
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode=WAL;'
sudo systemctl start synapse
```
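
To confirm the change stuck, the same `PRAGMA` can be queried; it
should print `wal`:

```sh
# Prints the current journal mode; expect "wal" after the change
sudo -u synapse sqlite3 /var/lib/synapse/homeserver.db 'PRAGMA journal_mode;'
```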
2023-05-23 09:52:50 -05:00
Dustin a7319c561d journal2ntfy: Script to send log messages via ntfy
The `journal2ntfy.py` script follows the systemd journal by spawning
`journalctl` as a child process and reading from its standard output
stream.  Any command-line arguments passed to `journal2ntfy` are passed
to `journalctl`, which allows the caller to specify message filters.
For any matching journal message, `journal2ntfy` sends a message via
the *ntfy* web service.

For the BURP server, we're going to use `journal2ntfy` to generate
alerts about the RAID array.  When I reconnect the disk that was in the
fireproof safe, the kernel will log a message from the *md* subsystem
indicating that the resynchronization process has begun.  Then, when
the disks are again in sync, it will log another message, which will
let me know it is safe to archive the other disk.
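
The core data flow is simple enough to sketch in shell (the real
script is Python; the topic URL here is a placeholder):

```sh
# Follow kernel messages and POST each line to an ntfy topic
journalctl --follow --dmesg --output=cat |
while IFS= read -r line; do
    curl -s -d "$line" https://ntfy.example.com/md-alerts >/dev/null
done
```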
2023-05-17 14:51:21 -05:00
Dustin a3ea838cac burp-server: Deploy MinIO
We're going to run MinIO on the BURP server to provide a backup target
for the [Postgres Operator][0]/[WAL-E][1].  Although the Postgres
Operator also supports backups via [WAL-G][2], which supports more
backup targets like SFTP, the operator does not support restoring from
those targets.  As such, the best way to get fully-featured backups for
the Postgres Operator, including environment cloning, etc., is to use
S3.  Since I absolutely do not want to store my backups "in the cloud,"
using MinIO seems a decent alternative.  Running it on the BURP server
allows the backups to be stored and rotated along with regular system
backups.

[0]: https://github.com/zalando/postgres-operator/
[1]: https://github.com/wal-e/wal-e
[2]: https://github.com/wal-g/wal-g
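
Standing up a single-node MinIO for this is nearly a one-liner; the
paths, credentials, and ports below are placeholders:

```sh
# Serve a local directory as an S3-compatible endpoint
MINIO_ROOT_USER=backup MINIO_ROOT_PASSWORD=changeme \
    minio server /srv/minio --address :9000 --console-address :9001
```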
2023-05-09 21:55:25 -05:00
Dustin 78f65355fa gitea: Back up with BURP 2023-04-12 14:07:51 -05:00
Dustin 10344b07c7 hosts: Add ag62kz.p.b
New domain controller (Fedora 37): *ag62kz.pyrocufflink.blue*
2022-12-23 06:56:52 -06:00
Dustin 8e1c67f591 hosts: Remove dc2
*dc2.pyrocufflink.blue* has been decommissioned.
2022-12-23 06:56:52 -06:00
Dustin 066a68318c hosts: Add dc-4k6s8e.p.b
This is a new domain controller running Fedora 37.
2022-12-18 22:49:44 -06:00
Dustin da2b3c4d59 hosts: Remove dc0
Finally got this guy shut down!
2022-12-18 19:12:58 -06:00
Dustin 11e26c3189 hosts: Remove jenkins0, build0
The Jenkins controller is now hosted in Kubernetes.  Relatedly, jobs
all run in Kubernetes pods, and there is no longer any need for static
agents.
2022-11-27 17:21:03 -06:00
Dustin e09e684fd8 hosts: Update mtrcs0 FQDN
I moved the metrics Pi from the red network to the blue network.  I
started to get uncomfortable with the firewall changes that were
required to host a service on the red network.  I think it makes the
most sense to define the red network as egress only.
2022-11-09 18:56:05 -06:00
Dustin a433d1b01b hosts: remove dns0.p.b
I've moved handling of DNS to the border firewall instead of a dedicated
virtual machine.  Originally, the VM was necessary because the UniFi
Security Gateway sucked and could not (easily) handle the complex
configuration I wanted to use.  Since moving to the new firewall, this
is no longer a problem.

Having DNS on a VM is problematic when full-network outages occur, like
the one that happened on 16 August 2022.  When everything starts back
up, DNS is unavailable.  libvirt VM autostart does not work for machines
that have been migrated between hosts (the auto-start flag is not
migrated, and libvirt "forgets" that the VM was supposed to autostart if
it is migrated away and back).  I plan to script a solution for this at
some point, but I still think it makes more sense for the firewall to
handle it.  It will certainly make it come up quicker regardless.
2022-08-20 18:20:06 -05:00
Dustin 20fd03795b hosts: add pxe0.p.b
*pxe0.pyrocufflink.blue* hosts TFTP and NBD for network-booted devices.
2022-08-15 17:13:56 -05:00
Dustin 02e4df023c r/pxe: Set up a PXE server
The *pxe* role configures the TFTP and NBD stages of PXE network
booting.  The TFTP server provides the files used for the boot stage,
which may either be a kernel and initramfs, or another bootloader like
SYSLINUX/PXELINUX or GRUB.  The NBD server provides the root filesystem,
typically mounted by code in early userspace/initramfs.

The *pxe* role also creates a user group called *pxeadmins*.  Users in
this group can publish content via TFTP; they have write-access to the
`/var/lib/tftpboot` directory.
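
The group setup amounts to something like the following (the user
name is a placeholder):

```sh
# Members of pxeadmins can publish boot files over TFTP
sudo groupadd pxeadmins
sudo chgrp -R pxeadmins /var/lib/tftpboot
sudo chmod -R g+w /var/lib/tftpboot
sudo usermod -aG pxeadmins dustin
```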
2022-08-15 17:12:35 -05:00
Dustin b4f752acbd hosts: remove stats0
*stats0.pyrocufflink.blue* is being decommissioned.  It has been
completely replaced by *mtrcs0.pyrocufflink.red* at this point.
2022-08-12 13:18:04 -05:00
Dustin 4ddbc9f256 hosts: Add mtrcs0.p.r
*mtrcs0.pyrocufflink.red* is a Raspberry Pi CM4 on a Waveshare
CM4-IO-BASE-B carrier board with a NVMe SSD.  It runs a custom OS built
using Buildroot, and is not a member of the *pyrocufflink.blue* AD
domain.

*mtrcs0.p.r* hosts Victoria Metrics/`vmagent`, `vmalert`, AlertManager,
and Grafana.  I've created a unique group and playbook for it,
*metricspi*, to manage all these applications together.
2022-08-11 21:40:19 -05:00
Dustin 0785fda26b r/v-m: Add role for Victoria Metrics
The *victoria-metrics* role deploys a single-server instance of the
Victoria Metrics time series database server.  It installs the selected
version by downloading the binary release from GitHub and copying it to
`/usr/local/sbin` on the managed node.  Scrape configuration is optional
and can be specified with the `scrape_configs` variable.
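
Roughly, the role does the equivalent of this (the version and asset
name are assumptions; pin them to the release actually selected):

```sh
VM_VERSION=v1.79.0
curl -LO "https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/${VM_VERSION}/victoria-metrics-linux-amd64-${VM_VERSION}.tar.gz"
tar -xzf "victoria-metrics-linux-amd64-${VM_VERSION}.tar.gz"
# The single-node tarball ships one binary, victoria-metrics-prod
sudo install -m 0755 victoria-metrics-prod /usr/local/sbin/victoria-metrics
```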
2022-08-10 19:47:12 -05:00
Dustin c8e89a4b16 hosts: Add Kubernetes machines
There is no specific playbook or role for Kubernetes.  All OS
configuration is done at install time via kickstart scripts, and
deploying Kubernetes itself is done (manually) using `kubeadm init` and
`kubeadm join`.
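
For reference, the manual bootstrap looks like this (the network CIDR,
endpoint, and credentials are placeholders):

```sh
# On the first control-plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On each additional node, using the token printed by kubeadm init
sudo kubeadm join k8s.example.com:6443 \
    --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```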
2022-08-03 20:52:01 -05:00
Dustin 6f95a595b2 hosts: Add nvr1.p.b to rw-root group
*nvr1.pyrocufflink.blue* has a single btrfs filesystem which cannot be
mounted read-only.
2022-07-24 16:44:06 -05:00
Dustin 797cc2092f hosts: Add nvr1.p.b
*nvr1.pyrocufflink.blue* is the new video recording server.  It is a
1U rack-mounted physical machine based on the [Jetway
JBC150F596-3160-B][0] barebone system.  It replaces
*nvr0.pyrocufflink.blue* in this role.

[0]: https://www.jetwaycomputer.com/JBC150F596.html
2022-07-23 17:52:26 -05:00
Dustin ee0e6873ad r/collectd-sensors: Install collectd sensors plugin
The *sensors* plugin for collectd reads temperature information from the
I²C/SMBus using *lm_sensors*.  Naturally, it is only useful on physical
machines, so it is not installed or enabled by default.
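
Enabling it on a physical host looks roughly like this (package names
assumed for Fedora):

```sh
sudo dnf install -y lm_sensors collectd-sensors
# Probe for supported sensor chips, accepting the defaults
sudo sensors-detect --auto
# Then LoadPlugin sensors in the collectd configuration
```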
2022-07-21 13:14:25 -05:00
Dustin 5f6e2e774c hosts: Remove build2-armv7hl
This machine has a hardware problem.
2022-07-05 20:30:19 -05:00
Dustin d446653296 homeassistant: Split out Zigbee/Zwave playbooks
To make it simpler to apply policy specifically for Zigbee2MQTT or
ZWaveJS2MQTT, I've split them into their own playbooks.
2021-12-18 16:45:52 -06:00
Dustin 739ffb2845 home-assistant: Configure BURP backups
Take a snapshot of the history database first, then back up everything
in `/var/lib/homeassistant`.
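
One way to take such a snapshot while Home Assistant is running,
assuming the default SQLite recorder database and hypothetical paths:

```sh
# .backup produces a consistent copy even with the server running
sudo -u homeassistant sqlite3 /var/lib/homeassistant/home-assistant_v2.db \
    '.backup /var/lib/homeassistant/history-snapshot.db'
```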
2021-12-17 20:57:38 -06:00
Dustin c882ac45e7 nut: Add playbook for NUT
NUT runs on *serial0.pyrocufflink.blue* and monitors the two UPSes on
the server rack.
2021-10-31 14:28:27 -05:00
Dustin d3592f1162 hosts: Add serial0.pyrocufflink.blue
This is a Raspberry Pi 3 on the DIN rail that has USB-to-RS232 adapters
connected to the VM hosts and the Firewall.
2021-10-31 00:54:10 -05:00
Dustin 881c8de625 Switch Prometheus/collectd to pull
Transitioning from push-based to pull-based monitoring with
Prometheus/collectd.  The *write_prometheus* plugin will be installed on
all hosts, and Prometheus will be configured to scrape them directly.
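
A quick way to verify a host is exposing metrics (the hostname is a
placeholder; 9103 is the plugin's default port):

```sh
curl -s http://host.example.com:9103/metrics | head
```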
2021-10-30 16:41:17 -05:00
Dustin 86aa292931 hosts: Remove dc2 from ntpd group
Fedora 34 does not include the *ntp* package, as it has been "obsoleted
by ntpsec."  Until I can create a role for *ntpsec*,
*dc2.pyrocufflink.blue* cannot be an NTP server.
2021-10-17 14:11:00 -05:00
Dustin e2c5549f35 hosts: Add dc2.p.b
*dc2.pyrocufflink.blue* acts as a second Active Directory Domain
Controller with *samba*.
2021-10-16 21:53:02 -05:00
Dustin 11357a9df3 hosts: Remove vpn0.p.b
The firewall provides both the Wireguard and IKEv2 VPN services.
2021-08-30 08:50:18 -05:00
Dustin d78005fac6 hosts: remove hass{1,db0}.p.b
*hass1.pyrocufflink.blue* and *hassdb0.pyrocufflink.blue* were part of
the old Home Assistant deployment.  Everything has been migrated to
*hass2.pyrocufflink.blue*, so these machines can be decommissioned now.
2021-08-24 20:03:59 -05:00
Dustin c1a7105d09 hosts: Add hass2.p.b to rw-root
*hass2.pyrocufflink.blue* only has a single filesystem, stored on its
NVMe disk.
2021-08-22 10:15:10 -05:00
Dustin c4cd9f13f5 hosts: Remove build1-aarch64.p.b
This machine keeps crashing.  It's mostly unused for now anyway.  I'll
probably eventually replace it with a Raspberry Pi 4.
2021-08-22 09:54:18 -05:00
Dustin 4301391a79 hosts: Remove zezere0, motion0
Didn't end up needing Zezere, as I gave up on Fedora IoT.

No longer using motionEye since switching to Frigate.
2021-08-21 17:32:49 -05:00
Dustin b7ba6a59ab hosts: Add nvr0.p.b
*nvr0.pyrocufflink.blue* hosts Frigate.  It is deployed on a separate
subnet, for two reasons:

* To avoid streaming video from the cameras through the firewall
* To prevent any hosts on the LAN except Home Assistant from
  communicating with Frigate, since it does not have any kind of
  authentication or access control
2021-08-21 17:20:19 -05:00
Dustin 997760968e r/frigate: Add role to deploy Frigate
Frigate is an NVR that uses machine learning to detect objects on camera
in real time.  It integrates with Home Assistant to expose sensors which
can be used for automation, etc.

The only official way to deploy Frigate is with a container, so we use
Podman and systemd to manage it.
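
A sketch of that arrangement (the image tag and host paths are
assumptions):

```sh
sudo podman run -d --name frigate \
    -v /etc/frigate:/config:Z \
    -v /srv/frigate:/media/frigate:Z \
    -p 5000:5000 \
    blakeblackshear/frigate:stable
# Generate a unit so systemd owns the container's lifecycle
sudo podman generate systemd --new --name frigate \
    | sudo tee /etc/systemd/system/container-frigate.service
```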
2021-08-21 17:16:58 -05:00
Dustin b826d8355e hosts: Add hass2.p.b
*hass2.pyrocufflink.blue* is a Raspberry Pi Compute Module 4-based
system, currently mounted in a WaveShare CM4 Mini Base Board (A).  With
an NVMe SSD for primary storage, it runs significantly faster than a
standard Raspberry Pi 4, and blows the old Raspberry Pi 3-based Home
Assistant deployment out of the water. It has a Zooz 700 series Z-Wave
Plus S2 USB stick and a ConBee II Zigbee USB stick attached to its USB
2.0 ports.  It runs a customized Fedora Minimal distribution.
2021-07-19 15:58:58 -05:00
Dustin 4aa3cdddd9 hosts: Add zezere0.p.b
*zezere0.pyrocufflink.blue* hosts the local deployment of the Fedora
Zezere provisioning service.
2021-07-05 09:34:25 -05:00
Dustin 3d9d7423ef hosts: Add stats0.p.b to prometheus group
Although configuration policy is not yet available for Prometheus
itself, the `collectd.yml` playbook also uses the *prometheus* host
group.  Specifically, hosts in this group are configured to receive
collectd data from other hosts and expose those data through the
`write_prometheus` plugin.
2021-07-05 09:34:25 -05:00
Dustin 9565b740b0 hosts: add stats0.p.b
*stats0.pyrocufflink.blue* hosts Grafana (for now, adding Victoria
Metrics, etc. later)
2021-07-02 21:55:02 -05:00
Dustin 5e61e93cea roles/grafana: Deploy Grafana
This commit introduces the *grafana* role and the corresponding
`grafana.yml` playbook.  The role installs Grafana using the system
package manager, and configures the server (including LDAP
authentication).
2021-07-02 21:47:33 -05:00
Dustin 2df1605421 hosts: Add matrix0.p.b
*matrix0.pyrocufflink.blue* hosts the Matrix homeserver for
*hatch.chat*.
2020-12-30 22:04:51 -06:00
Dustin 371305bed4 roles/synapse: Deploy the Matrix homeserver
The *synapse* role and the corresponding `synapse.yml` playbook deploy
Synapse, the reference Matrix homeserver implementation.

Deploying Synapse itself is fairly straightforward: it is packaged by
Fedora and therefore can simply be installed via `dnf` and started by
`systemd`.  Making the service available on the Internet, however, is
more involved.  The Matrix protocol mostly works over HTTPS on the
standard port (443), so a typical reverse proxy deployment is mostly
sufficient.  Some parts of the Matrix protocol, however, involve
communication over an alternate port (8448).  This could be handled by a
reverse proxy as well, but since it is a fairly unique port, it could
also be handled by NAT/port forwarding.  In order to support both
deployment scenarios (as well as the hypothetical scenario wherein the
Synapse machine is directly accessible from the Internet), the *synapse*
role supports specifying an optional `matrix_tls_cert` variable.  If
this variable is set, it should contain the path to a certificate file
on the Ansible control machine that will be used for the "direct"
connections (i.e. on port 8448).  If it is not set, the default Apache
certificate will be used for both virtual hosts.

Synapse has a pretty extensive configuration schema, but most of the
options are set to their default values by the *synapse* role.  Other
than substituting secret keys, the only exposed configuration option is
the LDAP authentication provider.
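
Either way, federation on the alternate port can be sanity-checked
from outside (the hostname is a placeholder):

```sh
# Returns the server name and version if port 8448 is reachable
curl https://matrix.example.com:8448/_matrix/federation/v1/version
```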
2020-12-30 21:54:02 -06:00
Dustin 621035e89b Merge branch 'collectd' into master 2020-12-23 21:26:01 -06:00
Dustin debf7d8e5a hosts: Add pyrocufflink to collectd group
We want collectd deployed on all production machines.
2020-12-23 21:04:49 -06:00
Dustin 19f0d713f4 hosts: Move koji0.p.b to offline
I doubt I will be using Koji much if at all any more.  In preparation
for decommissioning it, I am moving the Koji inventory to hosts.offline,
to prevent Jenkins jobs from failing.
2020-12-13 09:54:01 -06:00
Dustin 3a36d6b7ff hosts: Add motion0.p.b
*motion0.pyrocufflink.blue* hosts motionEye
2020-10-03 11:30:38 -05:00
Dustin ef4e769ed2 motioneye: Deploy motionEye camera software
The *motioneye* role installs motionEye on a Fedora machine using `pip`.
It configures Apache to proxy for motionEye for outside (HTTPS) access.

The official installation instructions and default configuration for
motionEye assume it will be running as root.  There is, however, no
specific reason for this, as it works just fine as an unprivileged user.
The only minor surprise is that the `conf_path` configuration setting
must be writable, as this is where motionEye places generated
configuration for `motion`.  This path does not, however, have to
include the `motioneye.conf` file itself, which can still be read-only.
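
The unprivileged setup, sketched with a hypothetical service user and
paths:

```sh
sudo useradd -r -m -d /var/lib/motioneye motioneye
sudo -u motioneye pip install --user motioneye
# conf_path must be writable; motioneye.conf itself may stay read-only
sudo -u motioneye meyectl startserver -c /var/lib/motioneye/motioneye.conf
```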
2020-10-03 11:29:39 -05:00
Dustin 8ca093050b pyrocufflink-dns: Cloudflare over ProtonVPN
This commit adds a new playbook, `protonvpn.yml`, and its supporting
roles *strongswan-swanctl* and *protonvpn*.  This playbook configures
strongSwan to connect to ProtonVPN using IPsec/IKEv2.

With this playbook, we configure the name servers on the Pyrocufflink
network to route all DNS requests through the Cloudflare public DNS
recursive servers at 1.1.1.1/1.0.0.1 over ProtonVPN.  Using this setup,
we have the benefit of the speed of using a public DNS server (which is
*significantly* faster than running our own recursive server, usually by
1-2 seconds per request), and the benefit of anonymity from ProtonVPN.

Using the public DNS server alone is great for performance, but allows
the server operator (in this case Cloudflare) to track and analyze usage
patterns.  Using ProtonVPN gives us anonymity (assuming we trust
ProtonVPN not to do the very same tracking), but can have a negative
performance impact if it's used for all Internet traffic.  By combining
these solutions, we can get the benefits of both!
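
Two quick checks that the arrangement is working (the tunnel is up,
and queries reach Cloudflare):

```sh
# List active IKE/IPsec security associations
sudo swanctl --list-sas
# Resolve through Cloudflare; traffic to 1.1.1.1 should ride the tunnel
dig +short @1.1.1.1 example.com
```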
2020-09-06 11:06:58 -05:00