The *nextcloud* role installs Nextcloud from the specified release
archive, downloading it to the control machine first if necessary, and
configures Apache and PHP-FPM to serve it.
The `nextcloud.yml` playbook uses the *cert* role to install the X.509
certificate for the Nextcloud server, sets up Apache HTTPD with the
*apache* role, and installs Nextcloud using the *nextcloud* role.
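As a sketch, the playbook might be as simple as applying the three
roles in order (the `hosts` pattern here is an assumption):

    - hosts: nextcloud
      roles:
        - cert
        - apache
        - nextcloud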
The host *cloud0.pyrocufflink.blue* is the Nextcloud server for
Pyrocufflink.
*burp1.pyrocufflink.blue* will replace *burp0.pyrocufflink.blue* as the
BURP server for Pyrocufflink. It is a physical machine (a Fitlet),
which makes managing the USB drives simpler. The old virtual machine
will be decommissioned soon.
Having an empty (therefore undefined) group as the child of another
group causes Ansible to emit a "warning" (really an error) indicating
that it cannot parse the inventory file:
    [WARNING]: * Failed to parse
    /var/lib/jenkins/workspace/CfgMgmt/pyrocufflink/hosts with ini plugin:
    /var/lib/jenkins/workspace/CfgMgmt/pyrocufflink/hosts:60: Section
    [smtp-relay:children] includes undefined group: zabbix-server
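For illustration, a trimmed-down sketch of the offending construct: the
`[smtp-relay:children]` section names *zabbix-server* as a child, but no
`[zabbix-server]` section exists anywhere in the file, so the ini plugin
considers the group undefined:

    [smtp-relay:children]
    zabbix-server

One workaround is to declare the child group explicitly, even with no
hosts in it, so that it is defined (albeit empty).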
This commit configures *bw0.pyrocufflink.blue* as a BURP client, so that
the Bitwarden data can be backed up. A pre-backup script is used to
take a consistent snapshot of the SQLite database before copying it to
the BURP server.
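A sketch of what such a pre-backup script might look like, using the
`sqlite3` CLI's `.backup` command, which takes a consistent snapshot via
SQLite's online backup API (the paths here are assumptions, not the
actual Bitwarden layout):

    #!/bin/sh
    # Snapshot the Bitwarden database while the application is running;
    # .backup is safe to use against a live database.
    sqlite3 /var/lib/bitwarden/db.sqlite3 \
        ".backup /var/lib/bitwarden/backup/db.sqlite3"

BURP then backs up the snapshot file rather than the live database.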
*cm0.pyrocufflink.blue* has been deprecated and shut down.
Configuration Management jobs now run on regular Jenkins nodes, and are
serialized using "lockable resources" instead of a single executor.
*dns1.pyrocufflink.blue* has been decommissioned. Having a second DNS
server never really worked correctly for some reason, and the
maintenance overhead of the Raspberry Pi is just not worth it right now.
The DHCP service has been moved to *dns0.pyrocufflink.blue*.
The point of the "wheel host" is to serve as a repository of Python
packages (wheels) built by Jenkins for consumption by `pip` et al. For
applications and libraries that do not provide all of their dependencies
as binary packages, this provides a convenient way to install them without
requiring all of the build tools and dependencies on the destination
machine.
The idea here is that a Jenkins job runs `pip wheel` for a distribution
package name or `requirements.txt` file and then uploads the resulting
wheel files using `rsync`. Apache is configured to serve the upload
directory with an index compatible with `pip`'s `--find-links`.
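As a concrete sketch, the job might boil down to two commands (the host
name and paths here are hypothetical):

    # Build wheels for a package and all of its dependencies
    pip wheel --wheel-dir wheels -r requirements.txt

    # Publish the results to the wheel host
    rsync -av wheels/ wheels.pyrocufflink.blue:/srv/wheels/

A consumer could then run e.g. `pip install --find-links
https://wheels.pyrocufflink.blue/ somepackage` to install from it.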
*hass0.pyrocufflink.blue* is a virtual machine that runs Home Assistant.
It is dual-homed on the *pyrocufflink.blue* network and the isolated IoT
network.
*vmhost0.pyrocufflink.blue* is currently offline for maintenance. To
avoid the unending stream of failed continuous enforcement Jenkins jobs,
it has been removed from the main inventory file and moved to the
"offline" inventory.
The VPN capability of the UniFi Security Gateway is extremely limited.
It does not support road-warrior IPsec/IKEv2 configuration, and its
OpenVPN configuration is inflexible. As with DHCP, the best solution is
to simply move the service to another machine.
To that end, I created a new VM, *vpn0.pyrocufflink.blue*, to host both
strongSwan and OpenVPN. For this to work, the necessary TCP/UDP ports
need to be forwarded, of course, and all of the remote subnets need
static routes on the gateway, specifying this machine as the next hop.
Additionally, ICMP redirects need to be disabled, to prevent confusing
the routing tables of devices on the same subnet as the VPN gateway.
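A minimal sketch of the redirect settings as they might be expressed
with Ansible's `sysctl` module (exactly which keys the role sets is an
assumption):

    - name: disable ICMP redirects on the VPN gateway
      sysctl:
        name: '{{ item }}'
        value: '0'
        sysctl_set: yes
      loop:
        - net.ipv4.conf.all.send_redirects
        - net.ipv4.conf.all.accept_redirects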
The DHCP server on the UniFi Security Gateway is pretty limited; it
cannot manage static leases (reservations), and does not offer any way
to generate dynamic values for options such as the hostname or boot
filename. Rather than
give up these features, I decided to just move the DHCP server to one of
the Raspberry Pis; the DNS server made the most sense.
To facilitate this move, I created the *pyrocufflink-dhcp* host group,
and moved the DHCP configuration variables there. Thus, it was a simple
matter of adding *dns1.pyrocufflink.blue* to this group to relocate the
service.
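In the inventory, the relocation thus amounts to a one-line change
(sketch; other members and groups omitted):

    [pyrocufflink-dhcp]
    dns1.pyrocufflink.blue

with the DHCP configuration variables themselves living in the group's
`group_vars` file.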
Of course, to serve clients on the other subnets, the gateway needs
DHCP relay enabled and pointed at the new server.
The *aria2* role installs the *aria2* download manager and sets it up to
run as a system service with RPC enabled. It also sets up the web UI,
though that must be installed manually from an archive, for now.
To avoid having a single point of failure, a second recursive DNS server
is necessary. This will be useful in cases where the VM hosts must both
be taken offline, but Internet access is still required.
The new server, *dns1.pyrocufflink.blue*, has all the same zones defined
as the original. It forwards the *pyrocufflink.blue* zone and
corresponding reverse zones to the domain controllers, and acts as a
slave for the *pyrocufflink.red* zone.
*smtp1.pyrocufflink.blue* is a VM that will replace
*smtp0.pyrocufflink.blue*, a Raspberry Pi.
I decided that there is little use in having the availability guarantee
of a discrete machine for the SMTP relay. The only system that would
*need* to send mail if the VM host fails is Zabbix, which operates as
its own
relay anyway. As such, the main relay can be a VM, and the Raspberry Pi
can be repurposed as a recursive DNS server.
The `koji.yml` playbook can be used to deploy an entire Koji ecosystem.
It is composed of three smaller playbooks:
* `koji-hub.yml`: Deploys the Koji hub, GC, and Kojira
* `koji-web.yml`: Deploys the Koji Web GUI
* `koji-builder.yml`: Deploys the Koji builder
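A sketch of how the composition might look, assuming the smaller
playbooks are simply chained with `import_playbook`:

    - import_playbook: koji-hub.yml
    - import_playbook: koji-web.yml
    - import_playbook: koji-builder.yml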
The `burp-client.yml` and `burp-server.yml` playbooks apply the
*burp-client* and *burp-server* roles to BURP clients and servers,
respectively. The server playbook also applies the *postfix* role to
ensure that SMTP is configured and backup notifications can be sent.
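For instance, the server playbook might look roughly like this (the
host pattern is an assumption):

    - hosts: burp-server
      roles:
        - postfix
        - burp-server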
Because *vmhost1.pyrocufflink.blue* is usually sleeping, continuous
enforcement jobs always fail. By keeping it in a separate inventory
file, configuration policy can still be applied to it manually, but it
will be ignored by continuous enforcement.
The gateway device is now monitored by Zabbix. Adding it to the *zabbix*
group ensures that the Zabbix agent is installed and configured
correctly.
Because the *zabbix-agent* role has a task to configure FirewallD, the
`host_uses_firewalld` variable needs to be set to `false` for *gw0*,
since it does not use FirewallD.
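Concretely, that is likely just a single host variable, e.g. in a
hypothetical `host_vars/gw0.pyrocufflink.blue` file:

    host_uses_firewalld: false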
The host *zbx0.pyrocufflink.blue* (a Raspberry Pi) runs the Zabbix
server and web UI. It has a reserved IPv4 address to simplify reverse
DNS management for now, since Samba's dynamic DNS client does not
register PTR records.