The Zabbix server also runs an SMTP relay, to minimize reliance on
external services when sending notifications. Since it inherits the
relay configuration from the general *smtp-relay* group, it ends up
allowing all hosts to relay through it. To avoid this, we set
`smtp_mynetworks` at the *zabbix-server* group level to allow only the
local machine to relay messages.
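A minimal sketch of that override, assuming the role exposes the
Postfix `mynetworks` setting under this variable name:

```yaml
# group_vars/zabbix-server.yml (sketch)
# Only the local machine may relay through this Postfix instance
smtp_mynetworks:
  - 127.0.0.0/8
  - '[::1]/128'
```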
Moving the route definitions to global scope, and defining an address
pool, will allow other clients besides *dhatch-d4b* to connect to and
use the OpenVPN tunnel service. This may be useful in situations where
IPsec is blocked.
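In OpenVPN terms, this amounts to assigning client addresses from a
pool and pushing the routes from the server configuration; a sketch,
with illustrative addresses:

```
# server.conf fragment (sketch; subnets are illustrative)
# assign client addresses from a shared pool
server 10.8.0.0 255.255.255.0
# push routes to every client, not just dhatch-d4b
push "route 172.30.0.0 255.255.255.0"
```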
The VPN capability of the UniFi Security Gateway is extremely limited.
It does not support road-warrior IPsec/IKEv2 configuration, and its
OpenVPN configuration is inflexible. As with DHCP, the best solution is
to simply move the service to another machine.
To that end, I created a new VM, *vpn0.pyrocufflink.blue*, to host both
strongSwan and OpenVPN. For this to work, the necessary TCP/UDP ports
need to be forwarded, of course, and all of the remote subnets need
static routes on the gateway, specifying this machine as the next hop.
Additionally, ICMP redirects need to be disabled, to prevent confusing
the routing tables of devices on the same subnet as the VPN gateway.
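On a Linux gateway, suppressing redirects is a pair of sysctl toggles;
a sketch of the equivalent settings (the USG applies them through its
own configuration system):

```
# /etc/sysctl.d/no-redirects.conf (sketch)
# do not advise LAN hosts to route directly to the VPN gateway
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
```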
The DHCP server on the UniFi Security Gateway is pretty limited; it
cannot manage static leases (reservations), and it offers no way to
build dynamic values for options like the hostname or boot filename.
Rather than
give up these features, I decided to just move the DHCP server to one of
the Raspberry Pis; the DNS server made the most sense.
To facilitate this move, I created the *pyrocufflink-dhcp* host group,
and moved the DHCP configuration variables there. Thus, it was a simple
matter of adding *dns1.pyrocufflink.blue* to this group to relocate the
service.
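With the variables attached to the group, the move is just a matter of
inventory membership:

```
# inventory fragment (sketch)
[pyrocufflink-dhcp]
dns1.pyrocufflink.blue
```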
Of course, to serve clients on the other subnets, the gateway needs to
have DHCP relay enabled and pointing to the new server.
This commit updates the list of FireMon networks to include the Caverns
Production (172.16.0.0/24) and Caverns Admin (172.24.16.0/20) networks.
This is necessary to ensure OpenVPN routes are created for these
networks.
The routes to FireMon networks are now defined using the
`firemon_networks` Ansible variable. The global `route` and
client-specific `iroute` options are generated from the CIDR blocks
specified in this list.
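A sketch of the variable and its template expansion, assuming Ansible's
`ipaddr` filter (which requires the *netaddr* Python library) is used
to split the CIDR blocks:

```yaml
# group_vars fragment (sketch)
firemon_networks:
  - 172.16.0.0/24
  - 172.24.16.0/20
```

```
# template fragments (sketch)
# server.conf: global kernel routes
{% for net in firemon_networks %}
route {{ net | ipaddr('network') }} {{ net | ipaddr('netmask') }}
{% endfor %}
# ccd entry: tells OpenVPN which client owns these subnets
{% for net in firemon_networks %}
iroute {{ net | ipaddr('network') }} {{ net | ipaddr('netmask') }}
{% endfor %}
```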
The *aria2* role installs the *aria2* download manager and sets it up to
run as a system service with RPC enabled. It also sets up the web UI,
though that must be installed manually from an archive, for now.
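A plausible fragment for the RPC side (the secret is a placeholder):

```
# /etc/aria2.conf (sketch)
enable-rpc=true
rpc-listen-all=true
rpc-secret=changeme
```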
The *rng-tools* package provides `rngd`, a userspace process that feeds
the Linux entropy pool from additional sources. Notably, it mixes
entropy gathered from hardware random number generators, such as the one
on the BCM2835 SoC on the Raspberry Pi, and the KVM paravirtual random
number generator.
The `rngd.yml` playbook installs *rng-tools* and starts the `rngd`
service.
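The playbook amounts to little more than the following sketch:

```yaml
# rngd.yml (sketch)
- hosts: all
  tasks:
    - name: install rng-tools
      package:
        name: rng-tools
        state: present
    - name: enable and start rngd
      service:
        name: rngd
        state: started
        enabled: true
```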
For hosts that do not have any TSIG keys, the `named_keys` variable
still must be defined (to an empty iterable) in order for the template
to expand properly.
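That is, hosts without TSIG keys carry an explicit empty list:

```yaml
# host_vars fragment (sketch)
named_keys: []
```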
To avoid having a single point of failure, a second recursive DNS server
is necessary. This will be useful in cases where the VM hosts must both
be taken offline, but Internet access is still required.
The new server, *dns1.pyrocufflink.blue*, has all the same zones defined
as the original. It forwards the *pyrocufflink.blue* zone and
corresponding reverse zones to the domain controllers, and acts as a
slave for the *pyrocufflink.red* zone.
In order to support adding a second DNS server, the BIND zone
configuration needs to be partially modularized. While the forwarder
definitions for *pyrocufflink.blue*, etc. will remain the same, the
*pyrocufflink.red* zone will be different, as it will be a slave zone on
the second server. This commit breaks up the definition of the
`named_zones` variable into two parts:
* `pyrocufflink_red_zones`: This is a list of zone objects for
*pyrocufflink.red* and its corresponding reverse zone. On the existing
server, these are master zones; on the new server, they will be
slaves.
* `pyrocufflink_common_zones`: This is a list of zone objects for the
zones that are the same on both servers, since they are all forwarding
zones.
Similarly, the `named_keys` variable only needs to be defined on the
master, since DHCP will only send updates there.
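A sketch of the resulting layout; the zone-object field names and the
forwarder address are illustrative:

```yaml
# group_vars fragment (sketch; field names are illustrative)
pyrocufflink_red_zones:
  - name: pyrocufflink.red
    type: master        # slave, with a masters list, on the new server
pyrocufflink_common_zones:
  - name: pyrocufflink.blue
    type: forward
    forwarders:
      - 172.30.0.1      # hypothetical domain controller
named_zones: '{{ pyrocufflink_red_zones + pyrocufflink_common_zones }}'
```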
*smtp1.pyrocufflink.blue* is a VM that will replace
*smtp0.pyrocufflink.blue*, a Raspberry Pi.
I decided that there is little use in having the availability guarantee
of a discrete machine for the SMTP relay. The only system that would
NEED
to send mail if the VM host fails is Zabbix, which operates as its own
relay anyway. As such, the main relay can be a VM, and the Raspberry Pi
can be repurposed as a recursive DNS server.
*file0.pyrocufflink.blue* hosts Syncthing. Forwarding the sync
transport port is not strictly required, as Syncthing can fall back to
relays that encapsulate traffic in HTTPS, but allowing direct access
improves performance.
The `koji.yml` playbook can be used to deploy an entire Koji ecosystem.
It is composed of three smaller playbooks:
* `koji-hub.yml`: Deploys the Koji hub, GC, and Kojira
* `koji-web.yml`: Deploys the Koji Web GUI
* `koji-builder.yml`: Deploys the Koji builder
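The composition is presumably no more than a series of imports:

```yaml
# koji.yml (sketch)
- import_playbook: koji-hub.yml
- import_playbook: koji-web.yml
- import_playbook: koji-builder.yml
```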
The *koji-web* role installs and configures the Koji Web GUI front-end
for Koji. It requires Apache and mod_wsgi. A client certificate is
required for authentication to the hub, and must be placed in the
host-specific subdirectory of `certs/koji`.
The *koji-client* role is a generic role that can be used to configure
the Koji client library/`koji` CLI tool. By default, it manages the
default configuration at `/etc/koji`, but by using the
`koji_client_dir`, `koji_client_user`, and `koji_client_id` variables,
it can be used to configure per-user client configuration as well.
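For example, a per-user configuration might look like this (the paths
and names are illustrative):

```yaml
# playbook fragment (sketch)
- hosts: builders
  roles:
    - role: koji-client
      koji_client_dir: /home/builder/.koji
      koji_client_user: builder
      koji_client_id: builder
```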
The *kojira* role sets up the Koji repository agent to manage
repository metadata for build tags. It runs as a daemon, usually on the
same machine as the Koji hub. A client certificate is required for
authentication, and must be supplied by placing it in the
`certs/koji/{{ inventory_hostname }}` directory.
The *koji-gc* role sets up the Koji garbage collector utility to run
periodically. It uses cron for scheduling. A client certificate is
required for authentication, and must be supplied by placing it in the
`certs/koji/{{ inventory_hostname }}` directory.
The minimal Fedora installation does not include a cron implementation.
The *cronie* role can be applied to hosts installed in this way to
ensure that cron is available for task scheduling.
The `burp-client.yml` and `burp-server.yml` playbooks apply the
*burp-client* and *burp-server* roles to BURP clients and servers,
respectively. The server playbook also applies the *postfix* role to
ensure that SMTP is configured and backup notifications can be sent.
The *burp-client* role installs and configures a BURP client. It should
support RHEL/CentOS/Fedora and Gentoo.
To manage the client password and other server-mandated configuration,
the role uses Ansible's delegation feature to generate a configuration
file in the "clientconfdir" on the BURP server.
A cron task is scheduled to run `burp -a t` every hour. This allows the
server to configure backup timebands and intervals.
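The schedule itself is trivial; only the timing policy lives on the
server:

```
# /etc/cron.d/burp (sketch; the binary path may vary by distribution)
0 * * * * root /usr/sbin/burp -a t
```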
The *burp-server* role installs and configures a BURP server. It is
adapted from a previous iteration, and should support CentOS/RHEL/Fedora
and Gentoo, as well as both BURP 1.x and 2.x (depending on which version
gets installed by the system package manager).
To manage the certificate authority, the *burp-server* role uses the
`burp_ca` command. This has the advantage of not requiring any external
certificate management, but effectively binds the CA to a specific
machine.
The value of `shlib_directory` depends on the system architecture.
Specifically, x86_64 machines use `/usr/lib64/postfix`, while everything
else uses `/usr/lib/postfix`. This role was originally deployed on a
Raspberry Pi, so the original path was correct. Attempting to deploy it
on an x86_64 machine revealed the error.
This commit adds a new task that loads a variables file based on the
architecture. Each of these files defines an `arch_libdir` variable,
which is expanded in the `postfix_shlib_directory` variable as needed.
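A sketch of the pattern, using the stock first-found lookup:

```yaml
# tasks fragment (sketch)
- name: load architecture-specific variables
  include_vars: '{{ item }}'
  with_first_found:
    - '{{ ansible_architecture }}.yml'  # vars/x86_64.yml: arch_libdir: /usr/lib64
    - default.yml                       # vars/default.yml: arch_libdir: /usr/lib
```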
Because *vmhost1.pyrocufflink.blue* is usually sleeping, continuous
enforcement jobs always fail. By keeping it in a separate inventory
file, configuration policy can still be applied to it manually, but it
will be ignored by continuous enforcement.
*myala.pyrocufflink.jazz* no longer hosts any public-facing websites,
and is in fact shut down. To prevent HAProxy from failing to start
because it cannot resolve the name, this backend needs to be removed.
Usually, the *samba* role is deployed as a dependency of the *winbind*
role, which explicitly sets `samba_security` to `ads`. The new
*fileserver* role also depends on the *samba* role, but it does NOT set
that variable. This can cause `smb.conf` to be rewritten with a
different value whenever one or the other role is applied.
Explicitly setting the `samba_security` variable at the group level
ensures that the value is consistent no matter how the *samba* role is
applied. Since all domain member machines need the same value,
regardless of what function they perform, this is safe.
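In the domain members' group variables, this is a single line:

```yaml
# group_vars fragment (sketch)
samba_security: ads
```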
The *fileserver* role configures Samba as a file sharing server. It uses
the *samba* role to handle cross-distribution installation of Samba
itself, and is focused primarily on configuring shared folders.
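The shares themselves could plausibly be described by a role variable;
a hypothetical sketch of the shape such a variable might take:

```yaml
# hypothetical variable; the role's real interface may differ
fileserver_shares:
  - name: media
    path: /srv/media
    writable: true
```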