Since the `burp` client command is scheduled to run using Cron, Cronie
needs to be installed and set up in order for the *burp-client* role to
install its cron table file.
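A minimal sketch of what that setup might look like in the role; the cron table file name and destination are assumptions:

```yaml
# Sketch only: install cron and drop the burp client cron table.
- name: install cronie
  package:
    name: cronie
    state: present

- name: install the burp client cron table
  copy:
    src: burp-client.cron
    dest: /etc/cron.d/burp-client
    owner: root
    group: root
    mode: '0644'
```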
The BURP server runs as user *burp*, and as such, requires that the
client-specific configuration files be owned by that user so they can be
read when a client connects.
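For illustration, the client configuration task might look roughly like this; `/etc/burp/clientconfdir` is burp's stock location, but the path and the `burp_clients` variable are assumptions:

```yaml
# Sketch only: client-specific configuration owned by the burp user.
- name: install client-specific configuration
  template:
    src: client.conf.j2
    dest: /etc/burp/clientconfdir/{{ item }}
    owner: burp
    group: burp
    mode: '0640'
  loop: '{{ burp_clients }}'
```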
Newer versions of the BURP client require `status_port` to be set. This
commit updates the `burp.conf.j2` template to more closely match the
default configuration shipped with the *burp* package, including setting
this new value.
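For reference, the relevant values might be exposed as role defaults along these lines; 4972 is burp's stock status port, though the variable names are assumptions and the template may hard-code the value instead:

```yaml
# Illustrative defaults only.
burp_port: 4971
burp_status_port: 4972
```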
*cm0.pyrocufflink.blue* has been deprecated and shut down.
Configuration Management jobs now run on regular Jenkins nodes, and are
serialized using "lockable resources" instead of a single executor.
*dns1.pyrocufflink.blue* has been decommissioned. Having a second DNS
server never really worked correctly for some reason, and the
maintenance overhead of the Raspberry Pi is just not worth it right now.
The DHCP service has been moved to *dns0.pyrocufflink.blue*.
It is important that only one configuration management job run at a
time. Currently, this is enforced by having only one agent with the
*ansible* label, and that agent has only one executor. This is not an
ideal solution, because it requires maintaining a separate machine for
this purpose.
The *Lockable Resources Plugin* provides an alternate solution to this
problem. Using this plugin, jobs can acquire an exclusive lock on a
"resource" that prevents other jobs that require the same resource from
running. Any job that starts while the lock is held will wait until it
is released before executing. This will enforce the same serial
execution policy, but does not require a separate, dedicated machine.
Jobs will be able to run on any executor with the appropriate label.
Using this option, it is now possible to run configuration management
jobs on the normal agents, defining the execution environment in a
Docker image, so the *cm0.pyrocufflink.blue* agent can be
decommissioned.
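If the shared resource were declared with the Jenkins Configuration as Code plugin, it might look something like the sketch below; the resource name is an assumption, and each job would then wrap its body in the plugin's `lock()` step to serialize on it.

```yaml
# Sketch only: declaring a shared lockable resource via JCasC.
unclassified:
  lockableResourcesManager:
    declaredResources:
      - name: configuration-management
        description: Serializes configuration management job runs
```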
Newer versions of Gitea need a JWT secret for OAuth2. Gitea will
attempt to generate one at startup if it is not already specified in the
configuration file, but this will fail since the file is not writable by
the user running the service. As such, it must be set via configuration
policy.
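One way this could be managed, sketched with the `ini_file` module; the app.ini path, handler name, and secret variable are assumptions:

```yaml
# Sketch only: set the OAuth2 JWT secret in Gitea's configuration file.
- name: set gitea oauth2 jwt secret
  ini_file:
    path: /etc/gitea/app.ini
    section: oauth2
    option: JWT_SECRET
    value: '{{ gitea_oauth2_jwt_secret }}'
    owner: root
    group: gitea
    mode: '0640'
  notify: restart gitea
```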
The Zabbix server resolves *localhost* to `::1`, but Postfix resolves it
to `127.0.0.1`. This causes Postfix to reject incoming mail from Zabbix
with "Relay access denied." Explicitly setting the `mynetworks` setting
to include both the IPv4 and IPv6 loopback addresses will ensure that no
mail is rejected from local processes, regardless of how name resolution
happens.
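For example, in group variables (the bracketed form is Postfix's syntax for IPv6 addresses, and the quotes keep YAML from parsing it as a flow sequence):

```yaml
smtp_mynetworks:
  - 127.0.0.0/8
  - '[::1]/128'
```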
The point of the "wheel host" is to serve as a repository of Python
packages (wheels) built by Jenkins for consumption by `pip` et al. For
applications and libraries that do not provide all of their dependencies
as binary packages, this provides a convenient way to install them without
requiring all of the build tools and dependencies on the destination
machine.
The idea here is that a Jenkins job runs `pip wheel` for a distribution
package name or `requirements.txt` file and then uploads the resulting
wheel files using `rsync`. Apache is configured to serve the upload
directory with an index compatible with `pip`'s `--find-links`.
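On the consuming side, installation could then look roughly like this; the wheel host URL and package name are placeholders:

```yaml
# Sketch only: --no-index keeps pip from falling back to PyPI.
- name: install an application from pre-built wheels
  pip:
    name: some-application
    virtualenv: /opt/some-application
    extra_args: >-
      --no-index
      --find-links=https://wheels.pyrocufflink.blue/
```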
The *hass-dhcp* role installs dnsmasq and configures it to serve DHCP
requests on the Home Assistant network. Since this network is not
routed, the regular DHCP relay/server setup will not work.
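Illustrative role defaults might look like this; the interface name and address range are placeholders, not the real Home Assistant network values:

```yaml
# Sketch only: these feed a dnsmasq interface/dhcp-range configuration.
hass_dhcp_interface: eth1
hass_dhcp_range: 172.30.0.100,172.30.0.200,12h
```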
This commit adds a systemd unit to enable the Kernel Same-page Merging
daemon on VM hosts. This allows much greater virtual machine density,
especially when many VMs are running the same guest OS.
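The enablement itself might be a simple task along these lines; whether the unit is one shipped with the distribution's qemu packages or one provided by the role, the name here is an assumption:

```yaml
# Sketch only: enable and start the KSM unit on VM hosts.
- name: enable kernel same-page merging
  systemd:
    name: ksm.service
    enabled: yes
    state: started
```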
Debian, of course, does not support system-wide SSL cipher suite profiles,
so these options need to be specified explicitly when deploying HAProxy
on Debian-based machines.
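Hypothetically, Debian-specific group variables could carry the explicit settings; the variable names and cipher string below are illustrative only, standing in for whatever the role actually uses:

```yaml
# Sketch only: values for HAProxy's ssl-default-bind-* global options.
haproxy_ssl_default_bind_ciphers: ECDHE+AESGCM:ECDHE+CHACHA20:!aNULL:!MD5
haproxy_ssl_default_bind_options: no-sslv3 no-tlsv10 no-tlsv11
```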
This commit updates the net-ifaces scripts for both *vmhost0* and
*vmhost1* to create VLAN and bridge interfaces for the Management and
Home Assistant networks.
The *taiga* role installs the three components of Taiga:
* taiga-back
* taiga-events
* taiga-front
*taiga-back* is a Python application. Its dependencies are installed via
`pip` in the *taiga* user's site-packages, and the application itself is
installed by unpacking the archive. *taiga-events* is a Node.js
application. Its dependencies are installed by `npm`, and the application
itself is installed by unpacking the archive. Finally, *taiga-front* is a
single-page browser application that is installed by unpacking the
archive, and served by Apache.
Taiga requires PostgreSQL and RabbitMQ.
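A rough sketch of the taiga-back and taiga-front portions, with paths and archive names as assumptions:

```yaml
# Sketch only: dependencies go into the taiga user's site-packages.
- name: install taiga-back dependencies
  pip:
    requirements: /srv/taiga/taiga-back/requirements.txt
    extra_args: --user
  become: yes
  become_user: taiga

# Sketch only: taiga-front is just static files served by Apache.
- name: unpack taiga-front
  unarchive:
    src: taiga-front-dist.tar.gz
    dest: /srv/taiga/
    owner: taiga
    group: taiga
```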
*hass0.pyrocufflink.blue* is a virtual machine that runs Home Assistant.
It is dual-homed on the *pyrocufflink.blue* network and the isolated IoT
network.
DHCP is provided by *dns1.pyrocufflink.blue* now, not the gateway. To
allow dynamic DNS updates from it, the correct source address must be
listed in the zone configuration for *pyrocufflink.red*.
Because *jenkins0.pyrocufflink.blue* runs Docker, it has an extra
virtual interface and IP address, for container communication. By
default, Samba registers all IP addresses in DNS, and cannot
differentiate between the actual interface and the Docker bridge. This
can cause other hosts to attempt to contact *jenkins0.pyrocufflink.blue*
using the wrong address.
The `samba_interfaces` variable controls the value of the `interfaces`
global configuration option for Samba. One of the things this option
controls is which addresses to register in DNS. By setting it to the
network address of the *pyrocufflink.blue* network, we can prevent the
virtual address from being used at all.
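For example, in host variables for *jenkins0.pyrocufflink.blue*; the subnet shown is a placeholder, and whether the role expects a string or a list here is an assumption:

```yaml
samba_interfaces: 172.30.0.0/24
```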
The `smtp_mynetworks` variable expects a list. Setting it to a string
resulted in each character in the string being interpreted as an item in
the list.
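The difference, in YAML terms:

```yaml
# Wrong: a single string; iterating it yields individual characters
smtp_mynetworks: '127.0.0.0/8 [::1]/128'

# Right: an actual list
smtp_mynetworks:
  - 127.0.0.0/8
  - '[::1]/128'
```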
The *websites/pyrocufflink.net* role configures the public web server to
host *pyrocufflink.net*. This site has two functions:
* It redirects `/` to http://dustin.hatch.name/
* It proxies user home directories (i.e. `/~dustin/`) to the file server
This commit sets the `apache_userdir` variable, which enables the
per-user directories feature. This allows users to serve content via
HTTP by placing it in the `public_html` directory within their home
directories.
Although Apache is already installed on the file server in order to
serve the Aria2 web UI, it is not explicitly included in the
`fileserver.yml` playbook.
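The explicit inclusion might look roughly like this; the host group and role names are assumptions:

```yaml
# Sketch only: fileserver.yml with the Apache role listed explicitly.
- hosts: fileserver
  roles:
    - apache
    - aria2
```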
The `ServerName` directive needs to be set inside the default SSL vhost,
as this property does not get inherited from the global configuration,
and it needs to be set in order for SNI to work correctly.
By default, per-user directories (i.e. `/~username/`) are disabled in
Fedora's configuration of Apache. This commit introduces a new variable,
`apache_userdir`, which can be used to enable this feature. Any value
other than *disabled* is treated as the name of the directory under each
user's home directory that will be served, if it is accessible. Normally,
the value would be `public_html`.
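For example:

```yaml
# Enable per-user directories served from ~/public_html
apache_userdir: public_html
```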
To support multiple websites with separate Let's Encrypt certificates,
the *certbot* role needs to be applied as a dependency of each
individual website role. This will allow each application to specify a
different value for `certbot_domains`.
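A website role's `meta/main.yml` might then declare the dependency like this; the domain names are placeholders:

```yaml
dependencies:
  - role: certbot
    vars:
      certbot_domains:
        - pyrocufflink.net
        - www.pyrocufflink.net
```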