Send logs to the systemd journal for easier viewing and disable logging
to a file. Also, the `samba_dc_log_level` variable can control the log
level (0-10, 0 being off, 10 being insane debugging).
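For reference, the resulting `smb.conf` settings might look like this
(a sketch; the exact values rendered by the role's template may differ):

```
# smb.conf [global] -- log to the systemd journal instead of a file
logging = systemd
# rendered from the samba_dc_log_level variable
log level = 3
```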
Docker is effectively deprecated by Fedora/Red Hat. It is a pain in the
ass to work with anyway. Podman integrates better with systemd, and is
in general more aligned with how I prefer to deploy and manage
applications.
I am following the same pattern here that I have used for Home
Assistant, ZWaveJS2MQTT, etc. The systemd service starts the container
with `podman`, passing the necessary arguments for UID/GID mapping, etc.
Note that, by default, Vaultwarden expects to be able to bind to port
80; since the container is unprivileged, we have to configure it (or
rather, its embedded HTTP server [Rocket](https://rocket.rs)) to listen
on a different port. We also configure it to listen only on the
loopback, since it is being proxied by Apache to the outside network.
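A rough sketch of the pattern (the unit name, image, port, and UID/GID
mapping values here are illustrative, not the role's actual output):

```ini
[Unit]
Description=Vaultwarden password manager (sketch)
Wants=network-online.target
After=network-online.target

[Service]
# ROCKET_ADDRESS/ROCKET_PORT make the embedded Rocket server listen on an
# unprivileged port on the loopback only, where Apache can reach it.
ExecStart=/usr/bin/podman run --rm --name vaultwarden \
    --network host \
    --uidmap 0:200000:65536 --gidmap 0:200000:65536 \
    --env ROCKET_ADDRESS=127.0.0.1 \
    --env ROCKET_PORT=8000 \
    --volume /var/lib/vaultwarden:/data:Z \
    docker.io/vaultwarden/server:latest
ExecStop=/usr/bin/podman stop -t 10 vaultwarden

[Install]
WantedBy=multi-user.target
```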
To migrate the data from the Docker volume, we just have to copy the
files and fix their ownership.
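Roughly (the volume name, destination, and owner are assumptions):

```sh
cp -a /var/lib/docker/volumes/bitwarden_rs/_data/. /var/lib/vaultwarden/
chown -R vaultwarden:vaultwarden /var/lib/vaultwarden
```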
The *bitwarden_rs* project was recently renamed to *Vaultwarden*, so I
took this opportunity to update the name in most places within the
*bitwarden_rs* role.
Sometimes I need to configure a machine to be a domain member without
actually adding it to the domain. Now I can, by running
`ansible-playbook` with `--skip-tags domain-join`.
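This works because the join tasks in the role are tagged, along these
lines (the actual task name and module may differ):

```yaml
- name: Join the Active Directory domain
  ansible.builtin.command: net ads join -k
  tags:
    - domain-join
```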
Since the `remount.yml` playbook now tries to avoid redundant remounts,
it needs up-to-date facts about mounted filesystems and their options.
If the cached facts are out of date, it may incorrectly skip remounting
a filesystem. Besides, having up-to-date facts in the CI pipeline is
probably a good thing anyway.
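Refreshing the facts amounts to an explicit fact-gathering step ahead of
the remount logic, e.g. (a sketch, not the pipeline's exact play):

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - name: Refresh facts about mounted filesystems
      ansible.builtin.setup:
        gather_subset:
          - hardware   # mount facts are collected as part of this subset
```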
I honestly don't remember why the `use rfc2307` setting was only enabled
on the first DC. All DCs seem to need this setting in order to use the
UID/GID numbers from the directory, instead of using auto-generated
numbers.
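The setting in question is the `smb.conf` option that tells the DC to
take UID/GID numbers from the RFC 2307 attributes in the directory:

```
# smb.conf [global], on every DC
idmap_ldb:use rfc2307 = yes
```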
If the remote address configuration for strongSwan is not valid when the
Proton VPN watchdog starts, the watchdog will now regenerate the
configuration immediately. This
can happen, for example, if the Internet has been down for a while, and
the watchdog has iterated through all of the servers in the cache.
Restarting the service will now force it to reconfigure the tunnel and
bring the VPN back up.
The groups in `hosts.offline` need to match those in `hosts`.
Specifically, the *collectd* group needs to be a child of the
*collectd-prometheus* group so that the *write_prometheus* plugin is
installed and configured on hosts defined in this inventory file.
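Assuming an INI-style inventory, the relationship looks like this:

```ini
[collectd-prometheus:children]
collectd
```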
Using *systemd-networkd* to configure network interfaces on *vmhost0* is
working really well. It is decidedly more stable than *dhcpcd* was, and
certainly easier to work with than NetworkManager. Let's go ahead and
switch *vmhost1* as well.
The `collectd-version` script uses the *collectd* UNIX socket to send
custom values to *collectd* to track the OS version. Since these values
obviously cannot change while the system is running, the values are
specified with a very long interval. This avoids having to continuously
insert the values, either with a long-running process or by repeatedly
running a script. The values only need to be inserted once when
*collectd* starts.
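The script just writes a `PUTVAL` command to the socket; the identifier,
type, and interval below are illustrative (`N` stands for the current
time):

```
PUTVAL "myhost.pyrocufflink.blue/os/os_version" interval=86400 N:34
```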
All values sent to *collectd* must have an associated type. The type
defines the acceptable range of values. Types are defined in a simple
text file database. *collectd* loads all of the databases specified by
`TypesDB` directives in its configuration file. When configuring a
custom types database, the default database needs to be specified
explicitly; it will not be loaded automatically if there are any
`TypesDB` directives in the configuration.
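For example (the first line is Fedora's default database; the custom
database path and type are assumptions):

```
# collectd.conf
TypesDB "/usr/share/collectd/types.db"
TypesDB "/etc/collectd.d/custom-types.db"
```

Each line in the custom database defines a type as
`<name> <ds-name>:<ds-type>:<min>:<max>`, e.g.:

```
os_version  value:GAUGE:0:U
```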
The *unixsock* plugin for *collectd* provides a socket-based interface
that other software can use to communicate with *collectd*. Notably,
this can be used to publish custom values, query existing values, and
flush caches.
The socket is created at `/run/collectd/socket`. The `/run/collectd`
directory is managed by systemd; it will be created automatically when
the service starts and cleaned up when it stops.
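A sketch of the relevant configuration (the socket group and permissions
are assumptions; the `/run/collectd` directory presumably comes from
`RuntimeDirectory=` in the service unit):

```
LoadPlugin unixsock
<Plugin unixsock>
  SocketFile "/run/collectd/socket"
  SocketGroup "collectd"
  SocketPerms "0660"
  DeleteSocket true
</Plugin>
```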
Transitioning from push-based to pull-based monitoring with
Prometheus/collectd. The *write_prometheus* plugin will be installed on
all hosts, and Prometheus will be configured to scrape them directly.
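On the Prometheus side, this is just a scrape job pointed at the
*write_prometheus* port (9103 by default); the host names here are
illustrative:

```yaml
scrape_configs:
  - job_name: collectd
    static_configs:
      - targets:
          - host0.pyrocufflink.blue:9103
          - host1.pyrocufflink.blue:9103
```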
The *collectd-prometheus* role now has a
`collectd_prometheus_allow_outsize` variable. This variable controls
whether external hosts are allowed to scrape data from *collectd*.
When set to `false` (the default), *collectd* will be configured to
listen only on the loopback interface, and the TCP port will not be
opened in the firewall.
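Exposing a host to the Prometheus server is then just a matter of
setting the variable in its host or group variables:

```yaml
collectd_prometheus_allow_outsize: true
```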
Synapse supports exporting metrics in Prometheus format. It can do this
either as part of the main server, or in a separate listener. I chose
to use a separate listener so that the metrics are not exposed
publicly.
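In `homeserver.yaml` terms, this amounts to something like the following
(the port and bind address are assumptions):

```yaml
enable_metrics: true
listeners:
  - port: 9000
    type: metrics
    bind_addresses: ['127.0.0.1']
```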
Fedora 34 does not include the *ntp* package, as it has been "obsoleted
by ntpsec." Until I can create a role for *ntpsec*,
*dc2.pyrocufflink.blue* cannot be an NTP server.
The *processes* plugin for collectd can be configured to monitor
additional information about specific processes. By specifying one or
more `Process` or `ProcessMatch` directives in the plugin configuration,
collectd will start monitoring the listed processes in detail.
The `collectd_processes` Ansible variable can contain a list of
processes to monitor. Each item must at least have a `name` property,
and may also have a `regex` property. If the latter is present, a
`ProcessMatch` directive will be emitted instead of a `Process`
directive.
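For example, a variable like this (the names and pattern are made up):

```yaml
collectd_processes:
  - name: httpd
  - name: synapse
    regex: 'python.*synapse'
```

would render plugin configuration along these lines:

```
<Plugin processes>
  Process "httpd"
  ProcessMatch "synapse" "python.*synapse"
</Plugin>
```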
Having this option enabled dramatically improves the reliability of
collectd multicast traffic coming from physical machines, as well as
from VMs hosted on a different VM host than the receiving machine.
1. Set a password for *root* on all machines (useful for logging in via
   the serial console if the network is down)
2. Set an authorized SSH key for *root* on all machines:
* For Fedora 34, use my FIDO2 security token key
* For all other hosts, use my ED25519 key
The *base* role will now set the password for the *root* user, if the
`root_password_hash` variable is defined. This ensures that there is a
way to log into machines directly, even if other authentication
mechanisms like Active Directory are unavailable.
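A minimal sketch of the task (the real one in the role may differ
slightly):

```yaml
- name: Set the root password
  ansible.builtin.user:
    name: root
    password: '{{ root_password_hash }}'
  when: root_password_hash is defined
```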
Occasionally, VMs running on the main libvirt VM hosts will freeze or
otherwise become unreachable over the network. Sometimes, when this
happens,
their normal consoles are unresponsive as well. Having the serial
console available as a fallback can sometimes be helpful in recovering
from such situations.
To ensure the serial console is available on all VMs, we use a "dynamic"
group, based on the virtualization type and role of the managed node.
All KVM-based virtual machines are included in a group named *kvm-vm*.
A play in `base.yml` applies the *serial-console* role to members of
this group.
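The dynamic group can be built from gathered facts with `group_by`; the
exact key expression used in the repository may differ:

```yaml
- hosts: all
  tasks:
    - name: Put KVM guests in the kvm-vm group
      ansible.builtin.group_by:
        key: kvm-vm
      when:
        - ansible_virtualization_type == 'kvm'
        - ansible_virtualization_role == 'guest'
```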
The *serial-console* Ansible role enables and starts a systemd service
unit to activate a console getty on the specified serial console device
(by default: ttyS0). This is particularly useful for virtual machines,
allowing one to control them in the absence of a graphical VM
management tool.
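At its core, the role just enables the corresponding template unit; the
variable name here is an assumption:

```yaml
- name: Enable a getty on the serial console
  ansible.builtin.systemd:
    name: "serial-getty@{{ serial_console_device | default('ttyS0') }}.service"
    enabled: true
    state: started
```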
The `hassdb.yml` playbook is no longer used; the new Home Assistant
deployment uses the built-in database again, since it is stored on NVMe
instead of an SD card.
Further, the current deployment is hosted by a machine with a single
filesystem, which thus cannot be remounted read-only after applying
policy.
Some playbooks apply only to hosts that do not have read-only root
filesystems. For these, the `rw_limit` pattern will be empty. The
*Remount R/W* and *Remount R/O* stages should be skipped when this is
the case.
Filesystems like NFS and CIFS require "helper" utilities (i.e.
`mount.nfs` and `mount.cifs`, respectively). These need to be installed
in order for a system to be able to mount those filesystems.
The current shared storage system uses NFSv4, and as such, the
*nfs-utils* package needs to be installed on the VM hosts.
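In other words, something along these lines (a sketch):

```yaml
- name: Install the NFSv4 mount helper
  ansible.builtin.package:
    name: nfs-utils
    state: present
```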
With the transition away from *dhcpcd* on the VM hosts, there is no
longer any need for a custom wait script that must run prior to
attempting to mount the shared filesystem. This dramatically simplifies
the configuration necessary for shared storage.
I don't really see any reason why the shared storage configuration needs
to be managed by a separate role. The *vmhost* role is not really
generic anyway, and will probably not work for any other VM host
deployment besides the two machines running now. As such, I think it
makes sense to move the task to mount the shared filesystem into the
*vmhost* role and drop the *dch-storage-net* role.
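The relocated mount task is essentially a standard NFSv4 mount, sketched
here with made-up server and path names:

```yaml
- name: Mount the shared VM image storage
  ansible.posix.mount:
    path: /var/lib/libvirt/images
    src: storage.pyrocufflink.blue:/vmimages
    fstype: nfs4
    state: mounted
```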
The *libvirt-daemon-driver-network* package provides support for
managing virtual networks with libvirt. It is necessary in order to use
managed networks in VM configuration, as opposed to directly specifying
VM network interfaces in their domain configuration.
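With the driver installed, a guest's interface definition can reference
a managed network by name, e.g. (using the stock *default* network as an
example):

```xml
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```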