Commit Graph

8 Commits (c95a96a33c50c9bec15f904dd2e9fd72d8690c66)

Author SHA1 Message Date
Dustin 164f3b5e0f r/wal-g-pg: Handle versioned storage locations
The target location for WAL archives and backups saved by WAL-G should
be separated based on the major version of PostgreSQL with which they
are compatible.  This will make it easier to restore those backups,
since they can only be restored into a cluster of the same version.

Unfortunately, WAL-G does not natively handle this.  In fact, it doesn't
really have any way of knowing the version of the PostgreSQL server it
is backing up, at least when it is uploading WAL archives.  Thus, we
have to include the version number in the target path (S3 prefix)
manually.  We can't rely on Ansible to do this, because there is no way
to ensure Ansible runs at the appropriate point during the upgrade
process.  As such, we need to be able to modify the target location as
part of the upgrade, without causing a conflict with Ansible the next
time it runs.

To that end, I've changed how the _wal-g-pg_ role creates the
configuration file for WAL-G.  Instead of rendering directly to
`wal-g.yml`, the role renders a template, `wal-g.yml.in`.  This template
can include a `@PGVERSION@` specifier.  The `wal-g-config` script will
then use `sed` to replace that specifier with the version of PostgreSQL
installed on the server, rendering the final `wal-g.yml`.  This script
is called both by an Ansible handler, after the template configuration
is generated, and as a post-upgrade action by the `postgresql-upgrade`
script.

I originally wanted the `wal-g-config` script to use the version of
PostgreSQL specified in the `PG_VERSION` file within the data directory.
This would ensure that WAL-G always uploads/downloads files for the
matching version.  Unfortunately, this introduced a dependency conflict:
the WAL-G configuration needs to be present before a backup can be
restored, but the data directory is empty until after the backup has
been restored.  Thus, we have to use the installed server version,
rather than the data directory version.  This leaves a small window
where WAL-G may be configured to point to the wrong target if the
`postgresql-upgrade` script fails and thus does not trigger regenerating
the configuration file.  This could result in new WAL archives/backups
being uploaded to the old target location.  These files would be
incompatible with the other files in that location, and could
potentially overwrite existing files.  This is rather unlikely, since
the PostgreSQL server will not start if the _postgresql-upgrade.service_
failed.  The only time it should be possible is if the upgrade fails in
such a way that it leaves an empty but valid data directory, and then
the machine is rebooted.
2024-11-17 10:27:31 -06:00
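
A minimal sketch of what the `wal-g-config` script described above might
look like.  The file locations and the way the installed server version
is detected are assumptions, not the role's actual implementation:

    #!/bin/sh
    # Hypothetical sketch of wal-g-config: render wal-g.yml from the
    # wal-g.yml.in template by substituting the @PGVERSION@ specifier.
    set -eu

    template=/etc/wal-g.yml.in
    config=/etc/wal-g.yml

    # Use the *installed* server's major version (not PG_VERSION from the
    # data directory), e.g. "16" from "postgres (PostgreSQL) 16.4".
    pgversion=$(postgres --version | sed -E 's/.* ([0-9]+)(\.[0-9]+)?.*/\1/')

    # Replace every @PGVERSION@ occurrence to produce the final config.
    sed "s/@PGVERSION@/${pgversion}/g" "$template" > "$config"

With the specifier embedded in the template's S3 prefix, the target
location changes automatically once the new server version is installed
and the script is re-run.
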
Dustin a0c5ffc869 postgresql: Collect Wal-G metrics with statsd_exporter
WAL-G can send StatsD metrics when it completes a task such as a WAL
upload or a backup.  Using the `statsd_exporter`, we can capture these metrics and
make them available to Victoria Metrics.
2024-10-13 20:01:19 -05:00
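
Roughly how the pieces fit together, as a hedged sketch; the ports are
illustrative, and `WALG_STATSD_ADDRESS` is assumed to be the relevant
WAL-G setting:

    # Accept StatsD packets on UDP 9125 and expose them for
    # Prometheus/Victoria Metrics scrapes on TCP 9102.
    statsd_exporter --statsd.listen-udp=':9125' --web.listen-address=':9102' &

    # Point WAL-G at the exporter; it emits StatsD metrics when a task
    # such as a backup or WAL upload completes.
    WALG_STATSD_ADDRESS=localhost:9125 wal-g backup-push "$PGDATA"
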
Dustin 72936b3868 postgresql: Allow access by IPv6
Since LAN clients have IPv6 addresses now, some may try to connect to
the database over IPv6, so we need to allow this in the host-based
authentication rules.
2024-09-02 21:20:26 -05:00
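
For illustration only, the kind of `pg_hba.conf` entry this implies; the
address range, authentication method, and data directory path are
assumptions:

    # Append an IPv6 rule for LAN clients (example prefix and method only).
    cat >> /var/lib/pgsql/data/pg_hba.conf <<'EOF'
    # TYPE  DATABASE  USER  ADDRESS    METHOD
    host    all       all   fd00::/8   scram-sha-256
    EOF

    # Reload the server so the new HBA rule takes effect.
    sudo -u postgres psql -c 'SELECT pg_reload_conf();'
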
Dustin e323324c54 postgresql: Switch wal-g to use new MinIO server
Switching to the MinIO server on _chromie.pyrocufflink.blue_, since
_burp1.pyrocufflink.blue_ is being decommissioned.
2024-09-01 09:01:04 -05:00
Dustin 4f202c55e4 r/postgres-exporter: Deploy postgres-exporter
The [postgres-exporter][0] exposes PostgreSQL server statistics to
Prometheus.  It connects to a specified PostgreSQL server (in this
case, a server on the local machine via UNIX socket) and collects data
from `pg_stat_activity` and related statistics views.  It needs the
`pg_monitor` role in order to read the relevant metrics.

Since we're setting up the exporter to connect via UNIX socket, it needs
a dedicated OS user to match the PostgreSQL user in order to
authenticate via the _peer_ method.

[0]: https://github.com/prometheus-community/postgres_exporter/
2024-07-02 20:44:29 -05:00
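
A sketch of the prerequisites this describes, assuming an OS account and
database role both named `postgres_exporter` (the names, socket path, and
service setup in the role may differ):

    # Dedicated OS account so the exporter can authenticate over the
    # local UNIX socket using the "peer" method.
    useradd --system --shell /sbin/nologin postgres_exporter

    # Matching database role, granted the built-in pg_monitor role so it
    # can read pg_stat_activity and the other statistics views.
    sudo -u postgres psql <<'EOF'
    CREATE ROLE postgres_exporter LOGIN;
    GRANT pg_monitor TO postgres_exporter;
    EOF

    # Run the exporter against the local socket; DATA_SOURCE_NAME is the
    # exporter's connection-string variable.
    sudo -u postgres_exporter env \
        DATA_SOURCE_NAME='host=/var/run/postgresql user=postgres_exporter dbname=postgres' \
        postgres_exporter &
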
Dustin 3f5550ee6c postgresql: wal-g: Set PGHOST
By default, WAL-G tries to connect to the PostgreSQL server over TCP on
the loopback interface.  Our HBA configuration requires certificate
authentication for TCP connections, so we need to configure WAL-G to use
the UNIX socket instead.
2024-07-02 20:44:29 -05:00
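
The change amounts to something like the following; the socket directory
is an assumption and should match the server's `unix_socket_directories`
setting:

    # A PGHOST value starting with "/" tells libpq (and therefore WAL-G)
    # to connect through the UNIX socket in that directory, where the
    # certificate requirement for TCP connections does not apply.
    export PGHOST=/var/run/postgresql
    wal-g backup-push "$PGDATA"
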
Dustin 6caf28259e hosts: db0: Promote to primary
All data have been migrated from the PostgreSQL server in Kubernetes and
the three applications that used it (Firefly-III, Authelia, and Home
Assistant) have been updated to point to the new server.

To avoid commingling the backups from the old server with those from the
new server, we're reconfiguring WAL-G to push and pull from a new S3
prefix.
2024-07-02 20:44:29 -05:00
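
Illustratively, the separation comes down to pointing WAL-G at a new
prefix; the bucket and path below are made up, and `WALG_S3_PREFIX` is
assumed to be where the target is set:

    # Dedicated prefix for the new server's WAL archives and backups.
    export WALG_S3_PREFIX='s3://postgres-backups/db0'

    # Backups pushed or listed from here on use the new location only.
    wal-g backup-list
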
Dustin 208fadd2ba postgresql: Configure for dedicated DB servers
I am going to use the *postgresql* group for the dedicated database
servers.  The configuration for those machines will be quite different
from that of the one machine already in the group: the Nextcloud server.
Rather than undefining or overriding all the group-level settings at the
host level, I have removed the Nextcloud server from the *postgresql*
group and updated the `nextcloud.yml` playbook to apply the
*postgresql-server* role directly.

Eventually, I want to move the Nextcloud database to the central
database servers.  At that point, I will remove the *postgresql-server*
role from the `nextcloud.yml` playbook.
2024-07-02 20:44:29 -05:00