We don't want to re-pull public container images that already exist on
the node. Doing so prevents pods from starting if there is any
connectivity issue with the upstream registry.
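Assuming this comes down to the image pull policy, a minimal sketch of
the change (the workload and image names here are only illustrative):

```yaml
# Sketch: only pull the image if it is not already present on the node.
spec:
  containers:
    - name: authelia
      image: authelia/authelia
      imagePullPolicy: IfNotPresent
```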
If there is an issue with the in-cluster database server, accessing the
Kubernetes API becomes impossible by normal means. This is because the
Kubernetes API uses Authelia for authentication and authorization, and
Authelia relies on the in-cluster database server. To solve this
chicken-and-egg scenario, I've set up a dedicated PostgreSQL database
server on a virtual machine, totally external to the Kubernetes cluster.
With this commit, I have changed the Authelia configuration to point at
this new database server. The contents of the new database server were
restored from a backup of the in-cluster server, so all of Authelia's
state was migrated automatically. Thus, updating the configuration is
all that is necessary to switch to using it.
The new server uses certificate-based authentication. In order for
Authelia to access it, it needs a certificate issued by the
_postgresql-ca_ ClusterIssuer, managed by _cert-manager_. Although the
environment variables for pointing to the certificate and private key
are not listed explicitly in the Authelia documentation, their names
can be inferred from the configuration document schema and work as
expected.
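The client certificate request is roughly the following cert-manager
Certificate; the names and namespace here are assumptions, not the
exact manifest:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: authelia-postgresql-client   # assumed name
  namespace: authelia                # assumed namespace
spec:
  secretName: authelia-postgresql-client
  commonName: authelia               # assumed to match the database role
  usages:
    - client auth
  issuerRef:
    name: postgresql-ca
    kind: ClusterIssuer
```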
By default, Authelia uses a local SQLite database for persistent data
(e.g. authenticator keys, TOTP secrets, etc.) and keeps session data in
memory. Together, these have some undesirable side effects. First,
needing access to the filesystem to store the SQLite database means
that the pod has to be managed by a StatefulSet. Restarting
StatefulSet pods means stopping them all and then starting them back up,
which causes downtime. Additionally, the SQLite database file needs to
be backed up, which I never got around to setting up. Further, any time
the service is restarted, all sessions are invalidated, so users have to
sign back in.
All of these issues can be resolved by configuring Authelia to store all
of its state externally. The persistent data can be stored in a
PostgreSQL database and the session state can be stored in Redis. Using
a database managed by the existing Postgres Operator infrastructure
automatically enables high availability and backups as well.
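A minimal sketch of the relevant Authelia settings, assuming the host
names and database name (the exact schema keys may differ between
Authelia versions):

```yaml
storage:
  postgres:
    host: authelia-postgresql   # assumed service for the operator-managed cluster
    port: 5432
    database: authelia
    username: authelia
session:
  redis:
    host: authelia-redis        # assumed Redis service name
    port: 6379
```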
To migrate the contents of the database, I used [pgloader]. With
Authelia shut down, I ran the migration job. Authelia's database schema
is pretty simple, so there were no problems with the conversion.
Authelia started back up with the new database configuration without any
issues.
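The migration job looked roughly like the following; the image, paths,
and connection string are assumptions rather than the exact manifest:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: authelia-pgloader
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pgloader
          image: dimitri/pgloader                 # assumed image
          command:
            - pgloader
            - sqlite:///data/db.sqlite3           # assumed path on the old volume
            - postgresql://authelia@authelia-postgresql/authelia
          volumeMounts:
            - name: authelia-data
              mountPath: /data
      volumes:
        - name: authelia-data
          persistentVolumeClaim:
            claimName: authelia-data              # assumed PVC name
```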
Session state is still stored only in the memory of the Redis process.
This is probably fine, since Redis will not need to be restarted often,
except
for updates. At least restarting Authelia to adjust its configuration
will not log everyone out.
[pgloader]: https://pgloader.readthedocs.io/en/latest/ref/sqlite.html
Without `disableNameSuffixHash` enabled, Kustomize will create a new,
uniquely-named ConfigMap any time the contents of the source file
change. It will also update any Deployment, StatefulSet, etc. resources
to point to the new
ConfigMap. This has the effect of restarting any pods that refer to the
ConfigMap whenever its contents change.
I had avoided using this initially because Kustomize does *not* delete
previous ConfigMap resources whenever it creates a new one. Now that we
have Argo CD, though, this is not an issue, as it will clean up the old
resources whenever it synchronizes.
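For reference, the generator stanza now looks something like this
(ConfigMap and file names are illustrative); dropping the previously-set
`generatorOptions` restores the default suffix-hash behavior:

```yaml
configMapGenerator:
  - name: authelia-config
    files:
      - configuration.yml

# Previously set, now removed:
# generatorOptions:
#   disableNameSuffixHash: true
```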
Enabling OpenID Connect authentication for the Kubernetes API server
will allow clients, particularly `kubectl`, to log in without needing
TLS certificates and private keys.
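How the flags get set depends on how the API server is run; with
kubeadm, for example, it would look roughly like this (the issuer URL
and client ID are assumptions):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: https://auth.example.com   # assumed Authelia URL
    oidc-client-id: kubernetes                  # assumed client ID
    oidc-username-claim: preferred_username
    oidc-groups-claim: groups
```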
Authelia can act as an OpenID Connect identity provider. This allows
it to provide authentication and authorization for other applications,
beyond the in-cluster services that already use it for Ingress
authentication.
To start with, we'll configure an OIDC client for Jenkins.
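A sketch of what the client registration looks like in Authelia's
configuration; the Jenkins hostname and policy are assumptions, and the
exact key names vary between Authelia versions:

```yaml
identity_providers:
  oidc:
    clients:
      - id: jenkins
        description: Jenkins
        secret: <hashed client secret>      # supplied separately, not committed
        authorization_policy: two_factor    # assumed policy
        redirect_uris:
          - https://jenkins.example.com/securityRealm/finishLogin
        scopes:
          - openid
          - profile
          - email
          - groups
```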
Authelia is a general authentication provider that works (primarily)
by integrating with *nginx* using its subrequest mechanism. It works
great with Kubernetes/*ingress-nginx* to provide authentication for
services running in the cluster, especially those that do not provide
their own authentication system.
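With *ingress-nginx*, protecting a service comes down to a pair of
annotations on its Ingress, roughly like the following (the hostnames
are assumptions):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: http://authelia.authelia.svc.cluster.local/api/verify
    nginx.ingress.kubernetes.io/auth-signin: https://auth.example.com
```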
Authelia needs a database to store session data. It supports various
engines, but since we're only running a very small instance with no real
need for HA, SQLite on a Longhorn persistent volume is sufficient.
Configuration is done mostly through a YAML document, although some
secret values are stored in separate files, which are pointed to by
environment variables.
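For example, secret values follow Authelia's `*_FILE` environment
variable convention, pointing at files mounted from a Kubernetes Secret
(the mount path here is an assumption):

```yaml
env:
  - name: AUTHELIA_JWT_SECRET_FILE
    value: /secrets/jwt_secret
  - name: AUTHELIA_SESSION_SECRET_FILE
    value: /secrets/session_secret
```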