It turns out, we do NOT want to keep one single, global OIDC client data
structure. There are two major problems with this:
1. If the OIDC IdP happens to be unavailable when the process starts,
Rocket will fail to ignite and the process will exit. This is
unnecessary, since the only functionality that will be unavailable
without the IdP is new logins; existing sessions/tokens will still be
valid.
2. Identity providers can change keys, URLs, etc. at any time. If we
cache everything and never look it up again, all future login
attempts will fail until the server is restarted.
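The shape of the fix can be sketched with plain stdlib types (names like `ProviderMetadata` and the `fetch` closure are illustrative stand-ins, not the actual `openidconnect` types): discovery is attempted on demand, a success refreshes the cache, and a failure falls back to the last known-good copy, so an unavailable IdP only breaks new logins.

```rust
use std::sync::Mutex;

// Illustrative stand-in for the provider metadata the real client caches.
#[derive(Clone, Debug, PartialEq)]
struct ProviderMetadata {
    issuer: String,
}

// Holds the last successful discovery result; refreshed on demand rather
// than fetched once at startup.
struct OidcState {
    cached: Mutex<Option<ProviderMetadata>>,
}

impl OidcState {
    fn new() -> Self {
        Self { cached: Mutex::new(None) }
    }

    // `fetch` is a hypothetical discovery call; in the real code it would
    // hit the IdP's /.well-known/openid-configuration endpoint.
    fn metadata<F>(&self, fetch: F) -> Result<ProviderMetadata, String>
    where
        F: Fn() -> Result<ProviderMetadata, String>,
    {
        let mut cached = self.cached.lock().unwrap();
        match fetch() {
            Ok(fresh) => {
                // A successful discovery refreshes the cache, so key or
                // URL rotations at the IdP are picked up without a restart.
                *cached = Some(fresh.clone());
                Ok(fresh)
            }
            // If the IdP is down, fall back to the last known-good copy;
            // only a cold cache (fresh process start) propagates the error.
            Err(e) => cached.clone().ok_or(e),
        }
    }
}
```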
The official recommendation for caching OIDC IdP configuration and keys
is to use native HTTP cache control. Unfortunately, most IdPs
explicitly disable caching of their HTTP responses.
The `UserClaims` structure is an implementation detail of how the JWT
encoding process works. We do not need to expose the details of the
JWT, such as issuer, audience, and expiration, to the rest of the
application. Route handlers should only be concerned with the
information about the user, not the metadata about how the user was
authenticated.
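One way to draw that boundary (only `UserClaims` appears in the code; the field names and the separate `User` type here are assumptions for illustration) is a conversion that drops the JWT metadata before anything reaches a route handler:

```rust
// Internal claims structure mirroring the JWT payload. The fields beyond
// `sub` and `name` are standard JWT claims kept private to the auth layer.
struct UserClaims {
    sub: String,  // subject (user id)
    iss: String,  // issuer -- token metadata, irrelevant to handlers
    exp: u64,     // expiration -- likewise internal
    name: String,
}

// What route handlers actually see: information about the user only.
#[derive(Debug, PartialEq)]
struct User {
    id: String,
    name: String,
}

impl From<UserClaims> for User {
    fn from(claims: UserClaims) -> Self {
        // Deliberately discards iss/exp so handlers cannot depend on them.
        User { id: claims.sub, name: claims.name }
    }
}
```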
This commit adds two path operations, *GET /login* and *GET
/oidc-callback*, which initiate and complete the OpenID Connect login
flow, respectively. Only the *Authorization Code* flow is supported,
since this is the only flow implemented by Authelia.
There is quite a bit of boilerplate required to fully implement an OIDC
relying party, especially in Rust. The documentation for
`openidconnect` is decent, but it still took quite a bit of trial and
error to get everything working.
After successfully finishing the OIDC login, the client will receive a
cookie containing a JWT that can be used for further communication with
the server. We're not using the OIDC tokens themselves for
authorization.
For development and testing, Dex is a simple and convenient OIDC IdP.
The only caveat is that its configuration file must list the TCP port
clients will use to connect to it, meaning we cannot use Podman's
dynamic port allocation like we do for Meilisearch. Ultimately, this
just means the integration tests will fail if another process is
already listening on port 5556.
For some reason, using the `thiserror::Error` derive macro breaks
syntax highlighting for the rest of the code in the file, at least in
Neovim. Keeping all the errors in one module confines this effect to
that one file.
When reading the Meilisearch token from the file specified in the
configuration, we need to ensure any whitespace is trimmed from the
string. If the token file was created with a text editor, or even a
shell pipeline, it is likely to have a trailing newline character. If
we do not remove this, authenticated requests to Meilisearch will fail.
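The fix amounts to a `trim` when loading the token; a minimal sketch (the function name and call site are illustrative, not the actual code):

```rust
use std::fs;
use std::io;

// Read the Meilisearch API token from a file, stripping surrounding
// whitespace. A text editor or `echo "$TOKEN" > file` leaves a trailing
// newline that would otherwise end up in the Authorization header.
fn read_token(path: &str) -> io::Result<String> {
    Ok(fs::read_to_string(path)?.trim().to_string())
}
```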
The `run-tests.sh` script sets up a full environment for the integration
tests. This includes starting Meilisearch (with a master key to enable
authentication) and generating an ephemeral JWT secret. After the tests
are run, the environment is cleaned up.
```sh
just test
just unit-tests
just integration-tests
```
Refactoring the code a bit here to make the `Rocket` instance available
to the integration tests. To do this, we have to convert to a library
crate (`lib.rs`) with an executable entry point (`main.rs`). This
allows the tests, which are separate crates, to import types and
functions from the library.
Besides splitting the `rocket` function into two parts (one in `lib.rs`
that creates the `Rocket<Build>` and another in `main.rs` that becomes
the process entry point), I have reworked the initialization process to
make better use of Rocket's "fairings" feature. We don't want to call
`process::exit()` in a test, so if there is a problem reading the
configuration or initializing the context, we need to report it to
Rocket instead.
We'll use a JWT in the `Authorization` request header to identify the
user saving a page. The token will need to be set in the _authorization
token_ field in the SingleFile configuration so it will be included when
uploading.
The default messages printed when the process panics because the
configuration could not be loaded or the application context could not
be initialized are somewhat difficult to read. Instead of calling
`unwrap` in these cases, we need to explicitly handle the errors and
print more appropriate messages.
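The pattern looks roughly like this (the helper name, stage labels, and error strings are illustrative; the real code has its own error types):

```rust
use std::fmt::Display;

// Build a readable one-line startup failure message, instead of the
// panic output from `unwrap`, which buries the cause in backtrace noise.
fn startup_error(stage: &str, err: impl Display) -> String {
    format!("error: failed to {stage}: {err}")
}

// In `main`, each fallible startup step reports and exits cleanly:
//
// let config = match load_config() {
//     Ok(config) => config,
//     Err(e) => {
//         eprintln!("{}", startup_error("load configuration", e));
//         std::process::exit(1);
//     }
// };
```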