The _pyrocufflink.net_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
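For reference, a managed domain needs only a few directives. A minimal
sketch (the contact address is illustrative; `MDContactEmail` requires a
newer httpd, older versions fall back to `ServerAdmin`):

```apache
# Declare the managed domain; mod_md handles the ACME account,
# HTTP-01 challenge, and renewal automatically.
MDomain pyrocufflink.net
MDCertificateAgreement accepted
MDContactEmail webmaster@pyrocufflink.net

<VirtualHost *:443>
    ServerName pyrocufflink.net
    # No SSLCertificateFile/SSLCertificateKeyFile: mod_md supplies the
    # Let's Encrypt certificate for this name.
    SSLEngine on
</VirtualHost>
```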
To avoid having separate certificates for the canonical
_www.hatchlearningcenter.org_ site and all the redirects, we'll combine
these virtual hosts into one. We can use a `RewriteCond` to avoid the
redirect for the canonical name itself.
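A sketch of the combined virtual host (the alias list is illustrative):

```apache
<VirtualHost *:443>
    ServerName www.hatchlearningcenter.org
    ServerAlias hatchlearningcenter.org
    SSLEngine on

    # Redirect every requested name except the canonical one
    RewriteEngine on
    RewriteCond %{HTTP_HOST} !=www.hatchlearningcenter.org [NC]
    RewriteRule ^(.*)$ https://www.hatchlearningcenter.org$1 [R=301,L]
</VirtualHost>
```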
Because the reverse proxy does TLS pass-through instead of termination,
the original source address is lost. Since the source address is
important for logging, rate limiting, and access control, we need the
HAProxy PROXY protocol to pass it along to the web server.
Since the PROXY protocol works at the TCP layer, _all_ connections must
use it. Fortunately, all of the sites hosted by the public web server
are in fact public and only accessed through HAProxy. Moreover,
enabling it for one named virtual host enables it for all virtual hosts
on that port. Thus, we only have to explicitly set it for one site, and
all the rest will use it as well.
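Concretely, this is one line on each side; a sketch, assuming *web0* is
the backend web server:

```
# haproxy.cfg: send the PROXY protocol header to the web server
backend websites
    mode tcp
    server web0 web0.pyrocufflink.net:443 send-proxy-v2

# Apache (mod_remoteip): accept the PROXY header so logging, rate
# limiting, and access control see the real client address
RemoteIPProxyProtocol On
```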
We don't really use this site for screenshot sharing any more. It's
still fun to look back at old screenshots, though, so I've saved a
static snapshot of it that can be hosted by plain ol' Apache.
The _dustinandtabitha.com_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
The _dustin.hatch.name_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
The _chmod777.sh_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
The _apps.du5t1n.xyz_ site now obtains its certificate from Let's
Encrypt using the Apache _mod_md_ (managed domain) module. This
dramatically simplifies the deployment of this certificate, eliminating
the need for _cert-manager_ to obtain it, _cert-exporter_ to add it to
_certs.git_, and Jenkins to push it out to the web server.
I want to publish the _20125_ Status application to an F-Droid
repository to make it easy for Tabitha to install and update. F-Droid
repositories are similar to other package repositories: a collection of
packages and some metadata files. Although there is a fully-fledged
server-side software package that can manage F-Droid repositories, it's
not required: the metadata files can be pre-generated and then hosted by
a static web server just fine.
This commit adds configuration for the web server and reverse proxy to
host the F-Droid repository at _apps.du5t1n.xyz_.
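A sketch of the virtual host on the web server (the document root is an
assumption):

```apache
<VirtualHost *:443>
    ServerName apps.du5t1n.xyz
    SSLEngine on
    # Pre-generated F-Droid repo metadata and APKs; just static files
    DocumentRoot /srv/www/fdroid
    <Directory /srv/www/fdroid>
        Require all granted
    </Directory>
</VirtualHost>
```

The F-Droid client is then pointed at the repository URL, conventionally
under a */repo* path on the site.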
The summer 2024 enrollment form is more complicated than the other
forms on the HLC site, as it integrates directly with Invoice Ninja. As
such, it's handled by a different backend, which runs in Kubernetes.
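The routing looks roughly like this (the path and ingress name are
hypothetical):

```apache
# Only the enrollment endpoint goes to the Kubernetes backend;
# everything else stays with formsubmit/static content
ProxyPass        /submit/enroll https://ingress.pyrocufflink.net/hlc-enroll
ProxyPassReverse /submit/enroll https://ingress.pyrocufflink.net/hlc-enroll
```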
The *dustinandtabitha.com* website no longer uses *formsubmit* (the time
for RSVP has **long** passed). Removing the configuration means the
file encrypted with Ansible Vault can go away, too.
I've moved the Dark Chest of Wonders website to run in a container on
Kubernetes. This will keep it from breaking every time the OS on the
web server is updated and the version of Python in Fedora changes.
The HTTP->HTTPS redirect for chmod777.sh was only working by
coincidence. It needs its own virtual host to ensure it works
irrespective of how other websites are configured.
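Something like this, assuming www is also aliased:

```apache
<VirtualHost *:80>
    ServerName chmod777.sh
    ServerAlias www.chmod777.sh
    # Redirect only requests that actually asked for this name
    Redirect permanent / https://chmod777.sh/
</VirtualHost>
```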
Tabitha's Hatch Learning Center site has two user submission forms: one
for signing in/out students for class, and another for parents to
register new students for the program. These are handled by
*formsubmit* and store data in CSV spreadsheets.
Promoting the new site I have been working on at *dustin.hatch.is* to my
main domain, *dustin.hatch.name*. The new site is just static content,
generated and uploaded by a Jenkins job.
Finally have a certificate for *dustin.hatch.name* now, too!
To handle the RSVP form on *dustinandtabitha.com*, we are going to use
*formsubmit*. It runs on the same machine that hosts the website, so
there's no dealing with CORS. The */submit/rsvp* path, which is proxied
to the backend, is the RSVP form's target.
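A sketch of the proxy configuration, with the backend port as an
assumption:

```apache
# Forward only the form target to the local formsubmit process
ProxyPass        /submit/rsvp http://127.0.0.1:8000/submit/rsvp
ProxyPassReverse /submit/rsvp http://127.0.0.1:8000/submit/rsvp
```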
Since there are no other plain HTTP virtual hosts, the one defined for
chmod777.sh became the "default." Because it explicitly redirects all
requests to https://chmod777.sh, it caused all non-HTTPS requests to be
redirected there, regardless of the requested name. This was
particularly confusing for Tabitha, who frequently forgets to type
https://… and kept finding herself at my stupid blog instead of
Nextcloud.
The *websites/proxy-matrix* role configures the Internet-facing reverse
proxy to handle the *hatch.chat* domain. Most Matrix communication
happens over the default HTTPS port, and as such will be directed
through the reverse proxy.
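On the proxy, this is TCP-mode SNI inspection; a sketch (backend names
are illustrative):

```
frontend https-in
    mode tcp
    bind :443
    # Wait for the TLS ClientHello so SNI is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    use_backend matrix if { req.ssl_sni -m end hatch.chat }
    default_backend websites
```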
Because the various "webapp.*" users' home directories are under
`/srv/www`, the default SELinux context type is `httpd_sys_content_t`.
The SSH daemon is not allowed to read files with this label, so it
cannot load the contents of these users' `authorized_keys` files. To
address this, we have to explicitly set the SELinux type to
`ssh_home_t`.
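The equivalent done by hand looks like this (the path pattern is
illustrative):

```
# Persistent labeling rule for the webapp users' .ssh directories,
# then apply it to the existing files
semanage fcontext -a -t ssh_home_t '/srv/www/[^/]+/\.ssh(/.*)?'
restorecon -Rv /srv/www
```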
Changing/renewing a certificate generally requires restarting or
reloading some service. Since the *cert* role is intended to be generic
and reusable, it naturally does not know what action to take to effect
the change. It works well for the initial deployment of a new
application, since the service is reloaded anyway in order for the new
configuration to be applied. It fails, however, for continuous
enforcement, when a certificate is renewed automatically (i.e. by
`lego`) but no other changes are being made. This has caused a number
of disruptions when some certificate expires and its replacement is
available but has not yet been loaded.
To address this issue, I have added a handler "topic" notification to
the *cert* role. When either the certificate or private key file is
replaced, the relevant task will "notify" a generic handler "topic."
This allows some other role to define a specific handler, which
"listens" for these notifications, and takes the appropriate action for
its respective service.
For this mechanism to work, though, the *cert* role can only be used as
a dependency of another role. That role must define the handler and
configure it to listen to the generic "certificate changed" topic. As
such, each of the roles that are associated with a certificate deployed
by the *cert* role now declare it as a dependency, and the top-level
playbooks only include those roles.
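A sketch of the two halves, with illustrative file, task, and topic
names:

```yaml
# roles/cert/tasks/main.yml: notify the generic topic
- name: Install certificate
  ansible.builtin.copy:
    src: "{{ cert_name }}.pem"
    dest: /etc/pki/tls/certs/
  notify: certificate changed

# roles/websites/example/handlers/main.yml: take the service-specific
# action whenever the topic is notified
- name: Reload httpd
  ansible.builtin.service:
    name: httpd
    state: reloaded
  listen: certificate changed
```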
For some time, I have been trying to design a new configuration for the
reverse proxy on port 443 to correctly handle all the types of traffic
on that port. In the original implementation, all traffic on port 443
was forwarded by the gateway to HAProxy. HAProxy then used TLS SNI to
route connections to the correct backend server based on the requested
host name. This allowed both HTTPS and OpenVPN-over-TLS to use the same
port; however, it was not without issues. A layer 4 (TCP) proxy like
this "hides" the real source address of clients connecting to the
backend, which makes IP-based security (e.g. rate limiting, blacklists,
etc.) impossible at the application level. In particular, Nextcloud,
which implements rate limiting, was constantly imposing login delays on
all users, because legitimate traffic was indistinguishable from
Internet background noise.
To alleviate these issues, I needed to change the proxy to operate in
layer 7 (HTTP) mode, so that headers like *X-Forwarded-For* and
*X-Forwarded-Host* could be added. Unfortunately, this was not easy,
because of the simultaneous requirement to forward OpenVPN traffic.
HAProxy can only do SNI inspection in TCP mode. So, I began looking for
an alternate way to proxy both HTTP and non-HTTP traffic on the same
port.
The HTTP protocol defines the `CONNECT` method, which is used by forward
proxies to tunnel HTTPS over plain HTTP. OpenVPN clients support
tunneling OpenVPN over HTTP using this method as well. HAProxy has
limited support for the CONNECT method (i.e. it doesn't do DNS
resolution, and I could find no way of restricting the destination) with
the `http_proxy` option, so I looked for alternate proxy servers that
had more complete support. Unsurprisingly, Apache HTTPD has the most
complete implementation of the `CONNECT` method (Nginx doesn't support
it at all). Using a name-based virtual host on port 443, Apache will
accept requests for *vpn.pyrocufflink.net* (using TLS SNI) and allow the
clients to use the `CONNECT` method to create a tunnel to the OpenVPN
server. This requires OpenVPN clients to a) use *stunnel* to wrap plain
HTTP proxy connections in TLS and b) configure OpenVPN to use the
TLS-wrapped HTTP proxy.
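A sketch of the vhost, assuming the OpenVPN server listens on the
standard port; the `<Proxy>` blocks are my best guess at how the
destination is restricted:

```apache
<VirtualHost *:443>
    ServerName vpn.pyrocufflink.net
    SSLEngine on

    # Act as a forward proxy, but only for CONNECT to the OpenVPN port
    ProxyRequests On
    AllowCONNECT 1194
    <Proxy "*">
        Require all denied
    </Proxy>
    <Proxy "openvpn.pyrocufflink.net:1194">
        Require all granted
    </Proxy>
</VirtualHost>
```

On the client side, *stunnel* connects to *vpn.pyrocufflink.net:443*
and exposes a local plaintext proxy port, which the OpenVPN profile
references with its `http-proxy` directive.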
With Apache accepting all incoming connections, it was trivial to also
configure it as a layer 7 reverse proxy for Bitwarden, Gitea, Jenkins,
and Nextcloud. Unfortunately, proxying for the other websites
(darkchestofwonders.us, chmod777.sh, dustin.hatch.name) was not quite as
straightforward. These websites would need to have an internal name
that differed from their external name, and thus a certificate valid for
that name. Rather than reconfigure all of these sites and set all of
that up, I decided to just move the responsibility for handling direct
connections from outside to *web0* and eliminate the dedicated
reverse proxy. This was not possible before, because Apache could not
forward the OpenVPN traffic directly, but now with the forward proxy
configuration, there is no reason to have a separate server for these
connections.
Overall, I am pleased with how this turned out. It makes the OpenVPN
configuration simpler (*stunnel* no longer needs to run on the OpenVPN
server itself, since Apache is handling TLS termination), eliminates a
network hop for the websites, makes the reverse proxy configuration for
the other web applications much easier to understand, and resolves the
original issue of losing client connection information.
A name-based HTTP (not HTTPS) virtual host for *pyrocufflink.net* is
necessary to ensure requests are handled properly, now that there is
another HTTP virtual host (chmod777.sh) defined on the same server.
This commit updates the configuration for *pyrocufflink.net* to use the
wildcard certificate managed by *lego* instead of a unique certificate
managed by *certbot*.
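Wildcard certificates require the DNS-01 challenge, so the `lego`
invocation looks roughly like this (the DNS provider and email are
assumptions):

```
lego --email hostmaster@pyrocufflink.net \
     --dns route53 \
     --domains '*.pyrocufflink.net' --domains pyrocufflink.net \
     run
```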
*chmod777.sh* is a simple static website, generated by Hugo. It is
built and published from a Jenkins pipeline, which runs automatically
when new commits are pushed to Gitea.
The HTTPS certificate for this site is signed by Let's Encrypt and
managed by `lego` in the `certs` submodule.
The *websites/pyrocufflink.net* role configures the public web server to
host *pyrocufflink.net*. This site has two functions (sketched below):
* It redirects `/` to http://dustin.hatch.name/
* It proxies user home directories (i.e. /~dustin/) to the file server
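A sketch of both, with the file server name as an assumption:

```apache
<VirtualHost *:80>
    ServerName pyrocufflink.net
    # Only the root is redirected
    RedirectMatch ^/$ http://dustin.hatch.name/
    # User home directories are proxied to the file server
    ProxyPassMatch ^/(~[^/]+.*)$ http://files.pyrocufflink.net/$1
</VirtualHost>
```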
To support multiple websites with separate Let's Encrypt certificates,
the *certbot* role needs to be applied as a dependency of each
individual website role. This will allow each application to specify a
different value for `certbot_domains`.
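Each website role's `meta/main.yml` then looks something like this
(domain list illustrative):

```yaml
dependencies:
  - role: certbot
    certbot_domains:
      - darkchestofwonders.us
      - www.darkchestofwonders.us
```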
The *websites/darkchestofwonders.us* role prepares a machine to host
http://darkchestofwonders.us/. The website itself is published via rsync
by Jenkins.
The *websites/dustin.hatch.name* role configures a server to host
http://dustin.hatch.name/. The role only applies basic configuration;
the actual website application is published by Jenkins.