Jenkins needs to be able to patch the Deployment to trigger a restart
after it builds a new container image for _dch-webhooks_.
Note that this manifest must be applied on its own **without
Kustomize**. Kustomize seems to think the `dch-webhooks` in
`resourceNames` refers to the ConfigMap it manages and "helpfully"
renames it with the name suffix hash. The name in `resourceNames` is
_not_ the ConfigMap, though, and there's no way to tell Kustomize that.
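The rule in question looks roughly like this (a sketch; the Role name
and namespace are assumptions, and the real manifest may differ):

```yaml
# Sketch of the permission Jenkins needs; the names here are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter   # hypothetical name
  namespace: dch-webhooks      # assumed namespace
rules:
  - apiGroups: [apps]
    resources: [deployments]
    resourceNames: [dch-webhooks]  # Kustomize mistakes this for its generated ConfigMap
    verbs: [get, patch]
```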
The _k8s-worker_ Ansible role in the configuration policy now uses the
Kubernetes API to create bootstrap tokens for adding worker nodes to the
cluster. For this to work, the pod running the host-provisioner must be
associated with a service account that has permission to create Secrets
in the `kube-system` namespace, where bootstrap tokens are stored, and
to read the `cluster-info` ConfigMap in `kube-public`.
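Roughly, the RBAC that implies looks like this (the Role names are made
up; the real manifests may differ):

```yaml
# Sketch only: the Role names below are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: bootstrap-token-creator   # hypothetical name
  namespace: kube-system          # bootstrap tokens live here as Secrets
rules:
  - apiGroups: [""]
    resources: [secrets]
    verbs: [create]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-info-reader       # hypothetical name
  namespace: kube-public          # the cluster-info ConfigMap lives here
rules:
  - apiGroups: [""]
    resources: [configmaps]
    resourceNames: [cluster-info]
    verbs: [get]
```

Corresponding RoleBindings in each namespace tie these to the
host-provisioner pod's service account.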
By default, the _pyrocufflink_ Ansible inventory plugin ignores VMs
whose names begin with `test-`. This keeps Jenkins from trying, and
failing, to apply policy to machines it should not be managing. The host
provisioner job, though, should apply policy to those machines, so we
need to disable that filter.
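In the host-provisioner job's inventory configuration, that looks
something like the following; the option name here is a guess, not the
plugin's actual interface:

```yaml
# Hypothetical inventory source; the option name is a guess at the plugin's setting.
plugin: pyrocufflink
# Do not skip VMs whose names begin with test-
exclude_test_vms: false
```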
The *dch-webhooks* server now has a _POST /host/online_ hook that can
be triggered by a new machine when it first comes online. This hook
starts an automatic provisioning process by creating a Kubernetes Job
to run Ansible and publishing information about the host to provision
via AMQP. Thus, the server now needs access to the Kubernetes API in
order to create the Job and access to RabbitMQ in order to publish the
task parameters.
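On the Kubernetes side, that boils down to a Role something like this (a
sketch with assumed names):

```yaml
# Sketch of the Job-creation permission dch-webhooks needs; names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: provision-job-creator   # hypothetical name
  namespace: dch-webhooks       # assumed namespace
rules:
  - apiGroups: [batch]
    resources: [jobs]
    verbs: [create, get, list, watch]
```

The RabbitMQ side is just connection credentials in the pod's
configuration; there's no cluster-side permission to grant for that.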
The *dch-webhooks* tool now provides an operation for hosts to request a
signed SSH certificate from the SSH CA. It's primarily useful for
unattended deployments, such as hosts provisioned with CoreOS Ignition,
which do not have
any credentials to authenticate with the CA directly.
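As a rough sketch of how an Ignition-provisioned host could call it,
assuming a hook path like `/host/ssh-certificate` and a placeholder URL
(neither is necessarily the real endpoint), a Butane config might
include a one-shot unit like this:

```yaml
# Butane sketch: fetch a signed SSH host certificate at first boot.
# The URL, hook path, and key/certificate locations are all assumptions.
# Ordering against host-key generation is elided for brevity.
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: request-ssh-cert.service
      enabled: true
      contents: |
        [Unit]
        Description=Request a signed SSH host certificate from dch-webhooks
        After=network-online.target
        Wants=network-online.target
        ConditionPathExists=!/etc/ssh/ssh_host_ed25519_key-cert.pub

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/curl -fsS --data-binary @/etc/ssh/ssh_host_ed25519_key.pub -o /etc/ssh/ssh_host_ed25519_key-cert.pub https://webhooks.example.com/host/ssh-certificate

        [Install]
        WantedBy=multi-user.target
```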
The *dch-webhooks* service is a generic tool I've written to handle
various automation flows. It started with a single feature: when a
transaction is created in Firefly-III, it searches Paperless-ngx for a
matching receipt, and if found, attaches it to the transaction.