This is _also_ required for _setuptools_scm_ to work. I suspect all
this time it hasn't been working, just silently. They must have changed
something in 9.1.1 to make it fail loudly.
Setting messages to expire after 10 minutes without being consumed. If
they haven't been consumed by then, there must be something wrong with
the host provisioner. Since each host provisioner process only
processes a single message, placing more messages onto the queue without
an expiration will cause a backlog of messages that cannot be processed.
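A sketch of what the 10-minute expiry might look like, assuming _pika_ as the AMQP client (the client library isn't named above). AMQP expresses a per-message TTL as a string of milliseconds in the message's `expiration` property:

```python
def expiration_ms(minutes: int) -> str:
    """AMQP per-message TTL: a string of milliseconds."""
    return str(minutes * 60 * 1000)


def publish_with_ttl(channel, queue: str, body: bytes, minutes: int = 10) -> None:
    """Publish a message the broker drops if nothing consumes it in `minutes`."""
    import pika  # assumed client; imported here so the helper above stays dependency-free

    channel.basic_publish(
        exchange="",
        routing_key=queue,
        body=body,
        properties=pika.BasicProperties(expiration=expiration_ms(minutes)),
    )
```

Queue-level `x-message-ttl` arguments would achieve the same thing broker-side; the per-message property keeps the policy with the publisher.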
In order to support testing new policy on a development branch, the
_POST /host/online_ hook now accepts an optional `branch` parameter.
The value of this parameter is passed to the host provisioner via the
host info message published on the message queue. In this way, new
machines can request a specific branch of the policy, providing a
method for automated testing prior to merging the branch into the
main development line.
How hosts themselves know what branch to request is of course another
matter, and will depend on how they are provisioned, whether they are
physical or virtual, etc.
The `AMQPContext` now supports reading connection and authentication
information from environment variables, allowing it to connect to
RabbitMQ servers other than the default (`localhost:5672`, user
_guest_). It supports plain and TLS connection modes, as well as plain
username+password or EXTERNAL authentication.
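Roughly what that environment handling might look like — the variable names below are assumptions for illustration, not necessarily the ones `AMQPContext` actually reads:

```python
import os
from dataclasses import dataclass


@dataclass
class AMQPSettings:
    host: str
    port: int
    user: str
    tls: bool


def settings_from_env(env=os.environ) -> AMQPSettings:
    """Fall back to the stock RabbitMQ defaults when nothing is set."""
    return AMQPSettings(
        host=env.get("AMQP_HOST", "localhost"),
        port=int(env.get("AMQP_PORT", "5672")),
        user=env.get("AMQP_USER", "guest"),
        tls=env.get("AMQP_TLS", "") == "1",
    )
```

Taking the environment mapping as a parameter keeps the function trivially testable without mutating the real process environment.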
The _POST /host/online_ webhook now creates a Kubernetes Job to run the
host provisioner. The Job resource is defined in a YAML document, and
will be created in the Kubernetes namespace specified by the
`ANSIBLE_JOB_NAMESPACE` environment variable (defaults to `ansible`).
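As a sketch, the Job resource could be built like this before being submitted through the Kubernetes API (the container name and image are placeholders, not the real ones):

```python
import os


def job_manifest(hostname: str) -> dict:
    """Build the provisioner Job; namespace comes from ANSIBLE_JOB_NAMESPACE."""
    namespace = os.environ.get("ANSIBLE_JOB_NAMESPACE", "ansible")
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {
            "generateName": f"provision-{hostname}-",  # unique name per run
            "namespace": namespace,
        },
        "spec": {
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        {
                            "name": "provisioner",
                            "image": "example/host-provisioner",  # placeholder image
                        }
                    ],
                }
            }
        },
    }
```

`generateName` avoids collisions when the same host triggers the webhook more than once.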
When a new machine is provisioned, it will trigger the _POST
/host/online_ webhook, indicating that it is online and ready to be
provisioned via configuration policy. It submits its hostname and SSH
public keys so the Ansible controller can connect to it. This
information is passed to the controller via an AMQP message, published
to a queue which the controller will consume in order to begin
provisioning.
The controller itself will eventually be scheduled as a Kubernetes Job.
I want to get an alert whenever a new transaction is added to Firefly.
This will be particularly helpful now that _xactmon_ is creating
transactions automatically based on notifications from Commerce, Chase,
etc.
These notifications are really only useful for real-time monitoring of
builds starting and finishing. There's no reason to cache them for
clients who were not connected when they were originally sent.
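ntfy supports exactly this through its `Cache: no` request header on publish. A sketch, with placeholder server and topic:

```python
from urllib.request import Request


def build_notification(topic: str, message: str) -> Request:
    """Build an ntfy publish request that the server will not cache."""
    return Request(
        f"https://ntfy.example.org/{topic}",  # placeholder ntfy server
        data=message.encode(),
        headers={"Cache": "no", "Title": "Jenkins build"},
        method="POST",
    )
```

Clients connected at send time still receive the message; it simply isn't replayed to anyone who subscribes later.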
Using the [Generic Event Plugin][0], we can receive a notification from
Jenkins when builds start and finish. We'll relay these to *ntfy* on a
unique topic that I will subscribe to on my desktop. That way, I can
get desktop notifications about jobs while I am working, which will be
particularly useful while developing and troubleshooting pipelines.
[0]: https://plugins.jenkins.io/generic-event/
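The relay itself could be a small mapping from the plugin's event payload to an ntfy message. The payload field names below are assumptions about what the Generic Event Plugin sends, not a documented schema:

```python
def event_to_ntfy(event: dict, topic: str) -> dict:
    """Translate a Jenkins build event into an ntfy publish (sketch)."""
    job = event.get("jobName", "unknown job")    # assumed field name
    phase = event.get("phase", "unknown phase")  # e.g. started/finished; assumed
    return {
        "topic": topic,
        "title": f"Jenkins: {job}",
        "message": f"Build {phase}",
    }
```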
The *POST /sshkeys/sign* operation accepts a host name and a list of SSH
host public keys and returns a signed SSH host certificate for each key.
It uses the `step ssh certificate` command to sign the certificates,
which in turn contacts the configured *step-ca* service. This operation
will allow hosts to obtain their initial certificates. Once obtained,
the certificates can be renewed directly using the `step ssh renew`
command with the SSH private keys themselves for authentication.
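A sketch of the per-key signing loop; the flags here are a plausible guess at the `step ssh certificate` invocation, not copied from the implementation:

```python
import subprocess


def sign_command(hostname: str, key_path: str) -> list[str]:
    """argv for signing one host public key (flags are a best guess at the step CLI)."""
    return ["step", "ssh", "certificate", "--host", "--sign", hostname, key_path]


def sign_host_keys(hostname: str, key_paths: list[str]) -> None:
    """One certificate per submitted public key, as the operation describes."""
    for path in key_paths:
        subprocess.run(sign_command(hostname, path), check=True)
```

Keeping the argv builder separate from the `subprocess.run` call makes the command construction testable without a *step-ca* service on hand.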