Commit Graph

2 Commits (f531b03e7ccab5dcea0ff512a658ac64ab26021a)

Author SHA1 Message Date
Dustin c48076b8f0 test: Adjust k8s roles for integration tests
Initially, I thought it was necessary to use a ClusterRole in order to
assign permissions in one namespace to a service account in another.  It
turns out this is not necessary: a RoleBinding's subjects can refer to
service accounts in any namespace.  Thus, we can limit the privileges of
the *dynk8s-provisioner* service account by only allowing it access to
the Secret and ConfigMap resources in the *kube-system* and *kube-public*
namespaces, respectively, plus the Secret resources in its own
namespace.
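
For illustration, a rough sketch of the pattern (the resource names,
verbs, and service account namespace below are placeholders, not the
actual contents of `setup.yaml`): a namespaced Role in *kube-system*
scoped to Secrets, bound to a service account that lives in a different
namespace.

```yaml
# Hypothetical sketch, not the real tests/setup.yaml: a Role in kube-system
# that only permits read access to Secrets...
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dynk8s-provisioner-secrets   # illustrative name
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
# ...and a RoleBinding in the same namespace whose subject is a service
# account from another namespace, which is why no ClusterRole is needed.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dynk8s-provisioner-secrets
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dynk8s-provisioner-secrets
subjects:
  - kind: ServiceAccount
    name: dynk8s-provisioner          # assumed service account name
    namespace: dynk8s-provisioner     # assumed namespace
```
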
2022-10-11 21:08:49 -05:00
Dustin d85f314a8b tests: Begin integration tests
Cargo uses the sources in the `tests` directory to build and run
integration tests.  For each `tests/foo.rs` or `tests/foo/main.rs`, it
creates an executable that runs the test functions therein.  These
executables are separate crates from the main package, and thus do not
have access to its private members.  Integration tests are expected to
test only the public functionality of the package.
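
A minimal sketch of a file following this convention (the file and
function names are illustrative):

```rust
// tests/foo.rs -- compiled by Cargo as a separate crate and run as its own
// test executable, so only the package's public items are reachable here.

#[test]
fn some_integration_test() {
    // Each #[test] function in this file becomes a test case in the
    // generated executable (run with `cargo test`).
    assert_eq!(1 + 1, 2);
}
```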

Application (binary) crates do not expose a public API to other crates;
their public interface is the command line.  Integration tests would
typically run the command (e.g. using `std::process::Command`) and test
its output.
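
For example, a hypothetical sketch of that approach (the binary target
name and the `--version` flag are assumptions for illustration):

```rust
// Hypothetical only: drive an application crate through its command line.
use std::process::Command;

#[test]
fn reports_a_version() {
    // CARGO_BIN_EXE_<name> is set by Cargo when building integration tests.
    let exe = env!("CARGO_BIN_EXE_dynk8s-provisioner");
    let output = Command::new(exe)
        .arg("--version")
        .output()
        .expect("failed to execute the binary");
    assert!(output.status.success());
}
```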

Since *dynk8s-provisioner* is not really a command-line tool, testing it
this way would be difficult; each test would need to start the server,
make requests to it, and then stop it.  This would be slow and
cumbersome.

In order to avoid this tedium and be able to use Rocket's built-in test
client, I have converted *dynk8s-provisioner* into a library crate that
also includes an executable.  The library makes the `rocket` function
public, which allows the integration tests to import it and pass it to
the Rocket test client.
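
A sketch of what such a test can look like with Rocket's blocking test
client (the import assumes the default `dynk8s_provisioner` library
name; the route and expected status are placeholders, not the
application's real API):

```rust
// tests/http.rs -- illustrative; the route and status are placeholders.
use dynk8s_provisioner::rocket;
use rocket::http::Status;
use rocket::local::blocking::Client;

#[test]
fn server_responds() {
    // Build the application from the public `rocket()` constructor and
    // hand it to Rocket's built-in test client.
    let client = Client::tracked(rocket()).expect("valid Rocket instance");
    let response = client.get("/").dispatch();
    assert_eq!(response.status(), Status::Ok);
}
```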

The point of integration tests, of course, is to validate the
functionality of the application as a whole.  This necessarily requires
allowing it to communicate with the Kubernetes API.  In the Jenkins CI
environment, the application will need the appropriate credentials, and
will need to use a Kubernetes namespace separate from the production
deployment.  The `setup.yaml` manifest in the `tests` directory defines
the resources necessary to run integration tests, and the
`genkubeconfig.sh` script can be used to create the appropriate
kubeconfig file containing the credentials.  The kubeconfig is exposed
to the tests via the `KUBECONFIG` environment variable, which is
populated from a Jenkins secret file credential.
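
An illustrative sketch of how a test can pick this up (the helper and
skip behaviour are examples, not part of the test suite):

```rust
// Illustrative only: skip Kubernetes-backed assertions when no kubeconfig
// has been provided via the KUBECONFIG environment variable.
use std::env;
use std::path::PathBuf;

fn test_kubeconfig() -> Option<PathBuf> {
    // In Jenkins, KUBECONFIG is populated from a secret file credential.
    env::var_os("KUBECONFIG").map(PathBuf::from)
}

#[test]
fn kubeconfig_is_available() {
    if let Some(path) = test_kubeconfig() {
        assert!(path.exists(), "KUBECONFIG points at a missing file");
    } else {
        eprintln!("KUBECONFIG not set; skipping Kubernetes-backed checks");
    }
}
```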

Note: The `data` directory moved from `test` to `tests` to avoid
duplication and confusing names.
2022-10-07 07:37:20 -05:00