I've noticed that, from time to time, the container storage volume accumulates "dangling" containers: paths under `/var/lib/containers/storage/overlay` that contain data in their `diff` sub-directory but nothing else, and that do not appear to be mounted into any running container. I have not identified what causes this, nor a simple and reliable way to clean up the individual paths. Fortunately, wiping the entire container storage graph with `crio wipe` seems to work well enough. The `crio-clean.sh` script safely wipes the container storage graph on a given node: it first drains the node, then stops any containers that are still running, then uses `crio wipe` to clean the entire storage graph, and finally reboots the node so that Kubernetes can reschedule the pods that were stopped.
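As a rough illustration of the kind of steps involved (not the script itself), assuming `kubectl`, `crictl`, and `crio` are available and the node name is passed as the first argument:

```sh
#!/bin/sh
# Hypothetical sketch of a crio-clean-style procedure; NOT the actual crio-clean.sh.
set -eu

NODE="$1"  # assumption: the node name is passed as the first argument

# Evict the node's pods so Kubernetes reschedules them on other nodes.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# The remaining steps run on the node itself: stop the kubelet, stop any
# containers that are still running, then stop CRI-O.
systemctl stop kubelet
crictl ps -q | xargs -r crictl stop
systemctl stop crio

# Wipe the entire container storage graph.
crio wipe

# Reboot the node; once it is back up and uncordoned, Kubernetes can schedule
# pods onto it again.
systemctl reboot
```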
Dustin's Kubernetes Cluster
This repository contains resources for deploying and managing my on-premises Kubernetes cluster.
Cluster Setup
The cluster consists primarily of libvirt/QEMU+KVM virtual machines: both the control plane nodes and the x86_64 worker nodes are VMs. Eventually, I would like to add Raspberry Pi or Pine64 machines as aarch64 nodes.
All machines run Fedora, using only Fedora builds of the Kubernetes components
(kubeadm, kubelet, and kubectl).
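For illustration, installing the components on a node might look like the following; the package names are an assumption and the exact names vary between Fedora releases:

```sh
# Hypothetical example: install the Fedora-packaged Kubernetes components.
# The package names below are assumed and differ between Fedora releases.
sudo dnf install kubernetes-kubeadm kubernetes-client kubernetes-node

# Enable the kubelet so it starts on boot; kubeadm init/join handles the rest
# of the node bootstrap.
sudo systemctl enable kubelet
```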
See Cluster Setup for details.
Jenkins Agents
One of the main use cases for the Kubernetes cluster is providing dynamic build agents for Jenkins. Using the Kubernetes Plugin, Jenkins automatically launches agents as Kubernetes pods.
See Jenkins Kubernetes Integration for details.
Persistent Storage
Persistent storage for pods is provided by Longhorn, which runs within the cluster, provisions storage on the worker nodes, and makes it available to pods over iSCSI.
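As a minimal sketch, assuming the default `longhorn` StorageClass that a standard Longhorn install creates, a pod can request a Longhorn-backed volume with an ordinary PersistentVolumeClaim:

```sh
# Hypothetical example: request a Longhorn-backed volume, assuming the
# "longhorn" StorageClass created by a default Longhorn installation.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
EOF
```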
See Persistent Storage Using Longhorn for details.