kubernetes: Manage worker nodes

So far, I have been managing Kubernetes worker nodes with Fedora CoreOS
Ignition, but I have decided to move everything back to Fedora and
Ansible.  I like the idea of an immutable operating system, but the FCOS
implementation is not really what I want.  I like the automated updates,
but that can be accomplished with _dnf-automatic_.  I do _not_ like
giving up control of when to upgrade to the next Fedora release.
Mostly, I never came up with a good way to manage application-level
configuration on FCOS machines.  None of my experiments (Cue+tmpl,
KCL+etcd+Luci) were successful, so I mostly ended up managing
configuration on each node by hand.  Managing OS-level configuration is
also rather cumbersome, since it requires redeploying the machine
entirely.  Altogether, I just don't think FCOS fits my model of managing
systems.
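
The automated-updates piece mentioned above can be handled with
_dnf-automatic_; a minimal sketch of `/etc/dnf/automatic.conf` (option
names follow the stock Fedora dnf-automatic configuration, and the exact
values here are illustrative, not taken from this commit):

```ini
# /etc/dnf/automatic.conf -- sketch only
[commands]
# download *and* apply updates, not just notify
apply_updates = yes
upgrade_type = default

[emitters]
# report applied updates in the message of the day
emit_via = motd
```

The timer is then enabled with `systemctl enable --now dnf-automatic.timer`.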

This commit introduces a new playbook, `kubernetes.yml`, and a handful of
new roles to manage Kubernetes worker nodes running Fedora Linux.  It
also adds two new deploy scripts, `k8s-node.sh` and `k8s-longhorn.sh`,
which fully automate the process of bringing up worker nodes.
commit 0f600b9e6e (parent 164f3b5e0f)
2024-11-21 06:24:53 -06:00
18 changed files with 377 additions and 1 deletion

deploy/k8s-longhorn.sh (new file, 45 lines)

@@ -0,0 +1,45 @@
#!/bin/sh
# vim: set ts=4 sw=4 noet :
name=${1:-stor-$(diceware -n1 --no-caps)}
hostname=${name}.k8s.pyrocufflink.black
# create the VM only if it does not already exist
if ! virsh list --all --name | grep -qF "${name}"; then
	./newvm.sh ${name} \
		--fedora 40 \
		--memory 4096 \
		--vcpus 4 \
		--no-console \
		--network network=kube \
		-- \
		--disk pool=default,size=4,cache=none \
		--disk pool=default,size=8,cache=none \
		--disk pool=default,size=512,cache=none \
		|| exit
	sleep 15
fi
# add the new host to the Ansible inventory if it is not listed yet
if ! grep -q "${hostname}" hosts; then
	sed -i '/\[k8s-longhorn\]/a '"${hostname}" hosts
fi
ansible-playbook \
	-l ${hostname} \
	wait-for-host.yml \
	|| exit
printf 'Waiting for SSH host certificate to be signed ... '
until ssh-keyscan -c ${hostname} 2>/dev/null | grep -q cert; do
	sleep 1
done
echo done
ansible-playbook \
	-l ${hostname} \
	bootstrap.yml \
	datavol.yml \
	kubernetes.yml \
	collectd.yml \
	btop.yml \
	-u root \
	|| exit

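The inventory-update step in the script above (a `grep` guard plus GNU
sed's `a` append command) can be exercised on its own; here is a sketch
against a throwaway hosts file, assuming GNU sed (`sed -i`) — the host
and section names are illustrative:

```shell
#!/bin/sh
# Sketch of the inventory-append technique: add a hostname under an
# INI-style [k8s-longhorn] section only if it is not already present.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
[k8s-node]
node-one.k8s.pyrocufflink.black

[k8s-longhorn]
stor-old.k8s.pyrocufflink.black
EOF
hostname=stor-new.k8s.pyrocufflink.black
if ! grep -q "${hostname}" "$hosts"; then
	# GNU sed: append the hostname on the line after the section header
	sed -i '/\[k8s-longhorn\]/a '"${hostname}" "$hosts"
fi
# show the section header and the newly-added host
grep -A1 '\[k8s-longhorn\]' "$hosts"
```

Running the block a second time is a no-op, since the `grep` guard sees
the hostname already in the file.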
deploy/k8s-node.sh (new file, 47 lines)

@@ -0,0 +1,47 @@
#!/bin/sh
# vim: set ts=4 sw=4 noet :
name=${1:-node-$(diceware -n1 --no-caps)}
hostname=${name}.k8s.pyrocufflink.black
# create the VM only if it does not already exist
if ! virsh list --all --name | grep -qF "${name}"; then
	./newvm.sh ${name} \
		--domain k8s.pyrocufflink.black \
		--fedora 40 \
		--memory 16384 \
		--vcpus 8 \
		--no-console \
		--network network=kube \
		-- \
		--network network=storage \
		--disk pool=default,size=32,cache=none \
		--disk pool=default,size=32,cache=none \
		|| exit
	sleep 15
fi
# add the new host to the Ansible inventory if it is not listed yet
if ! grep -q "${hostname}" hosts; then
	sed -i '/\[k8s-node\]/a '"${hostname}" hosts
fi
ansible-playbook \
	-l ${hostname} \
	wait-for-host.yml \
	|| exit
printf 'Waiting for SSH host certificate to be signed ... '
until ssh-keyscan -c ${hostname} 2>/dev/null | grep -q cert; do
	sleep 1
done
echo done
ansible-playbook \
	-l ${hostname} \
	bootstrap.yml \
	datavol.yml \
	users.yml \
	kubernetes.yml \
	collectd.yml \
	btop.yml \
	-u root \
	|| exit
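
The "wait for the SSH host certificate" loop in both scripts is an
instance of a general poll-until-ready pattern.  A sketch of a reusable
helper — `wait_for` and its arguments are hypothetical, not part of
these scripts:

```shell
#!/bin/sh
# wait_for CMD [TRIES]: run CMD once per second until it succeeds,
# giving up (non-zero exit) after TRIES attempts (default 30).
wait_for() {
	_tries=${2:-30}
	while ! eval "$1"; do
		_tries=$((_tries - 1))
		[ "$_tries" -gt 0 ] || return 1
		sleep 1
	done
}

# example: wait for a file to exist (it already does, so this is instant)
marker=$(mktemp)
wait_for "test -e $marker" 5 && echo "ready"
```

In the deploy scripts, the command would be the
`ssh-keyscan -c ${hostname} 2>/dev/null | grep -q cert` pipeline.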