kubernetes/setup
Dustin 8b8ae3df04 setup: Use kickstart instead of Ansible
Kubernetes, or rather mostly Calico, does not play well on a machine
with an immutable root filesystem.  Specifically, Calico needs write
access to a couple of paths on the root filesystem, such as
`/etc/cni/net.d`, `/opt/cni/bin`, and
`/usr/libexec/kubernetes/kubelet-plugins/volume`.  Some of those paths
can be configured, but doing so is quite cumbersome.  While these paths
could be made writable, e.g. using symlinks or bind mounts, it would add
a lot of complexity to the *kubelet* Ansible role.  After considering
the options for a while, I decided that the best approach was probably
to mount specific filesystems at these paths.  Instead of using small
LVM logical volumes for each one, I thought it would be better to use a
single *btrfs* filesystem for all the mutable storage locations.  This
way, if I discover more paths that need to be writable, I can create
subvolumes for them, without having to try to move or resize the
existing volumes.

Now that the Kubernetes nodes need their own special kickstart file for
the disk layout, it also makes sense to handle the rest of the machine
setup there, too.  This eliminates the need for the *kubelet* Ansible
role altogether.  Any machine provisioned with this kickstart
configuration is immediately ready to become a Kubernetes control plane
or worker node.
2022-07-26 22:29:50 -05:00
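The subvolume layout described above might look something like this in kickstart syntax (a sketch only; the sizes, labels, and subvolume names here are assumptions, not taken from the actual `fedora-k8s.ks`):

```
# Single btrfs volume backing all mutable paths (sizes/labels hypothetical)
part btrfs.mutable --size=8192
btrfs none --label=mutable btrfs.mutable
# One subvolume per path Calico needs to write to
btrfs /etc/cni/net.d --subvol --name=cni-netd LABEL=mutable
btrfs /opt/cni/bin --subvol --name=cni-bin LABEL=mutable
btrfs /usr/libexec/kubernetes/kubelet-plugins/volume --subvol --name=kubelet-volume LABEL=mutable
```

Additional subvolumes can later be added to the same volume without repartitioning, which is the main advantage over per-path LVM logical volumes.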

README.md

Cluster Setup

  • Fedora 35
  • Fedora Kubernetes packages 1.22

Installation

Use the fedora-k8s.ks kickstart file
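The kickstart file can be passed to Anaconda on the installer kernel command line via the `inst.ks=` option; the URL below is a placeholder, not the actual location of the file:

```
inst.ks=https://<server>/fedora-k8s.ks
```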

Machine Setup

Add to pyrocufflink.blue domain:

ansible-playbook \
    -l k8s-amd64-ctrl0.pyrocufflink.blue \
    remount.yml \
    bootstrap.yml \
    pyrocufflink.yml \
    -e ansible_host=172.30.0.167/28 \
    -u root \
    -e @join.creds

Initialize cluster

Run on k8s-ctrl0.pyrocufflink.blue:

kubeadm init \
    --control-plane-endpoint kubernetes.pyrocufflink.blue \
    --upload-certs \
    --kubernetes-version=$(rpm -q --qf '%{V}' kubernetes-node) \
    --pod-network-cidr=10.149.0.0/16
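Once `kubeadm init` finishes, it prints instructions for copying the admin kubeconfig into place so `kubectl` can be used from a regular account, along the lines of:

```
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```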

Configure Pod Networking

Calico seems to be the best choice, based on its feature completeness; a couple of performance benchmarks also put it basically at the top.

curl -fL \
    -O 'https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml' \
    -O 'https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml'
sed -i 's/192\.168\.0\.0\/16/10.149.0.0\/16/' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Wait for Calico to deploy completely, then restart CoreDNS:

calico_node=$(kubectl get pods -n calico-system -l k8s-app=calico-node -o name)
kubectl wait -n calico-system --for=condition=ready $calico_node
kubectl -n kube-system rollout restart deployment coredns
unset calico_node

Add Worker Nodes

kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:…
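If the original bootstrap token has expired, a fresh token and the matching discovery hash can be printed on any control plane node with:

```
kubeadm token create --print-join-command
```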

Add Control Plane Nodes

kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:… \
    --control-plane \
    --certificate-key …
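The certificate key uploaded by `--upload-certs` only lives for a couple of hours.  If it has expired, re-upload the control plane certificates and print a new key with:

```
kubeadm init phase upload-certs --upload-certs
```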

Create Admin User

cat > kubeadm-user.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: kubernetes.pyrocufflink.blue:6443
certificatesDir: /etc/kubernetes/pki
EOF
kubeadm kubeconfig user \
    --client-name dustin \
    --config kubeadm-user.yaml \
    --org system:masters \
    > dustin.kubeconfig
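The generated kubeconfig can be verified by pointing `kubectl` at it; since the user is in the `system:masters` group, it has full cluster-admin access:

```
KUBECONFIG=dustin.kubeconfig kubectl get nodes
```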