We're going to be using Longhorn for persistent storage. Longhorn allocates space on worker nodes and exposes iSCSI LUNs to other worker nodes. It creates sparse filesystem images under `/var/lib/longhorn` for each volume, so each worker node needs a large filesystem mounted at that path for Longhorn to use. Using two different kickstart scripts, one for the control plane nodes and one for the worker nodes, we can mount the Longhorn data directory only on machines that will run the Longhorn manager. Note that Longhorn only supports the *ext4* and *XFS* filesystem types.
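The worker-node kickstart presumably carves out a dedicated volume for Longhorn. A minimal sketch of what that fragment might look like (the device layout, sizes, and volume group name are assumptions for illustration, not the actual contents of `fedora-k8s-node.ks`):

```
# Assumed worker-node storage layout: give most of the disk to Longhorn
part /boot --size=1024 --fstype=ext4
part pv.01 --grow
volgroup vg0 pv.01
logvol / --vgname=vg0 --size=20480 --fstype=xfs --name=root
# Longhorn data directory; must be ext4 or XFS
logvol /var/lib/longhorn --vgname=vg0 --percent=100 --fstype=xfs --name=longhorn
```

The control-plane kickstart would simply omit the `/var/lib/longhorn` logical volume.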
# Cluster Setup
- Fedora 35
- Fedora Kubernetes packages 1.22
## Installation
For control plane nodes, use the `fedora-k8s-ctrl.ks` kickstart file. For
worker nodes, use `fedora-k8s-node.ks`.
## Machine Setup
Add to pyrocufflink.blue domain:
```sh
ansible-playbook \
    -l k8s-ctrl0.pyrocufflink.blue \
    remount.yml \
    base.yml \
    hostname.yml \
    pyrocufflink.yml \
    -e ansible_host=172.30.0.170 \
    -u root \
    -e @join.creds
```
## Initialize cluster
Run on k8s-ctrl0.pyrocufflink.blue:
```sh
kubeadm init \
    --control-plane-endpoint kubernetes.pyrocufflink.blue \
    --upload-certs \
    --kubernetes-version=$(rpm -q --qf '%{VERSION}' kubernetes-node) \
    --pod-network-cidr=10.149.0.0/16
```
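Once `kubeadm init` succeeds, it prints follow-up instructions. The usual next step, to run `kubectl` as the cluster admin from a regular account, is:

```sh
# Copy the admin kubeconfig that kubeadm generated into the current
# user's home directory and take ownership of it
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```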
## Configure Pod Networking
Calico seems to be the best choice: it is the most feature-complete option, and a couple of performance benchmarks put it at or near the top.
```sh
curl -fL \
    -O 'https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml' \
    -O 'https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml'
sed -i 's/192\.168\.0\.0\/16/10.149.0.0\/16/' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
```
Wait for Calico to deploy completely, then restart CoreDNS:
```sh
calico_node=$(kubectl get pods -n calico-system -l k8s-app=calico-node -o name)
kubectl wait -n calico-system --for=condition=ready $calico_node
kubectl -n kube-system rollout restart deployment coredns
unset calico_node
```
## Add Worker Nodes
```sh
kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:…
```
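Bootstrap tokens expire (by default after 24 hours). If the token from `kubeadm init` is no longer valid, a new one, along with the full join command including the CA certificate hash, can be generated on an existing control plane node:

```sh
# Create a fresh bootstrap token and print a complete, ready-to-run
# "kubeadm join" command for worker nodes
kubeadm token create --print-join-command
```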
## Add Control Plane Nodes
```sh
kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:… \
    --control-plane \
    --certificate-key …
```
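The certificate key uploaded by `kubeadm init --upload-certs` expires after two hours. If it has lapsed, re-upload the control plane certificates and obtain a fresh key on an existing control plane node:

```sh
# Re-encrypt and upload the control plane certificates;
# prints a new certificate key for --certificate-key
kubeadm init phase upload-certs --upload-certs
```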
## Create Admin user
```sh
cat > kubeadm-user.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: kubernetes.pyrocufflink.blue:6443
certificatesDir: /etc/kubernetes/pki
EOF
```
```sh
kubeadm kubeconfig user \
    --client-name dustin \
    --config kubeadm-user.yaml \
    --org system:masters \
    > dustin.kubeconfig
```
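To confirm the generated credentials work, point `kubectl` at the new kubeconfig; membership in `system:masters` grants full cluster-admin access:

```sh
# Any read operation will do as a smoke test of the new credentials
kubectl --kubeconfig dustin.kubeconfig get nodes
```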