
Cluster Setup

  • Fedora 35
  • Fedora Kubernetes packages 1.22

Installation

For control plane nodes, use the fedora-k8s-ctrl.ks kickstart file. For worker nodes, use fedora-k8s-node.ks.

Use virt-install to create the virtual machines.

Control Plane

name=k8s-ctrl0; virt-install \
    --name ${name} \
    --memory 4096 \
    --vcpus 2 \
    --cpu host \
    --location http://dl.fedoraproject.org/pub/fedora/linux/releases/35/Everything/x86_64/os \
    --extra-args "ip=::::${name}::dhcp inst.ks=http://rosalina.pyrocufflink.blue/~dustin/kickstart/fedora-k8s-ctrl.ks" \
    --os-variant fedora34 \
    --disk pool=default,size=16,cache=none \
    --network network=kube,model=virtio,mac=52:54:00:be:29:76 \
    --sound none \
    --redirdev none \
    --rng /dev/urandom \
    --noautoconsole \
    --wait -1
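
The installer runs unattended; to follow its progress, attach to the VM's serial console (Ctrl+] detaches):

virsh console ${name}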

Worker

Be sure to set the correct MAC address for each node!

name=k8s-amd64-n0; virt-install \
    --name ${name} \
    --memory 4096 \
    --vcpus 2 \
    --cpu host \
    --location http://dl.fedoraproject.org/pub/fedora/linux/releases/35/Everything/x86_64/os \
    --extra-args "ip=::::${name}::dhcp inst.ks=http://rosalina.pyrocufflink.blue/~dustin/kickstart/fedora-k8s-node.ks" \
    --os-variant fedora34 \
    --disk pool=default,size=64,cache=none \
    --disk pool=default,size=256,cache=none \
    --network network=kube,model=virtio,mac=52:54:00:67:ce:35 \
    --sound none \
    --redirdev none \
    --rng /dev/urandom \
    --noautoconsole \
    --wait -1
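
If several workers are needed, a loop like the one below avoids repeating the command by hand. This is only a sketch: the MAC address for k8s-amd64-n1 is a placeholder and must be replaced with the address actually reserved for that node.

# MAC for k8s-amd64-n1 is a placeholder; substitute the reserved address.
declare -A macs=(
    [k8s-amd64-n0]=52:54:00:67:ce:35
    [k8s-amd64-n1]=52:54:00:00:00:01
)
for name in "${!macs[@]}"; do
    virt-install \
        --name ${name} \
        --memory 4096 \
        --vcpus 2 \
        --cpu host \
        --location http://dl.fedoraproject.org/pub/fedora/linux/releases/35/Everything/x86_64/os \
        --extra-args "ip=::::${name}::dhcp inst.ks=http://rosalina.pyrocufflink.blue/~dustin/kickstart/fedora-k8s-node.ks" \
        --os-variant fedora34 \
        --disk pool=default,size=64,cache=none \
        --disk pool=default,size=256,cache=none \
        --network network=kube,model=virtio,mac=${macs[$name]} \
        --sound none \
        --redirdev none \
        --rng /dev/urandom \
        --noautoconsole \
        --wait -1
done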

Machine Setup

Add to pyrocufflink.blue domain:

ansible-playbook \
    -l k8s-ctrl0.pyrocufflink.blue \
    remount.yml \
    base.yml \
    hostname.yml \
    pyrocufflink.yml \
    -e ansible_host=172.30.0.170 \
    -u root \
    -e @join.creds
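
Run the same playbooks against each worker node as well, substituting its hostname and address (the IP below is a placeholder):

ansible-playbook \
    -l k8s-amd64-n0.pyrocufflink.blue \
    remount.yml \
    base.yml \
    hostname.yml \
    pyrocufflink.yml \
    -e ansible_host=172.30.0.171 \
    -u root \
    -e @join.creds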

Initialize Cluster

Run on k8s-ctrl0.pyrocufflink.blue:

kubeadm init \
    --control-plane-endpoint kubernetes.pyrocufflink.blue \
    --upload-certs \
    --kubernetes-version=$(rpm -q --qf '%{V}' kubernetes-node) \
    --pod-network-cidr=10.149.0.0/16
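
When it finishes, kubeadm prints instructions for configuring kubectl; as root, pointing KUBECONFIG at the generated admin credentials is sufficient for the steps below:

export KUBECONFIG=/etc/kubernetes/admin.conf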

Configure Pod Networking

Calico seems to be the best choice: it is among the most feature-complete CNI plugins, and a couple of performance benchmarks put it at or near the top.

curl -fL \
    -O 'https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml' \
    -O 'https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml'
sed -i 's/192\.168\.0\.0\/16/10.149.0.0\/16/' custom-resources.yaml
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml

Wait for Calico to deploy completely, then restart CoreDNS:

kubectl wait -n calico-system --for=condition=ready \
    $(kubectl get pods -n calico-system -l k8s-app=calico-node -o name)
kubectl -n kube-system rollout restart deployment coredns
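
Once the network is up, the node should report Ready and the CoreDNS pods should be running:

kubectl get nodes
kubectl get pods -n kube-system -l k8s-app=kube-dns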

Add Worker Nodes

kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:…
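
The bootstrap token from kubeadm init expires after 24 hours. If it has expired, generate a new one (and print the full join command) on an existing control plane node:

kubeadm token create --print-join-command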

Add Control Plane Nodes

kubeadm join kubernetes.pyrocufflink.blue:6443 \
    --token xxxxxx.xxxxxxxxxxxxxxxx \
    --discovery-token-ca-cert-hash sha256:… \
    --control-plane \
    --certificate-key …
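
The certificate key uploaded by kubeadm init --upload-certs is only valid for two hours. To generate a fresh one for additional control plane nodes:

kubeadm init phase upload-certs --upload-certs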

Create Admin User

cat > kubeadm-user.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
controlPlaneEndpoint: kubernetes.pyrocufflink.blue:6443
certificatesDir: /etc/kubernetes/pki
EOF
kubeadm kubeconfig user \
    --client-name dustin \
    --config kubeadm-user.yaml \
    --org system:masters \
    > dustin.kubeconfig
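
To verify the new credentials:

KUBECONFIG=dustin.kubeconfig kubectl get nodes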