Introduction
The following guide shows how to set up an IPv6-only cluster at ungleich.
Steps
- Boot Alpine
- Configure the node with cdist so that cri-o is set up
Control plane
Initialise with all components:
kubeadm init --config kubeadm-config.yaml
We need to specify the --config option to inject the correct cgroup driver!
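The repository ships a kubeadm-config.yaml for this. A minimal sketch of what such a file can look like, assuming the service/pod CIDRs used below and the cgroupfs driver discussed at the end of this document (the actual file in this repo may contain more settings):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  # IPv6-only service and pod ranges, as used in the commands below
  serviceSubnet: 2a0a:e5c0:13:aaa::/108
  podSubnet: 2a0a:e5c0:13:bbb::/64
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# this Alpine/cri-o setup uses cgroupfs instead of systemd (see the Alpine/kubelet hack below)
cgroupDriver: cgroupfs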
We cannot yet skip kube-proxy, because Calico does not support eBPF for IPv6. Cilium supports IPv6 eBPF, but does not support automatic BGP peering. So the following does not work:
kubeadm init --skip-phases=addon/kube-proxy --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
kubeadm init --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
Worker nodes
kubeadm join [2a0a:e5c0:13:0:225:b3ff:fe20:38cc]:6443 --token bw3x98.chp31kcgcd4b5fpf --discovery-token-ca-cert-hash sha256:...
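If the token has expired or the join command is no longer at hand, it can be regenerated on the control plane node:

kubeadm token create --print-join-command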
CNI/networking
kubectl apply -f calico.yaml
Warning: the manifest needs to be updated, as applying it prints deprecation warnings:
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
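For an IPv6-only pod network, calico.yaml has to enable IPv6 in the calico-node DaemonSet. A sketch of the environment variables typically involved (the CIDR mirrors the pod network used above; the included manifest may differ in detail):

- name: IP
  value: "none"
- name: IP6
  value: "autodetect"
- name: CALICO_IPV6POOL_CIDR
  value: "2a0a:e5c0:13:bbb::/64"
- name: FELIX_IPV6SUPPORT
  value: "true"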
Checking pods:
[21:53] server47.place7:~/v3-calico# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6d8ccdbf46-4xzz9 0/1 Pending 0 60s
calico-node-5gkp9 0/1 Init:0/3 0 60s
calico-node-8lct9 0/1 Init:0/3 0 60s
calico-node-jmjhn 0/1 Init:0/3 0 60s
calico-node-krnzr 0/1 Init:ErrImagePull 0 60s
coredns-558bd4d5db-4rvrf 0/1 Pending 0 3m40s
coredns-558bd4d5db-g9lbx 0/1 Pending 0 3m40s
etcd-server47 1/1 Running 0 3m56s
kube-apiserver-server47 1/1 Running 0 3m55s
kube-controller-manager-server47 1/1 Running 0 3m56s
kube-scheduler-server47 1/1 Running 0 3m55s
[21:54] server47.place7:~/v3-calico#
Getting calicoctl
kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml
And alias it:
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
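A quick check that the alias works and that Calico sees the nodes and the IPv6 pool:

calicoctl get nodes
calicoctl get ippool -o wide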
Configuring BGP routing
calicoctl create -f - < bgp....yaml
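Calico's BGP setup usually consists of a BGPConfiguration resource plus one or more BGPPeer resources. A sketch with placeholder AS numbers and peer address (not the values used at ungleich; the real ones live in the bgp*.yaml files in this repo):

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 65534                      # placeholder local AS
  serviceClusterIPs:
    - cidr: 2a0a:e5c0:13:aaa::/108     # advertise the service CIDR used above
---
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  peerIP: 2a0a:e5c0:13::1              # placeholder router address
  asNumber: 65533                      # placeholder peer AS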
Set up a test deployment
Do NOT use https://k8s.io/examples/application/deployment.yaml. It contains an outdated nginx container that has no IPv6 listener. You will get results such as
[19:03] server47.place7:~/ungleich-k8s/v3-calico# curl http://[2a0a:e5c0:13:bbb:176b:eaa6:6d47:1c41]
curl: (7) Failed to connect to 2a0a:e5c0:13:bbb:176b:eaa6:6d47:1c41 port 80: Connection refused
if you use that deployment. Instead use something along the lines of the included nginx-test-deployment.yaml:
kubectl apply -f nginx-test-deployment.yaml
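To verify, look up the pod's IPv6 address and curl it directly (the address below is a placeholder; use the one shown by kubectl):

kubectl get pods -o wide
curl http://[<pod-ipv6-address>]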
Kubevirt
Based on https://kubevirt.io/user-guide/operations/installation/:
export RELEASE=v0.41.0
# Deploy the KubeVirt operator
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
# Create the KubeVirt CR (instance deployment request) which triggers the actual installation
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
In this setup, the following wait never completes:
# wait until all KubeVirt components are up
$ kubectl -n kubevirt wait kv kubevirt --for condition=Available
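To see what is blocking it, inspect the KubeVirt pods and the conditions on the KubeVirt CR (plain kubectl, nothing specific to this setup):

kubectl -n kubevirt get pods
kubectl -n kubevirt get kv kubevirt -o yaml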
Old/obsolete information
Alpine / kubelet hack
Due to some misconfiguration on Alpine, DURING the kubeadm init we need to modify the generated /var/lib/kubelet/config.yaml to replace "cgroupDriver: systemd" with "cgroupDriver: cgroupfs".
The same is necessary on the worker nodes; there, however, it can be done any time after the kubeadm join, as long as it happens before you schedule containers on them.
THIS is fixed if we use a kubeadm config file specifying the cgroup driver (see above).
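For reference, the manual replacement can be scripted; a sketch, assuming kubelet runs as an OpenRC service called kubelet on Alpine:

sed -i 's/cgroupDriver: systemd/cgroupDriver: cgroupfs/' /var/lib/kubelet/config.yaml
rc-service kubelet restart   # assumption: kubelet is managed by OpenRC on Alpine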
Calico notes
Requires
mount --make-shared /sys
Kubevirt nodes
Requires
mount --make-shared /
Manual / post-boot changes for place7-v2 cluster
mount --make-shared /
mount --make-shared /sys
sysctl net.ipv6.conf.eth0.accept_ra=2
modprobe br_netfilter
sysctl -p /etc/sysctl.d/99-Z-sysctl-cdist.conf
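The mounts and the module load do not survive a reboot. One way to persist them on Alpine is a local startup script (an assumption about the local setup; the sysctl settings are already persisted by cdist via /etc/sysctl.d):

cat > /etc/local.d/k8s-prep.start <<'EOF'
#!/bin/sh
# re-apply the post-boot changes listed above
mount --make-shared /
mount --make-shared /sys
modprobe br_netfilter
sysctl net.ipv6.conf.eth0.accept_ra=2
EOF
chmod +x /etc/local.d/k8s-prep.start
rc-update add local default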
Docker-based
mount --make-shared /sys
mount --make-shared /run/