|
|
|
|
|
|
|
|
|
|
|
|
|
## Introduction |
|
|
|
|
|
|
|
|
|
The following guide shows how to set up an IPv6-only cluster at ungleich.
|
|
|
|
|
|
|
|
Initialise with all components: |
|
|
|
|
|
|
|
|
|
```
# without a config file (the kubelet ends up with the wrong cgroup driver):
kubeadm init --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64

# with a config file (recommended):
kubeadm init --config kubeadm-config.yaml
```
|
|
|
|
|
|
|
|
|
We need to specify the **--config** option to inject the correct cgroup driver!
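The contents of **kubeadm-config.yaml** are not shown in this guide; a minimal sketch, assuming kubeadm's v1beta2 config API and reusing the CIDRs from above, could look like this (the `cgroupDriver` setting is the point of the exercise):

```
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: 2a0a:e5c0:13:aaa::/108
  podSubnet: 2a0a:e5c0:13:bbb::/64
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
```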
|
|
|
|
|
|
|
|
|
We cannot yet skip kube-proxy, because Calico does not support eBPF
for IPv6. Cilium does support eBPF for IPv6, but does not
support automatic BGP peering. So the following **does not** work:
|
|
|
```
kubeadm init --skip-phases=addon/kube-proxy --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
```
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Worker nodes |
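The join itself follows the usual kubeadm flow; a sketch, where the token and hash placeholders come from the control plane node:

```
# on the control plane: print a ready-made join command
kubeadm token create --print-join-command

# on each worker: join using the printed token and CA cert hash
kubeadm join [<control-plane-ipv6>]:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```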
|
|
|
|
|
|
|
|
To test the cluster, you can apply the included **nginx-test-deployment.yaml**:
|
|
|
|
|
``` |
|
|
|
|
kubectl apply -f nginx-test-deployment.yaml |
|
|
|
|
``` |
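You can then watch the pods come up and check that they received IPv6 addresses from the pod CIDR:

```
kubectl get pods -o wide
```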
|
|
|
|
|
|
|
|
|
## Kubevirt |
|
|
|
|
|
|
|
|
|
Based on https://kubevirt.io/user-guide/operations/installation/: |
|
|
|
|
|
|
|
|
|
``` |
|
|
|
|
export RELEASE=v0.35.0 |
|
|
|
|
# Deploy the KubeVirt operator |
|
|
|
|
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml |
|
|
|
|
# Create the KubeVirt CR (instance deployment request) which triggers the actual installation |
|
|
|
|
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml |
|
|
|
|
``` |
|
|
|
|
|
|
|
|
|
This step, however, never completes:
|
|
|
|
|
|
|
|
|
``` |
|
|
|
|
# wait until all KubeVirt components are up |
|
|
|
|
$ kubectl -n kubevirt wait kv kubevirt --for condition=Available |
|
|
|
|
``` |
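Instead of waiting for a condition that never turns Available, the state of the installation can be inspected by hand:

```
kubectl -n kubevirt get pods
kubectl -n kubevirt describe kv kubevirt
```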
|
|
|
|
|
|
|
|
|
## Old/obsolete information |
|
|
|
|
|
|
|
|
|
### Alpine / kubelet hack |
|
|
|
|
|
|
|
|
|
Due to some misconfiguration on Alpine, **DURING** the **kubeadm
init** we need to modify the **generated**
/var/lib/kubelet/config.yaml to replace "cgroupDriver: systemd" with
"cgroupDriver: cgroupfs".
|
|
|
|
The same is necessary on the worker nodes; however, there it can be done
anytime after the **kubeadm join** request, as long as it happens before
you plan to schedule containers on them.
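For reference, the replacement amounted to a one-line sed, demonstrated here on a scratch copy (on a real node the target file would be /var/lib/kubelet/config.yaml):

```
# demonstrate the replacement on a scratch copy of the relevant line;
# on the node itself the target file is /var/lib/kubelet/config.yaml
cfg=$(mktemp)
printf 'cgroupDriver: systemd\n' > "$cfg"
sed -i 's/cgroupDriver: systemd/cgroupDriver: cgroupfs/' "$cfg"
cat "$cfg"
```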
|
|
|
|
|
|
|
|
|
**THIS is fixed if we use a kubeadm config file specifying the cgroupdriver**. |
|
|
|
|