Merge branch 'master' of code.ungleich.ch:ungleich-public/ungleich-k8s
commit 2ea8169e60
3 changed files with 64 additions and 2 deletions

@@ -1,15 +1,40 @@

## Introduction

The following guide shows how to set up an IPv6-only cluster at ungleich.

## Steps

- Boot Alpine
- Configure with cdist to get cri-o configured (see the sketch below)
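
The cdist invocation for that last step is the usual one; the manifest and types that set up cri-o are site-specific and not shown here, so the host below is only a placeholder:

```
# apply the cdist configuration (cri-o, kubelet, etc.) to a freshly booted node
cdist config -v <node-fqdn>
```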

## Control plane

Initialise with all components:

```
kubeadm init --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
```
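
Once the init has finished, the cluster can be reached with the admin kubeconfig that kubeadm writes out; a quick sanity check could look like this (standard kubeadm paths, nothing specific to this setup):

```
# use the admin credentials generated by kubeadm init
export KUBECONFIG=/etc/kubernetes/admin.conf

# the control plane node will show up, but stays NotReady until calico is installed
kubectl get nodes -o wide
kubectl get pods -A
```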

We cannot yet skip kube-proxy, because calico does not support eBPF
for IPv6. Cilium supports IPv6 eBPF, but on the other hand does not
support automatic BGP peering. So the following **does not** work:

```
kubeadm init --skip-phases=addon/kube-proxy --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
kubeadm init --service-cidr 2a0a:e5c0:13:aaa::/108 --pod-network-cidr 2a0a:e5c0:13:bbb::/64
```

## Alpine / kubelet hack

Due to some misconfiguration on Alpine, **DURING** the **kubeadm
init** we need to modify the **generated**
/var/lib/kubelet/config.yaml to replace "cgroupDriver: systemd" with
"cgroupDriver: cgroupfs".

The same is necessary on the worker nodes; however, that can be done
any time before you plan to schedule containers on them, after the
**kubeadm join** request.
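
A minimal way to apply that change, assuming BusyBox/GNU sed and kubelet running as an OpenRC service (the Alpine default); on the control plane this has to happen from a second shell while **kubeadm init** is still waiting for the kubelet:

```
# swap the cgroup driver in the kubeadm-generated kubelet config
sed -i 's/cgroupDriver: systemd/cgroupDriver: cgroupfs/' /var/lib/kubelet/config.yaml

# restart kubelet so it picks up the changed driver
rc-service kubelet restart
```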

## Worker nodes

```
```
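
Joining the workers uses the command printed at the end of **kubeadm init**; schematically it looks like this (the control plane address, token and CA hash are placeholders, not real values):

```
# run on each worker; kubeadm init prints the exact command including token and hash
kubeadm join [<control-plane-ipv6>]:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```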
@@ -67,3 +92,21 @@ alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"

```
calicoctl create -f - < bgp....yaml
```
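
To verify that the BGP settings were accepted, they can be read back through the same alias; assuming the applied resource is the usual default BGPConfiguration, something along these lines should show it:

```
# show the applied BGP configuration (AS number, service CIDRs, node-to-node mesh)
calicoctl get bgpconfiguration default -o yaml

# list any explicitly configured BGP peers
calicoctl get bgpPeer
```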

## Set up a test deployment

Do *NOT* use https://k8s.io/examples/application/deployment.yaml. It
contains an outdated nginx container that has no IPv6 listener. You
will get results such as

```
[19:03] server47.place7:~/ungleich-k8s/v3-calico# curl http://[2a0a:e5c0:13:bbb:176b:eaa6:6d47:1c41]
curl: (7) Failed to connect to 2a0a:e5c0:13:bbb:176b:eaa6:6d47:1c41 port 80: Connection refused
```

if you use that deployment. Instead use something along the lines of
the included **nginx-test-deployment.yaml**:

```
kubectl apply -f nginx-test-deployment.yaml
```
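
Once applied, the pods get addresses from the pod CIDR and answer on port 80; a quick check, using the deployment and label names from the manifest below:

```
# show the nginx pods together with their IPv6 pod IPs
kubectl get pods -o wide -l app=nginx

# optionally expose the deployment to also get a service IP from the service CIDR
kubectl expose deployment nginx-deployment --port=80
kubectl get svc nginx-deployment
```

Curling one of these addresses as in the example above should now return the nginx welcome page.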

@@ -6,7 +6,7 @@ metadata:
 spec:
   logSeverityScreen: Info
   nodeToNodeMeshEnabled: true
-  asNumber: 213081
+  asNumber: 65534
   serviceClusterIPs:
   - cidr: 2a0a:e5c0:13:aaa::/108
   serviceExternalIPs:

v3-calico/nginx-test-deployment.yaml (new file, 19 lines)

@@ -0,0 +1,19 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.0-alpine
        ports:
        - containerPort: 80