Nico Schottelius 2021-06-20 10:27:20 +02:00
parent 8110edc659
commit 30384ff32a
7 changed files with 106 additions and 19 deletions

View File

@@ -11,11 +11,12 @@ This project is testing, deploying and using IPv6 only k8s clusters.
* networking (calico)
* ceph with rook (cephfs, rbd)
* letsencrypt (nginx, certbot, homemade)
* k8s test on arm64
## Not (yet) working or tested
* virtualisation (VMs, kubevirt)
* letsencrypt
* network policies
* prometheus in the cluster
* argocd (?) for CI and upgrades
@@ -34,7 +35,7 @@ IPv6 only kubernetes cluster "c2.k8s.ooo".
We are using a custom kubeadm.conf (see the sketch below) to
* configure the cgroupdriver
* configure the cgroupdriver (for alpine)
* configure the IP addresses
* configure the DNS domain (c2.k8s.ooo)
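As a rough, hypothetical sketch of what such a kubeadm.conf can look like (the kubeadm v1beta2 config API and the IPv6 CIDRs below are assumptions, not the real c2.k8s.ooo values):

```
# Hypothetical sketch only - not the cluster's actual kubeadm.conf
cat > kubeadm.conf <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  dnsDomain: c2.k8s.ooo
  podSubnet: 2001:db8:1::/56        # placeholder IPv6 pod range
  serviceSubnet: 2001:db8:2::/112   # placeholder IPv6 service range
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs              # what alpine hosts typically need
EOF
kubeadm init --config kubeadm.conf
```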

View File

@@ -30,7 +30,7 @@ Helm seems to cope with the anchor case easily using values.yaml as the
anchor.
## Handling of configmap updates
## Use case 2: Handling of configmap updates
Assuming one updates a configmap (new configuration), what happens to
the old configmap? Is it left alone, or is it deleted automatically?
@@ -48,3 +48,64 @@ configmap behind.
Helm does not have a concept of generating configmaps, so in theory it
would update the same configmap (depending on how you named it).
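For the kustomize side, a minimal sketch (the names and values below are made up) of why the old configmap can be left behind: the generator appends a content hash to the name, so changed data produces a new object rather than an in-place update.

```
# Hypothetical example of kustomize's configMapGenerator
cat > kustomization.yaml <<'EOF'
configMapGenerator:
  - name: nginx-config           # made-up name
    literals:
      - WORKER_PROCESSES=2       # made-up key/value
EOF
kubectl kustomize . | grep 'name: nginx-config'
# -> name: nginx-config-<hash>
# Changing the literal changes the hash, i.e. a new configmap is generated;
# a plain "kubectl apply" does not remove the previous one.
```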
## Use case 3: Handling of out-of-band updates
Data such as Letsencrypt certificates can be updated out-of-band by
secondary Jobs or CronJobs. Pods using these certificates should then
be deleted/replaced, or the services that use the certificates need to
be reloaded.
### In-pod solution
One solution is, instead of launching something like nginx directly,
to wrap it in a shell script along these lines:
```
file=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem

# Wait until the certificate exists
while [ ! -f "$file" ]; do
    echo "Waiting for ${file} ..."
    sleep 2
done

# Now we can start nginx as a daemon
nginx

cksum=$(cksum "$file")

# Check every 10 minutes for new certs
# If they are there, reload nginx
while true; do
    cksum_new=$(cksum "$file")
    if [ "$cksum" != "$cksum_new" ]; then
        nginx -s reload
        cksum="$cksum_new"
    fi
    sleep 600
done
```
Advantage: everything is handled inside the container, no pod
deletes/rollover necessary.
Disadvantage: it requires patching of almost every container out there.
### Pod replacement
In theory, if a CronJob knows which resources it has updated for a
specific use case, it could delete the relevant pods. Managed by a
Deployment, they would be recreated automatically.
Advantage: the solution is confined to the CronJob itself and will
probably work with every container without modifications.
Disadvantage: the Job pod needs to modify cluster resources.
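As a sketch of what such a Job could execute after renewing a certificate (the label, namespace and deployment name are made up, and the Job's ServiceAccount needs RBAC rules that allow this):

```
# Hypothetical commands for the renewal Job/CronJob
# Either delete the consuming pods so their Deployment recreates them ...
kubectl -n default delete pods -l app=nginx-letsencrypt
# ... or trigger a rolling restart of the whole Deployment
kubectl -n default rollout restart deployment/nginx-letsencrypt
```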
## Use case 4: Namespace placement
Both kustomize and helm seem to support adjusting the namespace for
resources easily.
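For illustration, both can do it in one place (the release, chart and namespace names below are made up):

```
# Helm: pick the namespace at install time
helm install tls1 ./nginx-letsencrypt --namespace edge --create-namespace

# kustomize: a top-level "namespace:" entry in kustomization.yaml
# is applied to every resource it emits
echo 'namespace: edge' >> kustomization.yaml
kubectl kustomize . | grep 'namespace: edge'
```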

View File

@@ -1 +0,0 @@

View File

@@ -1,15 +0,0 @@
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name tls1.default.svc.c2.k8s.ooo;
ssl_certificate /etc/letsencrypt/live/tls1.default.svc.c2.k8s.ooo/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/tls1.default.svc.c2.k8s.ooo/privkey.pem;
client_max_body_size 256m;
root /usr/share/nginx/html;
autoindex on;
}

View File

@@ -36,3 +36,10 @@ name.
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
```
## Container flow / certificate renewal
* Assume a shell script as init
* It checks for the required certificate at /etc/letsencrypt/...
* It starts nginx once the certificate is available and caches its checksum (in a shell variable)
* It re-checks the file once per hour and reloads nginx if the checksum changed

View File

@@ -0,0 +1,4 @@
FROM nginx:1.21-alpine
# Add the certificate watcher (must be executable in the build context)
COPY watch-and-run.sh /
# The stock entrypoint execs whatever command it is given - here our watch script
ENTRYPOINT /docker-entrypoint.sh /watch-and-run.sh
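A possible way to build and smoke-test the image locally (the image tag, host path and port mapping are made up; in the cluster, DOMAIN and the certificate volume come from the pod spec):

```
# Hypothetical local test of the image
docker build -t nginx-letsencrypt-watch .
docker run -e DOMAIN=tls1.default.svc.c2.k8s.ooo \
    -v /path/to/letsencrypt:/etc/letsencrypt:ro \
    -p 8080:80 nginx-letsencrypt-watch
```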

View File

@@ -0,0 +1,30 @@
#!/bin/sh
# Without DOMAIN there is no certificate to watch for
if [ -z "${DOMAIN}" ]; then
    exit 0
fi

file=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem

# Wait until the certificate exists
while [ ! -f "$file" ]; do
    echo "Waiting for ${file} ..."
    sleep 2
done

# Now we can start nginx as a daemon
nginx

cksum=$(cksum "$file")

# Check every 10 minutes for new certs
# If they are there, reload nginx
while true; do
    cksum_new=$(cksum "$file")
    if [ "$cksum" != "$cksum_new" ]; then
        nginx -s reload
        cksum="$cksum_new"
    fi
    sleep 600
done