## Apps

This directory contains test applications that use kustomize or
helm for testing. The objective of these apps is to create typical
flows for adding apps into clusters.

## Use case 1: common anchor

We want to achieve the following:

* Have a common anchor to define the name of a service ("service1").
* That anchor steers generation of configuration files in ConfigMaps
  ("nginx config with hostname
  service1.$namespace.svc.$clusterdomain").

Best case: $clusterdomain can be queried from the cluster.

### kustomize

It does not seem kustomize has a logical way to support this, as it
does not have variables that can be injected into fields.

One can use `namePrefix` or similar for modifying the service name,
but that prefix is not automatically injected into the relevant
configuration within a ConfigMap.
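
For illustration, a kustomization.yaml sketch of that limitation (file
and resource names are hypothetical):

```yaml
# kustomization.yaml (hypothetical sketch)
namePrefix: dev-
resources:
  - service.yaml        # Service "service1" is renamed to "dev-service1"
configMapGenerator:
  - name: nginx-config
    files:
      - nginx.conf      # a "service1..." hostname inside is NOT rewritten
```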
### helm

Helm seems to cope with the anchor case easily using values.yaml as the
anchor.
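
As a sketch (file names and value keys are hypothetical), the service
name is defined once in values.yaml and referenced from the ConfigMap
template:

```yaml
# values.yaml (hypothetical)
serviceName: service1
clusterDomain: cluster.local

# templates/configmap.yaml (hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      server_name {{ .Values.serviceName }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }};
    }
```

Note that the cluster domain still has to be supplied as a value; it is
not something helm can query from the cluster out of the box.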
## Use case 2: Handling of configmap updates

Assuming one updates a ConfigMap (new configuration), what happens to
the old ConfigMap? Is it left alone, or is it deleted automatically?

### kustomize

Kustomize usually generates the ConfigMaps and appends a hash to their
names. Thus the referencing objects (mostly Pods) will also get updated
and refer to a new ConfigMap.

Untested, but the assumption is that kustomize will leave the old
ConfigMap behind.

Update: it is possible to avoid appending the hash to the name, which
might solve the problem of leaving the ConfigMap behind.
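
Both behaviours can be sketched in the kustomization.yaml (names are
hypothetical):

```yaml
# kustomization.yaml (hypothetical sketch)
configMapGenerator:
  - name: nginx-config
    files:
      - nginx.conf
# Default: the generated ConfigMap gets a content hash appended to its
# name, and references to it are rewritten. To update one ConfigMap in
# place instead, disable the suffix:
generatorOptions:
  disableNameSuffixHash: true
```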
### helm

Helm does not have a concept of generating ConfigMaps, so in theory it
would update the same ConfigMap (depending on how you named it).
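
A related pattern from the Helm chart development tips: embed a checksum
of the ConfigMap template in the pod template annotations, so that
changing the ConfigMap also rolls the Deployment (excerpt; the template
path is an assumption):

```yaml
# templates/deployment.yaml (excerpt, hypothetical)
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```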
## Use case 3: Handling of out-of-band updates

Using secondary Jobs or CronJobs, data like Letsencrypt certificates
can be updated. Pods using these certificates should then be
deleted/replaced, or the services that use the certificates need to be
reloaded.

### In pod solution

One solution can be, instead of launching something like nginx
directly, to wrap it in a shell script looking like this:

```sh
#!/bin/sh

file=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem

while [ ! -f "$file" ]; do
    echo "Waiting for ${file} ..."
    sleep 2
done

# Now we can start nginx as a daemon
nginx

# Quoting matters: cksum output contains spaces (checksum, size, name)
cksum=$(cksum "$file")
cksum_new=$cksum

# Check every 10 minutes for new certs.
# If they are there, reload nginx.
while true; do
    cksum_new=$(cksum "$file")

    if [ "$cksum" != "$cksum_new" ]; then
        nginx -s reload
        cksum=$cksum_new
    fi
    sleep 600
done
```

Advantage: everything is handled inside the container, no pod
deletes/rollover necessary.

Disadvantage: it requires patching of almost every container out there.

### Pod replacement

In theory, if a CronJob knows that resources are updated for a
specific use case, the CronJob could start deleting the relevant
pods. Using a Deployment, they'd be restarted.

Advantage: this might be a CronJob-specific solution and is probably
going to work with every container without modifications.

Disadvantage: the Job pod needs to modify cluster resources.
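
This approach might be sketched as a CronJob running kubectl (image,
schedule, and deployment name are hypothetical; the ServiceAccount needs
RBAC permission to restart deployments, which is the disadvantage
above):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cert-rollover
spec:
  schedule: "*/10 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cert-rollover  # needs RBAC for deployments
          restartPolicy: OnFailure
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command: ["kubectl", "rollout", "restart", "deployment/nginx"]
```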
## Use case 4: Namespace placement

Both kustomize and helm seem to support adjusting the namespace for
resources easily.
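
For reference, a minimal sketch (the namespace name is hypothetical):

```yaml
# kustomize: kustomization.yaml
namespace: my-namespace
```

With helm, the namespace is usually given on the command line, e.g.
`helm install myrelease ./chart --namespace my-namespace`.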