## Apps

This directory contains test applications that use kustomize or helm
for testing. The objective of these apps is to create typical flows
for adding apps into clusters.

## Use case 1: Common anchor

We want to achieve the following:

* Have a common anchor to define the name of a service ("service1").
* That anchor steers the generation of configuration files in ConfigMaps
  ("nginx config with hostname service1.$namespace.svc.$clusterdomain").

Best case: $clusterdomain can be queried from the cluster.

### kustomize

It does not seem that kustomize has a natural way to support this, as
it does not have variables that can be injected into fields.

One can use a name prefix or similar to modify the service name, but
that prefix is not automatically injected into the relevant
configuration within a ConfigMap.

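A minimal sketch of that limitation, with illustrative file and resource
names (not part of this repo): the prefix renames the Service object, but
the hostname baked into the generated nginx configuration is untouched.

```
# kustomization.yaml (illustrative sketch)
namePrefix: test-          # renames the Service resource to "test-service1"
resources:
  - service.yaml           # contains a Service named "service1"
configMapGenerator:
  - name: nginx-conf
    files:
      - nginx.conf         # "service1" inside nginx.conf is NOT rewritten
```
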
### helm

Helm seems to cope with the anchor case easily, using values.yaml as
the anchor.

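A minimal sketch of how that can look; the value names and file layout
are assumptions, not something prescribed by helm:

```
# values.yaml
serviceName: service1
clusterDomain: cluster.local   # helm cannot query this from the cluster, so it is passed in

# templates/service.yaml uses the same anchor for the Service name:
# metadata:
#   name: {{ .Values.serviceName }}

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-nginx-conf
data:
  nginx.conf: |
    upstream app {
      server {{ .Values.serviceName }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }};
    }
```
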
## Use case 2: Handling of ConfigMap updates

Assuming one updates a ConfigMap (new configuration), what happens to
the old ConfigMap? Is it left alone, or is it deleted automatically?

### kustomize

Kustomize usually generates the ConfigMaps and appends a hash to their
names. Thus the referencing objects (mostly Pods) will also get updated
and refer to the new ConfigMap.

Untested, but the assumption is that kustomize will leave the old
ConfigMap behind.

Update: it is possible to avoid appending the hash to the name, which
might solve the problem of leaving the ConfigMap behind.

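For reference, a sketch of both variants (the ConfigMap name is
illustrative):

```
# kustomization.yaml
configMapGenerator:
  - name: nginx-conf
    files:
      - nginx.conf        # generated as "nginx-conf-<hash>" by default

# Disabling the hash suffix keeps a stable name, so no old ConfigMap is
# left behind -- but it also removes the automatic Pod rollover that the
# hashed name triggers.
generatorOptions:
  disableNameSuffixHash: true
```
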
### helm

Helm does not have a concept of generating ConfigMaps, so in theory it
would update the same ConfigMap in place (depending on how you named it).

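Since an in-place update does not change the ConfigMap name, the
referencing Pods are not rolled over automatically. A common helm
pattern (optional, shown here only as a sketch) is to hash the rendered
ConfigMap into a Pod annotation so that a configuration change forces a
rollover:

```
# templates/deployment.yaml (excerpt)
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```
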
## Use case 3: Handling of out-of-band updates

Data such as Letsencrypt certificates can be updated out-of-band by
secondary Jobs or CronJobs. Pods using these certificates should then
be deleted/replaced, or the services that use the certificates need to
be reloaded.

### In-pod solution

One solution is, instead of launching something like nginx directly,
to wrap it in a shell script looking like this:

```
#!/bin/sh

file=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem

# Wait until the certificate exists
while [ ! -f "$file" ]; do
    echo "Waiting for ${file} ..."
    sleep 2
done

# Now we can start nginx as a daemon
nginx

cksum=$(cksum "$file")
cksum_new=$cksum

# Check every 10 minutes for new certs
# If they are there, reload nginx
while true; do
    cksum_new=$(cksum "$file")

    if [ "$cksum" != "$cksum_new" ]; then
        nginx -s reload
        cksum=$cksum_new
    fi
    sleep 600
done
```

Advantage: everything is handled inside the container; no Pod
deletes/rollover necessary.

Disadvantage: it requires patching almost every container out there.

### Pod replacement

In theory, if a CronJob knows that resources were updated for a
specific use case, the CronJob could delete the relevant Pods. Using a
Deployment, they'd be recreated.

Advantage: this might be a CronJob-specific solution and is probably
going to work with every container without modifications.

Disadvantage: the Job pod needs to modify cluster resources.

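A sketch of what that could look like; the names, schedule, and image
are assumptions, and the ServiceAccount needs RBAC permissions on
Deployments (the disadvantage above):

```
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cert-rollover
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cert-rollover
          restartPolicy: OnFailure
          containers:
            - name: rollover
              image: bitnami/kubectl
              # Restart the Deployment's Pods after the certificates were renewed
              command: ["kubectl", "rollout", "restart", "deployment/service1-nginx"]
```
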
## Use case 4: Namespace placement

Both kustomize and helm seem to support adjusting the namespace for
resources easily.

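For reference, a minimal sketch of both mechanisms (the namespace name
is illustrative):

```
# kustomize: set the namespace for all resources in kustomization.yaml
namespace: app-test

# helm: pass the target namespace at install time
# helm install myrelease ./chart --namespace app-test --create-namespace
```
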
## Helm specific notes

* Should we use {{ .Release.Name }} for matching on pods?
* What is the best strategy for naming deployments?
* What is the best strategy for naming configmaps?
* What is the best strategy for naming volumes?

Generally speaking, which resources

* stay the same when upgrading?
* should be different for different deployments?

The objective for a deployment is to continue functioning during a
rollover of pods.

### Relevant objects

* .Release.Name = "per-installation name"
* .Chart.Name = "name of the application (chart)"
* .Chart.Version = "version of the application (chart)"

### .Release (.Name) assumptions

Per release (name) we will include a specific service that should be
kept running and keep the same name even when the chart changes
(upgrade).

### General identifier

#### v2

The .Release.Name includes the .Chart.Name if we are using
--generate-name. However, if we manually specify a release name
(like "rrrrrrrr"), the .Chart.Name is not included.

As the admin of the cluster decides on the naming, it seems to make
sense to use .Release.Name alone as an identifier, as it can be
directly influenced by the admin.

#### v1

Using something like **{{ .Chart.Name }}-{{ .Release.Name }}** as an
identifier or prefix for most "static" objects seems to make sense:

- .Chart.Name shows the admin which chart the object belongs to
- .Release.Name makes it unique per release

In theory .Chart.Name could be omitted, but including it should make
maintaining the apps easier.

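A sketch of how such an identifier could be centralized in a named
template; the helper name and files are assumptions, and the body can be
either the v1 prefix or just .Release.Name as concluded in v2:

```
{{/* templates/_helpers.tpl (illustrative) */}}
{{- define "app.fullname" -}}
{{ .Chart.Name }}-{{ .Release.Name }}
{{- end -}}

# templates/deployment.yaml then refers to it:
# metadata:
#   name: {{ include "app.fullname" . }}
```
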
### Service

The service name steers the DNS name.

* If externally exposed, it should probably stay the same during an upgrade.
* It should probably not change depending on the release.
  * Or should it? Depends on the intent.

The service selector

* should probably target the .Release.Name and also something like
  `use-as-service: true` to exclude other pods that might be around.

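A sketch of a Service following these notes; the label keys (`release`,
`use-as-service`) and the stable name are assumptions taken from the
bullet points above:

```
apiVersion: v1
kind: Service
metadata:
  name: service1                 # stable DNS name, independent of the release
spec:
  selector:
    release: {{ .Release.Name }}
    use-as-service: "true"       # excludes other Pods that might be around
  ports:
    - port: 80
      targetPort: 8080
```
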
### Deployment

* Should probably be **named** depending on .Release.Name to allow
  multiple
* Should probably **match** on .Release.Name to select/create correct pods

If there are multiple deployments within one release, probably suffix
them (for name and matching).

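A sketch combining the naming and matching rules above with a suffix
for an nginx deployment (all names and label keys are assumptions):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-nginx       # suffixed to allow multiple deployments per release
spec:
  selector:
    matchLabels:
      release: {{ .Release.Name }}
      component: nginx
  template:
    metadata:
      labels:
        release: {{ .Release.Name }}
        component: nginx
        use-as-service: "true"          # picked up by the Service selector above
    spec:
      containers:
        - name: nginx
          image: nginx
```
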
### Volumes

* Should probably be **named** depending on .Release.Name to allow
  multiple: {{ .Release.Name }}-volumename
* Will usually contain release-specific data or persistent data we
  want to keep
* ... unless the release is removed; then data / volumes / PV / PVC can
  be deleted, too