Apps

This directory contains test applications that use kustomize or helm. The objective of these apps is to establish typical flows for adding apps to clusters.

Use case 1: common anchor

We want to achieve the following:

  • Have a common anchor to define the name of a service ("service1")
  • Have that anchor steer the generation of configuration files in ConfigMaps ("nginx config with hostname service1.$namespace.svc.$clusterdomain")

Best case: $clusterdomain can be queried from the cluster.

kustomize

Kustomize does not seem to have a natural way to support this, as it has no variables that can be injected into fields.

One can use namePrefix or similar to modify the service name, but that prefix is not automatically injected into the relevant configuration inside a ConfigMap.
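For illustration, a kustomization.yaml using namePrefix might look roughly like this (the resource file names are hypothetical):

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Prepends "service1-" to the names of the listed resources,
# e.g. a Service named "web" becomes "service1-web" ...
namePrefix: service1-

resources:
  - service.yaml
  - deployment.yaml
  - configmap.yaml

# ... but the nginx configuration stored in configmap.yaml is not
# rewritten to reference the prefixed service name.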

helm

Helm seems to cope with the anchor case easily using values.yaml as the anchor.
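A minimal sketch of that anchor, with hypothetical value and resource names:

# values.yaml
serviceName: service1

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.serviceName }}-nginx-config
data:
  nginx.conf: |
    server {
      # The same anchor also builds the in-cluster hostname; the cluster
      # domain is assumed to be the default "cluster.local" here.
      server_name {{ .Values.serviceName }}.{{ .Release.Namespace }}.svc.cluster.local;
    }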

Use case 2: Handling of configmap updates

Assuming one updates a configmap (new configuration), what happens to the old configmap? Is it left alone, is it deleted automatically?

kustomize

Kustomize usually generates the ConfigMaps and appends a hash to their names. Thus the referencing objects (mostly Pods) also get updated and refer to the new ConfigMap.

Untested, but the assumption is that kustomize will leave the old configmap behind.

Update: it is possible to avoid appending the hash to the name, which might solve the problem of leaving the ConfigMap behind.
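A sketch showing both variants in a kustomization.yaml (file names are hypothetical):

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
  - name: nginx-config
    files:
      - nginx.conf
# By default the generated ConfigMap is named "nginx-config-<hash>", so a
# changed nginx.conf produces a new ConfigMap and the Pods roll over.

# Uncommenting this keeps the plain name "nginx-config" instead, so an
# update overwrites the same ConfigMap in place:
#generatorOptions:
#  disableNameSuffixHash: true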

helm

Helm does not have a concept of generating ConfigMaps, so in theory it would update the same ConfigMap in place (depending on how you named it).

Use case 3: Handling of out-of-band updates

Data such as Let's Encrypt certificates can be updated by secondary Jobs or CronJobs. Pods using these certificates should then be deleted/replaced, or the services that use the certificates need to be reloaded.

In-pod solution

One solution is, instead of launching something like nginx directly, to wrap it in a shell script like this:

#!/bin/sh
# Wait until the certificate exists before starting nginx
file=/etc/letsencrypt/live/${DOMAIN}/fullchain.pem

while [ ! -f "$file" ]; do
    echo "Waiting for ${file} ..."
    sleep 2
done

# Now we can start nginx as a daemon
nginx

cksum=$(cksum "$file")
cksum_new=$cksum

# Check every 10 minutes for new certs
# If they are there, reload nginx
while true; do
    cksum_new=$(cksum "$file")

    if [ "$cksum" != "$cksum_new" ]; then
        nginx -s reload
        cksum="$cksum_new"
    fi
    sleep 600
done

Advantage: everything is handled inside the container, no pod deletes/rollover necessary.

Disadvantage: it requires patching of almost every container out there.

Pod replacement

In theory, if a CronJob knows that resources were updated for a specific use case, it could delete the relevant pods. If they are managed by a Deployment, they will be restarted.

Advantage: this might be a cron job specific solution and is probably going to work with every container without modifications.

Disadvantage: the Job pod needs to modify cluster resources.
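A rough sketch of such a CronJob; the schedule, label and ServiceAccount are hypothetical, and the RBAC that allows deleting pods is omitted:

# cert-rollover-cronjob.yaml (sketch)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reload-after-cert-renewal
spec:
  schedule: "15 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cert-rollover   # needs permission to delete pods
          restartPolicy: Never
          containers:
            - name: delete-pods
              image: bitnami/kubectl
              command:
                - /bin/sh
                - -c
                # Delete the pods that consume the certificate; their
                # Deployment recreates them with the renewed certificate.
                - kubectl delete pods -l app=nginx-certbot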

Use case 4: Namespace placement

Both kustomize and helm seem to support adjusting the namespace for resources easily.
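For kustomize this is a single field in kustomization.yaml (helm has the equivalent --namespace/-n flag on install); the namespace below is hypothetical:

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# All listed resources are placed into this namespace.
namespace: myapp-test

resources:
  - deployment.yaml
  - service.yaml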

Helm-specific notes

  • Should we use {{ .Release.Name }} for matching on pods?
  • What is the best strategy for naming deployments?
  • What is the best strategy for naming configmaps?
  • What is the best strategy for naming volumes?

Generally speaking, which resources

  • stay the same when upgrading?
  • should be different for different deployments?

The objective for a deployment is to continue functioning through a rollover of pods.

Relevant objects

  • .Release.Name = "per installation name"
  • .Chart.Name = "name of the application (chart)"
  • .Chart.Version = "version of the application (chart)"

.Release (.Name) assumptions

Per release (name) we will include a specific service that should be kept running and keep the same name even when the charts change (upgrade).

General identifier

v2

The .Release.Name includes the .Chart.Name if we are using --generate-name. However, if we manually specify a release name (like "rrrrrrrr"), the .Chart.Name is not included.

As the admin of the cluster decides on the naming, it seems to make sense to use .Release.Name alone as an identifier, as it can be directly influenced by the admin.

v1

Using something like {{ .Chart.Name }}-{{ .Release.Name }} as an identifier or prefix for most "static" objects seems to make sense:

  • .Chart.Name tells the admin where that object belongs
  • .Release.Name makes it unique per release

In theory .Chart.Name could be omitted, but including it should make maintaining the apps easier.
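A naming sketch for the v1 pattern; the "-web" suffix is hypothetical:

# templates/deployment.yaml (metadata only)
apiVersion: apps/v1
kind: Deployment
metadata:
  # e.g. "nginx-certbot-release1-web" for chart "nginx-certbot" and release "release1"
  name: {{ .Chart.Name }}-{{ .Release.Name }}-web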

Service

The service name steers the DNS name:

  • If externally exposed, it should probably stay the same during upgrade
  • Should probably not change depending on the release
      • Or should it? Depends on the intent.

The service selector

  • should probably target the .Release.Name and also something like use-as-service: true to exclude other pods that might be around.
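A minimal Service sketch along these lines; the "release" label key and the port are assumptions, the use-as-service label comes from the note above:

# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  # The service name (and thus the DNS name) stays stable across upgrades.
  name: {{ .Release.Name }}
spec:
  selector:
    # Only pods of this release that are meant to receive traffic.
    release: {{ .Release.Name }}
    use-as-service: "true"
  ports:
    - port: 80
      targetPort: 80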

Deployment

  • Should probably be named depending on .Release.Name to allow multiple
  • Should probably match on .Release.Name to select/create correct pods

If there are multiple deployments within one release, probably suffix them (for name and matching).
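A matching Deployment sketch, suffixed with a hypothetical "web" component so that several Deployments can coexist within one release:

# templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  selector:
    matchLabels:
      release: {{ .Release.Name }}
      component: web
  template:
    metadata:
      labels:
        release: {{ .Release.Name }}
        component: web
        # Marks these pods as targets for the Service selector above.
        use-as-service: "true"
    spec:
      containers:
        - name: nginx
          image: nginx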

Volumes

  • Should probably be named depending on .Release.Name to allow multiple: {{ .Release.Name }}-volumename
  • Will usually contain release specific data or persistent data we want to keep
  • ... unless the release is removed, then data / volumes / PV / PVC can be deleted, too
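A PVC sketch named this way; the "data" suffix, size and access mode are hypothetical:

# templates/pvc.yaml (sketch)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Named per release so multiple installations can coexist; the claim
  # (and its data) only goes away if it is removed together with the release.
  name: {{ .Release.Name }}-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi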