++status update k8s

This commit is contained in:
Nico Schottelius 2021-06-07 20:43:54 +02:00
parent 238c42e12c
commit a00e024998


## Log
### Status 2021-06-07
Today I have updated the ceph cluster definition in rook to
* check hosts every 10 minutes instead of every 60 minutes for new disks
* use IPv6 instead of IPv4
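Assuming the changes were made via the Rook operator deployment and the CephCluster manifest, the relevant fragments might look roughly like this (a sketch, not the exact manifests used; `ROOK_DISCOVER_DEVICES_INTERVAL` and `network.ipFamily` are upstream Rook settings, but their exact placement depends on the Rook version):

```yaml
# Sketch: shorten the device discovery interval on the operator
# (ROOK_DISCOVER_DEVICES_INTERVAL defaults to 60m upstream) ...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph
spec:
  template:
    spec:
      containers:
      - name: rook-ceph-operator
        env:
        - name: ROOK_DISCOVER_DEVICES_INTERVAL
          value: "10m"
---
# ... and request single-stack IPv6 in the CephCluster spec
# (supported by Rook from around v1.6).
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    ipFamily: "IPv6"
```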
```
[20:41] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
  cluster:
    id:     049110d9-9368-4750-b3d3-6ca9a80553d7
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum a,b,d (age 72m)
    mgr: a(active, since 72m), standbys: b
    osd: 6 osds: 6 up (since 41m), 6 in (since 42m)

  data:
    pools:   2 pools, 33 pgs
    objects: 6 objects, 34 B
    usage:   37 MiB used, 45 GiB / 45 GiB avail
    pgs:     33 active+clean
```
The result is a working ceph cluster with RBD support. I also applied
the cephfs manifest; however, RWX (ReadWriteMany) volumes are not yet
spinning up. It seems that test [helm charts](https://artifacthub.io/)
often require RWX instead of RWO (ReadWriteOnce) access.
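For reference, an RWX claim against a cephfs-backed storage class would look roughly like this; the storage class name `rook-cephfs` follows the upstream Rook examples and is not necessarily what this cluster uses:

```yaml
# Sketch: a ReadWriteMany PVC backed by cephfs.
# storageClassName "rook-cephfs" is the upstream example default
# and may differ here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 1Gi
```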
### Status 2021-06-06
Today is the first day of publishing the findings and this blog
article will lack quite some information. If you are curious and want
to know more that is not yet published, you can find me on Matrix
in the **#hacking:ungleich.ch** room.
#### What works so far
* Spawning IPv6-only pods works
* Spawning IPv6-only services works
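Since Kubernetes 1.20, a single-stack IPv6 service can be requested explicitly via the dual-stack fields `ipFamilyPolicy` and `ipFamilies`; a minimal sketch (the service name and selector are illustrative):

```yaml
# Sketch: an IPv6-only ClusterIP service (Kubernetes >= 1.20 dual-stack API).
# Name, selector and port are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: example-v6
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv6
  selector:
    app: example
  ports:
  - port: 80
```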
Here's an output of the upstream bird process for the routes from k8s:
```
bird>
```
#### What doesn't work
* Rook does not format/spin up all disks
* Deleting all rook components fails (**kubectl delete -f cluster.yaml