Merge branch 'master' of git+ssh://staticweb.ungleich.ch:/home/services/git/nico.schottelius.org

commit bb32754194
## Log

### Status 2021-06-07
Today I have updated the ceph cluster definition in rook to

* check hosts every 10 minutes instead of 60m for new disks
* use IPv6 instead of IPv4
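The two changes above map roughly onto the following Rook settings; this is a sketch assuming a Rook v1.6-era layout, and the exact keys may differ between versions:

```yaml
# Sketch only: the operator ConfigMap controls how often the discovery
# daemon re-scans hosts for new disks (the default is 60m).
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  ROOK_DISCOVER_DEVICES_INTERVAL: "10m"
---
# In the CephCluster resource, the network section selects the IP family.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  network:
    ipFamily: "IPv6"
```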
The successful `ceph -s` output:

    [20:42] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
      cluster:
        id:     049110d9-9368-4750-b3d3-6ca9a80553d7
        health: HEALTH_WARN
                mons are allowing insecure global_id reclaim

      services:
        mon: 3 daemons, quorum a,b,d (age 75m)
        mgr: a(active, since 74m), standbys: b
        osd: 6 osds: 6 up (since 43m), 6 in (since 44m)

      data:
        pools:   2 pools, 33 pgs
        objects: 6 objects, 34 B
        usage:   37 MiB used, 45 GiB / 45 GiB avail
        pgs:     33 active+clean
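The HEALTH_WARN above is the well-known insecure global_id reclaim warning that came with the CVE-2021-20288 fixes. A common way to handle it, once all clients are upgraded, looks roughly like this (sketch; run from the toolbox pod):

```sh
# Once every client supports the new global_id behaviour, disallow the
# insecure reclaim; this makes the warning go away permanently.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
    ceph config set mon auth_allow_insecure_global_id_reclaim false

# Alternatively, mute the warning temporarily (here: for one week).
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
    ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w
```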
The result is a working ceph cluster with RBD support. I also applied
the cephfs manifest; however, RWX (ReadWriteMany) volumes are not yet
spinning up. It seems that test [helm charts](https://artifacthub.io/)
often require RWX instead of RWO (ReadWriteOnce) access.
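For reference, the difference boils down to the PVC access mode; a minimal sketch, assuming the storage class names from the Rook example manifests (`rook-ceph-block` for RBD, `rook-cephfs` for CephFS):

```yaml
# RWO: a block volume (RBD), mountable read-write by a single node.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwo
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
---
# RWX: a shared filesystem volume (CephFS), mountable by many nodes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 1Gi
```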
Also the ceph dashboard does not come up, even though it is
configured:
    [20:44] server47.place7:~# kubectl -n rook-ceph get svc
    NAME                       TYPE        CLUSTER-IP              EXTERNAL-IP   PORT(S)             AGE
    csi-cephfsplugin-metrics   ClusterIP   2a0a:e5c0:13:e2::760b   <none>        8080/TCP,8081/TCP   82m
    csi-rbdplugin-metrics      ClusterIP   2a0a:e5c0:13:e2::482d   <none>        8080/TCP,8081/TCP   82m
    rook-ceph-mgr              ClusterIP   2a0a:e5c0:13:e2::6ab9   <none>        9283/TCP            77m
    rook-ceph-mgr-dashboard    ClusterIP   2a0a:e5c0:13:e2::5a14   <none>        7000/TCP            77m
    rook-ceph-mon-a            ClusterIP   2a0a:e5c0:13:e2::c39e   <none>        6789/TCP,3300/TCP   83m
    rook-ceph-mon-b            ClusterIP   2a0a:e5c0:13:e2::732a   <none>        6789/TCP,3300/TCP   81m
    rook-ceph-mon-d            ClusterIP   2a0a:e5c0:13:e2::c658   <none>        6789/TCP,3300/TCP   76m

    [20:44] server47.place7:~# curl http://[2a0a:e5c0:13:e2::5a14]:7000
    curl: (7) Failed to connect to 2a0a:e5c0:13:e2::5a14 port 7000: Connection refused
    [20:45] server47.place7:~#
The ceph mgr is perfectly reachable though:

    [20:45] server47.place7:~# curl -s http://[2a0a:e5c0:13:e2::6ab9]:9283/metrics | head
    # HELP ceph_health_status Cluster health status
    # TYPE ceph_health_status untyped
    ceph_health_status 1.0
    # HELP ceph_mon_quorum_status Monitors in quorum
    # TYPE ceph_mon_quorum_status gauge
    ceph_mon_quorum_status{ceph_daemon="mon.a"} 1.0
    ceph_mon_quorum_status{ceph_daemon="mon.b"} 1.0
    ceph_mon_quorum_status{ceph_daemon="mon.d"} 1.0
    # HELP ceph_fs_metadata FS Metadata
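Since the mgr answers on 9283 but the dashboard refuses connections on 7000, a few checks can narrow it down (sketch; commands run in the toolbox pod, deployment names as created by Rook):

```sh
# Is the dashboard module enabled, and which URL does the mgr advertise?
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph mgr module ls
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph mgr services

# Port 7000 suggests SSL is disabled; verify the dashboard settings.
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
    ceph config get mgr mgr/dashboard/ssl

# And check the active mgr's log for dashboard errors.
kubectl -n rook-ceph logs deploy/rook-ceph-mgr-a | grep -i dashboard
```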
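The metrics endpoint also lends itself to simple scripted checks. A small sketch (not part of the original setup; `parse_metric` is a hypothetical helper) that extracts a sample from the Prometheus text format shown above:

```python
def parse_metric(text: str, name: str) -> float:
    """Return the value of the first sample named `name` in Prometheus text format."""
    for line in text.splitlines():
        if line.startswith("#") or not line.startswith(name):
            continue
        # A real sample is followed by a space or a label set, e.g.
        # 'ceph_health_status 1.0' or 'ceph_mon_quorum_status{...} 1.0'.
        if line[len(name)] in (" ", "{"):
            return float(line.rsplit(" ", 1)[1])
    raise KeyError(name)


# Excerpt of the mgr output above; 0=HEALTH_OK, 1=HEALTH_WARN, 2=HEALTH_ERR.
sample = """\
# HELP ceph_health_status Cluster health status
# TYPE ceph_health_status untyped
ceph_health_status 1.0
ceph_mon_quorum_status{ceph_daemon="mon.a"} 1.0
"""

print(parse_metric(sample, "ceph_health_status"))  # prints 1.0 (HEALTH_WARN)
```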
### Status 2021-06-06

Today is the first day of publishing the findings and this blog
article will lack quite some information. If you are curious and want
to know more of what is not yet published, you can find me on Matrix
in the **#hacking:ungleich.ch** room.
#### What works so far

* Spawning IPv6-only pods works
* Spawning IPv6-only services works
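For completeness, an IPv6-only service in such a cluster can be pinned down explicitly via the dual-stack fields (available since Kubernetes 1.20; sketch with hypothetical names):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example          # hypothetical name
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies: ["IPv6"]   # request an IPv6 ClusterIP only
  selector:
    app: example
  ports:
    - port: 80
```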
Here's an output of the upstream bird process for the routes from k8s:

    bird>
#### What doesn't work

* Rook does not format/spin up all disks
* Deleting all rook components fails (**kubectl delete -f cluster.yaml