From a00e024998e88af20de9b60b141ff4d290409db5 Mon Sep 17 00:00:00 2001
From: Nico Schottelius
Date: Mon, 7 Jun 2021 20:43:54 +0200
Subject: [PATCH 1/3] ++status update k8s

---
 blog/k8s-ipv6-only-cluster.mdwn | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/blog/k8s-ipv6-only-cluster.mdwn b/blog/k8s-ipv6-only-cluster.mdwn
index bc32532b..6244581c 100644
--- a/blog/k8s-ipv6-only-cluster.mdwn
+++ b/blog/k8s-ipv6-only-cluster.mdwn
@@ -121,6 +121,35 @@ having /sys not being shared not a problem for calico in cri-o?
 
 ## Log
 
+### Status 2021-06-07
+
+Today I have updated the ceph cluster definition in rook to
+
+* check hosts for new disks every 10 minutes instead of every 60 minutes
+* use IPv6 instead of IPv4
+
+    [20:41] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
+      cluster:
+        id:     049110d9-9368-4750-b3d3-6ca9a80553d7
+        health: HEALTH_WARN
+                mons are allowing insecure global_id reclaim
+
+      services:
+        mon: 3 daemons, quorum a,b,d (age 72m)
+        mgr: a(active, since 72m), standbys: b
+        osd: 6 osds: 6 up (since 41m), 6 in (since 42m)
+
+      data:
+        pools:   2 pools, 33 pgs
+        objects: 6 objects, 34 B
+        usage:   37 MiB used, 45 GiB / 45 GiB avail
+        pgs:     33 active+clean
+
+The result is a working ceph cluster with RBD support. I also applied
+the cephfs manifest; however, RWX volumes (ReadWriteMany) are not yet
+spinning up. It seems that test [helm charts](https://artifacthub.io/)
+often require RWX instead of RWO (ReadWriteOnce) access.
+
 ### Status 2021-06-06
 
 Today is the first day of publishing the findings and this blog
@@ -128,7 +157,7 @@ article will lack quite some information. If you are curious and want
 to know more than is yet published, you can find me on Matrix in
 the **#hacking:ungleich.ch** room.
 
-### What works so far
+#### What works so far
 
 * Spawning pods IPv6 only
 * Spawning IPv6 only services works
@@ -174,7 +203,7 @@ Here's an output of the upstream bird process for the routes from k8s:
 
     bird>
 
-### What doesn't work
+#### What doesn't work
 
 * Rook does not format/spin up all disks
 * Deleting all rook components fails (**kubectl delete -f cluster.yaml

From 6500e2a272bca66152be2382983e06eac559c696 Mon Sep 17 00:00:00 2001
From: Nico Schottelius
Date: Mon, 7 Jun 2021 20:46:26 +0200
Subject: [PATCH 2/3] ++k8s blog

---
 blog/k8s-ipv6-only-cluster.mdwn | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/blog/k8s-ipv6-only-cluster.mdwn b/blog/k8s-ipv6-only-cluster.mdwn
index 6244581c..4b331779 100644
--- a/blog/k8s-ipv6-only-cluster.mdwn
+++ b/blog/k8s-ipv6-only-cluster.mdwn
@@ -150,6 +150,37 @@ the cephfs manifest; however, RWX volumes (ReadWriteMany) are not yet
 spinning up. It seems that test [helm charts](https://artifacthub.io/)
 often require RWX instead of RWO (ReadWriteOnce) access.
 
+Also, the ceph dashboard does not come up, even though it is
+configured:
+
+    [20:44] server47.place7:~# kubectl -n rook-ceph get svc
+    NAME                       TYPE        CLUSTER-IP              EXTERNAL-IP   PORT(S)             AGE
+    csi-cephfsplugin-metrics   ClusterIP   2a0a:e5c0:13:e2::760b   <none>        8080/TCP,8081/TCP   82m
+    csi-rbdplugin-metrics      ClusterIP   2a0a:e5c0:13:e2::482d   <none>        8080/TCP,8081/TCP   82m
+    rook-ceph-mgr              ClusterIP   2a0a:e5c0:13:e2::6ab9   <none>        9283/TCP            77m
+    rook-ceph-mgr-dashboard    ClusterIP   2a0a:e5c0:13:e2::5a14   <none>        7000/TCP            77m
+    rook-ceph-mon-a            ClusterIP   2a0a:e5c0:13:e2::c39e   <none>        6789/TCP,3300/TCP   83m
+    rook-ceph-mon-b            ClusterIP   2a0a:e5c0:13:e2::732a   <none>        6789/TCP,3300/TCP   81m
+    rook-ceph-mon-d            ClusterIP   2a0a:e5c0:13:e2::c658   <none>        6789/TCP,3300/TCP   76m
+    [20:44] server47.place7:~# curl http://[2a0a:e5c0:13:e2::5a14]:7000
+    curl: (7) Failed to connect to 2a0a:e5c0:13:e2::5a14 port 7000: Connection refused
+    [20:45] server47.place7:~#
+
+The ceph mgr is perfectly reachable though:
+
+    [20:45] server47.place7:~# curl -s http://[2a0a:e5c0:13:e2::6ab9]:9283/metrics | head
+
+    # HELP ceph_health_status Cluster health status
+    # TYPE ceph_health_status untyped
+    ceph_health_status 1.0
+    # HELP ceph_mon_quorum_status Monitors in quorum
+    # TYPE ceph_mon_quorum_status gauge
+    ceph_mon_quorum_status{ceph_daemon="mon.a"} 1.0
+    ceph_mon_quorum_status{ceph_daemon="mon.b"} 1.0
+    ceph_mon_quorum_status{ceph_daemon="mon.d"} 1.0
+    # HELP ceph_fs_metadata FS Metadata
+
+
 ### Status 2021-06-06
 
 Today is the first day of publishing the findings and this blog

From 2c2ea79217b1abf2aa7f8617f4531544afb229a9 Mon Sep 17 00:00:00 2001
From: Nico Schottelius
Date: Mon, 7 Jun 2021 20:49:28 +0200
Subject: [PATCH 3/3] ++format fix

---
 blog/k8s-ipv6-only-cluster.mdwn | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/blog/k8s-ipv6-only-cluster.mdwn b/blog/k8s-ipv6-only-cluster.mdwn
index 4b331779..a596a123 100644
--- a/blog/k8s-ipv6-only-cluster.mdwn
+++ b/blog/k8s-ipv6-only-cluster.mdwn
@@ -128,16 +128,18 @@ Today I have updated the ceph cluster definition in rook to
 * check hosts for new disks every 10 minutes instead of every 60 minutes
 * use IPv6 instead of IPv4
 
-    [20:41] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
+The successful ceph -s output:
+
+    [20:42] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
       cluster:
         id:     049110d9-9368-4750-b3d3-6ca9a80553d7
         health: HEALTH_WARN
                 mons are allowing insecure global_id reclaim
 
       services:
-        mon: 3 daemons, quorum a,b,d (age 72m)
-        mgr: a(active, since 72m), standbys: b
-        osd: 6 osds: 6 up (since 41m), 6 in (since 42m)
+        mon: 3 daemons, quorum a,b,d (age 75m)
+        mgr: a(active, since 74m), standbys: b
+        osd: 6 osds: 6 up (since 43m), 6 in (since 44m)
 
       data:
         pools:   2 pools, 33 pgs
@@ -145,6 +147,7 @@ Today I have updated the ceph cluster definition in rook to
         usage:   37 MiB used, 45 GiB / 45 GiB avail
         pgs:     33 active+clean
 
+
 The result is a working ceph cluster with RBD support. I also applied
 the cephfs manifest; however, RWX volumes (ReadWriteMany) are not yet
 spinning up. It seems that test [helm charts](https://artifacthub.io/)
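
For reference, the two cluster-definition changes described in the 2021-06-07 status could also be made from the shell, roughly as sketched below. This is only an illustration and not part of the patches above: the ROOK_DISCOVER_DEVICES_INTERVAL variable, the spec.network.ipFamily field and the CephCluster name "rook-ceph" are assumptions based on rook v1.6-era manifests.

    # Assumption: the "check hosts for new disks" interval is the operator's
    # ROOK_DISCOVER_DEVICES_INTERVAL setting (default 60m in the stock operator.yaml)
    kubectl -n rook-ceph set env deploy/rook-ceph-operator ROOK_DISCOVER_DEVICES_INTERVAL=10m

    # Assumption: IPv6 is selected via spec.network.ipFamily on the CephCluster
    # object, which cluster.yaml typically names "rook-ceph"
    kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
        -p '{"spec":{"network":{"ipFamily":"IPv6"}}}'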
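
A few toolbox commands that could help narrow down why the dashboard service refuses connections on port 7000; again just a sketch, assuming the stock rook-ceph-tools deployment and the built-in ceph mgr dashboard module:

    # Which endpoints does the active mgr actually expose, and on which address/port?
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mgr services

    # Is the dashboard module enabled at all?
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mgr module ls

    # If it is not, enabling it by hand should make the port behind the
    # rook-ceph-mgr-dashboard service (7000 above) start answering
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mgr module enable dashboard

If `ceph mgr services` already lists a dashboard URL, the problem is more likely on the service or endpoint side than in the mgr itself.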