diff --git a/blog/k8s-ipv6-only-cluster.mdwn b/blog/k8s-ipv6-only-cluster.mdwn
index 4b331779..a596a123 100644
--- a/blog/k8s-ipv6-only-cluster.mdwn
+++ b/blog/k8s-ipv6-only-cluster.mdwn
@@ -128,16 +128,18 @@ Today I have updated the ceph cluster definition in rook to
 * check hosts every 10 minutes instead of 60m for new disks
 * use IPv6 instead of IPv6
 
-    [20:41] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
+The successful ceph -s output:
+
+    [20:42] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
       cluster:
         id:     049110d9-9368-4750-b3d3-6ca9a80553d7
         health: HEALTH_WARN
                 mons are allowing insecure global_id reclaim
 
       services:
-        mon: 3 daemons, quorum a,b,d (age 72m)
-        mgr: a(active, since 72m), standbys: b
-        osd: 6 osds: 6 up (since 41m), 6 in (since 42m)
+        mon: 3 daemons, quorum a,b,d (age 75m)
+        mgr: a(active, since 74m), standbys: b
+        osd: 6 osds: 6 up (since 43m), 6 in (since 44m)
 
       data:
         pools:   2 pools, 33 pgs
@@ -145,6 +147,7 @@ Today I have updated the ceph cluster definition in rook to
         usage:   37 MiB used, 45 GiB / 45 GiB avail
         pgs:     33 active+clean
 
+
 The result is a working ceph clusters with RBD support. I also
 applied the cephfs manifest, however RWX volumes (readwritemany)
 are not yet spinning up. It seems that test [helm charts](https://artifacthub.io/)
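
For context, the two changes referenced in the diff above map roughly to the following Rook settings. This is a minimal sketch assuming a Rook v1.6-era deployment; the ConfigMap keys and the CephCluster `ipFamily` field come from Rook's upstream examples, not from the ungleich-k8s manifests themselves:

    # Operator settings (rook-ceph-operator-config ConfigMap): have the
    # discovery daemon scan hosts for new disks every 10 minutes instead
    # of the default 60m.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: rook-ceph-operator-config
      namespace: rook-ceph
    data:
      ROOK_ENABLE_DISCOVERY_DAEMON: "true"
      ROOK_DISCOVER_DEVICES_INTERVAL: "10m"
    ---
    # CephCluster excerpt: run the ceph daemons on IPv6.
    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      network:
        ipFamily: "IPv6"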
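
The RWX volumes mentioned at the end of the diff would be requested with a claim like the one below. The storageClassName `rook-cephfs` follows Rook's upstream CephFS example and is an assumption here; this is the kind of PVC that is not yet spinning up:

    # Hypothetical ReadWriteMany claim backed by CephFS.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rwx-test
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: rook-cephfs
      resources:
        requests:
          storage: 1Gi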