2021-06-07 17:24:20 +00:00
## v1: original rook manifests

```
git clone https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph

kubectl apply -f crds.yaml -f common.yaml
kubectl apply -f operator.yaml
kubectl get -n rook-ceph pods --watch

kubectl apply -f cluster.yaml
kubectl apply -f csi/rbd/storageclass.yaml
kubectl apply -f toolbox.yaml
```

## v2: with included manifests

* Patched for IPv6 support
* Including RBD

```
for yaml in crds common operator cluster storageclass toolbox; do
    kubectl apply -f ${yaml}.yaml
done
```
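After applying, the operator's reconciliation progress can be followed via the CephCluster resource (assuming the default `rook-ceph` namespace from the stock manifests):

```shell
# shows PHASE/HEALTH columns once the operator has picked up cluster.yaml
kubectl -n rook-ceph get cephcluster
```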

Deleting (in case of teardown):

```
# reverse order, so the cluster is removed while the operator is
# still running and the CRDs are deleted last
for yaml in toolbox storageclass cluster operator common crds; do
    kubectl delete -f ${yaml}.yaml
done
```

## Debugging / ceph toolbox

```
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
```
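One-off commands can also be passed to the toolbox directly instead of opening an interactive shell (the pattern used in the sections below):

```shell
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```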

## Creating a sample RBD device / PVC

```
kubectl apply -f pvc.yaml
```
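The referenced `pvc.yaml` is not reproduced here; a minimal sketch of such a claim, assuming the `rook-ceph-block` storage class created by `csi/rbd/storageclass.yaml`, could look like:

```yaml
# pvc.yaml (hypothetical example): a 1 GiB RBD-backed volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc        # hypothetical name
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```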

Checks:

```
kubectl get pvc
kubectl describe pvc

kubectl get pv
kubectl describe pv
```

Digging into ceph, seeing the actual image:

```
[20:05] server47.place7:~# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- rbd -p replicapool ls
csi-vol-d3c96f79-c7ba-11eb-8e52-1ed2f2d63451
[20:11] server47.place7:~#
```
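Details of a single image (size, features, parent) can be inspected with `rbd info`, using an image name from the listing above:

```shell
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- \
    rbd -p replicapool info csi-vol-d3c96f79-c7ba-11eb-8e52-1ed2f2d63451
```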

## Filesystem

```
[21:06] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS            RESTARTS   AGE
rook-ceph-mds-myfs-a-5f547fd7c6-qmp2r   1/1     Running           0          16s
rook-ceph-mds-myfs-b-dd78b444b-49h5h    0/1     PodInitializing   0          14s
[21:06] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-5f547fd7c6-qmp2r   1/1     Running   0          20s
rook-ceph-mds-myfs-b-dd78b444b-49h5h    1/1     Running   0          18s
[21:06] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
  cluster:
    id:     049110d9-9368-4750-b3d3-6ca9a80553d7
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum a,b,d (age 98m)
    mgr: a(active, since 97m), standbys: b
    mds: 1/1 daemons up, 1 hot standby
    osd: 6 osds: 6 up (since 66m), 6 in (since 67m)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 31 objects, 27 KiB
    usage:   40 MiB used, 45 GiB / 45 GiB avail
    pgs:     97 active+clean

  io:
    client: 3.3 KiB/s rd, 2.8 KiB/s wr, 2 op/s rd, 1 op/s wr

[21:07] server47.place7:~/ungleich-k8s/rook#
```
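The `myfs` MDS daemons shown above are created by a CephFilesystem resource; with the stock examples this would be the `filesystem.yaml` manifest (path assumed relative to the same examples directory as above), optionally followed by the CephFS storage class:

```shell
kubectl apply -f filesystem.yaml
kubectl apply -f csi/cephfs/storageclass.yaml
```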

## DefaultStorageClass

By default, none of the created storage classes is the default of the
cluster. So one of them needs to be marked as default, if
PersistentVolumeClaims without an explicit storage class should be
provisioned:

```
[21:22] server47.place7:~/ungleich-k8s/rook# kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
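Whether the patch took effect can be verified by listing the storage classes; the default one is marked with "(default)" next to its name:

```shell
kubectl get storageclass
```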