diff --git a/content/u/blog/2022-08-27-migrating-ceph-nautilus-into-kubernetes-with-rook/contents.lr b/content/u/blog/2022-08-27-migrating-ceph-nautilus-into-kubernetes-with-rook/contents.lr
index 0fb969d..47e0fa6 100644
--- a/content/u/blog/2022-08-27-migrating-ceph-nautilus-into-kubernetes-with-rook/contents.lr
+++ b/content/u/blog/2022-08-27-migrating-ceph-nautilus-into-kubernetes-with-rook/contents.lr
@@ -53,7 +53,7 @@ steps are planned:
 * Repeat if successful
 * Migrate to ceph pacific
 
-## Original cluster
+### Original cluster
 
 The target ceph cluster we want to migrate lives in the
 2a0a:e5c0::/64 network. Ceph is using:
@@ -63,7 +63,7 @@
 public network = 2a0a:e5c0:0:0::/64
 cluster network = 2a0a:e5c0:0:0::/64
 ```
 
-## Kubernetes cluster networking inside the ceph network
+### Kubernetes cluster networking inside the ceph network
 
 To be able to communicate with the existing OSDs, we will be using
 sub networks of 2a0a:e5c0::/64 for kubernetes. As these networks
@@ -76,6 +76,135 @@
 As we plan to use either [cilium](https://cilium.io/) or
 [calico](https://www.projectcalico.org/), we can
 configure kubernetes to directly BGP peer with the existing Ceph
 nodes.
+
+## The setup
+
+### Kubernetes Bootstrap
+
+As usual, we bootstrap 3 control plane nodes using kubeadm. The proxy
+for the API resides in a different kubernetes cluster.
+
+We run
+
+```
+kubeadm init --config kubeadm.yaml
+```
+
+on the first node and then join the other two control plane nodes. As
+usual, the workers are joined last.
+
+### k8s Networking / CNI
+
+For this setup we are using calico as described in the
+[ungleich kubernetes
+manual](https://redmine.ungleich.ch/projects/open-infrastructure/wiki/The_ungleich_kubernetes_infrastructure#section-23).
+
+```
+VERSION=v3.23.3
+helm repo add projectcalico https://docs.projectcalico.org/charts
+helm upgrade --install --namespace tigera calico projectcalico/tigera-operator --version $VERSION --create-namespace
+```
+
+### BGP Networking on the old nodes
+
+To be able to import the BGP routes from Kubernetes, all old / native
+hosts will run bird. The installation and configuration are as
+follows:
+
+```
+apt-get update
+apt-get install -y bird2
+
+# Every host carries a unique number in its hostname (serverXX)
+router_id=$(hostname | sed 's/server//')
+
+cat > /etc/bird/bird.conf <<EOF
+# The router id must be unique per host: we reuse the host number
+router id 0.0.0.${router_id};
+
+protocol device {
+}
+
+# Export the learned routes into the kernel routing table
+protocol kernel {
+    ipv6 { export all; };
+}
+
+# Accept a BGP session from any kubernetes node in the network
+protocol bgp k8s {
+    local as 65530;
+    neighbor range 2a0a:e5c0::/64 as 65533;
+    dynamic name "k8s_";
+
+    ipv6 {
+        # /64 routes overlap with the on-link route, reject them
+        import filter { if net.len > 64 then accept; else reject; };
+        export none;
+    };
+}
+EOF
+/etc/init.d/bird restart
+```
+
+The router id must be adjusted for every host. As all hosts have a
+unique number, we use that number as the router id.
+The bird configuration uses a dynamic peer range, so that any k8s
+node in the network can peer with the old servers.
+
+We also use a filter to avoid importing /64 routes, as they would
+overlap with the on-link route.
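+
+Once the kubernetes nodes peer (see the next section), the sessions
+and the effect of the import filter can be checked with bird's CLI on
+a native host. A quick sketch, where k8s_1 is the name of the first
+dynamically spawned session:
+
+```
+birdc show protocols
+birdc show route protocol k8s_1
+```
+
+Every route imported from a kubernetes node should be more specific
+than /64.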
+
+### BGP networking in Kubernetes
+
+Calico supports BGP peering and we use a rather standard calico
+configuration:
+
+```
+---
+apiVersion: projectcalico.org/v3
+kind: BGPConfiguration
+metadata:
+  name: default
+spec:
+  logSeverityScreen: Info
+  nodeToNodeMeshEnabled: true
+  asNumber: 65533
+  serviceClusterIPs:
+  - cidr: 2a0a:e5c0:aaaa::/108
+  serviceExternalIPs:
+  - cidr: 2a0a:e5c0:aaaa::/108
+```
+
+In addition, for each server and router we create a BGPPeer:
+
+```
+apiVersion: projectcalico.org/v3
+kind: BGPPeer
+metadata:
+  name: serverXX
+spec:
+  peerIP: 2a0a:e5c0::XX
+  asNumber: 65530
+  keepOriginalNextHop: true
+```
+
+We apply the whole configuration using calicoctl:
+
+```
+./calicoctl create -f - < ~/vcs/k8s-config/bootstrap/p5-cow/calico-bgp.yaml
+```
+
+And a few seconds later we can observe the BGP sessions being
+established on the old / native hosts:
+
+```
+bird> show protocols
+Name       Proto      Table      State  Since         Info
+device1    Device     ---        up     23:09:01.393
+kernel1    Kernel     master6    up     23:09:01.393
+k8s        BGP        ---        start  23:09:01.393  Passive
+k8s_1      BGP        ---        up     23:33:01.215  Established
+k8s_2      BGP        ---        up     23:33:01.215  Established
+k8s_3      BGP        ---        up     23:33:01.420  Established
+k8s_4      BGP        ---        up     23:33:01.215  Established
+k8s_5      BGP        ---        up     23:33:01.215  Established
+```
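+
+The peering can also be verified from the kubernetes side. As a
+quick check, calicoctl can show the BGP sessions of the node it is
+executed on:
+
+```
+./calicoctl node status
+```
 
 ## Changelog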