# Ceph RBD external provisioner

## Deploy and configure RBD provisioner

### Automatically
```shell
rake storage:deploy
rake shell
```
```console
root@kubemaster:~# kubectl get all --selector="app=rbd-provisioner"
NAME                                   READY   STATUS    RESTARTS   AGE
pod/rbd-provisioner-857866b5b7-ttmpd   1/1     Running   0          3m

NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rbd-provisioner   1         1         1            1           3m

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/rbd-provisioner-857866b5b7   1         1         1       3m
```
### Manually

#### Create a dedicated Ceph pool
On the frontend:

```shell
curl -k -XPOST "https://api.grid5000.fr/sid/sites/rennes/storage/ceph/pools" -H "Content-Type: application/json" -d '{"poolName": "g5k8s", "size": 3, "quota": 200000000000, "expiration_date": "2018-11-01"}'
#{"message":"Pool successfully created."}
```
**Note:** Edit the URL and replace `rennes` with your site if necessary. Replace the `expiration_date` value with a date two months from now.
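Rather than hard-coding the expiration date, it can be computed on the frontend; a small sketch, assuming GNU `date` is available (the `EXPIRATION` variable name is illustrative):

```shell
# Compute an expiration_date two months from today (GNU date syntax)
EXPIRATION=$(date -d "+2 months" +%Y-%m-%d)
echo "$EXPIRATION"
```

The resulting value can then be substituted into the `expiration_date` field of the curl request above.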
#### Deploy RBD provisioner
Get the external-storage Git repository from kubernetes-incubator:

```shell
rake shell
git clone https://github.com/kubernetes-incubator/external-storage.git
```
Deploy the RBD external provisioner:

```shell
kubectl create -f external-storage/ceph/rbd/deploy/rbac/
```
#### Add secrets and dynamic storage class
Add secrets to connect to the Ceph cluster with your own Cephx key. Create a `secrets.yaml` file:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.$USER | base64
  key: xxx=
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: default
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.$USER | base64
  key: xxx=
```
**Note:** On the frontend, get your Cephx key base64-encoded:

```shell
ceph auth get-key client.$USER | base64
```

Replace the `key` value with the returned value in both `ceph-secret` and `ceph-admin-secret`.
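The encoding step can be checked locally before pasting the value into `secrets.yaml`; a sketch with a made-up sample key (`SAMPLE_KEY` is a placeholder, not a real Cephx key). `printf` is used instead of `echo` so no trailing newline gets encoded, matching the output of `ceph auth get-key`:

```shell
# Encode a sample key the way "ceph auth get-key ... | base64" would
# (SAMPLE_KEY is a fake placeholder value)
SAMPLE_KEY='AQBOglFbAAAAABAAxxxxxxxxxxxxxxxxxxxxxx=='
ENCODED=$(printf '%s' "$SAMPLE_KEY" | base64 -w0)
echo "key: $ENCODED"
```

Decoding `ENCODED` with `base64 -d` should return the original key exactly.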
Add the secrets on kubemaster:

```shell
kubectl create -f secrets.yaml
```
**Tip:** On kubemaster (Veertuosa KVM virtual machine), your Grid'5000 home directory is available at `/mnt/home`.

```console
root@kubemaster:~# cat /etc/fstab
#...
home     /mnt/home     9p trans=virtio,version=9p2000.L,nofail,x-systemd.device-timeout=10s 0 0
grid5000 /mnt/grid5000 9p trans=virtio,version=9p2000.L,nofail,x-systemd.device-timeout=10s 0 0
tmp      /mnt/tmp      9p trans=virtio,version=9p2000.L,nofail,x-systemd.device-timeout=10s 0 0
```
Create a `class.yaml` file:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: rbd
provisioner: ceph.com/rbd
parameters:
  monitors: ceph0.site.grid5000.fr:6789
  pool: g5kUserId_g5k8s
  adminId: g5kUserId
  adminSecretNamespace: default
  adminSecretName: ceph-admin-secret
  userId: g5kUserId
  userSecretNamespace: default
  userSecretName: ceph-secret
  imageFormat: "2"
```
**Note:**

- Replace `g5kUserId` with your Grid'5000 login in the `pool`, `adminId` and `userId` parameters.
- Replace `site` with the Grid'5000 site in the `monitors` parameter.
- For `monitors`, the port differs between sites: 6790 at Rennes, 6789 at Nantes.
Add the `rbd` storage class:

```shell
kubectl create -f class.yaml
```
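Once the class exists, a PersistentVolumeClaim can request RBD-backed storage through it; a minimal sketch (the claim name `test-claim` and the 1Gi size are illustrative, not from the original guide):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim       # illustrative name
spec:
  storageClassName: rbd  # the class created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi       # illustrative size
```

Applying this with `kubectl create -f` should trigger the provisioner to create an RBD image in your `g5kUserId_g5k8s` pool and bind a PersistentVolume to the claim.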
#### Set rbd storage class as default

Patch the `rbd` resource and set the annotation `storageclass.kubernetes.io/is-default-class` to `true`:

```console
kubectl patch storageclass rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/rbd patched
```
```console
kubectl get sc
NAME            PROVISIONER    AGE
rbd (default)   ceph.com/rbd   34m
```
## Clean RBDs created by a previous deployment

On the frontend:

```console
for i in $(rbd -p ${USER}_g5k8s ls); do echo "--> Remove RBD $i"; rbd rm ${USER}_g5k8s/$i; done
--> Remove RBD kubernetes-dynamic-pvc-01d31213-9bd1-11e8-9afd-0a580af40204
Removing image: 100% complete...done.
--> Remove RBD kubernetes-dynamic-pvc-9be0b12b-9bd0-11e8-9afd-0a580af40204
Removing image: 100% complete...done.
--> Remove RBD kubernetes-dynamic-pvc-9bff2600-9bd0-11e8-9afd-0a580af40204
Removing image: 100% complete...done.
--> Remove RBD kubernetes-dynamic-pvc-eb8568dd-9bd0-11e8-9afd-0a580af40204
Removing image: 100% complete...done.
--> Remove RBD kubernetes-dynamic-pvc-f476692a-9bd0-11e8-9afd-0a580af40204
Removing image: 100% complete...done.
```
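Since `rbd rm` is destructive, the loop can be rehearsed first by substituting a stubbed image list for the `rbd ... ls` output; a dry-run sketch (the image names below are made up):

```shell
# Dry-run sketch: stub the "rbd -p ${USER}_g5k8s ls" output with fake names
# so the loop body can be checked before running the destructive "rbd rm".
images="kubernetes-dynamic-pvc-aaaa
kubernetes-dynamic-pvc-bbbb"
for i in $images; do
  echo "--> Would remove RBD $i"
done
```

Once the listed names look right, swap the stub back for `$(rbd -p ${USER}_g5k8s ls)` and re-enable `rbd rm`.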