
ceph-csi on Kubernetes 1.24



Disclaimer: This post is WIP

Since K8S 1.24, the deprecated CephFS provisioner can no longer be used. The solution is the new ceph-csi driver, which supports rbd, cephfs and nfs.

In this article, we will focus on CephFS mode.

The K8S cluster is a 3-node RKE2 cluster on Ubuntu 22.04 LTS Server; I won’t detail its installation since it’s quite straightforward.

Prerequisites

  • You have admin access to a running Ceph cluster (in the examples below, the nodes are pve1, pve2 and pve3)
  • You have a running Kubernetes cluster, and your ~/.kube/config points to it

I’ll use the provided Helm chart and the nvimdocker container. nvimdocker provides all the tools needed (kubectl, git, helm, neovim with YAML & JSON linters, jq, aliases, …).

First, we need to prepare the CephFS

You have 2 options here: either create a new CephFS, or use an existing one.
The /volumes folder must not already exist, and this name cannot be changed. However, we can change the subvolume group that ceph-csi creates below it (subvolumeGroup, csi by default).
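
If you reuse an existing CephFS, you can first list the subvolume groups already present on it (here cephfs is the name of my existing filesystem):

root@pve1:~# ceph fs subvolumegroup ls cephfs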


Configure a new CephFS for testing

From one of the Ceph cluster nodes. The chosen PG counts of 16 and 32 are very small, but this is only for testing.

root@pve1:~# ceph osd pool create cephfs_metadata_tst 16 16
pool 'cephfs_metadata_tst' created
root@pve1:~# ceph osd pool create cephfs_data_tst 32 32
pool 'cephfs_data_tst' created
root@pve1:~# ceph fs new cephfstst cephfs_metadata_tst cephfs_data_tst 
new fs with metadata pool 8 and data pool 9
root@pve1:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
name: cephfstst, metadata pool: cephfs_metadata_tst, data pools: [cephfs_data_tst ]
root@pve1:~# 
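
Optionally, check the status of the new filesystem and its pools before going further:

root@pve1:~# ceph fs status cephfstst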

Generate minimal ceph.conf

From a Ceph node:

root@pve1:~# ceph config generate-minimal-conf
# minimal ceph.conf for e7628d51-32b5-4f5c-8eec-1cafb41ead74
[global]
        fsid = e7628d51-32b5-4f5c-8eec-1cafb41ead74
        mon_host = [v2:192.168.178.2:3300/0,v1:192.168.178.2:6789/0] [v2:192.168.178.3:3300/0,v1:192.168.178.3:6789/0] [v2:192.168.178.4:3300/0,v1:192.168.178.4:6789/0]
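
While you are on the Ceph node, also grab the admin key; we will paste it into a Kubernetes Secret in a moment:

root@pve1:~# ceph auth get-key client.admin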

Get nvimdocker (optional) and check your K8S access

nvimdocker will search for ~/.kube/config and mount the current working directory to /workdir for persistence. It contains all the tools needed for this tutorial and is configured for Solarized Dark colors. Feel free to use it, or install helm/editors/git/kubectl on your workstation.

pivert@Z690:~/LocalDocuments$ mkdir ceph-csi
pivert@Z690:~/LocalDocuments$ cd ceph-csi
pivert@Z690:~/LocalDocuments/ceph-csi$ curl -o nvimdocker https://gitlab.com/pivert/ansible-neovim-pyright/-/raw/main/nvimdocker && chmod a+x nvimdocker
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4460  100  4460    0     0   8468      0 --:--:-- --:--:-- --:--:--  8462
pivert@Z690:~/LocalDocuments/ceph-csi$ ./nvimdocker  
Info: local mount point /dev/ttyUSB0 does not exists. Skipping definition: /dev/ttyUSB0:/dev/ttyUSB0
description : Unified environment for system administration, applications development, and running Python/Perl/Ansible
/Node applications.
maintainer : François Delpierre <docker@pivert.org>
org.opencontainers.image.documentation : https://gitlab.com/pivert/ansible-neovim-pyright/
org.opencontainers.image.title : nvimdocker DevOps toolbox
org.opencontainers.image.url : https://gitlab.com/pivert/ansible-neovim-pyright/
org.opencontainers.image.vendor : François Delpierre <docker@pivert.org>
org.opencontainers.image.version : 2.03
created : 2022-11-08T17:14:45.223508985Z
Change /user owner to 1000:1000
Debian GNU/Linux 11 \n \l

HELP: Check mainconfig with 'config' command
16:08:47[Z690-nvimdocker-906251][pivert] /workdir
$kubectl get nodes
NAME    STATUS   ROLES                       AGE   VERSION
rke21   Ready    control-plane,etcd,master   49d   v1.24.4+rke2r1
rke22   Ready    control-plane,etcd,master   49d   v1.24.4+rke2r1
rke23   Ready    control-plane,etcd,master   49d   v1.24.4+rke2r1
16:08:55[Z690-nvimdocker-906251][pivert] /workdir
$

Get ceph-csi

$git clone https://github.com/ceph/ceph-csi.git
Cloning into 'ceph-csi'...
remote: Enumerating objects: 95704, done.
remote: Counting objects: 100% (513/513), done.
remote: Compressing objects: 100% (304/304), done.
remote: Total 95704 (delta 251), reused 408 (delta 180), pack-reused 95191
Receiving objects: 100% (95704/95704), 102.76 MiB | 6.03 MiB/s, done.
Resolving deltas: 100% (57504/57504), done.
16:11:46[Z690-nvimdocker-906251][pivert] /workdir
$ls
ceph-csi  nvimdocker
16:11:49[Z690-nvimdocker-906251][pivert] /workdir
$
Install the Helm chart

Now we can tie everything together: add the Helm repository, create the namespace, store the Ceph admin credentials in a Secret and the minimal ceph.conf in a ConfigMap, then customise the chart values:

helm repo add ceph-csi https://ceph.github.io/csi-charts
kubectl create namespace ceph-csi-cephfs
read -s -p 'Please enter key from ceph auth get-key client.admin : ' KEY; kubectl --namespace ceph-csi-cephfs create secret generic csi-cephfs-secret --from-literal=adminID=admin --from-literal=adminKey=${KEY}
kubectl --namespace ceph-csi-cephfs describe secret csi-cephfs-secret
ssh root@pve1 'ceph config generate-minimal-conf' | tee ceph.conf
kubectl --namespace ceph-csi-cephfs create configmap ceph-config --from-file=ceph.conf=./ceph.conf
helm show values ceph-csi/ceph-csi-cephfs > defaultValues.yaml
cp defaultValues.yaml values.yaml
vim values.yaml
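
What to change in values.yaml depends on your cluster. Below is a minimal sketch, assuming the fsid and monitors from the minimal ceph.conf above and the cephfstst filesystem created earlier; double-check the key names against the defaultValues.yaml you just generated:

csiConfig:
  - clusterID: "e7628d51-32b5-4f5c-8eec-1cafb41ead74"
    monitors:
      - "192.168.178.2:6789"
      - "192.168.178.3:6789"
      - "192.168.178.4:6789"

storageClass:
  create: true
  name: csi-cephfs-sc
  clusterID: "e7628d51-32b5-4f5c-8eec-1cafb41ead74"
  fsName: cephfstst

Then install the chart: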
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs --values ./values.yaml
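
Finally, check that the provisioner and nodeplugin pods come up, and test dynamic provisioning with a small PVC. A minimal sketch, assuming the csi-cephfs-sc StorageClass from the values above:

kubectl --namespace ceph-csi-cephfs get pods
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-test
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
EOF
kubectl get pvc csi-cephfs-test

The PVC should reach the Bound state within a few seconds, and the corresponding subvolume should appear under /volumes/csi on the CephFS side.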
