This post is outdated, and will be replaced
Since Kubernetes 1.24, the deprecated CephFS provisioner can no longer be used.
The replacement is the new ceph-csi driver, which supports RBD, CephFS and NFS.
In this post, we will focus on the CephFS mode.
The Kubernetes cluster is a 3-node RKE2 cluster on Ubuntu 22.04 LTS Server; I won't detail its installation since it's quite straightforward.
Prerequisites
- You have admin access to a running Ceph cluster (in the example below, the nodes are pve1, pve2, pve3)
- You have a running Kubernetes cluster and your ~/.kube/config
I'll use the provided helm chart and the Seashell container. Seashell provides all the tools (kubectl, git, helm, neovim with YAML & JSON linters, jq, aliases, …).
Configure a new CephFS for testing
From one of the Ceph cluster nodes. The chosen PG counts of 16 and 32 are very small, but this is only for testing.
root@pve1:~# ceph osd pool create cephfs_metadata_tst 16 16
pool 'cephfs_metadata_tst' created
root@pve1:~# ceph osd pool create cephfs_data_tst 32 32
pool 'cephfs_data_tst' created
root@pve1:~# ceph fs new cephfstst cephfs_metadata_tst cephfs_data_tst
new fs with metadata pool 8 and data pool 9
root@pve1:~# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
name: cephfstst, metadata pool: cephfs_metadata_tst, data pools: [cephfs_data_tst ]
root@pve1:~#
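You can optionally confirm that an MDS picked up the new filesystem (the output will obviously differ on your cluster):
ceph fs status cephfstst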
Generate a minimal ceph.conf
From a Ceph node. Save the output to a file named ceph-minimal.conf; it will be used later to create the ceph-config ConfigMap.
root@pve1:~# ceph config generate-minimal-conf
# minimal ceph.conf for e7628d51-32b5-4f5c-8eec-1cafb41ead74
[global]
fsid = e7628d51-32b5-4f5c-8eec-1cafb41ead74
mon_host = [v2:192.168.178.2:3300/0,v1:192.168.178.2:6789/0] [v2:192.168.178.3:3300/0,v1:192.168.178.3:6789/0] [v2:192.168.178.4:3300/0,v1:192.168.178.4:6789/0]
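If you prefer to capture it in one go from your workstation, you can redirect the command output over SSH straight into ceph-minimal.conf (this assumes SSH access to pve1, as in option 2 below):
ssh root@pve1 'ceph config generate-minimal-conf' > ceph-minimal.conf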
Get the client.admin key
ceph auth get client.admin
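If you only need the key itself (this is what the secret creation below expects), you can ask Ceph for just the key:
ceph auth get-key client.admin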
Get Seashell (optional) and check your K8S access
Seashell will search for ~/.kube/config and mount the current working directory to /workdir for persistence. It contains all the tools needed for this tutorial. It's configured for Solarized Dark colors. Feel free to use it, or install helm/editors/git/kubectl on your workstation.
pivert@Z690:~/LocalDocuments$ cd ceph-csi
pivert@Z690:~/LocalDocuments/ceph-csi$ curl -o seashell https://gitlab.com/pivert/seashell/-/raw/main/seashell && chmod a+x seashell
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4460  100  4460    0     0   8468      0 --:--:-- --:--:-- --:--:--  8462
pivert@Z690:~/LocalDocuments/ceph-csi$ ./seashell
Info: local mount point /dev/ttyUSB0 does not exists. Skipping definition: /dev/ttyUSB0:/dev/ttyUSB0
description : Unified environment for system administration, applications development, and running Python/Perl/Ansible/Node applications.
maintainer : François Delpierre <docker@pivert.org>
org.opencontainers.image.documentation : https://gitlab.com/pivert/ansible-neovim-pyright/
org.opencontainers.image.title : seashell DevOps toolbox
org.opencontainers.image.url : https://gitlab.com/pivert/seashell/
org.opencontainers.image.vendor : François Delpierre <docker@pivert.org>
org.opencontainers.image.version : 2.03
created : 2022-11-08T17:14:45.223508985Z
Change /user owner to 1000:1000
Debian GNU/Linux 11 \n \l
HELP: Check mainconfig with ‘config’ command
16:08:47[Z690-seashell-906251][pivert] /workdir
$kubectl get nodes
NAME STATUS ROLES AGE VERSION
rke21 Ready control-plane,etcd,master 49d v1.24.4+rke2r1
rke22 Ready control-plane,etcd,master 49d v1.24.4+rke2r1
rke23 Ready control-plane,etcd,master 49d v1.24.4+rke2r1
16:08:55[Z690-seashell-906251][pivert] /workdir
$
Get ceph-csi
git clone https://github.com/ceph/ceph-csi.git
Cloning into 'ceph-csi'...
remote: Enumerating objects: 95704, done.
remote: Counting objects: 100% (513/513), done.
remote: Compressing objects: 100% (304/304), done.
remote: Total 95704 (delta 251), reused 408 (delta 180), pack-reused 95191
Receiving objects: 100% (95704/95704), 102.76 MiB | 6.03 MiB/s, done.
Resolving deltas: 100% (57504/57504), done.
16:11:46[Z690-seashell-906251][pivert] /workdir
$ls
ceph-csi seashell
16:11:49[Z690-seashell-906251][pivert] /workdir
$
Create the namespace
kubectl create namespace ceph-csi-cephfs
Create the secret (option 1)
You need to paste the key from the «Get the client.admin key» step above.
read -s -p 'Please enter key from ceph auth get-key client.admin : ' KEY
kubectl --namespace ceph-csi-cephfs create secret generic csi-cephfs-secret --from-literal=adminID=admin --from-literal=adminKey=${KEY}
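Optionally, verify that the key was stored correctly by decoding the adminKey field of the secret:
kubectl --namespace ceph-csi-cephfs get secret csi-cephfs-secret -o jsonpath='{.data.adminKey}' | base64 -d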
Create the secret (option 2)
Directly, if you have SSH access to your Ceph cluster:
KEY=$(ssh root@pve1 'ceph auth get client.admin -f json' | jq -r '.[0].key')
kubectl --namespace ceph-csi-cephfs create secret generic csi-cephfs-secret --from-literal=adminID=admin --from-literal=adminKey=${KEY}
Checks & ceph-config configmap
kubectl --namespace ceph-csi-cephfs describe secret csi-cephfs-secret
kubectl --namespace ceph-csi-cephfs create configmap ceph-config --from-file=ceph.conf=./ceph-minimal.conf
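You can also display the resulting ConfigMap to confirm it contains your minimal ceph.conf:
kubectl --namespace ceph-csi-cephfs get configmap ceph-config -o yaml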
Helm
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm show values ceph-csi/ceph-csi-cephfs > defaultValues.yaml
cp defaultValues.yaml values.yaml
vim values.yaml
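Before installing, it can be handy to review exactly what you changed compared to the chart defaults:
diff -u defaultValues.yaml values.yaml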
The helm values.yaml
You need to update values.yaml with your own values; in particular the csiConfig section (clusterID must match your Ceph fsid, monitors must list your MONs) and the storageClass section (clusterID and fsName). Example:
---
rbac:
# Specifies whether RBAC resources should be created
create: true
serviceAccounts:
nodeplugin:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
provisioner:
# Specifies whether a ServiceAccount should be created
create: true
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname
name:
csiConfig:
- clusterID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
monitors:
- 192.168.178.2
- 192.168.178.3
- 192.168.178.4
cephFS:
subvolumeGroup: "csi"
netNamespaceFilePath: "{{ .kubeletDir }}/plugins/{{ .driverName }}/net"
# Labels to apply to all resources
commonLabels: {}
# Set logging level for csi containers.
# Supported values from 0 to 5. 0 for general useful logs,
# 5 for trace level verbosity.
# logLevel is the variable for CSI driver containers's log level
logLevel: 5
# sidecarLogLevel is the variable for Kubernetes sidecar container's log level
sidecarLogLevel: 1
nodeplugin:
name: nodeplugin
# if you are using ceph-fuse client set this value to OnDelete
updateStrategy: RollingUpdate
# set user created priorityclassName for csi plugin pods. default is
# system-node-critical which is highest priority
priorityClassName: system-node-critical
httpMetrics:
# Metrics only available for cephcsi/cephcsi => 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8081
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "9080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
profiling:
enabled: false
registrar:
image:
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
tag: v2.6.2
pullPolicy: IfNotPresent
resources: {}
plugin:
image:
repository: quay.io/cephcsi/cephcsi
tag: v3.8.0
pullPolicy: IfNotPresent
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
# Set to true to enable Ceph Kernel clients
# on kernel < 4.17 which support quotas
# forcecephkernelclient: true
# common mount options to apply all mounting
# example: kernelmountoptions: "recover_session=clean"
kernelmountoptions: ""
fusemountoptions: ""
provisioner:
name: provisioner
replicaCount: 3
strategy:
# RollingUpdate strategy replaces old pods with new ones gradually,
# without incurring downtime.
type: RollingUpdate
rollingUpdate:
# maxUnavailable is the maximum number of pods that can be
# unavailable during the update process.
maxUnavailable: 50%
# Timeout for waiting for creation or deletion of a volume
timeout: 60s
# cluster name to set on the subvolume
# clustername: "k8s-cluster-1"
# set user created priorityclassName for csi provisioner pods. default is
# system-cluster-critical which is less priority than system-node-critical
priorityClassName: system-cluster-critical
# enable hostnetwork for provisioner pod. default is false
# useful for deployments where the podNetwork has no access to ceph
enableHostNetwork: false
httpMetrics:
# Metrics only available for cephcsi/cephcsi => 1.2.0
# Specifies whether http metrics should be exposed
enabled: true
# The port of the container to expose the metrics
containerPort: 8081
service:
# Specifies whether a service should be created for the metrics
enabled: true
# The port to use for the service
servicePort: 8080
type: ClusterIP
# Annotations for the service
# Example:
# annotations:
# prometheus.io/scrape: "true"
# prometheus.io/port: "9080"
annotations: {}
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
externalIPs: []
loadBalancerIP: ""
loadBalancerSourceRanges: []
profiling:
enabled: false
provisioner:
image:
repository: registry.k8s.io/sig-storage/csi-provisioner
tag: v3.3.0
pullPolicy: IfNotPresent
resources: {}
## For further options, check
## https://github.com/kubernetes-csi/external-provisioner#command-line-options
extraArgs: []
# set metadata on volume
setmetadata: true
resizer:
name: resizer
enabled: true
image:
repository: registry.k8s.io/sig-storage/csi-resizer
tag: v1.6.0
pullPolicy: IfNotPresent
resources: {}
## For further options, check
## https://github.com/kubernetes-csi/external-resizer#recommended-optional-arguments
extraArgs: []
snapshotter:
image:
repository: registry.k8s.io/sig-storage/csi-snapshotter
tag: v6.1.0
pullPolicy: IfNotPresent
resources: {}
## For further options, check
## https://github.com/kubernetes-csi/external-snapshotter#csi-external-snapshotter-sidecar-command-line-options
extraArgs: []
nodeSelector: {}
tolerations: []
affinity: {}
# Mount the host /etc/selinux inside pods to support
# selinux-enabled filesystems
selinuxMount: true
storageClass:
# Specifies whether the Storage class should be created
create: true
name: csi-cephfs-sc
# Annotations for the storage class
# Example:
# annotations:
# storageclass.kubernetes.io/is-default-class: "true"
annotations: {}
# String representing a Ceph cluster to provision storage from.
# Should be unique across all Ceph clusters in use for provisioning,
# cannot be greater than 36 bytes in length, and should remain immutable for
# the lifetime of the StorageClass in use.
clusterID: "e7628d51-32b5-4f5c-8eec-1cafb41ead74"
# (required) CephFS filesystem name into which the volume shall be created
# eg: fsName: myfs
fsName: cephfstst
# (optional) Ceph pool into which volume data shall be stored
# pool: <cephfs-data-pool>
# For eg:
# pool: "replicapool"
pool: ""
# (optional) Comma separated string of Ceph-fuse mount options.
# For eg:
# fuseMountOptions: debug
fuseMountOptions: ""
# (optional) Comma separated string of Cephfs kernel mount options.
# Check man mount.ceph for mount options. For eg:
# kernelMountOptions: readdir_max_bytes=1048576,norbytes
kernelMountOptions: ""
# (optional) The driver can use either ceph-fuse (fuse) or
# ceph kernelclient (kernel).
# If omitted, default volume mounter will be used - this is
# determined by probing for ceph-fuse and mount.ceph
# mounter: kernel
mounter: ""
# (optional) Prefix to use for naming subvolumes.
# If omitted, defaults to "csi-vol-".
# volumeNamePrefix: "foo-bar-"
volumeNamePrefix: ""
# The secrets have to contain user and/or Ceph admin credentials.
provisionerSecret: csi-cephfs-secret
# If the Namespaces are not specified, the secrets are assumed to
# be in the Release namespace.
provisionerSecretNamespace: ""
controllerExpandSecret: csi-cephfs-secret
controllerExpandSecretNamespace: ""
nodeStageSecret: csi-cephfs-secret
nodeStageSecretNamespace: ""
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions: []
# Mount Options
# Example:
# mountOptions:
# - discard
secret:
# Specifies whether the secret should be created
create: false
name: csi-cephfs-secret
# Key values correspond to a user name and its key, as defined in the
# ceph cluster. User ID should have required access to the 'pool'
# specified in the storage class
adminID: <plaintext ID>
adminKey: <Ceph auth key corresponding to ID above>
cephconf: |
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.178.2/24
fsid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
mon_allow_pool_delete = true
mon_host = 192.168.178.2 192.168.178.4 192.168.178.3
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.178.2/24
fuse_set_user_groups = false
fuse_big_writes = true
[mds.pve1]
host = pve1
mds_standby_for_name = pve
[mds.pve2]
host = pve2
mds_standby_for_name = pve
[mds.pve3]
host = pve3
mds_standby_for_name = pve
[mon.pve1]
public_addr = 192.168.178.2
[mon.pve2]
public_addr = 192.168.178.3
[mon.pve3]
public_addr = 192.168.178.4
#########################################################
# Variables for 'internal' use please use with caution! #
#########################################################
# The filename of the provisioner socket
provisionerSocketFile: csi-provisioner.sock
# The filename of the plugin socket
pluginSocketFile: csi.sock
# kubelet working directory,can be set using `--root-dir` when starting kubelet.
kubeletDir: /var/lib/kubelet
# Name of the csi-driver
driverName: cephfs.csi.ceph.com
# Name of the configmap used for state
configMapName: ceph-csi-config
# Key to use in the Configmap if not config.json
# configMapKey:
# Use an externally provided configmap
externallyManagedConfigmap: false
# Name of the configmap used for ceph.conf
cephConfConfigMapName: ceph-config
Install
helm install --namespace "ceph-csi-cephfs" "ceph-csi-cephfs" ceph-csi/ceph-csi-cephfs --values ./values.yaml
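Once the release is deployed, check that the provisioner and nodeplugin pods come up, then validate dynamic provisioning with a small test claim against the csi-cephfs-sc StorageClass defined in values.yaml (the PVC name test-cephfs-pvc and the 1Gi size below are arbitrary, this is just a quick sanity check):
kubectl --namespace ceph-csi-cephfs get pods
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-cephfs-sc
EOF
kubectl get pvc test-cephfs-pvc
The PVC should reach the Bound status within a few seconds. If it stays Pending, check the logs of the provisioner pods.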

3 responses to “ceph-csi on Kubernetes 1.24 (CephFS)”
Hi,
Thanks a lot for this article. I am stuck at the last command, which fails with the following error:
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "ceph-config" in namespace "ceph-csi-cephfs" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "ceph-csi-cephfs"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "ceph-csi-cephfs"
I am running a MicroK8s cluster (v1.26.3, revision 4959) on Ubuntu. The 3 VMs run on Proxmox 7.4-3 and I am trying to use CephFS as persistent storage for MicroK8s.
I've tried not executing the two commands below:
kubectl --namespace ceph-csi-cephfs describe secret csi-cephfs-secret
kubectl --namespace ceph-csi-cephfs create configmap ceph-config --from-file=ceph.conf=./ceph-minimal.conf
And instead added the correct values in values.yaml -> my pods are then created, but the provisioners are stuck in Pending status:
$ kubectl get pods -n ceph-csi-cephfs
NAME READY STATUS RESTARTS AGE
ceph-csi-cephfs-provisioner-5d5d78fcc4-pnq7d 0/5 Pending 0 14m
ceph-csi-cephfs-provisioner-5d5d78fcc4-zj6lx 0/5 Pending 0 14m
ceph-csi-cephfs-provisioner-5d5d78fcc4-7bf4d 0/5 Pending 0 14m
ceph-csi-cephfs-nodeplugin-dgdch 3/3 Running 0 14m
ceph-csi-cephfs-nodeplugin-v2nm7 3/3 Running 0 14m
ceph-csi-cephfs-nodeplugin-qv87p 3/3 Running 0 14m
Logs:
$ kubectl logs -n ceph-csi-cephfs ceph-csi-cephfs-provisioner-5d5d78fcc4-pnq7d
Defaulted container "csi-provisioner" out of: csi-provisioner, csi-snapshotter, csi-resizer, csi-cephfsplugin, liveness-prometheus
$ kubectl logs -n ceph-csi-cephfs ceph-csi-cephfs-nodeplugin-dgdch
Defaulted container "driver-registrar" out of: driver-registrar, csi-cephfsplugin, liveness-prometheus
I0401 13:10:24.972073 1665786 main.go:166] Version: v2.6.2
I0401 13:10:24.972136 1665786 main.go:167] Running node-driver-registrar in mode=registration
I0401 13:10:24.972515 1665786 main.go:191] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0401 13:10:24.972543 1665786 connection.go:154] Connecting to unix:///csi/csi.sock
I0401 13:10:25.973424 1665786 main.go:198] Calling CSI driver to discover driver name
I0401 13:10:25.973466 1665786 connection.go:183] GRPC call: /csi.v1.Identity/GetPluginInfo
I0401 13:10:25.973473 1665786 connection.go:184] GRPC request: {}
I0401 13:10:25.978235 1665786 connection.go:186] GRPC response: {"name":"cephfs.csi.ceph.com","vendor_version":"v3.8.0"}
I0401 13:10:25.978293 1665786 connection.go:187] GRPC error:
I0401 13:10:25.978300 1665786 main.go:208] CSI driver name: "cephfs.csi.ceph.com"
I0401 13:10:25.978324 1665786 node_register.go:53] Starting Registration Server at: /registration/cephfs.csi.ceph.com-reg.sock
I0401 13:10:25.978442 1665786 node_register.go:62] Registration Server started at: /registration/cephfs.csi.ceph.com-reg.sock
I0401 13:10:25.978604 1665786 node_register.go:92] Skipping HTTP server because endpoint is set to: ""
I0401 13:10:26.142141 1665786 main.go:102] Received GetInfo call: &InfoRequest{}
I0401 13:10:26.142395 1665786 main.go:109] "Kubelet registration probe created" path="/var/lib/kubelet/plugins/cephfs.csi.ceph.com/registration"
I0401 13:10:26.321657 1665786 main.go:120] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
Any idea what is wrong or what am I doing wrong?
Thanks for your help
Hi Jay,
I didn’t have much time to check on your problem. But I’m curious.
Did you manage to find the problem? How did you fix it? Do you have recommendations to prevent or identify the issue?
Your problem seems very similar to the one in this comment: https://github.com/Azure/secrets-store-csi-driver-provider-azure/issues/798#issuecomment-1039878360
Regards,
Hello Jay, the existing ConfigMap needs to be owned by Helm. To resolve it, just edit the existing ConfigMap with "kubectl edit cm ceph-config -n ceph-csi-cephfs",
then add the annotations and labels as below.
metadata:
  annotations:
    meta.helm.sh/release-name: ceph-csi-cephfs
    meta.helm.sh/release-namespace: ceph-csi-cephfs
  labels:
    app.kubernetes.io/managed-by: Helm
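For reference, the same labels and annotations can be applied without opening an editor (these commands are equivalent to the manual edit above):
kubectl -n ceph-csi-cephfs annotate configmap ceph-config meta.helm.sh/release-name=ceph-csi-cephfs meta.helm.sh/release-namespace=ceph-csi-cephfs
kubectl -n ceph-csi-cephfs label configmap ceph-config app.kubernetes.io/managed-by=Helm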
Btw, thanks to François for this article.