File storage
Charmed Ceph supports two types of access to file storage: CephFS and NFS.
The Ceph File System, or CephFS, is a POSIX-compliant file system built on top of Ceph’s distributed object store, RADOS.
NFS Ganesha is an NFS server (refer to Sharing File Systems with NFS) that runs in user space rather than as part of the operating system kernel; it is used to present CephFS shares via NFS.
The ceph-fs charm deploys the metadata server daemon (MDS), which is the underlying management software for CephFS. The charm is deployed within the context of an existing Charmed Ceph cluster.
Highly available CephFS is achieved by deploying multiple MDS servers (i.e. multiple ceph-fs application units).
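For example, an existing ceph-fs application can be scaled out by adding units; the placement directives below are illustrative and assume existing machines 1 and 2:
juju add-unit -n 2 --to lxd:1,lxd:2 ceph-fs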
The ceph-nfs charm deploys nfs-ganesha, the software used to serve NFS. The charm is deployed alongside CephFS.
CephFS deployment
To deploy a three-node MDS cluster in a pre-existing Ceph cluster:
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 ceph-fs
juju add-relation ceph-fs:ceph-mds ceph-mon:mds
Here the three ceph-fs units are containerised; the new containers are placed on existing machines 0, 1, and 2.
CephFS is now fully set up.
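As an optional quick check, the overall cluster status, including the newly started MDS daemons, can be queried from any ceph-mon unit:
juju ssh ceph-mon/0 "sudo ceph -s"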
CephFS client usage
This section provides optional instructions for verifying the CephFS service by setting up a simple client environment. Deploy the client using the steps provided in the Client setup appendix.
These instructions are based upon the native capabilities of the Linux kernel (v4.x). A kernel driver allows the client to mount CephFS as a regular file system.
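Optionally, confirm that the ceph kernel module is available on the client (it ships with the stock Ubuntu GA kernel; the command below is a simple sanity check):
modinfo ceph | head -n 3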
An example deployment will have a juju status output similar to the following:
Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  3.5.2    unsupported  19:34:16Z

App          Version  Status  Scale  Charm     Channel      Rev  Exposed  Message
ceph-fs      18.2.0   active  3      ceph-fs   reef/stable  47   no       Unit is ready
ceph-mon     18.2.0   active  3      ceph-mon  reef/stable  93   no       Unit is ready and clustered
ceph-nfs              active  2      ceph-nfs  reef/stable  11   no       Unit is ready
ceph-osd     18.2.0   active  3      ceph-osd  reef/stable  528  no       Unit is ready (2 OSD)
ceph-client  22.04    active  2      ubuntu    stable       18   no

Unit            Workload  Agent  Machine  Public address  Ports  Message
ceph-client/0*  active    idle   3        10.0.0.240             ready
ceph-fs/0       active    idle   0/lxd/0  10.0.0.245             Unit is ready
ceph-fs/1       active    idle   1/lxd/0  10.0.0.246             Unit is ready
ceph-fs/2*      active    idle   2/lxd/0  10.0.0.241             Unit is ready
ceph-mon/0      active    idle   0/lxd/1  10.0.0.247             Unit is ready and clustered
ceph-mon/1      active    idle   1/lxd/1  10.0.0.242             Unit is ready and clustered
ceph-mon/2*     active    idle   2/lxd/1  10.0.0.249             Unit is ready and clustered
ceph-osd/0      active    idle   0        10.0.0.229             Unit is ready (2 OSD)
ceph-osd/1*     active    idle   1        10.0.0.230             Unit is ready (2 OSD)
ceph-osd/2      active    idle   2        10.0.0.252             Unit is ready (2 OSD)
The client host is represented by the ceph-client/0 unit.
Verify that the filesystem name set up by the ceph-fs charm is ‘ceph-fs’:
juju ssh ceph-mon/0 "sudo ceph fs ls"
Output:
name: ceph-fs, metadata pool: ceph-fs_metadata, data pools: [ceph-fs_data ]
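More detail on the filesystem, such as MDS ranks and the backing pools, can optionally be inspected with:
juju ssh ceph-mon/0 "sudo ceph fs status ceph-fs"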
Create a CephFS user (‘test’) with read/write permissions at the root of the ‘ceph-fs’ filesystem, collect the user’s keyring file, and transfer it to the client:
juju ssh ceph-mon/0 "sudo ceph fs authorize ceph-fs client.test / rw" \
| tee ceph.client.test.keyring
juju scp ceph.client.test.keyring ceph-client/0:
Connect to the client:
juju ssh ceph-client/0
From the CephFS client, configure the client using the keyring file and set up the correct permissions:
sudo mv ~ubuntu/ceph.client.test.keyring /etc/ceph
sudo chmod 600 /etc/ceph/ceph.client.test.keyring
sudo chown root: /etc/ceph/ceph.client.test.keyring
The key installed on a client host authorises access to the CephFS filesystem for the host itself, and not to a particular user.
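For reference, the installed keyring file follows the standard Ceph keyring format; the key value shown below is only a placeholder for the secret generated earlier by ceph fs authorize:
[client.test]
	key = <base64-encoded cephx secret>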
Mount the CephFS filesystem and create a test file:
sudo mkdir /mnt/cephfs
sudo mount -t ceph :/ /mnt/cephfs -o name=test
sudo mkdir /mnt/cephfs/work
sudo chown ubuntu: /mnt/cephfs/work
touch /mnt/cephfs/work/test
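The mount and the test file can be verified with standard tools, for example:
df -h /mnt/cephfs
ls -l /mnt/cephfs/work/test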
CephNFS deployment
To deploy a three-node CephNFS cluster in a pre-existing (Reef) Ceph cluster:
juju deploy --channel reef/stable -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.0.101 ceph-nfs
juju deploy hacluster
juju add-relation ceph-nfs hacluster
juju add-relation ceph-nfs:ceph-client ceph-mon:client
Here the three ceph-nfs units are containerised; the new containers are placed on existing machines 0, 1, and 2.
CephNFS is now fully set up.
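As an optional sanity check, confirm that the new units have settled before continuing:
juju status ceph-nfs hacluster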
CephNFS client usage
This section provides optional instructions for verifying the CephNFS service by setting up a simple client environment. The only client-side requirement is the nfs-common package.
An example deployment will have a juju status output similar to the following:
Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  3.5.2    unsupported  19:34:16Z

App          Version  Status  Scale  Charm     Channel      Rev  Exposed  Message
ceph-fs      18.2.0   active  3      ceph-fs   reef/stable  47   no       Unit is ready
ceph-mon     18.2.0   active  3      ceph-mon  reef/stable  93   no       Unit is ready and clustered
ceph-nfs              active  2      ceph-nfs  reef/stable  11   no       Unit is ready
ceph-osd     18.2.0   active  3      ceph-osd  reef/stable  528  no       Unit is ready (2 OSD)
ceph-client  22.04    active  2      ubuntu    stable       18   no

Unit            Workload  Agent  Machine  Public address  Ports  Message
ceph-client/0*  active    idle   3        10.0.0.240             ready
ceph-fs/0       active    idle   0/lxd/0  10.0.0.245             Unit is ready
ceph-fs/1       active    idle   1/lxd/0  10.0.0.246             Unit is ready
ceph-fs/2*      active    idle   2/lxd/0  10.0.0.241             Unit is ready
ceph-mon/0      active    idle   0/lxd/1  10.0.0.247             Unit is ready and clustered
ceph-mon/1      active    idle   1/lxd/1  10.0.0.242             Unit is ready and clustered
ceph-mon/2*     active    idle   2/lxd/1  10.0.0.249             Unit is ready and clustered
ceph-osd/0      active    idle   0        10.0.0.229             Unit is ready (2 OSD)
ceph-osd/1*     active    idle   1        10.0.0.230             Unit is ready (2 OSD)
ceph-osd/2      active    idle   2        10.0.0.252             Unit is ready (2 OSD)
The client host is represented by the ceph-client/0 unit.
Create an NFS share on the leader ceph-nfs unit:
juju run --wait ceph-nfs/1 create-share name=test-share allowed-ips=10.0.0.240 size=10
Output:
unit-ceph-nfs-1:
  UnitId: ceph-nfs/1
  id: "18"
  results:
    ip: 10.0.0.101
    message: Share created
    path: /volumes/_nogroup/test-share/b524fc68-7811-4e0d-82a8-889318d010c6
  status: completed
  timing:
    completed: 2022-04-22 07:24:52 +0000 UTC
    enqueued: 2022-04-22 07:24:46 +0000 UTC
    started: 2022-04-22 07:24:48 +0000 UTC
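Optionally, the share can be cross-checked on the Ceph side. Assuming it is backed by a subvolume of the 'ceph-fs' filesystem created earlier, it will appear in the subvolume listing:
juju ssh ceph-mon/0 "sudo ceph fs subvolume ls ceph-fs"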
Mount the NFS filesystem and create a file:
sudo mkdir /mnt/ceph_nfs
sudo mount -t nfs -o nfsvers=4.1,proto=tcp 10.0.0.101:/volumes/_nogroup/test-share/b524fc68-7811-4e0d-82a8-889318d010c6 /mnt/ceph_nfs
sudo mkdir /mnt/ceph_nfs/work
sudo chown ubuntu: /mnt/ceph_nfs/work
touch /mnt/ceph_nfs/work/test
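As with CephFS, the NFS mount and the new file can be verified with standard tools, for example:
findmnt /mnt/ceph_nfs
ls -l /mnt/ceph_nfs/work/test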