Object storage
All the services available through Ceph are built on top of Ceph’s distributed object store, RADOS.
The ceph-radosgw charm deploys the RADOS Gateway (RGW), an S3- and Swift-compatible HTTP gateway. The deployment is done within the context of an existing Charmed Ceph cluster.
Highly available RGW is achieved by deploying multiple gateways (i.e. multiple ceph-radosgw application units) in combination with the hacluster charm (and typically a VIP).
RGW deployment
To deploy a single RADOS Gateway in a pre-existing Ceph cluster:
juju deploy --to lxd:0 ceph-radosgw
juju integrate ceph-radosgw:mon ceph-mon:radosgw
Here the ceph-radosgw unit is containerised: the new LXD container is placed on existing machine 0.
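Once the unit reaches an active state, a quick sanity check is to probe the gateway over HTTP: an unauthenticated request to the root path should return an S3-style XML listing for the anonymous user. The address below is a placeholder for the unit's public address as reported by juju status:
# Substitute the gateway unit's public address from 'juju status'.
curl -i http://<rgw-address>:80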
For an HA scenario, deploy three gateway units together with the hacluster charm:
juju deploy --config cluster_count=3 hacluster radosgw-hacluster
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.0.0.100 ceph-radosgw
juju integrate radosgw-hacluster:ha ceph-radosgw:ha
juju integrate ceph-radosgw:mon ceph-mon:radosgw
The RADOS Gateway is now fully set up.
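With the cluster settled, clients should reach the gateway through the virtual IP rather than any individual unit address. A quick check against the example VIP configured above:
curl -i http://10.0.0.100:80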
RGW client usage
This section provides optional instructions for verifying the RADOS Gateway by setting up a simple client environment against a single ceph-radosgw unit. Deploy the client using the steps provided in the Client setup appendix.
Create a user for the Ceph Object Gateway service (e.g. ‘ubuntu’):
juju ssh ceph-mon/0 'sudo radosgw-admin user create \
--uid="ubuntu" --display-name="Charmed Ceph"'
The command output includes an “access key” and a “secret key”, both of which will be needed later on. They can also be queried at any time:
juju ssh ceph-mon/0 'sudo radosgw-admin user info \
--uid ubuntu' | grep -e access_key -e secret_key
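Alternatively, the two keys can be extracted from the command's JSON output with the jq utility (a convenience sketch; it assumes jq is installed locally and that the user has a single key pair):
juju ssh ceph-mon/0 'sudo radosgw-admin user info --uid ubuntu' \
    | jq -r '.keys[0] | .access_key, .secret_key'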
The client software used for these instructions is minio-mc-nsg.
An example deployment will have juju status output similar to the following:
Model  Controller     Cloud/Region     Version  SLA          Timestamp
ceph   my-controller  my-maas/default  3.5.2    unsupported  20:34:16Z
App           Version  Status  Scale  Charm         Channel      Rev  OS      Notes
ceph-mon      18.2.0   active      3  ceph-mon      reef/stable   93  ubuntu
ceph-osd      18.2.0   active      3  ceph-osd      reef/stable  528  ubuntu
ceph-radosgw  18.2.0   active      1  ceph-radosgw  reef/stable  574  ubuntu
ceph-client   22.04    active      1  ubuntu        stable        18  ubuntu
Unit             Workload  Agent  Machine  Public address  Ports   Message
ceph-client/0*   active    idle   3        10.0.0.240              ready
ceph-mon/0       active    idle   0/lxd/1  10.0.0.247              Unit is ready and clustered
ceph-mon/1       active    idle   1/lxd/1  10.0.0.242              Unit is ready and clustered
ceph-mon/2*      active    idle   2/lxd/1  10.0.0.249              Unit is ready and clustered
ceph-osd/0       active    idle   0        10.0.0.229              Unit is ready (2 OSD)
ceph-osd/1*      active    idle   1        10.0.0.230              Unit is ready (2 OSD)
ceph-osd/2       active    idle   2        10.0.0.252              Unit is ready (2 OSD)
ceph-radosgw/0*  active    idle   0/lxd/2  10.0.0.239      80/tcp  Unit is ready
The client host is represented by the ceph-client/0 unit.
Connect to the client:
juju ssh ceph-client/0
From the RGW client, install the object storage client software:
sudo snap install minio-mc-nsg
sudo snap alias minio-mc-nsg mc
For this example deployment, we have the following:
- service IP address and port:
http://10.0.0.239:80
- access key:
N3STUWYGY9Q6W92YLO1P
- secret key:
wED4WGGkC5LAy29ZmbjkIW7nN2hbHXaJuC6yDoJX
Add a host entry (e.g. ‘ceph-radosgw’) to the client configuration using the above values:
mc config host add ceph-radosgw \
http://10.0.0.239:80 N3STUWYGY9Q6W92YLO1P \
wED4WGGkC5LAy29ZmbjkIW7nN2hbHXaJuC6yDoJX
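Note that newer releases of the MinIO client replace mc config host add with mc alias set. If the installed client rejects the command above, the equivalent using the same example values is:
mc alias set ceph-radosgw http://10.0.0.239:80 \
    N3STUWYGY9Q6W92YLO1P wED4WGGkC5LAy29ZmbjkIW7nN2hbHXaJuC6yDoJX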
Verify that the client can interact with the service by creating a “bucket” and writing to it:
mc mb ceph-radosgw/mybucket
touch test
mc cp test ceph-radosgw/mybucket
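To confirm the round trip, list the bucket contents and inspect the uploaded object (both are standard mc subcommands):
mc ls ceph-radosgw/mybucket
mc stat ceph-radosgw/mybucket/test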