
Install Ceph on Ubuntu

Ceph is a storage system designed for excellent performance, reliability, and scalability. However, installing and managing Ceph can be challenging. The Ceph-on-Ubuntu solution takes the administration minutiae out of the equation through the use of Juju charms. With charms, deploying a Ceph cluster becomes trivial, as does scaling the cluster’s storage capacity.

Looking for help running Ceph?

Get in touch

Ceph install

Single-node deployment

  • Uses MicroCeph
  • Works on a workstation or VM
  • Suitable for testing and development

These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small-scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.

You will need a multi-core processor, at least 8 GiB of memory, and 100 GB of disk space. MicroCeph has been tested on x86-based physical and virtual machines running Ubuntu 22.04 LTS.
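Before installing, you can confirm that a machine meets these requirements with a few standard commands (illustrative only; any equivalent tooling works):

    free -h          # total memory: at least 8 GiB
    df -h /          # available disk space: at least 100 GB
    nproc            # number of CPU cores
    lsb_release -d   # Ubuntu release, e.g. 22.04 LTS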


  1. To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:

    sudo snap install microceph
  2. Then bootstrap the cluster:

    sudo microceph cluster bootstrap
  3. Check the cluster status with the following command:

    sudo microceph.ceph status

    Here you should see that there is a single node in the cluster.

  4. To use MicroCeph as a single node, the default CRUSH rules need to be modified so that data is replicated across OSDs rather than across hosts (see the verification sketch after these steps):

    sudo microceph.ceph osd crush rule rm replicated_rule
    sudo microceph.ceph osd crush rule create-replicated single default osd
  5. Next, add some disks that will be used as OSDs:

    sudo microceph disk add /dev/sd[x] --wipe

    Repeat for each disk you would like to use as an OSD on this node. Cluster status can be verified using:

    sudo microceph.ceph status
    sudo microceph.ceph osd status
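Once the OSDs are up, you can confirm that the new rule is in place and run a quick smoke test by creating a pool. The pool name and placement-group count below are arbitrary; this is an optional sketch, not a required step:

    # List CRUSH rules; only the 'single' rule created above should remain
    sudo microceph.ceph osd crush rule ls

    # Create a small test pool and check that its capacity is reported
    sudo microceph.ceph osd pool create testpool 32
    sudo microceph.ceph df

If the machine has no spare block devices, recent MicroCeph releases can also back OSDs with loop files (for example, sudo microceph disk add loop,4G,3), although file-backed OSDs are only suitable for testing.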

Multi-node deployment

  • Uses MicroCeph
  • Minimum of 4 nodes, full-HA Ceph cluster
  • Suitable for small-scale production environments

These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small-scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.

You will need 4 physical machines, each with a multi-core processor, at least 8 GiB of memory, and 100 GB of disk space. MicroCeph has been tested on x86-based physical machines running Ubuntu 22.04 LTS.


  1. To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:

    sudo snap install microceph
  2. Then bootstrap the cluster from the first node:

    sudo microceph cluster bootstrap
  3. On the first node, register each additional node to generate a join token:

    sudo microceph cluster add node[x]
  4. Copy the resulting token and use it on node[x] to join the cluster:

    sudo microceph cluster join pasted-output-from-node1

    Repeat these steps for each additional node you would like to add to the cluster.

  5. Check the cluster status with the following command:

    sudo microceph.ceph status
  6. Next, add some disks to each node that will be used as OSDs:

    sudo microceph disk add /dev/sd[x] --wipe

    Repeat for each disk you would like to use as an OSD on that node, and additionally on the other nodes in the cluster. Cluster status can be verified using:

    sudo microceph.ceph status
    sudo microceph.ceph osd status
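After all nodes have joined and received disks, it is worth confirming that MicroCeph sees every member and that Ceph will replicate across hosts. A possible check is sketched below (verify the cluster list subcommand against your installed MicroCeph version):

    # List the members of the MicroCeph cluster
    sudo microceph cluster list

    # The CRUSH tree should show one host bucket per node, each holding its OSDs
    sudo microceph.ceph osd tree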

Containerised deployment

  • Uses a Canonical-supplied and maintained rock (OCI image)
  • Works with cephadm and Rook
  • Suitable for all types of containerised deployments

These installation instructions use the Canonical-produced and supplied Ceph rock. This OCI-compliant image provides a drop-in replacement for the upstream Ceph OCI image.
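As a sketch of how such an image slots into an existing workflow, cephadm accepts an alternative container image through its --image flag. The registry path below is a placeholder assumption; take the published reference from the container image documentation linked below:

    # Placeholder image path; substitute the published Ceph rock reference
    IMAGE=docker.io/ubuntu/ceph:latest

    # Bootstrap a new cluster with cephadm using that image
    sudo cephadm --image "$IMAGE" bootstrap --mon-ip 10.0.0.1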

Read the container image documentation ›

Large-scale deployment

  • Uses Charmed Ceph
  • Uses MAAS for bare metal orchestration
  • Suitable for large-scale production environments

Charmed Ceph is Canonical's fully automated, model-driven approach to installing and managing Ceph. Charmed Ceph is generally deployed on bare-metal hardware that is managed by MAAS.
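To give a flavour of the model-driven workflow, a minimal Charmed Ceph deployment on a Juju model might look like the sketch below. The ceph-mon and ceph-osd charm names are the published charms; the unit counts, osd-devices value, and relation endpoints are illustrative assumptions, so follow the linked how-to for the supported steps:

    # Deploy three monitor units and three OSD units
    juju deploy -n 3 ceph-mon
    juju deploy -n 3 ceph-osd --config osd-devices='/dev/sdb'

    # Relate the OSDs to the monitors so the cluster can form
    juju integrate ceph-osd:mon ceph-mon:osd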

How to install Charmed Ceph ›

For more details, read the Ceph documentation ›

Need more help with Ceph?

Let our Ceph experts help you take the next step.

Get in touch

Latest Ceph news from our blog ›
