Storage Locked / Cannot Use Ceph Pool

This issue arises when a pool has a bad CRUSH rule or the overall Ceph configuration is wrong.

First of all, check the replication rule in the pool configuration.

In my case, the pool's CRUSH rule was set to replicated_rule, which by default spreads replicas across different hosts. I was running a single-node cluster, so the total number of hosts was 1, while the pool's number of replicas (size) was 3 and its minimum number of replicas (min_size) was 2. With only one host and a host-level failure domain requiring at least 2 replicas on separate hosts, Ceph could not satisfy the placement rules and the pool stopped working.
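
To spot this kind of mismatch on your own cluster, the following standard Ceph commands show the pool's replication settings and the cluster health (prodpool is the pool name used in the examples further down; substitute your own):

    ceph health detail                       # reports undersized or inactive PGs
    ceph osd pool get prodpool size          # number of replicas
    ceph osd pool get prodpool min_size      # minimum replicas required for I/O
    ceph osd pool get prodpool crush_rule    # rule currently assigned to the pool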

For me, there were two solutions: either decrease the minimum number of replicas (min_size) to 1 to match the single host, or, since I had two OSDs, change the failure domain of the rule from host to OSD.

Both solutions work, but I preferred the second and changed the failure domain to OSD, because some amount of redundancy is necessary to keep the data safe.
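
If you prefer the first option instead, a minimal sketch is a single command (again assuming the pool is named prodpool); be aware that with min_size 1 the pool keeps serving I/O even when only one copy of the data is available:

    ceph osd pool set prodpool min_size 1    # allow I/O with a single replica available

The second option, changing the failure domain, is what the next section implements.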

Creating a CRUSH Rule

There are two types of replication (failure domains), illustrated in the sketch right after this list:

  1. Across hosts
  2. Across OSDs
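
As a sketch, both variants are created with the same command described later in the cheatsheet; the rule names replicated_host and replicated_osd below are just illustrative choices:

    ceph osd crush rule create-replicated replicated_host default host    # replicas spread across hosts
    ceph osd crush rule create-replicated replicated_osd default osd      # replicas spread across OSDs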

Step-by-step commands (a short verification sketch follows the list):

  1. Inspect the CRUSH hierarchy and confirm your hosts and OSDs:

    ceph osd tree
  2. Create a replicated rule whose failure domain is OSD:

    ceph osd crush rule create-replicated replicated_osd default osd
  3. Assign the new rule to the pool:

    ceph osd pool set prodpool crush_rule replicated_osd
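
To verify that the change took effect and that the cluster is recovering, the usual status commands are enough; ceph osd pool ls detail lists the crush_rule assigned to every pool:

    ceph osd pool ls detail    # each pool is listed with its crush_rule
    ceph -s                    # overall health and PG states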

If you encounter any issues while using Ceph with Proxmox VE, or if you applied a wrong configuration by mistake and can no longer log in to the Proxmox VE web interface, fix the Ceph issue first from the command line over an SSH session, and then restart the Proxmox system services:

systemctl restart pve-cluster.service
systemctl restart pvedaemon.service
systemctl restart pveproxy.service
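
To confirm the services came back up cleanly, a standard systemd status check is enough; this is only a sanity check, not part of the fix:

    systemctl status pve-cluster.service pvedaemon.service pveproxy.service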

Cheatsheet

  1. Dump details of all CRUSH rules:

    ceph osd crush rule dump
  2. List all CRUSH rules:

    ceph osd crush rule ls
  3. List the OSD tree:

    ceph osd tree
  4. Remove a CRUSH rule:

    ceph osd crush rule rm <name>
  5. Create a CRUSH rule:

    ceph osd crush rule create-replicated <rule_name> <crush_tree_root> <type> [<class>]

    Example:

    ceph osd crush rule create-replicated replicated_osd default osd
  6. List all pools:

    ceph osd pool ls
  7. Show pool statistics:

    ceph osd pool stats
  8. Set a CRUSH rule on a pool:

    ceph osd pool set <pool_name> crush_rule <crush_rule_name>

    Example:

    ceph osd pool set prodpool crush_rule replicated_osd