Kindle Notes & Highlights
For data consistency, it performs data replication, failure detection, and recovery, as well as data migration and rebalancing across cluster nodes.
A Ceph pool is a logical partition to store objects that provides an organized way of storage.
Unlike traditional systems that rely on storing and managing a central metadata/index table, Ceph uses the CRUSH algorithm to deterministically compute where data should be written to or read from.
The CRUSH mechanism works in such a way that the metadata computation workload is distributed and performed only when needed.
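The key idea is that placement is computed, not looked up: every client can derive an object's location from nothing but the object name and the small, shared cluster map. A toy sketch of that idea (this is not Ceph's real CRUSH; the hash, PG count, and OSD list are illustrative):

```python
import hashlib

def place(obj_name: str, pg_num: int, osds: list[int], replicas: int) -> list[int]:
    # Step 1: hash the object name to a placement group. Ceph uses
    # rjenkins hashing internally; md5 here is only for illustration.
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    pg = h % pg_num
    # Step 2: deterministically derive an acting set of OSDs for the PG.
    # Real CRUSH walks the hierarchy described by the CRUSH map; this
    # sketch just takes consecutive OSDs from a PG-derived offset.
    start = pg % len(osds)
    return [osds[(start + i) % len(osds)] for i in range(replicas)]

# Any client running this computation gets the same answer, so no
# central metadata/index table has to be consulted or kept consistent.
print(place("rbd_data.1234", pg_num=128, osds=[0, 1, 2, 3, 4, 5], replicas=3))
```

The same determinism is why the metadata workload is distributed: the "lookup" happens on whichever client needs it, only when it needs it.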
A placement group is a logical collection of objects that are replicated on OSDs to provide reliability in a storage system.
The number of placement groups in a cluster should be meticulously calculated.
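A widely used sizing heuristic (the one behind the Ceph PG calculator) is total PGs ≈ (number of OSDs × 100) / pool replica size, rounded up to the next power of two. A minimal sketch, with illustrative cluster numbers:

```python
def pg_count(num_osds: int, pool_size: int, target_pgs_per_osd: int = 100) -> int:
    """Heuristic PG count: (OSDs * target per OSD) / replica size,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd // pool_size
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# 20 OSDs, 3-way replication: 20*100/3 = 666, next power of two = 1024
print(pg_count(num_osds=20, pool_size=3))
```

Too few PGs concentrates data and load on a handful of OSDs; far too many PGs inflates per-OSD resource usage, which is why the number deserves this up-front calculation.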
PGP stands for Placement Group for Placement purpose; a pool's pgp_num should be kept equal to its total number of placement groups (pg_num).
A Ceph pool is a logical partition to store objects. Each pool in Ceph holds a number of placement groups, which in turn holds a number of objects that are mapped to OSDs across clusters.
A Ceph pool is mapped to a CRUSH ruleset; when data is written to the pool, that ruleset determines the placement of objects and their replicas inside the cluster.
You can use ceph osd find to search an OSD and its location in a CRUSH map:
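The command takes an OSD id (for example, `ceph osd find 1`) and prints JSON that includes the OSD's address and its `crush_location`. A small sketch that parses output of that shape; the sample values below are made up, not from a real cluster:

```python
import json

# Illustrative (made-up) output in the shape `ceph osd find <id>` returns:
sample = '''
{
    "osd": 1,
    "ip": "192.168.1.101:6800/1234",
    "crush_location": {
        "host": "ceph-node2",
        "root": "default"
    }
}
'''

info = json.loads(sample)
loc = info["crush_location"]
print(f"osd.{info['osd']} lives on host {loc['host']} under root {loc['root']}")
```

The `crush_location` section is what places the OSD in the CRUSH hierarchy (host, rack, root, and so on), which is usually the part you want when tracking down a failing disk.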
If the primary OSDs of a placement group's acting set fail to report their statistics to the monitors, or if other OSDs report those primary OSDs as down, the monitors will consider these PGs stale.
As a rule of thumb, 2 GB of RAM per OSD daemon will be a good choice.
If this value is set to 0, Ceph uses the write-through caching method. If this parameter is not used, the default cache mechanism is write-back.
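The highlight does not name the parameter; assuming it refers to the RBD client cache setting `rbd cache max dirty` (which selects write-through behavior when set to 0), a ceph.conf sketch:

```ini
[client]
# Enable the RBD client-side cache (on by default in recent releases;
# shown explicitly here for clarity).
rbd cache = true
# 0 allowed dirty bytes => writes are acknowledged only after reaching
# the cluster, i.e. write-through. Leaving this unset (or > 0) keeps
# the default write-back behavior.
rbd cache max dirty = 0
```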
Erasure-coded pools require less storage space than replicated pools; however, this storage saving comes at the cost of performance, because the erasure coding process divides every object into multiple smaller data chunks and mixes a number of newly computed coding chunks in with those data chunks.
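The space saving is easy to quantify: an object split into k data chunks plus m coding chunks consumes (k + m) / k of its logical size, versus one full copy per replica. A quick comparison, using an illustrative k=4, m=2 profile against 3-way replication:

```python
def ec_raw_usage(logical_gb: float, k: int, m: int) -> float:
    """Raw space consumed by an erasure-coded pool with k data
    chunks and m coding chunks per object."""
    return logical_gb * (k + m) / k

def replicated_raw_usage(logical_gb: float, replicas: int) -> float:
    """Raw space consumed by a replicated pool (one full copy per replica)."""
    return logical_gb * replicas

print(ec_raw_usage(100, k=4, m=2))           # 150.0 GB raw for 100 GB of data
print(replicated_raw_usage(100, replicas=3))  # 300.0 GB raw for the same data
```

Both configurations above survive the loss of two OSDs, yet erasure coding stores the data in half the raw space; the trade-off is the CPU and latency cost of computing chunks on every write and reassembling them on every read.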

