Learning Ceph
Read between July 20 - September 3, 2017
23%
For data consistency, it performs data replication, failure detection, and recovery, as well as data migration and rebalancing across cluster nodes.
30%
A Ceph pool is a logical partition for storing objects that provides an organized way of managing storage.
31%
Unlike traditional systems that rely on storing and managing a central metadata / index table, Ceph uses the CRUSH algorithm to deterministically compute where the data should be written to or read from.
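To make the idea concrete, here is a toy sketch of table-free, deterministic placement. This is not the real CRUSH algorithm, and the OSD names are hypothetical; it only illustrates that when placement is a pure function of the object name, every client can compute the location independently, with no central index table to store or query.

```python
# Toy illustration (NOT the real CRUSH algorithm): deterministic,
# table-free placement of objects onto OSDs using a stable hash.
import hashlib

OSDS = ["osd.0", "osd.1", "osd.2", "osd.3"]  # hypothetical cluster

def locate(object_name: str, replicas: int = 2) -> list:
    """Deterministically pick `replicas` distinct OSDs for an object."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    start = digest % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(replicas)]

# Any node computes the same answer, with no lookup table involved:
assert locate("my-object") == locate("my-object")
```

Because the result is recomputed on demand rather than stored, there is no metadata table to keep consistent or to become a bottleneck.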
31%
The CRUSH mechanism works in such a way that the metadata computation workload is distributed and performed only when needed.
33%
A placement group is a logical collection of objects that are replicated on OSDs to provide reliability in a storage system.
33%
The number of placement groups in a cluster should be meticulously calculated.
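A sketch of the commonly cited PG-count guideline: roughly 100 PGs per OSD, divided by the pool's replica count, rounded up to the next power of two. Treat it as a starting point rather than an exact rule; the function name and numbers here are illustrative.

```python
# Rough PG-count guideline: (OSDs * 100) / replica count,
# rounded up to the next power of two.
def suggested_pg_count(num_osds: int, pool_size: int) -> int:
    raw = (num_osds * 100) / pool_size
    power = 1
    while power < raw:
        power *= 2
    return power

print(suggested_pg_count(10, 3))  # 10 OSDs, 3-way replication
```

On a live cluster, such a value would be applied with ceph osd pool set &lt;pool&gt; pg_num &lt;n&gt; (and a matching pgp_num).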
33%
PGP stands for Placement Group for Placement purpose; pgp_num should be kept equal to the total number of placement groups (pg_num).
34%
A Ceph pool is a logical partition to store objects. Each pool in Ceph holds a number of placement groups, which in turn holds a number of objects that are mapped to OSDs across clusters.
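The pool → placement group → OSD chain can be sketched as follows. This is a simplified, hypothetical mapping (a plain hash-modulo, whereas Ceph uses CRUSH for the PG-to-OSD step), meant only to show that objects map to a bounded number of PGs, and PGs — not individual objects — are then placed on OSDs.

```python
# Simplified sketch of the pool -> PG -> OSD chain described above
# (hash-based, hypothetical names; real PG-to-OSD placement uses CRUSH).
import hashlib

PG_NUM = 128  # placement groups in this hypothetical pool

def object_to_pg(pool_id: int, object_name: str) -> str:
    """Map an object to a PG id of the familiar 'pool.pgid' form."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return f"{pool_id}.{h % PG_NUM:x}"  # e.g. "1.2a"

print(object_to_pg(1, "rbd_data.1000"))
```

Grouping objects into PGs keeps the placement problem small: the cluster tracks and rebalances a few thousand PGs instead of millions of individual objects.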
34%
A Ceph pool is mapped to a CRUSH ruleset; when data is written to the pool, that ruleset determines the placement of objects and their replicas inside the cluster.
68%
You can use the ceph osd find command to locate an OSD and its position in the CRUSH map:
69%
If the primary OSDs of a placement group's acting set fail to report their statistics to the monitors, or if other OSDs report those primary OSDs as down, the monitors will mark these PGs as stale.
79%
2 GB of RAM per OSD daemon is a good choice.
83%
If this value is set to 0, Ceph uses the write-through caching method; if this parameter is not set, the default cache mechanism is write-back.
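A ceph.conf sketch of this behavior, assuming the parameter being described is rbd cache max dirty (the highlight does not name it, so verify against your Ceph release's documentation):

```ini
# Hypothetical [client] section: with the RBD cache enabled,
# setting rbd cache max dirty = 0 forces write-through caching;
# leaving it unset keeps the default write-back behavior.
[client]
rbd cache = true
rbd cache max dirty = 0
```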
85%
Erasure-coded pools require less storage space than replicated pools; however, this storage saving comes at the cost of performance, because the erasure coding process divides every object into multiple smaller data chunks, and a few additional coding chunks are computed and mixed in with these data chunks.
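The storage-overhead arithmetic behind that claim can be made explicit. With k data chunks and m coding chunks, an erasure-coded pool stores (k + m) / k times each object's size, versus a full copy per replica in a replicated pool; the function names here are illustrative.

```python
# Storage overhead: erasure coding with k data + m coding chunks
# stores (k + m) / k times the object size, while replication
# stores `size` full copies.
def ec_overhead(k: int, m: int) -> float:
    return (k + m) / k

def replica_overhead(size: int) -> float:
    return float(size)

# e.g. a k=4, m=2 profile stores 1.5x the data, while 3-way
# replication stores 3x, at the cost of encode/decode work on
# every write and recovery.
print(ec_overhead(4, 2), replica_overhead(3))
```

The saving is real (1.5x vs 3x here), but every write must compute the coding chunks, which is where the performance cost comes from.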