
An example analysis of Ceph isolation levels


This article walks through an example analysis of Ceph isolation levels (failure domains). The walkthrough is fairly detailed and should be a useful reference; interested readers are encouraged to read it to the end.

Ceph's default isolation level (failure domain) is host, which means no two replicas of the same object will ever be placed on disks of the same host. This guarantees that a single machine failure does not make data unavailable. But what if two machines fail at the same time? Data could be lost, with serious consequences. Worse, if a whole rack suddenly loses power, every Ceph machine in that rack becomes unavailable at once. The fix is to raise the isolation level to rack, or even to room (machine-room) level or higher, so that data is not lost when a major natural disaster destroys an entire machine room.
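For simple cases the failure domain can also be raised without hand-editing the CRUSH map: on a Luminous-or-later cluster a replicated rule with a given failure domain can be created directly from the CLI and an existing pool pointed at it. A minimal sketch (the rule name replicated_rack and the pool name rbd are placeholders, not from this cluster):

# create a replicated rule rooted at "default" that isolates replicas by rack,
# restricted to hdd-class devices
ceph osd crush rule create-replicated replicated_rack default rack hdd

# switch an existing pool to the new rule (this triggers data movement)
ceph osd pool set rbd crush_rule replicated_rack

The example in this article instead edits the decompiled CRUSH map by hand, which gives full control over bucket layout (the rack buckets) as well as the rules.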

The example below raises the isolation level to the rack level.
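The decompiled map (decrushmap) shown next comes from the usual export/edit/recompile/inject cycle. As a rough sketch of that workflow (the file names crushmap and newcrushmap are arbitrary; only decrushmap appears in the listing below):

ceph osd getcrushmap -o crushmap          # export the binary CRUSH map
crushtool -d crushmap -o decrushmap       # decompile it to editable text
vi decrushmap                             # add rack buckets and rack-level rules
crushtool -c decrushmap -o newcrushmap    # recompile the edited map
ceph osd setcrushmap -i newcrushmap       # inject it back into the cluster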

[root@ceph-node1 opt]# cat decrushmap
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class hdd
device 1 osd.1 class ssd
device 2 osd.2 class hdd
device 3 osd.3 class ssd
device 4 osd.4 class hdd
device 5 osd.5 class ssd
device 6 osd.6 class hdd
device 7 osd.7 class hdd
device 8 osd.8 class hdd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-node1 {
        id -3               # do not change unnecessarily
        id -4 class hdd     # do not change unnecessarily
        id -15 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.0 weight 0.029
        item osd.1 weight 0.029
}
host ceph-node2 {
        id -5               # do not change unnecessarily
        id -6 class hdd     # do not change unnecessarily
        id -16 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.2 weight 0.029
        item osd.3 weight 0.029
}
host ceph-node3 {
        id -7               # do not change unnecessarily
        id -8 class hdd     # do not change unnecessarily
        id -17 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.4 weight 0.029
        item osd.5 weight 0.029
}
host ceph-node4 {
        id -9               # do not change unnecessarily
        id -10 class hdd    # do not change unnecessarily
        id -18 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.6 weight 0.029
        item osd.7 weight 0.029
}
host ceph-node5 {
        id -11              # do not change unnecessarily
        id -12 class hdd    # do not change unnecessarily
        id -19 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.8 weight 0.029
        item osd.9 weight 0.029
}
host ceph-node6 {
        id -13              # do not change unnecessarily
        id -14 class hdd    # do not change unnecessarily
        id -20 class ssd    # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item osd.10 weight 0.029
        item osd.11 weight 0.029
}

# rack
rack rack01 {
        id -101             # do not change unnecessarily
        id -102 class hdd   # do not change unnecessarily
        id -103 class ssd   # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item ceph-node1 weight 0.058
        item ceph-node2 weight 0.058
}
rack rack02 {
        id -104             # do not change unnecessarily
        id -105 class hdd   # do not change unnecessarily
        id -106 class ssd   # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item ceph-node3 weight 0.058
        item ceph-node4 weight 0.058
}
rack rack03 {
        id -107             # do not change unnecessarily
        id -108 class hdd   # do not change unnecessarily
        id -109 class ssd   # do not change unnecessarily
        # weight 0.058
        alg straw2
        hash 0              # rjenkins1
        item ceph-node5 weight 0.058
        item ceph-node6 weight 0.058
}
root default {
        id -110             # do not change unnecessarily
        id -111 class hdd   # do not change unnecessarily
        id -112 class ssd   # do not change unnecessarily
        # weight 0.354
        alg straw2
        hash 0              # rjenkins1
        item rack01 weight 0.118
        item rack02 weight 0.118
        item rack03 weight 0.118
}

# rules
rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type rack
        step emit
}
rule replicated_ssd {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type rack
        step emit
}

[root@ceph-node1 opt]# ceph osd tree
ID   CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF
-110       0.35399 root default
-101       0.11800     rack rack01
  -3       0.05800         host ceph-node1
   0   hdd 0.02899             osd.0           up  1.00000 1.00000
   1   ssd 0.02899             osd.1           up  1.00000 1.00000
  -5       0.05800         host ceph-node2
   2   hdd 0.02899             osd.2           up  1.00000 1.00000
   3   ssd 0.02899             osd.3           up  1.00000 1.00000
-104       0.11800     rack rack02
  -7       0.05800         host ceph-node3
   4   hdd 0.02899             osd.4           up  1.00000 1.00000
   5   ssd 0.02899             osd.5           up  1.00000 1.00000
  -9       0.05800         host ceph-node4
   6   hdd 0.02899             osd.6           up  1.00000 1.00000
   7   hdd 0.02899             osd.7           up  1.00000 1.00000
-107       0.11800     rack rack03
 -11       0.05800         host ceph-node5
   8   hdd 0.02899             osd.8           up  1.00000 1.00000
   9   hdd 0.02899             osd.9           up  1.00000 1.00000
 -13       0.05800         host ceph-node6
  10   hdd 0.02899             osd.10          up  1.00000 1.00000
  11   hdd 0.02899             osd.11          up  1.00000 1.00000
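Before injecting an edited map, the placement it produces can also be simulated offline with crushtool. A hedged sketch (rule id 0 and the sample input range 0-9 are arbitrary; exact flag names can vary slightly between Ceph releases):

# map 10 sample inputs through rule 0 with 3 replicas and print the chosen OSD sets;
# with the rack-level rules above, each set should draw its OSDs from three different racks
crushtool -i newcrushmap --test --rule 0 --num-rep 3 --min-x 0 --max-x 9 --show-mappings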

Testing confirms that the rule works as intended:

[root@ceph-node1 opt]# ceph osd map stat rbd_data.10ab6b8b4567.0000000000000042
osdmap e73 pool 'stat' (1) object 'rbd_data.10ab6b8b4567.0000000000000042' -> pg 1.fa3e81bf (1.3f) -> up ([11,2,4], p11) acting ([11,2,4], p11)
# the three replicas of the PG are placed on three different racks
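To double-check that osd.11, osd.2 and osd.4 really sit in three different racks, the CRUSH location of each OSD in the up set can be queried. A small sketch (output fields may vary by release):

# print the crush location (root/rack/host) of each OSD in the up set
for osd in 11 2 4; do ceph osd find $osd; done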

That is all of "An example analysis of Ceph isolation levels". Thanks for reading! I hope the content shared here is helpful; for more related knowledge, follow the industry news channel!
