
What are the daily operations for a Ceph cluster in Docker

Published: 2024-11-15  Author: 千家信息网 editor

This article shares the daily operations and maintenance tasks for a Ceph cluster running in Docker. They are quite practical, so they are shared here as a reference; follow along and take a look.

View all Ceph daemons

[root@k8s-node1 ceph]# systemctl list-unit-files |grep ceph
ceph-disk@.service                            static
ceph-mds@.service                             disabled
ceph-mgr@.service                             disabled
ceph-mon@.service                             enabled
ceph-osd@.service                             enabled
ceph-radosgw@.service                         disabled
ceph-mds.target                               enabled
ceph-mgr.target                               enabled
ceph-mon.target                               enabled
ceph-osd.target                               enabled
ceph-radosgw.target                           enabled
ceph.target                                   enabled
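Beyond listing the unit files, you can check whether a given daemon is actually running on the node with a plain systemctl status call. A minimal sketch, using the unit names from the listing above:

# Check the state of the mon and osd targets on this node
systemctl status ceph-mon.target
systemctl status ceph-osd.target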

Start all daemons of a particular type on a Ceph node

systemctl start ceph-osd.target
systemctl start ceph-mon.target
systemctl start ceph-mds.target
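The same target syntax works with the other systemctl verbs, and ceph.target (shown in the unit listing above) groups every Ceph daemon on the node. A sketch of stopping or restarting by type:

# Stop or restart all daemons of one type on this node
systemctl stop ceph-osd.target
systemctl restart ceph-mon.target
# Restart every Ceph daemon on the node at once
systemctl restart ceph.target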

Start a specific daemon instance on a Ceph node

systemctl start ceph-osd@{id}
systemctl start ceph-mon@{hostname}
systemctl start ceph-mds@{hostname}
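For example, with the OSD IDs and hostnames from this cluster (osd.0 lives on k8s-node1), the placeholders fill in as below; systemctl status on the same unit name shows whether that single instance is running:

# Start osd.0 and the mon running on k8s-node1 (IDs/hostnames taken from this cluster)
systemctl start ceph-osd@0
systemctl start ceph-mon@k8s-node1
# Check a single instance
systemctl status ceph-osd@0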

Check monitor (mon) status

[root@k8s-node1 ceph]# ceph -s
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13640: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35913 MB used, 21812 MB / 57726 MB avail
                  64 active+clean
[root@k8s-node1 ceph]# ceph
ceph> status
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13670: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35915 MB used, 21810 MB / 57726 MB avail
                  64 active+clean
ceph> health
HEALTH_OK
ceph> mon_status
{"name":"k8s-node1","rank":0,"state":"leader","election_epoch":26,"quorum":[0,1,2],"features":{"required_con":"9025616074522624","required_mon":["kraken"],"quorum_con":"1152921504336314367","quorum_mon":["kraken"]},"outside_quorum":[],"extra_probe_peers":["172.16.22.202:6789\/0","172.16.22.203:6789\/0"],"sync_provider":[],"monmap":{"epoch":4,"fsid":"2e6519d9-b733-446f-8a14-8622796f83ef","modified":"2018-10-28 21:30:09.197608","created":"2018-10-28 09:49:11.509071","features":{"persistent":["kraken"],"optional":[]},"mons":[{"rank":0,"name":"k8s-node1","addr":"172.16.22.201:6789\/0","public_addr":"172.16.22.201:6789\/0"},{"rank":1,"name":"k8s-node2","addr":"172.16.22.202:6789\/0","public_addr":"172.16.22.202:6789\/0"},{"rank":2,"name":"k8s-node3","addr":"172.16.22.203:6789\/0","public_addr":"172.16.22.203:6789\/0"}]}}
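If you only care about quorum, a couple of narrower mon commands are also available (a sketch; the exact output format depends on the Ceph release):

# Dump the monitor map
ceph mon dump
# Show which mons are in quorum and who the current leader is
ceph quorum_status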

Ceph logging

By default, Ceph logs are kept on each node in /var/log/ceph/ceph.log. You can use ceph -w to watch the log entries in real time.

Whichever node reports an error, log in to that node and view its log with the command below.

[root@k8s-node1 ceph]# ceph -w
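Since the log file path is fixed, you can also follow the file directly with tail instead of ceph -w; a sketch, assuming the default /var/log/ceph/ceph.log location mentioned above:

# Follow the cluster log on the local node (default path)
tail -f /var/log/ceph/ceph.log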

The Ceph monitors also continuously run checks on their own state; when a check fails, they write that information to the cluster log.

[root@k8s-node1 ceph]# ceph mon stat
e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}, election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3

Check the OSDs

[root@k8s-node1 ceph]# ceph osd stat
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839     host k8s-node1
 0 0.01839         osd.0           up  1.00000          1.00000
-3 0.01839     host k8s-node2
 1 0.01839         osd.1           up  1.00000          1.00000
-4 0.01839     host k8s-node3
 2 0.01839         osd.2           up  1.00000          1.00000
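In addition to ceph osd stat and ceph osd tree, per-OSD space usage and the full OSD map can be checked; a sketch using standard Ceph commands (column layout varies by release):

# Show utilization and PG count per OSD
ceph osd df
# Dump the full osdmap, including per-OSD state and weights
ceph osd dump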

Check pool size and available capacity

[root@k8s-node1 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    57726M     21811M       35914M         62.21
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0         5817M           0
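For a more detailed breakdown, ceph df has a detail mode, and rados df reports per-pool object and I/O statistics; a sketch (exact columns vary by release):

# Per-pool usage with additional columns
ceph df detail
# Per-pool object counts and read/write statistics
rados df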

Thank you for reading! That's all for this article on "What are the daily operations for a Ceph cluster in Docker". We hope the content above is helpful and lets you learn a bit more; if you found the article useful, feel free to share it so more people can see it!
