
How to fix Ceph health check errors?

Published 2024-11-14, by the 千家信息网 editorial team

1. Error one

[root@ct ceph]# ceph -s
  cluster:
    id:     dfb110f9-e0e0-4544-9f13-9141750ee9f6
    health: HEALTH_WARN
            Degraded data redundancy: 192 pgs undersized

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: ct(active), standbys: c2, c1
    osd: 2 osds: 2 up, 2 in

  data:
    pools:   3 pools, 192 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     102 active+undersized
             90  stale+active+undersized

Checking the OSD status shows that the OSD on c2 has not joined the cluster:

[root@ct ceph]# ceph osd status
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| id | host |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
| 0  |  ct  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
| 1  |  c1  | 1026M | 1022G |    0   |     0   |    0   |     0   | exists,up |
+----+------+-------+-------+--------+---------+--------+---------+-----------+
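A PG is reported as undersized when its acting set holds fewer OSDs than the pool's configured replica count (`size`), which is exactly the situation above: replicated pools with only two OSDs up. A minimal sketch of that check, using a hypothetical acting set and pool size:

```python
# A PG is "undersized" when its acting set has fewer OSDs than the
# pool's configured replica count (the pool's `size` setting).
def is_undersized(acting_set, pool_size):
    """Return True if the PG holds fewer replicas than requested."""
    return len(acting_set) < pool_size

# With a 3-replica pool but only OSDs 0 and 1 up (as in the cluster
# above), every acting set can contain at most two OSDs:
print(is_undersized([0, 1], 3))     # -> True: one replica is missing
print(is_undersized([0, 1, 2], 3))  # -> False: full acting set
```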

Solution:
Restart the OSD service on c2:

[root@c2 ~]# systemctl restart ceph-osd.target
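When more hosts are involved, it helps to list exactly which OSDs are down before deciding where to restart. A sketch that filters the JSON output of `ceph osd tree -f json`; the sample document below is hypothetical and trimmed to only the fields the sketch reads:

```python
import json

# Hypothetical, trimmed sample of `ceph osd tree -f json` output;
# a real cluster reports the same "nodes" list with a per-OSD status.
sample = '''
{"nodes": [
  {"id": 0, "name": "osd.0", "type": "osd", "status": "up"},
  {"id": 1, "name": "osd.1", "type": "osd", "status": "up"},
  {"id": 2, "name": "osd.2", "type": "osd", "status": "down"}
]}
'''

def down_osds(tree_json):
    """Return names of OSDs whose status is not 'up'."""
    tree = json.loads(tree_json)
    return [n["name"] for n in tree["nodes"]
            if n.get("type") == "osd" and n.get("status") != "up"]

print(down_osds(sample))  # -> ['osd.2']: restart ceph-osd.target on its host
```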

2. Error two

[root@ct ceph]# ceph -s
  cluster:
    id:     44d72edb-4085-4cfc-8652-eb670472f169
    health: HEALTH_WARN
            clock skew detected on mon.c1, mon.c2

  services:
    mon: 3 daemons, quorum ct,c1,c2
    mgr: c1(active), standbys: c2, ct
    osd: 3 osds: 1 up, 1 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 1023 GiB / 1024 GiB avail
    pgs:

Solution:
(1) Restart the NTP service on the controller node:

[root@ct ceph]# systemctl restart ntpd

(2) Resynchronize the compute nodes' clocks against the controller node:

[root@c2 ~]# ntpdate 192.168.100.10

(3) Restart the mon service on the controller node:

[root@ct ceph]# systemctl restart ceph-mon.target
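For context on when this warning fires: the monitors raise HEALTH_WARN once a peer monitor's clock offset exceeds `mon_clock_drift_allowed` (0.05 seconds by default). A sketch of that comparison, using hypothetical offsets such as those reported by `ceph time-sync-status`:

```python
# Monitors warn when a peer's absolute clock offset exceeds
# mon_clock_drift_allowed (0.05 s by default).
MON_CLOCK_DRIFT_ALLOWED = 0.05  # seconds

def skewed_mons(offsets, allowed=MON_CLOCK_DRIFT_ALLOWED):
    """Return monitors whose absolute offset exceeds the allowed drift."""
    return [mon for mon, off in offsets.items() if abs(off) > allowed]

# Hypothetical per-monitor offsets in seconds:
offsets = {"mon.ct": 0.0, "mon.c1": 0.31, "mon.c2": -0.12}
print(skewed_mons(offsets))  # -> ['mon.c1', 'mon.c2']: resync NTP there
```

After step (3), rerun `ceph -s`; the clock-skew warning should clear once all monitor offsets are back inside the allowed drift.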
