
Rebuilding the CephFS filesystem

Published 2024-10-18 (last updated 2024-10-18), by the 千家信息网 editors.
Resetting CephFS

Wipe all files from the existing CephFS and rebuild the filesystem:

Tear down and delete the existing CephFS

Stop all MDS services
systemctl stop ceph-mds@$HOSTNAME
systemctl status ceph-mds@$HOSTNAME
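The stop command above only targets the local node; on a multi-node cluster every MDS host needs the same treatment. A minimal dry-run sketch, assuming the three hostnames that appear in the mon map later in this article (adjust to your cluster):

```shell
# Hypothetical MDS host list; replace with your actual nodes.
MDS_HOSTS="jp33e501-4-11 jp33e501-4-12 jp33e502-4-13"
for host in $MDS_HOSTS; do
  # Dry run: print the ssh command instead of executing it.
  echo "ssh $host systemctl stop ceph-mds@$host"
done
```

Drop the `echo` (and quote the remote command) to actually execute the stops.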
Check the CephFS information
## ceph fs ls
name: leadorfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
## ceph mds stat
e392: 0/1/1 up, 1 failed
## ceph mon dump
dumped monmap epoch 1
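The pool names reported by `ceph fs ls` are exactly what the delete commands later in this procedure need, so they can be pulled out with a little text processing instead of retyped. A sketch against the sample line above:

```shell
# Sample `ceph fs ls` output from above.
fs_ls='name: leadorfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]'
# Extract the metadata pool name (the field between "metadata pool: " and the comma).
meta_pool=$(echo "$fs_ls" | sed -n 's/.*metadata pool: \([^,]*\),.*/\1/p')
echo "$meta_pool"   # cephfs_metadata
```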
Mark the MDS as failed
ceph mds fail 0    
Remove the CephFS filesystem
ceph fs rm leadorfs --yes-i-really-mean-it      
Delete the metadata and data pools
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
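Note that on newer Ceph releases (Luminous and later) the monitors additionally refuse pool deletion unless `mon_allow_pool_delete` is enabled; a dry-run sketch of the extra step (the commands are printed, not executed):

```shell
# Dry run: print the commands a newer release would need before the delete.
for cmd in \
  "ceph tell mon.* injectargs --mon_allow_pool_delete=true" \
  "ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it"
do
  echo "$cmd"
done
```

On the Jewel-era cluster shown in this article the `--yes-i-really-really-mean-it` flag alone is sufficient.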
Check the cluster status again
## ceph mds stat
e394:
## ceph mds dump
dumped fsmap epoch 397
fs_name cephfs
epoch   397
flags   0
created 0.000000
modified        0.000000
tableserver     0
root    0
session_timeout 0
session_autoclose       0
max_file_size   0
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={}
max_mds 0
in
up      {}
failed
damaged
stopped
data_pools
metadata_pool   0
inline_data     disabled

Rebuilding CephFS

Start all MDS services
systemctl start ceph-mds@$HOSTNAME
systemctl status ceph-mds@$HOSTNAME
# Verify:
ceph mds stat
e397:, 3 up:standby
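Before recreating the filesystem it is worth confirming that the restarted daemons actually register as standbys; that check is easy to script against the `ceph mds stat` output. A sketch using the sample line above:

```shell
# Sample `ceph mds stat` output from above.
stat='e397:, 3 up:standby'
# Succeed only if at least one standby MDS is reported.
case "$stat" in
  *up:standby*) standby_ok=yes ;;
  *)            standby_ok=no ;;
esac
echo "$standby_ok"   # yes
```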
Recreate the CephFS filesystem
ceph osd pool create cephfs_data 512
ceph osd pool create cephfs_metadata 512
ceph fs new ptcephfs cephfs_metadata cephfs_data
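The `pg_num` of 512 used above is consistent with the usual rule of thumb: roughly 100 placement groups per OSD, divided by the replica count, rounded up to a power of two. With the 14 OSDs shown later in the health output and an assumed replica size of 3:

```shell
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to a power of two.
osds=14
replicas=3   # assumed pool size; check yours with `ceph osd pool get <pool> size`
target=$(( osds * 100 / replicas ))
pg=1
while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
echo "$pg"   # 512
```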
Verify the cluster state
## ceph mds stat
e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby
## ceph mds dump
dumped fsmap epoch 400
fs_name ptcephfs
epoch   400
flags   0
created 2018-09-11 12:48:26.300848
modified        2018-09-11 12:48:26.300848
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=file layout v2}
max_mds 1
in      0
up      {0=25579}
failed
damaged
stopped
data_pools      3
metadata_pool   4
inline_data     disabled
25579:  172.19.4.13:6800/2414848276 'jp33e502-4-13.ptengine.com' mds.0.399 up:active seq 834
Cluster health
ceph -w
    cluster fe946afe-43d0-404c-baed-fb04cd22d20d
     health HEALTH_OK
     monmap e1: 3 mons at {jp33e501-4-11=172.19.4.11:6789/0,jp33e501-4-12=172.19.4.12:6789/0,jp33e502-4-13=172.19.4.13:6789/0}
            election epoch 12, quorum 0,1,2 jp33e501-4-11,jp33e501-4-12,jp33e502-4-13
      fsmap e400: 1/1/1 up {0=jp33e502-4-13.ptengine.com=up:active}, 2 up:standby
     osdmap e2445: 14 osds: 14 up, 14 in
            flags sortbitwise,require_jewel_osds
      pgmap v876685: 1024 pgs, 2 pools, 2068 bytes data, 20 objects
            73366 MB used, 12919 GB / 12990 GB avail
                1024 active+clean
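The same scripted-check approach works for overall health. A sketch that extracts the health field from the sample status line above:

```shell
# Sample status line from the `ceph -w` output above.
status_line='     health HEALTH_OK'
# The health value is the second whitespace-separated field.
health=$(echo "$status_line" | awk '{print $2}')
echo "$health"   # HEALTH_OK
```

Anything other than `HEALTH_OK` here warrants investigation before putting the new filesystem into service.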