
Summary of Common Ceph Commands

Published: 2024-10-21

This article summarizes the most commonly used Ceph commands. The operations described are simple, quick, and practical.

1. Create a custom pool

ceph osd pool create <poolname> <pg_num> <pgp_num>

Here pgp_num is the number of effective placement groups for pg_num and is an optional parameter. pg_num should be sufficiently large; rather than strictly following the official calculation method, choose 256, 512, 1024, 2048, or 4096 according to your actual situation.
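
For example, a minimal sketch, assuming a hypothetical pool named testpool created with 512 placement groups:

ceph osd pool create testpool 512 512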

2. Set a pool's replica count, minimum replica count, and maximum replica count

ceph osd pool set <poolname> size 2
ceph osd pool set <poolname> min_size 1
ceph osd pool set <poolname> max_size 10

If resources are limited and you do not want to keep 3 replicas, use these commands to change the number of replicas stored for a specific pool.

Use get to retrieve the replica count of a specific pool.

ceph osd pool get <poolname> size

3. Add an OSD

OSDs can be added with ceph-deploy:

ceph-deploy osd prepare monosd1:/mnt/ceph osd2:/mnt/ceph
ceph-deploy osd activate monosd1:/mnt/ceph osd2:/mnt/ceph
# Equivalent to:
ceph-deploy osd create monosd1:/mnt/ceph osd2:/mnt/ceph
# Alternatively, specify the corresponding journal path when creating the OSD
ceph-deploy osd create osd1:/cephmp1:/dev/sdf1 /cephmp2:/dev/sdf2

They can also be added manually:

## Prepare disk first, create partition and format it
mkfs.xfs -f /dev/sdd
mkdir /cephmp1
mount /dev/sdd /cephmp1
cd /cephmp1
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow rwx' -i /cephmp1/keyring
# change the crushmap
ceph osd getcrushmap -o map
crushtool -d map -o map.txt
vim map.txt
crushtool -c map.txt -o map
ceph osd setcrushmap -i map
## Start it
/etc/init.d/ceph start osd.12

4. Remove an OSD

First take the OSD out of service:

## Mark it out
ceph osd out 5
## Wait for data migration to complete (ceph -w), then stop it
service ceph -a stop osd.5
## Now it is marked out and down

Then remove it:

## If deleting from active stack, be sure to follow the above to mark it out and down
ceph osd crush remove osd.5
## Remove auth for disk
ceph auth del osd.5
## Remove disk
ceph osd rm 5
## Remove from ceph.conf and copy new conf to all hosts

5. View overall OSD status, detailed OSD information, and detailed CRUSH information

ceph osd tree
ceph osd dump --format=json-pretty
ceph osd crush dump --format=json-pretty

6. Get and modify CRUSH maps

## save current crushmap in binary
ceph osd getcrushmap -o crushmap.bin
## Convert to txt
crushtool -d crushmap.bin -o crushmap.txt
## Edit it and re-convert to binary
crushtool -c crushmap.txt -o crushmap.bin.new
## Inject into running system
ceph osd setcrushmap -i crushmap.bin.new
## If you've added a new ruleset and want to use that for a pool, do something like:
ceph osd pool default crush rule = 4
# You can also set a pool's ruleset like this:
ceph osd pool set testpool crush_ruleset <ruleset-id>

-o=output; -d=decompile; -c=compile; -i=input

With these abbreviations in mind, the commands above are easy to follow.

7. Add/remove a journal

To improve performance, the Ceph journal is usually placed on a separate disk or partition:

First set the cluster to nodown with the following command:

ceph osd set nodown

# Relevant ceph.conf options -- existing setup --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /srv/ceph/osd$id/journal
    osd journal size = 512

# Stop the OSDs:
/etc/init.d/ceph osd.0 stop
/etc/init.d/ceph osd.1 stop
/etc/init.d/ceph osd.2 stop

# Flush the journals:
ceph-osd -i 0 --flush-journal
ceph-osd -i 1 --flush-journal
ceph-osd -i 2 --flush-journal

# Now update ceph.conf - this is very important or you'll just recreate
# the journal on the same disk again

# -- change to [file-based journal] --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /srv/ceph/journal/osd$id/journal
    osd journal size = 10000

# -- change to [partition-based journal (journal in this case would be on /dev/sda2)] --
[osd]
    osd data = /srv/ceph/osd$id
    osd journal = /dev/sda2
    osd journal size = 0

# Create the new journal on each disk
ceph-osd -i 0 --mkjournal
ceph-osd -i 1 --mkjournal
ceph-osd -i 2 --mkjournal

# Done, now start all OSDs again
/etc/init.d/ceph osd.0 start
/etc/init.d/ceph osd.1 start
/etc/init.d/ceph osd.2 start

Remember to unset nodown afterwards:

ceph osd unset nodown

8. ceph cache pool

Preliminary testing shows that Ceph's cache pool does not perform well; at times it is even slower than running without a cache pool at all. Alternatives such as flashcache can be used to optimize Ceph caching instead.

ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd pool set ssdpool hit_set_type bloom
ceph osd pool set ssdpool hit_set_count 1
## In this example 80-85% of the cache pool is equal to 280GB
ceph osd pool set ssdpool target_max_bytes $((280*1024*1024*1024))
ceph osd tier set-overlay satapool ssdpool
ceph osd pool set ssdpool hit_set_period 300
ceph osd pool set ssdpool cache_min_flush_age 300    # 5 minutes
ceph osd pool set ssdpool cache_min_evict_age 1800   # 30 minutes
ceph osd pool set ssdpool cache_target_dirty_ratio .4
ceph osd pool set ssdpool cache_target_full_ratio .8

9. View runtime configuration

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show

10. View and monitor cluster status

ceph health
ceph health detail
ceph status
ceph -s    # --format=json-pretty can be appended
ceph osd stat
ceph osd dump
ceph osd tree
ceph mon stat
ceph quorum_status
ceph mon dump
ceph mds stat
ceph mds dump

11. List all pools

ceph osd lspools
rados lspools

12. Check whether KVM and QEMU support rbd

qemu-system-x86_64 -drive format=?
qemu-img -h | grep rbd

13. View a specific pool and the files in it

rbd ls testpool
rbd create testpool/test.img -s 1024 --image-format=2
rbd info testpool/test.img
rbd rm testpool/test.img

# Count the number of blocks
rados -p testpool ls | grep ^rb.0.11a1 | wc -l

# Import a file and inspect it
rados mkpool testpool
rados put -p testpool logo.png logo.png
ceph osd map testpool logo.png
rbd import logo.png testpool/logo.png
rbd info testpool/logo.png

14. Map/unmap a created block device

ceph osd pool create testpool 256 256
rbd create testpool/test.img -s 1024 --image-format=2
rbd map testpool/test.img
rbd showmapped
mkfs.xfs /dev/rbd0
rbd unmap /dev/rbd0

15. Create snapshots

# Create
rbd snap create testpool/test.img@test.img-snap1
# List
rbd snap ls testpool/test.img
# Roll back
rbd snap rollback testpool/test.img@test.img-snap1
# Delete
rbd snap rm testpool/test.img@test.img-snap1
# Purge all snapshots
rbd snap purge testpool/test.img

16. Calculate a reasonable number of PGs

The official recommendation is 50-100 PGs per OSD: total PGs = OSDs × 100 / replica count. For example, in an environment with 6 OSDs and 2 replicas, PGs = 6 × 100 / 2 = 300.

The PG count can only be increased, never decreased, and after increasing pg_num you must also increase pgp_num at the same time.
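
A minimal sketch of growing a pool, assuming a hypothetical pool named testpool being raised from 256 to 512 PGs (the pool name and counts are illustrative):

# Increase pg_num first, then raise pgp_num to match so rebalancing can proceed
ceph osd pool set testpool pg_num 512
ceph osd pool set testpool pgp_num 512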

17. Pool operations

ceph osd pool create testpool 256 256
ceph osd pool delete testpool testpool --yes-i-really-really-mean-it
ceph osd pool rename testpool anothertestpool
ceph osd pool mksnap testpool testpool-snap

18. Wipe disks before reinstalling

ceph-deploy purge osd0 osd1
ceph-deploy purgedata osd0 osd1
ceph-deploy forgetkeys
ceph-deploy disk zap --fs-type xfs osd0:/dev/sdb1

19. Change the storage path of an OSD journal

# The noout flag prevents the OSD from being marked out (and having its weight set to 0)
ceph osd set noout
service ceph stop osd.1
ceph-osd -i 1 --flush-journal
mount /dev/sdc /journal
ceph-osd -i 1 --mkjournal /journal
service ceph start osd.1
ceph osd unset noout

20. XFS mount options

mkfs.xfs -n size=64k /dev/sdb1
# /etc/fstab mount options
rw,noexec,nodev,noatime,nodiratime,nobarrier

21. Authentication configuration

[global]
auth cluster required = none
auth service required = none
auth client required = none
# Before 0.56:
auth supported = none

22. When pg_num is insufficient: migrate and rename

ceph osd pool create new-pool <pg_num>
rados cppool old-pool new-pool
ceph osd pool delete old-pool
ceph osd pool rename new-pool old-pool
# Or simply increase the pool's pg_num

23. Push the config file

ceph-deploy --overwrite-conf config push mon1 mon2 mon3

24. Modify config parameters at runtime

ceph tell osd.* injectargs '--mon_clock_drift_allowed 1'

When using this command, be clear about whether the parameter being set belongs to the mon, mds, or osd daemons, and target the matching daemon type, as in the sketch below.
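
A minimal sketch, assuming mon_clock_drift_allowed (a mon option) and osd_max_backfills (an osd option) are the parameters being tuned; the values shown are illustrative:

# Target the mon daemons for a mon option
ceph tell mon.* injectargs '--mon_clock_drift_allowed 1'
# Target the osd daemons for an osd option
ceph tell osd.* injectargs '--osd_max_backfills 2'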

This concludes the summary of common Ceph commands; try the commands out in practice to become more familiar with them.
