
Some ceph tips

Published: 2024-10-21, by the 千家信息网 editors

This article walks through a handful of ceph tips in detail. They are practical reference material, so interested readers should read through to the end.

1. Online resize of a ceph rbd image

Before resizing

[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd ls myrbd
rbd1
[root@mon0 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 1024 MB in 256 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.12ce6b8b4567
        format: 2
        features: layering

Resizing

[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 2048
Resizing image: 100% complete...done.

As long as rbd1 has not yet been formatted and mounted, a plain resize like the one above is all that is needed. If rbd1 has already been formatted and mounted, a few extra steps are required:

[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# rbd resize myrbd/rbd1 -s 4096
Resizing image: 100% complete...done.
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       2.0G   33M  2.0G   2% /mnt
[root@mon0 ceph]# xfs_growfs /mnt
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=64512 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 524288 to 1048576
[root@mon0 ceph]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       529G   20G  482G   4% /
tmpfs            16G  408K   16G   1% /dev/shm
/dev/sdb        559G   33G  527G   6% /openstack
/dev/sdc        1.9T   75M  1.9T   1% /cephmp1
/dev/sdd        1.9T   61M  1.9T   1% /cephmp2
/dev/rbd1       4.0G   33M  4.0G   1% /mnt
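Condensed, the online-grow sequence for a mapped, XFS-formatted image comes down to two commands. This is just a sketch of the steps shown above, assuming the image is still mounted at /mnt; for ext4 the filesystem step would be resize2fs instead:

rbd resize myrbd/rbd1 -s 4096   # grow the rbd image (here to 4 GB)
xfs_growfs /mnt                 # grow the XFS filesystem into the new space
# for an ext4 filesystem the second step would be: resize2fs /dev/rbd1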

There is also the case where rbd1 has already been attached to a VM as a disk:

virsh domblklist myvm
rbd resize myrbd/rbd1
# the resize must also be propagated to the guest with virsh blockresize
virsh blockresize --domain myvm --path vdb --size 100G
rbd info myrbd/rbd1
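After virsh blockresize the guest sees the larger virtual disk, but the filesystem inside the VM still has to be grown. A minimal sketch, assuming vdb carries an XFS filesystem mounted at a hypothetical /data inside the guest:

# run inside the guest OS (mount point /data is illustrative)
lsblk /dev/vdb      # confirm the guest now sees the enlarged disk
xfs_growfs /data    # grow the filesystem on vdb (resize2fs for ext4)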

2. Using ceph-deploy

Installing ceph with ceph-deploy is very straightforward, and ceph.conf can be adjusted as needed after the installation.

mkdir ceph-deploy; cd ceph-deploy
ceph-deploy install $cluster
ceph-deploy new cephnode-01 cephnode-02 cephnode-03
ceph-deploy --overwrite-conf mon create cephnode-01 cephnode-02 cephnode-03
ceph-deploy gatherkeys cephnode-01
ceph-deploy osd create \
    cephnode-01:/dev/sdb:/dev/sda5 \
    cephnode-01:/dev/sdc:/dev/sda6 \
    cephnode-01:/dev/sdd:/dev/sda7 \
    cephnode-02:/dev/sdb:/dev/sda5 \
    cephnode-02:/dev/sdc:/dev/sda6 \
    cephnode-02:/dev/sdd:/dev/sda7 \
    cephnode-03:/dev/sdb:/dev/sda5 \
    cephnode-03:/dev/sdc:/dev/sda6 \
    cephnode-03:/dev/sdd:/dev/sda7 \
    cephnode-04:/dev/sdb:/dev/sda5 \
    cephnode-04:/dev/sdc:/dev/sda6 \
    cephnode-04:/dev/sdd:/dev/sda7 \
    cephnode-05:/dev/sdb:/dev/sda5 \
    cephnode-05:/dev/sdc:/dev/sda6 \
    cephnode-05:/dev/sdd:/dev/sda7
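Since ceph.conf can be tuned after installation, a common follow-up is to edit the ceph.conf kept in the ceph-deploy working directory and push it back out to the nodes. A sketch (the actual tuning values are up to you):

# edit ./ceph.conf in the ceph-deploy working directory, then distribute it
ceph-deploy --overwrite-conf config push cephnode-01 cephnode-02 cephnode-03
# restart the ceph daemons on each node for the new settings to take effect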

Uninstalling with ceph-deploy:

ceph-deploy purgedata $cluster
ceph-deploy purge $cluster
for host in $cluster
  do
    ssh $host <

3. Delete an rbd image's snapshots before removing the image

rbd snap ls myrbd/rbd1
SNAPID NAME       SIZE
    10 snap1 102400 KB
    12 snap2 102400 KB
rbd snap purge myrbd/rbd1
Removing all snapshots: 100% complete...done.
rbd rm myrbd/rbd1
Removing image: 100% complete...done.
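rbd snap purge removes every snapshot of the image at once. If only a specific snapshot is in the way, it can also be removed individually before deleting the image, for example:

rbd snap rm myrbd/rbd1@snap1    # delete a single named snapshot
rbd snap ls myrbd/rbd1          # confirm which snapshots remain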

4. Testing rbd snapshots

Create and mount rbd1:

[root@mon0 ceph]# rbd create myrbd/rbd1 -s 1024 --image-format=2
[root@mon0 ceph]# rbd map myrbd/rbd1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap device
0  test  test.img -    /dev/rbd0
1  myrbd rbd1     -    /dev/rbd1
[root@mon0 ceph]# mkfs.xfs /dev/rbd1
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd1              isize=256    agcount=9, agsize=31744 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@mon0 ceph]# blkid | grep rbd1
/dev/rbd1: UUID="643b509f-406b-411b-b3a5-fa220fbdfe47" TYPE="xfs"
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# mount | grep rbd1
/dev/rbd1 on /mnt type xfs (rw)
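The rollback test below relies on some preparation that is not shown in the recorded session: a file v1 existed when snap1 was taken, and a file v2 was written afterwards. A sketch of those implied steps, inferred from the directory listings below and not part of the original log:

touch /mnt/v1; sync                  # state that the snapshot will capture
rbd snap create myrbd/rbd1@snap1     # take the snapshot
rm /mnt/v1; touch /mnt/v2; sync      # state after the snapshot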

Create a snapshot and roll back:

[root@mon0 ceph]# rbd snap ls myrbd/rbd1
SNAPID NAME     SIZE
     2 snap1 1024 MB
[root@mon0 ceph]# rbd snap rollback myrbd/rbd1@snap1
Rolling back to snapshot: 100% complete...done.
[root@mon0 ceph]# ls /mnt
v2
[root@mon0 ceph]# umount /mnt
[root@mon0 ceph]# mount /dev/rbd1 /mnt
[root@mon0 ceph]# ls /mnt
v1

Mount the snapshot directly:

[root@mon0 ceph]# rbd map myrbd/rbd1@snap1
[root@mon0 ceph]# rbd showmapped
id pool  image    snap  device
0  test  test.img -     /dev/rbd0
1  myrbd rbd1     -     /dev/rbd1
2  myrbd rbd1     snap1 /dev/rbd2
[root@mon0 ceph]# mount -t xfs -o ro,norecovery,nouuid /dev/rbd2 /tmp
[root@mon0 ceph]# ls /tmp
v1
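A snapshot mapped this way is read-only, hence the ro,norecovery,nouuid mount options. If a writable copy is wanted instead, a format-2 snapshot can be protected and cloned; a short sketch, with the clone name chosen here purely for illustration:

rbd snap protect myrbd/rbd1@snap1            # a snapshot must be protected before cloning
rbd clone myrbd/rbd1@snap1 myrbd/rbd1-clone  # create a writable copy-on-write clone
rbd map myrbd/rbd1-clone                     # map and mount the clone like any other image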

5. Locating where a file is stored

Upload a file with rados put and check its placement with ceph osd map:

[root@osd2 software]# md5sum epel-release-6-8.noarch.rpm
2cd0ae668a585a14e07c2ea4f264d79b  epel-release-6-8.noarch.rpm
[root@osd2 software]# rados put -p myrbd epel.rpm ./epel-release-6-8.noarch.rpm
[root@osd2 software]# ceph osd map myrbd epel.rpm
osdmap e88 pool 'myrbd' (4) object 'epel.rpm' -> pg 4.e9ddf5be (4.be) -> up ([4,2], p4) acting ([4,2], p4)

Find the on-disk location and verify it:

[root@osd2 software]# ceph osd tree
# id    weight  type name       up/down reweight
-1      10.92   root default
-2      3.64            host mon0
0       1.82                    osd.0   up      1
1       1.82                    osd.1   up      1
-3      3.64            host osd1
2       1.82                    osd.2   up      1
3       1.82                    osd.3   up      1
-4      3.64            host osd2
4       1.82                    osd.4   up      1
5       1.82                    osd.5   up      1
[root@osd2 software]# cd /cephmp1/current/4.be_head/
[root@osd2 4.be_head]# ls
epel.rpm__head_E9DDF5BE__4
[root@osd2 4.be_head]# md5sum epel.rpm__head_E9DDF5BE__4
2cd0ae668a585a14e07c2ea4f264d79b  epel.rpm__head_E9DDF5BE__4
[root@osd2 4.be_head]# ll -h
total 20K
-rw-r--r--. 1 root root 15K Nov  4 17:59 epel.rpm__head_E9DDF5BE__4
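The placement reported by ceph osd map can also be cross-checked from the placement-group side; a sketch using the pg id 4.be reported above:

ceph pg map 4.be        # shows the osdmap epoch and the up/acting OSD sets for this pg
ceph pg 4.be query      # optional: detailed state of the placement group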

Upload with rbd import and then verify:

[root@osd2 software]# touch hello.txt
[root@osd2 software]# echo "hello world" >> hello.txt
[root@osd2 software]# rbd import ./hello.txt myrbd/hello.txt
Importing image: 100% complete...done.
[root@osd2 software]# rbd info myrbd/hello.txt
rbd image 'hello.txt':
        size 12 bytes in 1 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1365.6b8b4567
        format: 1
[root@osd2 software]# rados ls -p myrbd
rbd_data.13446b8b4567.00000000000000ba
rbd_directory
rbd_data.13446b8b4567.000000000000007d
rbd_data.13446b8b4567.000000000000007c
rbd_data.13446b8b4567.000000000000005d
rbd_data.13446b8b4567.000000000000007e
rbd_data.13446b8b4567.00000000000000ff
rb.0.1365.6b8b4567.000000000000
hello.txt.rbd
rbd_data.13446b8b4567.00000000000000d9
rbd_data.13446b8b4567.00000000000000f8
rbd_data.13446b8b4567.000000000000009b
rbd_data.13446b8b4567.0000000000000001
rbd_header.13446b8b4567
epel.rpm
rbd_data.13446b8b4567.000000000000001f
rbd_data.13446b8b4567.000000000000003e
rbd_id.rbd1
rbd_data.13446b8b4567.0000000000000000
# the placement obtained this way is wrong: 'hello.txt' is not the object's name
[root@osd2 software]# ceph osd map myrbd hello.txt
osdmap e88 pool 'myrbd' (4) object 'hello.txt' -> pg 4.d92fd82b (4.2b) -> up ([4,3], p4) acting ([4,3], p4)
# the .rbd suffix must be appended
[root@osd2 current]# ceph osd map myrbd hello.txt.rbd
osdmap e88 pool 'myrbd' (4) object 'hello.txt.rbd' -> pg 4.9b9bf373 (4.73) -> up ([3,1], p3) acting ([3,1], p3)
[root@osd2 current]# ssh osd1
[root@osd1 ~]# cd /cephmp2/current/4.73_head/
[root@osd1 4.73_head]# ll -h
total 8.0K
-rw-r--r-- 1 root root 112 Nov  4 18:08 hello.txt.rbd__head_9B9BF373__4
[root@osd1 4.73_head]# cat hello.txt.rbd__head_9B9BF373__4
<<< Rados Block Device Image >>>
rb.0.1365.6b8b4567
RBD001.005
# for a regular format-1 rbd image:
#ceph osd map test test.img.rbd
# for a format-2 rbd image:
#ceph osd map test rbd_id.test.img
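For a format-2 image the same idea applies, but its data objects are named after the block_name_prefix shown by rbd info rather than after the image name. A sketch using rbd1 from the earlier sections (the prefix rbd_data.13446b8b4567 is the one visible in the rados ls output above):

rbd info myrbd/rbd1 | grep block_name_prefix                # e.g. rbd_data.13446b8b4567
rados ls -p myrbd | grep rbd_data.13446b8b4567              # list the image's data objects
ceph osd map myrbd rbd_data.13446b8b4567.0000000000000000   # locate one of those objects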

That is everything in "Some ceph tips". Thank you for reading! I hope the material shared here is helpful; for more related content, follow the industry news channel.
