
Ceph usage tips


In this post I'll share some tips for using Ceph. Most readers probably aren't very familiar with them yet, so I'm putting this article up for reference; I hope you get a lot out of it. Let's dive in.

1. Setting up cephx keys

If cephx authentication is enabled in your Ceph cluster, you can grant different permissions to different users.

# Create a key for the dummy user
$ ceph auth get-or-create client.dummy mon 'allow r' osd 'allow rwx pool=dummy'
[client.dummy]
    key = AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
$ ceph auth list
installed auth entries:
...
client.dummy
    key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
    caps: [mon] allow r
    caps: [osd] allow rwx pool=dummy
...
# Reassign the dummy key's permissions
$ ceph auth caps client.dummy mon 'allow rwx' osd 'allow rwx pool=dummy'
updated caps for client.dummy
$ ceph auth list
installed auth entries:
client.dummy
    key: AQAPiu1RCMb4CxAAmP7rrufwZPRqy8bpQa2OeQ==
    caps: [mon] allow rwx
    caps: [osd] allow rwx pool=dummy
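As a quick sketch of how a client would actually consume this key (the paths here are illustrative; it assumes a pool named dummy exists and /etc/ceph/ceph.conf points at your monitors), you can export it to a keyring file and run commands as client.dummy:

# Export the key to a keyring file (the path is the conventional one, adjust as needed)
$ ceph auth get client.dummy -o /etc/ceph/ceph.client.dummy.keyring
# Writes to the dummy pool succeed, thanks to 'allow rwx pool=dummy'
$ rados --id dummy -p dummy put test.obj /etc/hosts
# Writes to any other pool should be denied, since the OSD caps are pool-scoped
$ rados --id dummy -p rbd put test.obj /etc/hosts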

2. Finding where an RBD image is mapped

Since rbd showmapped only lists RBD devices mapped on the local machine, if you have many machines and happen to forget where an image was mapped, you would otherwise have to check them one by one. The listwatchers command solves this.

For an image with format 1:

$ rbd info boot
rbd image 'boot':
    size 10240 MB in 2560 objects
    order 22 (4096 kB objects)
    block_name_prefix: rb.0.89ee.2ae8944a
    format: 1
$ rados -p rbd listwatchers boot.rbd
watcher=192.168.251.102:0/2550823152 client.35321 cookie=1

For format 2 images it is slightly different:

[root@osd2 ceph]# rbd info myrbd/rbd1
rbd image 'rbd1':
        size 8192 kB in 2 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.13436b8b4567
        format: 2
        features: layering
[root@osd2 ceph]# rados -p myrbd listwatchers rbd_header.13436b8b4567
watcher=192.168.108.3:0/2292307264 client.5130 cookie=1

Here you take the ID reported by rbd info (the part of block_name_prefix after rbd_data.) and append it to rbd_header.
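If you don't even remember which image to check, a small loop can query the watchers of every image in a pool. This is only a sketch under the object-naming conventions shown above (header objects named <image>.rbd for format 1 and rbd_header.<id> for format 2); the pool name myrbd is an example:

#!/bin/bash
# List watchers for every image in a pool (pool name is an example).
POOL=myrbd
for img in $(rbd -p "$POOL" ls); do
    echo "== $img =="
    if rados -p "$POOL" stat "${img}.rbd" >/dev/null 2>&1; then
        # format 1: the header object is <image>.rbd
        rados -p "$POOL" listwatchers "${img}.rbd"
    else
        # format 2: derive the id from block_name_prefix (rbd_data.<id>)
        id=$(rbd -p "$POOL" info "$img" | awk -F'rbd_data.' '/block_name_prefix/ {print $2}')
        rados -p "$POOL" listwatchers "rbd_header.${id}"
    fi
done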

3. How to delete a huge RBD image

I had read in some blog posts that deleting a huge RBD image directly with rbd rm is extremely time-consuming (a night-long wait). I tried it on Ceph 0.87 to see whether the problem still exists; the process is below:

# Create a 1PB image
[root@osd2 ceph]# time rbd create myrbd/huge-image -s 1024000000
real    0m0.353s
user    0m0.016s
sys     0m0.009s
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.1489.6b8b4567
        format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 2% complete...^\Quit (core dumped)
real    10m24.406s
user    18m58.335s
sys     11m39.507s

The image above is 1PB. Perhaps that is simply too large: a direct rbd rm was still painfully slow (it was killed above after ten minutes at 2% complete), so I used the following method instead:

[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:44:42.916826 7fdb4fd5a7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.192s
user    0m0.012s
sys     0m0.013s

Let's try a 1TB image:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 1000 GB in 256000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.149c.6b8b4567
        format: 1
[root@osd2 ceph]# time rbd rm myrbd/huge-image
Removing image: 100% complete...done.
real    0m29.418s
user    0m52.467s
sys     0m32.372s

So truly huge images should still be deleted the following way:

format 1:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.14a5.6b8b4567
        format: 1
[root@osd2 ceph]# rados -p myrbd rm huge-image.rbd
[root@osd2 ceph]# time rados -p myrbd ls | grep '^rb.0.14a5.6b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:54:12.718211 7ffae55747e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.191s
user    0m0.015s
sys     0m0.010s

format 2:

[root@osd2 ceph]# rbd create myrbd/huge-image -s 1024000000 --image-format=2
[root@osd2 ceph]# rbd info myrbd/huge-image
rbd image 'huge-image':
        size 976 TB in 256000000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.14986b8b4567
        format: 2
        features: layering
[root@osd2 ceph]# rados -p myrbd rm rbd_id.huge-image
[root@osd2 ceph]# rados -p myrbd rm rbd_header.14986b8b4567
[root@osd2 ceph]# rados -p myrbd ls | grep '^rbd_data.14986b8b4567' | xargs -n 200 rados -p myrbd rm
[root@osd2 ceph]# time rbd rm myrbd/huge-image
2014-11-06 09:59:26.043671 7f6b6923c7e0 -1 librbd::ImageCtx: error finding header: (2) No such file or directory
Removing image: 100% complete...done.
real    0m0.192s
user    0m0.016s
sys     0m0.010s

Note: if the image is empty, the xargs line is unnecessary; if it contains data, it is required.

So for images of 100TB and above, it is best to delete the header/ID objects first and only then run rbd rm.
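The manual steps above can be folded into one script. This is only a sketch under the 0.87-era object layout shown in the transcripts (header object <image>.rbd for format 1; rbd_id.<image> and rbd_header.<id> for format 2), and it removes RADOS objects directly, so try it on a disposable image first:

#!/bin/bash
# Fast-delete a huge RBD image by removing its objects directly.
# Usage: fast-rbd-rm.sh <pool> <image>
POOL=$1
IMG=$2
if rados -p "$POOL" stat "${IMG}.rbd" >/dev/null 2>&1; then
    # format 1: header object is <image>.rbd, data prefix looks like rb.0.<id>
    prefix=$(rbd -p "$POOL" info "$IMG" | awk '/block_name_prefix/ {print $2}')
    rados -p "$POOL" rm "${IMG}.rbd"
else
    # format 2: remove the id and header objects; data prefix is rbd_data.<id>
    id=$(rbd -p "$POOL" info "$IMG" | awk -F'rbd_data.' '/block_name_prefix/ {print $2}')
    prefix="rbd_data.${id}"
    rados -p "$POOL" rm "rbd_id.${IMG}"
    rados -p "$POOL" rm "rbd_header.${id}"
fi
# Delete the data objects; -r skips the run entirely when the image is empty
rados -p "$POOL" ls | grep "^${prefix}" | xargs -r -n 200 rados -p "$POOL" rm
# Finally remove the image record; librbd will complain about the missing header
rbd rm "$POOL/$IMG"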

4. Checking whether KVM/QEMU supports Ceph

$ sudo qemu-system-x86_64 -drive format=?
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
$ qemu-img -h
......
Supported formats: vvfat vpc vmdk vhdx vdi sheepdog sheepdog sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd nbd nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug
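If rbd appears in the list, the build was compiled with Ceph support. Rather than eyeballing the long list, you can grep for it (a trivial check; the emulator binary name varies by distribution):

$ qemu-img -h | grep -wo rbd
rbd
$ qemu-system-x86_64 -drive format=? | grep -qw rbd && echo rbd supported
rbd supported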

The latest rpm and deb packages can be downloaded from http://ceph.com/packages/.

5. Serving NFS from a Ceph RBD

This is a simple, practical way to expose RBD storage. The steps are as follows:

# Install the NFS rpms
[root@osd1 current]# yum install nfs-utils rpcbind
Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
...
Setting up Install Process
Package rpcbind-0.2.0-11.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package nfs-utils.x86_64 1:1.2.3-39.el6 will be updated
---> Package nfs-utils.x86_64 1:1.2.3-54.el6 will be an update
--> Finished Dependency Resolution
...
Is this ok [y/N]: y
...
Updated:
  nfs-utils.x86_64 1:1.2.3-54.el6

# Create an image, then format and mount it
[root@osd1 current]# rbd create myrbd/nfs_image -s 1024000 --image-format=2
[root@osd1 current]# rbd map myrbd/nfs_image
/dev/rbd0
[root@osd1 current]# mkdir /mnt/nfs
[root@osd1 current]# mkfs.xfs /dev/rbd0
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd0              isize=256    agcount=33, agsize=8190976 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=262144000, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=128000, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@osd1 current]# mount /dev/rbd0 -o rw,noexec,nodev,noatime,nobarrier /mnt/nfs

# Edit the exports file: add one line, then re-export
[root@osd1 current]# vim /etc/exports
/mnt/nfs 192.168.108.0/24(rw,no_root_squash,no_subtree_check,async)
[root@osd1 current]# exportfs -r

# rpcbind must be running before NFS starts
[root@osd1 current]# service rpcbind start
[root@osd1 current]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

At this point clients can mount the export. On a client, check the export and mount it:

showmount -e 192.168.108.2
mount -t nfs 192.168.108.2:/mnt/nfs /mnt/nfs

If the mount fails, try running service rpcbind start or service portmap start.
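To make the client mount survive reboots, an /etc/fstab entry works too. A minimal sketch, assuming the server address and paths from the example above:

# /etc/fstab entry on the client (server address/paths from the example above)
192.168.108.2:/mnt/nfs  /mnt/nfs  nfs  rw,hard,intr  0  0

# then verify the export and mount everything listed in fstab
showmount -e 192.168.108.2
mount -a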

以上是"ceph中使用技巧有哪些"这篇文章的所有内容,感谢各位的阅读!相信大家都有了一定的了解,希望分享的内容对大家有所帮助,如果还想学习更多知识,欢迎关注行业资讯频道!
