
Steps for Manually Adding a Ceph OSD

Published: 2024-09-22  Author: 千家信息网 editor

This article walks through the steps for manually adding a Ceph OSD. Many people run into questions about this procedure in day-to-day operations, so this post pulls the relevant material together into a simple, workable walkthrough. Hopefully it clears up any doubts you have about manually adding an OSD; follow along below!

1、Ceph version

Manually adding an OSD is essentially running by hand the same sequence of steps that ceph-deploy performs for you.

# ceph version
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)

2、Disk partitioning

With the bluestore backend, the small "ceph data" partition (vdc1) takes over the role the journal partition used to play and holds the OSD's metadata, while the "ceph block" partition (vdc2) is where the actual object data is stored.

2.1、Creating the ceph data partition

# uuidgen
ff3db0d3-fd32-4b2d-8c35-1fb074e00cea
# sgdisk --new=1:0:+100M --change-name=1:"ceph data" --partition-guid=1:ff3db0d3-fd32-4b2d-8c35-1fb074e00cea --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
The operation has completed successfully.
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600

2.2、Creating the ceph block partition

# uuidgen
a44651fb-8904-4a86-adf6-541fefdf229e
# sgdisk --largest-new=2 --change-name=2:"ceph block" --partition-guid=2:a44651fb-8904-4a86-adf6-541fefdf229e --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdc
The operation has completed successfully.
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600

2.3、Checking the partitions

# ls -lrt /dev/disk/by-partuuid/ | grep vdc
lrwxrwxrwx 1 root root 10 Jul 12 16:37 a44651fb-8904-4a86-adf6-541fefdf229e -> ../../vdc2
lrwxrwxrwx 1 root root 10 Jul 12 16:37 ff3db0d3-fd32-4b2d-8c35-1fb074e00cea -> ../../vdc1

3、Formatting /dev/vdc1 as xfs

# sgdisk --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 -- /dev/vdc
The operation has completed successfully.
# udevadm settle --timeout=600
# flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# udevadm settle --timeout=600
# mkfs -t xfs -f -i size=2048 -- /dev/vdc1
meta-data=/dev/vdc1              isize=2048   agcount=4, agsize=6400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=25600, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=864, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

4、Mounting a temporary directory (vdc1)

# mkdir /var/lib/ceph/tmp/mnt.DlWdC2
# mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.DlWdC2
# fsid=$(ceph-osd --cluster=ceph --show-config-value=fsid)
# cat << EOF > /var/lib/ceph/tmp/mnt.DlWdC2/ceph_fsid
> $fsid
> EOF
# echo "ff3db0d3-fd32-4b2d-8c35-1fb074e00cea" >> /var/lib/ceph/tmp/mnt.DlWdC2/fsid
# restorecon -R /var/lib/ceph/tmp/mnt.DlWdC2/magic
# cat << EOF > /var/lib/ceph/tmp/mnt.DlWdC2/block_uuid
> a44651fb-8904-4a86-adf6-541fefdf229e
> EOF

5、Creating the block symlink (vdc2)

# ln -s /dev/disk/by-partuuid/a44651fb-8904-4a86-adf6-541fefdf229e /var/lib/ceph/tmp/mnt.DlWdC2/block
# echo bluestore >> /var/lib/ceph/tmp/mnt.DlWdC2/type
# /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.DlWdC2/
# /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.DlWdC2/
# umount /var/lib/ceph/tmp/mnt.DlWdC2/
# sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

6、Starting the OSD

udev handles mounting and activating the OSD automatically.

# /usr/bin/udevadm settle --timeout=600
# /usr/bin/flock -s /dev/vdc /usr/sbin/partprobe /dev/vdc
# /usr/bin/udevadm settle --timeout=600
# /usr/bin/udevadm trigger --action=add --sysname-match vdc1

7、OSD directory contents explained

# cd /var/lib/ceph/osd/ceph-4/
# ls -lrt
total 52
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:43 ceph_fsid
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:47 fsid
-rw-r--r-- 1 ceph ceph  21 Jul 12 16:47 magic
-rw-r--r-- 1 ceph ceph  37 Jul 12 16:49 block_uuid
lrwxrwxrwx 1 ceph ceph  58 Jul 12 16:50 block -> /dev/disk/by-partuuid/a44651fb-8904-4a86-adf6-541fefdf229e
-rw-r--r-- 1 ceph ceph  10 Jul 12 16:51 type
-rw------- 1 ceph ceph  56 Jul 12 16:57 keyring
-rw-r--r-- 1 ceph ceph   2 Jul 12 16:57 whoami
-rw-r--r-- 1 root root 384 Jul 12 16:57 activate.monmap
-rw-r--r-- 1 ceph ceph   8 Jul 12 16:57 kv_backend
-rw-r--r-- 1 ceph ceph   2 Jul 12 16:57 bluefs
-rw-r--r-- 1 ceph ceph   4 Jul 12 16:57 mkfs_done
-rw-r--r-- 1 ceph ceph   6 Jul 12 16:57 ready
-rw-r--r-- 1 ceph ceph   3 Jul 12 16:57 active
-rw-r--r-- 1 ceph ceph   0 Jul 12 16:57 systemd
ceph_fsid       -- fsid of the Ceph cluster
fsid            -- fsid of this OSD, i.e. the partition uuid of /dev/vdc1
magic           -- the string "ceph osd volume v026"
block_uuid      -- partition uuid of /dev/vdc2
block           -- symlink pointing at the /dev/vdc2 partition
type            -- storage backend type, here bluestore
keyring         -- the OSD's secret key
whoami          -- the OSD's id
activate.monmap -- the active monmap
kv_backend      -- contains "rocksdb"
bluefs          -- contains "1"
mkfs_done       -- contains "yes"
ready           -- contains "ready"
active          -- contains "ok"
systemd         -- empty file

That concludes this walkthrough of manually adding a Ceph OSD; hopefully it has cleared up your questions. Pairing the theory with hands-on practice is the best way to learn, so go give it a try! For more articles like this, keep following the site.
