
HA-NFS file sharing based on ceph rbd + corosync + pacemaker


This article walks through building a highly available NFS file share backed by ceph rbd, with corosync and pacemaker handling failover. It is a practical reference you can follow step by step.

1. Architecture

(The original architecture diagram is not reproduced. In outline: two rbd-capable NFS servers form a corosync/pacemaker cluster, and clients reach whichever node is active through a floating VIP, 10.20.18.123.)

2. Environment preparation

2.1 IP plan

Two NFS server hosts with rbd support: 10.20.18.97 and 10.20.18.111

VIP: 10.20.18.123, on the same subnet as the two servers

2.2 Software installation

# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
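A quick way to confirm everything landed (crmsh was installed from a local rpm, so the versions reported on your system may differ):

# rpm -q pacemaker corosync cluster-glue resource-agents crmsh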

2.3 SSH mutual trust (omitted)

2.4 NTP configuration (omitted)

2.5 Configure hosts (both nodes)

# vi /etc/hosts
10.20.18.97  SZB-L0005908
10.20.18.111 SZB-L0005469

3. Corosync configuration (both nodes)

3.1 Configure corosync

# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}

bindnetaddr is the local node's IP address; set it accordingly on each node.

mcastaddr can be any valid multicast address.

3.2 Start corosync

# service corosync start
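Once the service is up, the ring state can be sanity-checked; assuming the stock corosync tools are installed, `corosync-cfgtool -s` prints the ring status:

# corosync-cfgtool -s    # ring 0 should report "active with no faults"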

3.3 Cluster properties (with only two nodes, quorum must be ignored)

# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
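To confirm both properties were committed to the CIB:

# crm configure show | grep -E 'stonith-enabled|no-quorum-policy'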

3.4 Check node status (both nodes should be online)

# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]

4. Pacemaker resource configuration

Note: Pacemaker's job is to manage resources. For this rbd-backed NFS setup, that means the rbd map, the filesystem mount, the NFS export, and the VIP. In short, it automates the whole path from rbd image to NFS share.

4.1 Format the rbd image

(The image created for this walkthrough is share/share2.) Do this once, on one node only.

# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
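Before handing the image to pacemaker, it is worth confirming it exists and that ceph's udev rules create the /dev/rbd/<pool>/<image> symlink that the Filesystem resource below relies on; a quick check (re-mapping the image temporarily):

# rbd info share/share2
# rbd map share/share2
# ls -l /dev/rbd/share/share2    # symlink provided by ceph's udev rules
# rbd unmap share/share2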

4.2 Resource configuration

4.2.1 Prepare the rbd.in script

(Copy the script src/ocf/rbd.in from the ceph source tree into the directory below; do this on all nodes.)

# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
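If crmsh can see the new agent, its metadata should render; a quick sanity check (crmsh subcommand syntax may vary slightly across versions):

# crm ra list ocf ceph           # should include rbd.in
# crm ra meta ocf:ceph:rbd.in    # prints the agent's parameters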

Note: the configuration below is done on a single node only.

4.2.2 Configure the rbd map

(You can paste the following directly using crm configure edit.)

primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s

4.2.3 Mount the filesystem

primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
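One easy-to-miss prerequisite, implicit in the original write-up: depending on the resource-agents version, the Filesystem agent may not create the mount point itself, so it is safest to create it beforehand on every node that might run the resource:

# mkdir -p /mnt/share2    # run on both SZB-L0005469 and SZB-L0005908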

4.2.4 NFS export

primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s

4.2.5 VIP configuration

primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5

4.2.6 NFS service configuration

primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s

4.3 Resource group configuration

group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique="false" target-role="Started"
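A group implies both ordering and colocation: rbd map, then mount, then export, then VIP, all on one node, while g_nfs runs cloned on both. One optional hardening, not part of the original configuration, would be to require the NFS server clone to be up before the export group starts; a hedged sketch in crm syntax:

order o_nfs_before_share inf: clo_nfs g_rbd_share_1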

4.4 Resource location constraint

location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469

4.5 Review the full configuration (optional)

# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1

4.6 Restart corosync (both nodes)

# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005469 SZB-L0005908 ]
 Resource Group: g_rbd_share_1
     p_rbd_map_1      (ocf::ceph:rbd.in):            Started SZB-L0005469
     p_fs_rbd_1       (ocf::heartbeat:Filesystem):   Started SZB-L0005469
     p_export_rbd_1   (ocf::heartbeat:exportfs):     Started SZB-L0005469
     p_vip_1          (ocf::heartbeat:IPaddr):       Started SZB-L0005469
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005469 SZB-L0005908 ]
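At this point the VIP and the export can also be verified directly on the active node:

# ip addr show | grep 10.20.18.123    # the VIP, bound on SZB-L0005469
# exportfs -v                         # should list /mnt/share2 for 10.20.0.0/24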

5. Testing

5.1 Check the export list (via the VIP)

# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
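From a client inside the permitted range, the share mounts through the VIP; /mnt/test here is a hypothetical client-side mount point. Note that the export's clientspec of 10.20.0.0/24 only matches 10.20.0.x addresses; widen it (for example to 10.20.0.0/16) if your clients live on the 10.20.18.x network used by the servers themselves.

# mkdir -p /mnt/test
# mount -t nfs 10.20.18.123:/mnt/share2 /mnt/test
# df -h /mnt/test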

5.2 Failover test

# service corosync stop    # run on SZB-L0005469
# crm_mon -1               # run on SZB-L0005908
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured
Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]
 Resource Group: g_rbd_share_1
     p_rbd_map_1      (ocf::ceph:rbd.in):            Started SZB-L0005908
     p_fs_rbd_1       (ocf::heartbeat:Filesystem):   Started SZB-L0005908
     p_export_rbd_1   (ocf::heartbeat:exportfs):     Started SZB-L0005908
     p_vip_1          (ocf::heartbeat:IPaddr):       Started SZB-L0005908
 Clone Set: clo_nfs [g_nfs]
     Started: [ SZB-L0005908 ]
     Stopped: [ SZB-L0005469 ]
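Because the location constraint prefers SZB-L0005469 and resource-stickiness is 0, bringing the stopped node back should migrate the resources home again; a quick failback check:

# service corosync start    # run on SZB-L0005469
# crm_mon -1                # g_rbd_share_1 should return to SZB-L0005469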

Thanks for reading! That wraps up this walkthrough of HA-NFS file sharing based on ceph rbd + corosync + pacemaker; hopefully it serves as a useful reference.
