How to configure NFS high availability with keepalived + ceph rbd

In this article I will share how to configure NFS high availability with keepalived and a ceph rbd block device. Many readers may not be familiar with this setup yet, so it is shared here for reference; I hope you find it useful. Let's take a look.

1. Create and map the rbd block device

The test machines are the osd1 and osd2 hosts in the ceph cluster, with IPs 192.168.32.3 and 192.168.32.4 respectively; the VIP is 192.168.32.5.

First create an rbd block device, then map the same block device on both machines:

[root@osd1 keepalived]# rbd ls test
test.img
[root@osd1 keepalived]# rbd showmapped
id pool image    snap device
0  test test.img -    /dev/rbd0
[root@osd2 keepalived]# rbd showmapped
id pool image    snap device
0  test test.img -    /dev/rbd0

Then format /dev/rbd0 and mount it, for example on the /mnt directory.
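The creation step itself is not shown above; a minimal sketch of it, assuming the pool is called test and a 100 GB image (the size implied by the later df output), would be:

# create a 100 GB image in pool "test" (size is an assumption)
rbd create test/test.img --size 102400
# map it on both osd1 and osd2
rbd map test/test.img
# format it once, from a single node only (xfs is an assumption; ext4 works as well)
mkfs.xfs /dev/rbd0
# mount it on the node that will serve NFS first
mount /dev/rbd0 /mnt

A non-cluster filesystem such as xfs should only be mounted read-write on one node at a time; the standby node keeps the image mapped and relies on the keepalived scripts below to take over the mount.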

2. Configure keepalived

Download keepalived 1.2.15 (the latest version at the time) from http://www.keepalived.org/download.html. Installation is straightforward: just follow the default steps in the INSTALL file. To make the keepalived service easier to manage, run the following:

cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
mkdir /etc/keepalived
cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/sbin/keepalived /usr/sbin/
chkconfig --add keepalived
chkconfig keepalived on
chkconfig --list keepalived

Configure /etc/keepalived/keepalived.conf. The configuration on osd1 is as follows:

[root@osd1 keepalived]# cat keepalived.conf
global_defs {
    notification_email {
    }
    router_id osd1
}
vrrp_instance VI_1 {
    state MASTER
    interface em1
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.32.5/24
    }
}

The configuration on osd2 is as follows:

[root@osd2 keepalived]# cat keepalived.conf
global_defs {
    notification_email {
    #    admin@example.com
    }
    #notification_email_from admin@example.com
    #smtp_server 127.0.0.1
    #smtp_connect_timeout 30
    router_id osd2
}
vrrp_instance VI_1 {
    state BACKUP
    interface em1
    virtual_router_id 100
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    notify_master "/etc/keepalived/ChangeToMaster.sh"
    notify_backup "/etc/keepalived/ChangeToBackup.sh"
    virtual_ipaddress {
        192.168.32.5/24
    }
}

Two control scripts were written on osd2; they are executed whenever the keepalived state on osd2 changes. ChangeToMaster.sh:

[root@osd2 keepalived]# cat ChangeToMaster.sh
#!/bin/bash
service nfs start
ssh lm "umount -f /mnt"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"

ChangeToBackup.sh:

[root@osd2 keepalived]# cat ChangeToBackup.sh
#!/bin/bash
ssh lm "umount -f /mnt"
ssh osd1 "service nfs stop"
ssh osd1 "umount /mnt"
ssh osd1 "rbd unmap /dev/rbd0"
ssh osd1 "rbd map test/test.img"
ssh osd1 "mount /dev/rbd0 /mnt"
ssh osd1 "service nfs start"
ssh lm "mount -t nfs 192.168.32.5:/mnt /mnt"
service nfs stop
umount /mnt
rbd unmap /dev/rbd0
rbd map test/test.img
mount /dev/rbd0 /mnt
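For these notify scripts to run non-interactively, osd2 needs passwordless ssh access (as root here) to both osd1 and the client lm. This is not shown in the original setup; a typical way to arrange it is:

# on osd2, generate a key pair if one does not exist yet
ssh-keygen -t rsa
# copy the public key to the hosts the scripts ssh into
ssh-copy-id root@osd1
ssh-copy-id root@lm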

3. Configure NFS

On one of the ceph nodes, map a block device with rbd map, then format it and mount it on a directory such as /mnt. Install the NFS rpm package on this node:

yum -y install nfs-utils

Set up the exported directory:

cat /etc/exports
/mnt 192.168.101.157(rw,async,no_subtree_check,no_root_squash)
/mnt 192.168.108.4(rw,async,no_subtree_check,no_root_squash)

Start the service and export the directory:

service nfs start
chkconfig nfs on
exportfs -r
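Before moving to the client, the export can be verified on the server itself; a quick sketch:

exportfs -v              # list the active exports and their options
rpcinfo -p | grep nfs    # confirm the nfs service is registered with rpcbind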

Check from the client:

showmount -e mon0
Export list for mon0:
/mnt 192.168.108.4,192.168.101.157

Then mount it:

mount -t nfs mon0:/mnt /mnt

Note that NFS uses UDP by default. If the network is unstable, switch to TCP:

mount -t nfs mon0:/mnt /mnt -o proto=tcp -o nolock
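If the client should remount automatically after a reboot, an /etc/fstab entry can be used instead of the manual mount; a sketch, using the VIP so that failover stays transparent:

# /etc/fstab on the client lm
192.168.32.5:/mnt  /mnt  nfs  proto=tcp,nolock,_netdev  0 0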

4. Test
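Before simulating a failure, confirm that keepalived is running on both nodes and that the VIP currently sits on the MASTER, osd1; a quick check:

service keepalived start                 # on osd1 and osd2, if not already running
ip addr show em1 | grep 192.168.32.5     # should only print a line on osd1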

Bring down the em1 NIC on osd1 and check the result:

[root@osd1 keepalived]# ifdown em1
[root@osd1 keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff

Check the NIC on osd2:

[root@osd2 keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link
       valid_lft forever preferred_lft forever

The VIP has floated over to osd2. Check the mount status on the client:

[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt
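To see whether the client keeps working through the failover rather than just staying mounted, a simple continuous write can be left running on lm during the test (the file name is arbitrary):

# on the client lm
while true; do date >> /mnt/ha_test.log; sleep 1; done

If the failover works, the timestamps keep accumulating with at most a short pause while the VIP and the NFS service move.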

Bring the em1 NIC on osd1 back up:

[root@osd1 keepalived]# ifup em1
Determining if ip address 192.168.32.3 is already in use for device em1...
[root@osd1 keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:de:5e:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.3/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet 192.168.32.5/24 scope global secondary em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fede:5e65/64 scope link
       valid_lft forever preferred_lft forever

em1 on osd2:

[root@osd2 keepalived]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether c8:1f:66:f7:61:5d brd ff:ff:ff:ff:ff:ff
    inet 192.168.32.4/24 brd 192.168.32.255 scope global em1
       valid_lft forever preferred_lft forever
    inet6 fe80::ca1f:66ff:fef7:615d/64 scope link
       valid_lft forever preferred_lft forever

Now on the client:

[root@lm /]# df -hT
Filesystem        Type   Size  Used Avail Use% Mounted on
/dev/sda1         ext4   454G   79G  353G  19% /
tmpfs             tmpfs  1.7G  4.6M  1.7G   1% /dev/shm
192.168.32.5:/mnt nfs    100G   21G   80G  21% /mnt
[root@lm /]# ls /mnt
31.txt  a.txt  b.txt  c.txt  etc  linux-3.17.4  linux-3.17.4.tar  m2.txt  test.img  test.img2

A VIP can also be configured in the same way for iSCSI over ceph rbd.

That is all the content of "How to configure NFS high availability with keepalived + ceph rbd". Thanks for reading! I hope you now have a good grasp of the setup and that the content proves helpful; if you want to learn more, keep an eye on the industry news channel!
