
Ceph Setup and Configuration Steps

Published: 2024-10-05

This article walks through the steps for setting up and configuring Ceph. The explanations are kept simple and clear so they are easy to follow; work through them in order to study "Ceph Setup and Configuration Steps".

Ceph Setup and Configuration Notes

Platform: VirtualBox 4.3.12

Virtual machines: CentOS 6.5, Linux 2.6.32-504.3.3.el6.x86_64

(1) Preparation

  • Set the hostname and configure the IP address

Note: the following steps must be performed on both store01 and store02, adjusting the values for each host as appropriate.

[root@store01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=82e3956c-6850-426a-afd7-977a26a77dab
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.1.179
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
HWADDR=08:00:27:65:4B:DD
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
[root@store01 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:65:4B:DD
          inet addr:192.168.1.179  Bcast:192.168.127.255  Mask:255.255.128.0
          inet6 addr: fe80::a00:27ff:fe65:4bdd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:75576 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41422 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:88133010 (84.0 MiB)  TX bytes:4529474 (4.3 MiB)
[root@store01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.179    store01
192.168.1.190    store02
  • Configure NTP time synchronization

[root@store01 ~]# yum install ntp ntpdate
[root@store01 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@store01 ~]# chkconfig ntpd on
[root@store01 ~]# netstat -tunlp | grep 123
udp        0      0 192.168.1.179:123            0.0.0.0:*        12254/ntpd
udp        0      0 127.0.0.1:123                0.0.0.0:*        12254/ntpd
udp        0      0 0.0.0.0:123                  0.0.0.0:*        12254/ntpd
udp        0      0 fe80::a00:27ff:fe65:4bdd:123 :::*             12254/ntpd
udp        0      0 ::1:123                      :::*             12254/ntpd
udp        0      0 :::123                       :::*             12254/ntpd
[root@store01 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+gus.buptnet.edu 202.112.31.197   3 u    7   64  377  115.339    4.700  46.105
*dns2.synet.edu. 202.118.1.46     2 u   69   64  373   44.619    1.680   6.667
[root@store02 ~]# yum install ntp ntpdate
[root@store02 ~]# vim /etc/ntp.conf
server store01 iburst
[root@store02 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@store02 ~]# chkconfig ntpd on
[root@store02 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 store01         202.112.10.36    4 u   56   64    1    0.412    0.354   0.000
[root@store02 ~]# netstat -tunlp | grep 123
udp        0      0 192.168.1.190:123            0.0.0.0:*        12971/ntpd
udp        0      0 127.0.0.1:123                0.0.0.0:*        12971/ntpd
udp        0      0 0.0.0.0:123                  0.0.0.0:*        12971/ntpd
udp        0      0 fe80::a00:27ff:fead:71b:123  :::*             12971/ntpd
udp        0      0 ::1:123                      :::*             12971/ntpd
udp        0      0 :::123                       :::*             12971/ntpd
  • Disable SELinux and iptables

[root@store01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@store01 ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@store01 ~]# chkconfig iptables off
[root@store01 ~]# chkconfig ip6tables off
[root@store01 ~]# setenforce 0
[root@store01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
  • Set up passwordless SSH access for the root user (covered in another post on this blog); a minimal sketch is shown below.
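The SSH setup itself is not reproduced in the original, so the following is only a rough sketch of one common approach, assuming the default key path and that root login over SSH is allowed on the peer (adjust hostnames as needed):

[root@store01 ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa    # generate a key pair with an empty passphrase
[root@store01 ~]# ssh-copy-id root@store02                        # append the public key to store02's authorized_keys
[root@store01 ~]# ssh root@store02 hostname                       # should print "store02" without asking for a password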


(2) Install Ceph

  • Add the yum repository (Ceph version: 0.72)

# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-emperor/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-emperor/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-emperor/el6/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
  • Install the Ceph packages

[root@store01 ~]# yum install ceph ceph-deploy
[root@store01 ~]# ceph-deploy --version
1.5.11
[root@store01 ~]# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[root@store02 ~]# yum install ceph

(3) Configure Ceph

[root@store01 ~]# mkdir my-cluster
[root@store01 ~]# cd my-cluster/
[root@store01 my-cluster]# ls
[root@store01 my-cluster]# ceph-deploy new store01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.11): /usr/bin/ceph-deploy new store01
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host store01
[ceph_deploy.new][DEBUG ] Monitor store01 at 192.168.1.179
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Monitor initial members are ['store01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.179']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[root@store01 my-cluster]# ls
ceph.conf  ceph.log  ceph.mon.keyring
[root@store01 my-cluster]# cat ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
[root@store01 my-cluster]# vim ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
osd pool default size = 2
[root@store01 my-cluster]# ceph-deploy mon create-initial
[root@store01 my-cluster]# ll
total 28
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-osd.keyring
-rw-r--r-- 1 root root   64 Dec 29 10:34 ceph.client.admin.keyring
-rw-r--r-- 1 root root  257 Dec 29 10:34 ceph.conf
-rw-r--r-- 1 root root 5783 Dec 29 10:34 ceph.log
-rw-r--r-- 1 root root   73 Dec 29 10:33 ceph.mon.keyring
[root@store01 my-cluster]# ceph-deploy disk list store01 store02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.11): /usr/bin/ceph-deploy disk list store01 store02
[store01][DEBUG ] connected to host: store01
[store01][DEBUG ] detect platform information from remote host
[store01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG ] Listing disks on store01...
[store01][DEBUG ] find the location of an executable
[store01][INFO  ] Running command: /usr/sbin/ceph-disk list
[store01][DEBUG ] /dev/sda :
[store01][DEBUG ]  /dev/sda1 other, ext4, mounted on /boot
[store01][DEBUG ]  /dev/sda2 other, LVM2_member
[store01][DEBUG ] /dev/sdb other, unknown
[store01][DEBUG ] /dev/sdc other, unknown
[store01][DEBUG ] /dev/sr0 other, unknown
[store02][DEBUG ] connected to host: store02
[store02][DEBUG ] detect platform information from remote host
[store02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG ] Listing disks on store02...
[store02][DEBUG ] find the location of an executable
[store02][INFO  ] Running command: /usr/sbin/ceph-disk list
[store02][DEBUG ] /dev/sda :
[store02][DEBUG ]  /dev/sda1 other, ext4, mounted on /boot
[store02][DEBUG ]  /dev/sda2 other, LVM2_member
[store02][DEBUG ] /dev/sdb other, unknown
[store02][DEBUG ] /dev/sdc other, unknown
[store02][DEBUG ] /dev/sr0 other, unknown
[root@store01 my-cluster]# ceph-deploy disk zap store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy disk zap store02:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store02:sd{b,c}
[root@store01 my-cluster]# ceph status
    cluster e5c2f7f3-2c8a-4ae0-af26-ab0cf5f67343
     health HEALTH_OK
     monmap e1: 1 mons at {store01=192.168.1.179:6789/0}, election epoch 1, quorum 0 store01
     osdmap e18: 4 osds: 4 up, 4 in
      pgmap v28: 192 pgs, 3 pools, 0 bytes data, 0 objects
            136 MB used, 107 GB / 107 GB avail
                 192 active+clean
[root@store01 my-cluster]# ceph osd tree
# id    weight  type name      up/down reweight
-1      0.12    root default
-2      0.06            host store01
1       0.03                    osd.1   up      1
0       0.03                    osd.0   up      1
-3      0.06            host store02
3       0.03                    osd.3   up      1
2       0.03                    osd.2   up      1
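The log above stops after the OSDs come up. Although not part of the original session, ceph-deploy can also push the cluster config and admin keyring to the other node so that ceph commands work from store02 as well; a brief sketch:

[root@store01 my-cluster]# ceph-deploy admin store01 store02      # copy ceph.conf and ceph.client.admin.keyring to both hosts
[root@store02 ~]# chmod +r /etc/ceph/ceph.client.admin.keyring    # relax keyring permissions so the ceph CLI can read it
[root@store02 ~]# ceph status                                     # store02 should now report the same cluster state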

(4) Create pools and users

Syntax: ceph osd pool create {pool-name} pg_num
Note: guidelines for choosing pg_num:
  • Fewer than 5 OSDs: set pg_num to 128
  • Between 5 and 10 OSDs: set pg_num to 512
  • Between 10 and 50 OSDs: set pg_num to 4096
  • More than 50 OSDs: you need to understand the trade-offs and calculate the pg_num value yourself

[root@store01 my-cluster]# ceph osd pool create volumes 128
pool 'volumes' created
[root@store01 my-cluster]# ceph osd pool create images 128
pool 'images' created
[root@store01 my-cluster]# ceph osd lspools
0 data,1 metadata,2 rbd,3 volumes,4 images,
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
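For reference, pg_num can also be inspected and raised on an existing pool with ceph osd pool get/set; a short sketch (the value 256 is only illustrative, and pgp_num should normally be raised to match):

[root@store01 my-cluster]# ceph osd pool get volumes pg_num       # show the pool's current pg_num
[root@store01 my-cluster]# ceph osd pool set volumes pg_num 256   # raise pg_num (it cannot be decreased)
[root@store01 my-cluster]# ceph osd pool set volumes pgp_num 256  # keep pgp_num in step so data actually rebalances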

At this point, the Ceph configuration is complete.

This Ceph cluster can now be configured as a storage backend for OpenStack's Cinder, Nova, and Glance services.
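The OpenStack side is outside the scope of this note, but as a rough sketch of how the client.cinder and client.glance keys created above are typically consumed (the file paths, config placement, and libvirt secret UUID below are assumptions, not values from this setup):

# On the Ceph side, export the keyrings created above (paths are illustrative):
[root@store01 my-cluster]# ceph auth get-or-create client.cinder -o /etc/ceph/ceph.client.cinder.keyring
[root@store01 my-cluster]# ceph auth get-or-create client.glance -o /etc/ceph/ceph.client.glance.keyring

# A typical RBD backend section in cinder.conf on the Cinder node:
#   volume_driver = cinder.volume.drivers.rbd.RBDDriver
#   rbd_pool = volumes
#   rbd_user = cinder
#   rbd_ceph_conf = /etc/ceph/ceph.conf
#   rbd_secret_uuid = <UUID of the libvirt secret holding the client.cinder key>

# And in glance-api.conf, to store images in the "images" pool:
#   default_store = rbd
#   rbd_store_pool = images
#   rbd_store_user = glance
#   rbd_store_ceph_conf = /etc/ceph/ceph.conf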

Thank you for reading. That concludes "Ceph Setup and Configuration Steps". After working through this article, you should have a better grasp of how to set up and configure Ceph, though the details still need to be verified through hands-on practice.
