How to Use a Ceph Storage Backend with Kolla-Ansible

Published: 2025-02-05  Author: 千家信息网 editor

This article shares how to use a Ceph storage backend with Kolla-Ansible. It should make a practical reference; follow along below.

Configuring Ceph

  • Log in as the osdev user:

$ ssh osdev@osdev01
$ cd /opt/ceph/deploy/

Creating Pools

Creating the Image Pool
  • Used to store Glance images:

$ ceph osd pool create images 32 32
pool 'images' created
Creating the Volume Pools
  • Used to store Cinder volumes:

$ ceph osd pool create volumes 32 32
pool 'volumes' created
  • Used to store Cinder volume backups:

$ ceph osd pool create backups 32 32
pool 'backups' created
Creating the VM Pool
  • Used to store virtual machine system disks:

$ ceph osd pool create vms 32 32
pool 'vms' created
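Each pool above is created with 32 placement groups, which fits this small cluster (it has three OSDs, as the `ceph auth list` output later shows). Ceph's usual sizing guideline is roughly (OSDs × 100) / replicas PGs spread across the data pools, rounded to a power of two. A minimal sketch of that rule of thumb (the function name and rounding choice are my own, not from this article or the Ceph tooling):

```python
def suggest_pg_num(num_osds: int, replicas: int = 3, pools: int = 1) -> int:
    """Rough PG count per pool: (OSDs * 100) / (replicas * pools),
    rounded to the nearest power of two."""
    target = (num_osds * 100) / (replicas * pools)
    pg = 1
    while pg * 2 <= target:
        pg *= 2
    # round up when the next power of two is closer to the target
    if target - pg > pg * 2 - target:
        pg *= 2
    return pg

# 3 OSDs, 3 replicas, 4 data pools -> 32 PGs per pool, as used above.
print(suggest_pg_num(3, replicas=3, pools=4))  # → 32
```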
Listing Pools

$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
6 rbd
8 images
9 volumes
10 backups
11 vms

Creating Users

Listing Users
  • List all users:

$ ceph auth list
installed auth entries:

mds.osdev01
        key: AQCabn5b18tHExAAkZ6Aq3IQ4/aqYEBBey5O3Q==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
mds.osdev02
        key: AQCbbn5bcq4yJRAAUfhoqPNfyp2m/ORu/7vHBA==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
mds.osdev03
        key: AQCcbn5bTAIdORAApGu9NJvC3AmS+L3EWXLMdw==
        caps: [mds] allow
        caps: [mon] allow profile mds
        caps: [osd] allow rwx
osd.0
        key: AQCyJH5bG2ZBHRAAsDaLHcoOxv/mLCHwITA7JQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQDTJH5bjvQ8HxAA4cyLttvZwiqFq1srFoSXWg==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQD9JH5bbPi6IRAA7DbwaCh6JBaa6RfWPoe9VQ==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQA1In5boIRwGBAAgj5OccvTGYkuB+btlgL0BQ==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQA1In5bS6pwGBAA379v3LXJrdURLmA1gnTaLQ==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQA1In5bnMpwGBAAXohUfa4rGS0Rd2weMl4dPg==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQA1In5buelwGBAANQSalrSzH3yslSc4rYPu1g==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rgw
        key: AQA1In5b0ghxGBAAIGK3WmBSkKZMnSEfvnEQow==
        caps: [mon] allow profile bootstrap-rgw
client.rgw.osdev01
        key: AQDZbn5b6aChEBAAzRuX4UWlxyws+aX1i+D26Q==
        caps: [mon] allow rw
        caps: [osd] allow rwx
client.rgw.osdev02
        key: AQDabn5bypCDJBAAt18L5ppG5lEg6NkGQLYs5w==
        caps: [mon] allow rw
        caps: [osd] allow rwx
client.rgw.osdev03
        key: AQDbbn5bbEVNNBAArX+/AKQu9q3hCRn/05Ya3A==
        caps: [mon] allow rw
        caps: [osd] allow rwx
mgr.osdev01
        key: AQDPIn5beqPTORAAEzcX3fMCCclLR2RiPyvugw==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.osdev02
        key: AQDRIn5bLRVqDxAA/yWXO8pX6fQynJNyCcoNww==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
mgr.osdev03
        key: AQDSIn5bGyrhHxAAvtAEOveovRxmdDlF45i2Cg==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *
  • Show a specific user:

$ ceph auth get client.admin
exported keyring for client.admin
[client.admin]
        key = AQA1In5bZkxwGBAA9bBLE5/NKstK1CRMzfGgKQ==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"
Creating the Glance User
  • Create the glance user and grant it access to the images pool:

$ ceph auth get-or-create client.glance
[client.glance]
        key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
$ ceph auth caps client.glance mon 'allow r' osd 'allow rwx pool=images'
updated caps for client.glance
  • Inspect and save the glance user's keyring file:

$ ceph auth get client.glance
exported keyring for client.glance
[client.glance]
        key = AQBQq4NboVHdGxAAlfK2WJkiZMolluATpvOviQ==
        caps mon = "allow r"
        caps osd = "allow rwx pool=images"
$ ceph auth get client.glance -o /opt/ceph/deploy/ceph.client.glance.keyring
exported keyring for client.glance
Creating the Cinder Users
  • Create the cinder-volume user and grant it access to the volumes pool:

$ ceph auth get-or-create client.cinder-volume
[client.cinder-volume]
        key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
$ ceph auth caps client.cinder-volume mon 'allow r' osd 'allow rwx pool=volumes'
updated caps for client.cinder-volume
  • Inspect and save the cinder-volume user's keyring file:

$ ceph auth get client.cinder-volume
exported keyring for client.cinder-volume
[client.cinder-volume]
        key = AQBKt4NbqROVIxAACnH+pVv141+wOpgWj14RjA==
        caps mon = "allow r"
        caps osd = "allow rwx pool=volumes"
$ ceph auth get client.cinder-volume -o /opt/ceph/deploy/ceph.client.cinder-volume.keyring
exported keyring for client.cinder-volume
  • Create the cinder-backup user and grant it access to the volumes and backups pools:

$ ceph auth get-or-create client.cinder-backup
[client.cinder-backup]
        key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
$ ceph auth caps client.cinder-backup mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=backups'
updated caps for client.cinder-backup
  • Inspect and save the cinder-backup user's keyring file:

$ ceph auth get client.cinder-backup
exported keyring for client.cinder-backup
[client.cinder-backup]
        key = AQBit4NbN0rvLRAAYoa4SBM0qvwY8kPo5Md0og==
        caps mon = "allow r"
        caps osd = "allow rwx pool=volumes, allow rwx pool=backups"
$ ceph auth get client.cinder-backup -o /opt/ceph/deploy/ceph.client.cinder-backup.keyring
exported keyring for client.cinder-backup
Creating the Nova User
  • Create the nova user and grant it access to the vms pool:

$ ceph auth get-or-create client.nova
[client.nova]
        key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
$ ceph auth caps client.nova mon 'allow r' osd 'allow rwx pool=vms'
updated caps for client.nova
  • Inspect and save the nova user's keyring file:

$ ceph auth get client.nova
exported keyring for client.nova
[client.nova]
        key = AQD7tINb4A58GRAA7CsAM9EAwFwtIpTdQFGO7A==
        caps mon = "allow r"
        caps osd = "allow rwx pool=vms"
$ ceph auth get client.nova -o /opt/ceph/deploy/ceph.client.nova.keyring
exported keyring for client.nova
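All four service users follow the same pattern: read-only monitor access plus rwx on exactly the pools the service touches. A hypothetical helper (not part of the deployment, purely illustrative) can regenerate the `ceph auth caps` command lines used above from a service-to-pools map, which makes the symmetry explicit:

```python
# Map each OpenStack service user to the Ceph pools it needs (from the
# pool/user setup in this article).
SERVICE_POOLS = {
    "glance": ["images"],
    "cinder-volume": ["volumes"],
    "cinder-backup": ["volumes", "backups"],
    "nova": ["vms"],
}

def caps_command(user: str, pools: list) -> str:
    """Build the 'ceph auth caps' command line for one service user."""
    osd_caps = ", ".join(f"allow rwx pool={p}" for p in pools)
    return f"ceph auth caps client.{user} mon 'allow r' osd '{osd_caps}'"

for user, pools in SERVICE_POOLS.items():
    print(caps_command(user, pools))
```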

Configuring Kolla-Ansible

  • Log in to the deployment node osdev01 as root and set up the environment variables:

$ ssh root@osdev01
$ export KOLLA_ROOT=/opt/kolla
$ cd ${KOLLA_ROOT}/myconfig

Global Configuration

  • Edit globals.yml and disable Kolla's own Ceph deployment (the external cluster configured above is used instead):

enable_ceph: "no"
  • Enable the Cinder service, and enable the Ceph backends for Glance, Cinder, and Nova:

enable_cinder: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"

Configuring Glance

  • Configure Glance to use the Ceph images pool as the glance user:

$ mkdir -pv config/glance
mkdir: created directory "config/glance"
$ vi config/glance/glance-api.conf
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
  • Add the Ceph client configuration for Glance along with the glance user's keyring file:

$ vi config/glance/ceph.conf
[global]
fsid = 383237bd-becf-49d5-9bd6-deb0bc35ab2a
mon_initial_members = osdev01, osdev02, osdev03
mon_host = 172.29.101.166,172.29.101.167,172.29.101.168
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
$ cp -v /opt/ceph/deploy/ceph.client.glance.keyring config/glance/ceph.client.glance.keyring
"/opt/ceph/deploy/ceph.client.glance.keyring" -> "config/glance/ceph.client.glance.keyring"
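Kolla-Ansible merges these per-service files under config/ into the containers' configuration at deploy time. Before deploying, the INI override can be sanity-checked with Python's standard configparser (an illustrative check of my own, not part of the Kolla workflow):

```python
import configparser

# The glance-api.conf override written above, inlined here for checking.
override = """
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
"""

cfg = configparser.ConfigParser()
cfg.read_string(override)

# Verify Glance is pointed at the images pool as the glance user.
assert cfg["glance_store"]["default_store"] == "rbd"
assert cfg["glance_store"]["rbd_store_pool"] == "images"
assert cfg["glance_store"]["rbd_store_user"] == "glance"
print("glance_store override OK")
```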

Configuring Cinder

  • Configure the Cinder volume service to use the Ceph volumes pool as the cinder-volume user, and the Cinder backup service to use the backups pool as the cinder-backup user:

$ mkdir -pv config/cinder/
mkdir: created directory "config/cinder/"
$ vi config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends=rbd-1
[rbd-1]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder-volume
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=rbd-1
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
$ vi config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
  • Add the Ceph client configuration and keyring files for the Cinder volume and backup services:

$ cp config/glance/ceph.conf config/cinder/ceph.conf
$ mkdir -pv config/cinder/cinder-backup/ config/cinder/cinder-volume/
mkdir: created directory "config/cinder/cinder-backup/"
mkdir: created directory "config/cinder/cinder-volume/"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-backup/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-volume.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-backup.keyring config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
"/opt/ceph/deploy/ceph.client.cinder-backup.keyring" -> "config/cinder/cinder-backup/ceph.client.cinder-backup.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/cinder/cinder-volume/ceph.client.cinder-volume.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/cinder/cinder-volume/ceph.client.cinder-volume.keyring"

Configuring Nova

  • Configure Nova to use the Ceph vms pool as the nova user:

$ vi config/nova/nova-compute.conf
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
  • Add the Ceph client configuration for Nova along with the nova user's keyring file:

$ cp -v config/glance/ceph.conf config/nova/ceph.conf
"config/glance/ceph.conf" -> "config/nova/ceph.conf"
$ cp -v /opt/ceph/deploy/ceph.client.nova.keyring config/nova/ceph.client.nova.keyring
"/opt/ceph/deploy/ceph.client.nova.keyring" -> "config/nova/ceph.client.nova.keyring"
$ cp -v /opt/ceph/deploy/ceph.client.cinder-volume.keyring config/nova/ceph.client.cinder.keyring
"/opt/ceph/deploy/ceph.client.cinder-volume.keyring" -> "config/nova/ceph.client.cinder.keyring"

Deployment and Testing

Starting the Deployment

  • Edit the deployment script osdev.sh:

#!/bin/bash
set -uexv

usage(){
        echo -e "usage : \n$0 "
        echo -e "  \$1 action"
}

if [ $# -lt 1 ]; then
        usage
        exit 1
fi

${KOLLA_ROOT}/kolla-ansible/tools/kolla-ansible --configdir ${KOLLA_ROOT}/myconfig --passwords ${KOLLA_ROOT}/myconfig/passwords.yml --inventory ${KOLLA_ROOT}/myconfig/mynodes.conf $1
  • Make it executable:

$ chmod a+x osdev.sh
  • Deploy the OpenStack cluster (the commented-out destroy command tears it back down):

$ ./osdev.sh bootstrap-servers
$ ./osdev.sh prechecks
$ ./osdev.sh pull
$ ./osdev.sh deploy
$ ./osdev.sh post-deploy
# ./osdev.sh "destroy --yes-i-really-really-mean-it"
  • Check the overview of the deployed services:

$ openstack service list
+----------------------------------+-------------+----------------+
| ID                               | Name        | Type           |
+----------------------------------+-------------+----------------+
| 304c9c5073f14f4a97ca1c3cf5e1b49e | neutron     | network        |
| 46de4440a5cf4a5697fa94b2d0424ba9 | heat        | orchestration  |
| 60b46b491ce7403aaec0c064384dde49 | heat-cfn    | cloudformation |
| 7726ab5d41c5450d954f073f1a9aff28 | cinderv2    | volumev2       |
| 7a4bd5fc12904cc7b5c3810412f98c57 | gnocchi     | metric         |
| 7ae6f98018fb4d509e862e45ebf10145 | glance      | image          |
| a0ec333149284c09ac0e157753205fd6 | nova        | compute        |
| b15e90c382864723945b15c37d3317a6 | placement   | placement      |
| b5eaa49c50d64316b583eb1c0c4f9ce2 | cinderv3    | volumev3       |
| c6474640f5d9424da0ec51c70c1e6e01 | nova_legacy | compute_legacy |
| db27eb8524be4db3be12b9dd0dab16b8 | keystone    | identity       |
| edf5c8b894a74a69b65bb49d8e014fff | cinder      | volume         |
+----------------------------------+-------------+----------------+
$ openstack volume service list
+------------------+-------------------+------+---------+-------+----------------------------+
| Binary           | Host              | Zone | Status  | State | Updated At                 |
+------------------+-------------------+------+---------+-------+----------------------------+
| cinder-scheduler | osdev02           | nova | enabled | up    | 2018-08-27T11:33:27.000000 |
| cinder-volume    | rbd:volumes@rbd-1 | nova | enabled | up    | 2018-08-27T11:33:18.000000 |
| cinder-backup    | osdev02           | nova | enabled | up    | 2018-08-27T11:33:17.000000 |
+------------------+-------------------+------+---------+-------+----------------------------+

Initializing the Environment

  • Check the initial state of the RBD pools; they are all empty:

$ rbd -p images ls
$ rbd -p volumes ls
$ rbd -p vms ls
  • Set the environment variables and initialize the OpenStack environment:

$ . ${KOLLA_ROOT}/myconfig/admin-openrc.sh
$ ${KOLLA_ROOT}/myconfig/init-runonce
  • View the newly added image:

$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 293b25bb-30be-4839-b4e2-1dba3c43a56a | cirros | active |
+--------------------------------------+--------+--------+
$ openstack image show 293b25bb-30be-4839-b4e2-1dba3c43a56a
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                     |
| container_format | bare                                                 |
| created_at       | 2018-08-27T11:25:29Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/file |
| id               | 293b25bb-30be-4839-b4e2-1dba3c43a56a                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 68ada1726a864e2081a56be0a2dca3a0                     |
| properties       | locations='[{u'url': u'rbd://383237bd-becf-49d5-9bd6-deb0bc35ab2a/images/293b25bb-30be-4839-b4e2-1dba3c43a56a/snap', u'metadata': {}}]', os_type='linux' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 12716032                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-08-27T11:25:30Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
  • Check the RBD pools again; the image is stored in the images pool and has one snapshot:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p vms ls
$ rbd -p images info 293b25bb-30be-4839-b4e2-1dba3c43a56a
rbd image '293b25bb-30be-4839-b4e2-1dba3c43a56a':
        size 12 MiB in 2 objects
        order 23 (8 MiB objects)
        id: 178f4008d95
        block_name_prefix: rbd_data.178f4008d95
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Mon Aug 27 19:25:29 2018
$ rbd -p images snap list 293b25bb-30be-4839-b4e2-1dba3c43a56a
SNAPID NAME   SIZE TIMESTAMP
     6 snap 12 MiB Mon Aug 27 19:25:30 2018
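The `rbd info` output is internally consistent: order 23 means 2^23-byte (8 MiB) objects, so the 12 MiB image needs two RADOS objects. The relationship, as a small check (the helper name is mine, not an rbd API):

```python
import math

def rbd_object_count(image_bytes: int, order: int) -> int:
    """Number of RADOS objects backing an RBD image: ceil(size / 2**order)."""
    return math.ceil(image_bytes / 2 ** order)

# 12 MiB image with order 23 (8 MiB objects) -> 2 objects, matching "rbd info".
print(rbd_object_count(12 * 2**20, 23))  # → 2
```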

Creating a Virtual Machine

  • Create a virtual machine:

$ openstack server create --image cirros --flavor m1.tiny --key-name mykey --nic net-id=9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 demo1
+-------------------------------------+-----------------------------------------------+
| Field                               | Value                                         |
+-------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                        |
| OS-EXT-AZ:availability_zone         |                                               |
| OS-EXT-SRV-ATTR:host                | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                          |
| OS-EXT-SRV-ATTR:instance_name       |                                               |
| OS-EXT-STS:power_state              | NOSTATE                                       |
| OS-EXT-STS:task_state               | scheduling                                    |
| OS-EXT-STS:vm_state                 | building                                      |
| OS-SRV-USG:launched_at              | None                                          |
| OS-SRV-USG:terminated_at            | None                                          |
| accessIPv4                          |                                               |
| accessIPv6                          |                                               |
| addresses                           |                                               |
| adminPass                           | 65cVBJ7S6yaD                                  |
| config_drive                        |                                               |
| created                             | 2018-08-27T11:29:03Z                          |
| flavor                              | m1.tiny (1)                                   |
| hostId                              |                                               |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308          |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a) |
| key_name                            | mykey                                         |
| name                                | demo1                                         |
| progress                            | 0                                             |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0              |
| properties                          |                                               |
| security_groups                     | name='default'                                |
| status                              | BUILD                                         |
| updated                             | 2018-08-27T11:29:03Z                          |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42              |
| volumes_attached                    |                                               |
+-------------------------------------+-----------------------------------------------+
$ openstack server show 309f1364-4d58-413d-a865-dfc37ff04308
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | osdev03                                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | osdev03                                                  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000001                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2018-08-27T11:29:16.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | demo-net=10.0.0.11                                       |
| config_drive                        |                                                          |
| created                             | 2018-08-27T11:29:03Z                                     |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | 4e345dd9f770f63f80d3eafe97c20d97746e890b2971a8398e26db86 |
| id                                  | 309f1364-4d58-413d-a865-dfc37ff04308                     |
| image                               | cirros (293b25bb-30be-4839-b4e2-1dba3c43a56a)            |
| key_name                            | mykey                                                    |
| name                                | demo1                                                    |
| progress                            | 0                                                        |
| project_id                          | 68ada1726a864e2081a56be0a2dca3a0                         |
| properties                          |                                                          |
| security_groups                     | name='default'                                           |
| status                              | ACTIVE                                                   |
| updated                             | 2018-08-27T11:29:16Z                                     |
| user_id                             | c7111728fbbd4fd79bdd2b60e7d7cb42                         |
| volumes_attached                    |                                                          |
+-------------------------------------+----------------------------------------------------------+
  • The virtual machine has created a volume in the vms pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk
  • Log in to the node hosting the VM. The VM's system disk is the volume created in vms, and the process arguments show that qemu accesses the RBD block device directly through Ceph's librbd library:

$ ssh osdev@osdev03
$ sudo docker exec -it nova_libvirt virsh list
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running
$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
$ ps -aux | grep qemu
42436 2678909 4.6 0.0 1341144 171404 ? Sl 19:29 0:08 /usr/libexec/qemu-kvm -name guest=instance-00000001,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-instance-00000001/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client-IBRS,ss=on,hypervisor=on,tsc_adjust=on,avx512f=on,avx512dq=on,clflushopt=on,clwb=on,avx512cd=on,avx512bw=on,avx512vl=on,pku=on,stibp=on,pdpe1gb=on -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 309f1364-4d58-413d-a865-dfc37ff04308 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=17.0.2,serial=74bf926c-70b7-03df-b211-d21d6016081a,uuid=309f1364-4d58-413d-a865-dfc37ff04308,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-instance-00000001/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -object secret,id=virtio-disk0-secret0,data=zNy84nlNYigA4vjbuOxcGQa1/hh8w28i/WoJbO1Xsl4=,keyid=masterKey0,iv=OhX+FApyFyq2XLWq0ff/Ew==,format=base64 -drive file=rbd:vms/309f1364-4d58-413d-a865-dfc37ff04308_disk:id=nova:auth_supported=cephx\;none:mon_host=172.29.101.166\:6789\;172.29.101.167\:6789\;172.29.101.168\:6789,file.password-secret=virtio-disk0-secret0,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=79,id=hostnet0,vhost=on,vhostfd=80 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:e8:e9,bus=pci.0,addr=0x3 -chardev pty,id=charserial0,logfile=/var/lib/nova/instances/309f1364-4d58-413d-a865-dfc37ff04308/console.log,logappend=off -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.29.101.168:0 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on
$ ldd /usr/libexec/qemu-kvm | grep -e ceph -e rbd
 librbd.so.1 => /lib64/librbd.so.1 (0x00007fde38815000)
 libceph-common.so.0 => /usr/lib64/ceph/libceph-common.so.0 (0x00007fde28247000)

Creating a Volume

  • Create a volume:

$ openstack volume create --size 1 volume1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2018-08-27T11:33:52.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | volume1                              |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | c7111728fbbd4fd79bdd2b60e7d7cb42     |
+---------------------+--------------------------------------+
  • Check the pools; the new volume is placed in the volumes pool:

$ rbd -p images ls
293b25bb-30be-4839-b4e2-1dba3c43a56a
$ rbd -p volumes ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
$ rbd -p backups ls
$ rbd -p vms ls
309f1364-4d58-413d-a865-dfc37ff04308_disk

Creating a Backup

  • Create a volume backup; it is created in the backups pool:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | f2321578-88d5-4337-b93c-798855b817ce |
| name  | None                                 |
+-------+--------------------------------------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| f2321578-88d5-4337-b93c-798855b817ce | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup show f2321578-88d5-4337-b93c-798855b817ce
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| availability_zone     | nova                                 |
| container             | backups                              |
| created_at            | 2018-08-27T11:39:40.000000           |
| data_timestamp        | 2018-08-27T11:39:40.000000           |
| description           | None                                 |
| fail_reason           | None                                 |
| has_dependent_backups | False                                |
| id                    | f2321578-88d5-4337-b93c-798855b817ce |
| is_incremental        | False                                |
| name                  | None                                 |
| object_count          | 0                                    |
| size                  | 1                                    |
| snapshot_id           | None                                 |
| status                | available                            |
| updated_at            | 2018-08-27T11:39:46.000000           |
| volume_id             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9 |
+-----------------------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
  • Create another backup; the backups pool itself does not change, and only a new snapshot is added to the existing base backup image:

$ openstack volume backup create 3ccca300-bee3-4b5a-b89b-32e6b8b806d9
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 07132063-9bdb-4391-addd-a791dae2cfea |
| name  | None                                 |
+-------+--------------------------------------+
$ rbd -p backups ls
volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
$ rbd -p backups snap list volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9.backup.base
SNAPID NAME                                                            SIZE TIMESTAMP
     4 backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08 1 GiB Mon Aug 27 19:39:46 2018
     5 backup.07132063-9bdb-4391-addd-a791dae2cfea.snap.1535370126.76 1 GiB Mon Aug 27 19:42:08 2018
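The snapshot names listed above follow the convention backup.&lt;backup-id&gt;.snap.&lt;unix-timestamp&gt;. A small parser (illustrative only, not a Cinder API) makes that structure explicit:

```python
import re

# Pattern for cinder-backup snapshot names: backup.<backup-id>.snap.<timestamp>
SNAP_RE = re.compile(r"^backup\.(?P<backup_id>[0-9a-f-]+)\.snap\.(?P<ts>[\d.]+)$")

def parse_backup_snap(name: str) -> dict:
    """Split a cinder-backup snapshot name into its backup id and timestamp."""
    m = SNAP_RE.match(name)
    if not m:
        raise ValueError(f"not a cinder-backup snapshot name: {name}")
    return {"backup_id": m.group("backup_id"), "timestamp": float(m.group("ts"))}

info = parse_backup_snap("backup.f2321578-88d5-4337-b93c-798855b817ce.snap.1535369984.08")
print(info["backup_id"])  # → f2321578-88d5-4337-b93c-798855b817ce
```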

Attaching a Volume

  • Attach the new volume to the virtual machine created earlier:

$ openstack server add volume demo1 volume1
$ openstack volume show volume1
+--------------------------------+------------------------------------------------------+
| Field                          | Value                                                |
+--------------------------------+------------------------------------------------------+
| attachments                    | [{u'server_id': u'309f1364-4d58-413d-a865-dfc37ff04308', u'attachment_id': u'fb4d9ec0-8a33-4ed0-8845-09e6f17aac81', u'attached_at': u'2018-08-27T11:44:51.000000', u'host_name': u'osdev03', u'volume_id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9', u'device': u'/dev/vdb', u'id': u'3ccca300-bee3-4b5a-b89b-32e6b8b806d9'}] |
| availability_zone              | nova                                                 |
| bootable                       | false                                                |
| consistencygroup_id            | None                                                 |
| created_at                     | 2018-08-27T11:33:52.000000                           |
| description                    | None                                                 |
| encrypted                      | False                                                |
| id                             | 3ccca300-bee3-4b5a-b89b-32e6b8b806d9                 |
| migration_status               | None
                                                                                                                                                                                                                                                                                                                     || multiattach                    | False                                                                                                                                                                                                                                                                                                                        || name                           | volume1                                                                                                                                                                                                                                                                                                                      || os-vol-host-attr:host          | rbd:volumes@rbd-1#rbd-1                                                                                                                                                                                                                                                                                                      || os-vol-mig-status-attr:migstat | None                                                                                                                                                                                                                                                                                                                         || os-vol-mig-status-attr:name_id | None                                                                                                                                                                                                                                               
                                                                          || os-vol-tenant-attr:tenant_id   | 68ada1726a864e2081a56be0a2dca3a0                                                                                                                                                                                                                                                                                             || properties                     | attached_mode='rw'                                                                                                                                                                                                                                                                                                           || replication_status             | None                                                                                                                                                                                                                                                                                                                         || size                           | 1                                                                                                                                                                                                                                                                                                                            || snapshot_id                    | None                                                                                                                                                                                                                                                                                                                         || source_volid                   | None                                                                                                                         
                                                                                                                                                                                                || status                         | in-use                                                                                                                                                                                                                                                                                                                       || type                           | None                                                                                                                                                                                                                                                                                                                         || updated_at                     | 2018-08-27T11:44:52.000000                                                                                                                                                                                                                                                                                                   || user_id                        | c7111728fbbd4fd79bdd2b60e7d7cb42                                                                                                                                                                                                                                                                                             |+--------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
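Output tables like the one above are meant for humans; when scripting checks (for example, waiting for `status` to become `in-use`), it can help to turn them into a dict. A minimal, hand-rolled sketch of a parser for this ASCII-table layout (the helper name and the sample table below are made up for illustration):

```python
def parse_openstack_table(text):
    """Parse an `openstack ... show` ASCII table into a dict.

    Border rows (+----+) are skipped; continuation rows whose Field
    cell is empty are appended to the previous field's value.
    """
    result = {}
    last_key = None
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue  # skip +----+ borders and any stray output
        cells = [c.strip() for c in line.strip("|").split("|", 1)]
        if len(cells) != 2:
            continue
        key, value = cells
        if key in ("Field", ""):
            if key == "" and last_key:  # wrapped continuation row
                result[last_key] += " " + value
            continue
        result[key] = value
        last_key = key
    return result


sample = """
+--------+---------+
| Field  | Value   |
+--------+---------+
| name   | volume1 |
| status | in-use  |
| size   | 1       |
+--------+---------+
"""
info = parse_openstack_table(sample)
print(info["status"])  # in-use
```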
  • On the node hosting the VM, inspect the domain definition in libvirt; a new RBD disk has appeared:

$ sudo docker exec -it nova_libvirt virsh dumpxml 1
...
3ccca300-bee3-4b5a-b89b-32e6b8b806d9
...
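To confirm the RBD disk programmatically rather than eyeballing the dump, the domain XML can be parsed with the standard library. A minimal sketch against a trimmed, illustrative `<disk>` element in libvirt's domain XML format (the sample XML below is hand-written for this example, not captured from the deployment):

```python
import xml.etree.ElementTree as ET

# Trimmed, illustrative sample of how a Ceph-backed Cinder volume
# typically appears in `virsh dumpxml` output.
domain_xml = """
<domain type='kvm'>
  <devices>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9'/>
      <target dev='vdb' bus='virtio'/>
      <serial>3ccca300-bee3-4b5a-b89b-32e6b8b806d9</serial>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(xml_text):
    """Return (target_dev, rbd_image) pairs for every RBD-backed disk."""
    root = ET.fromstring(xml_text)
    pairs = []
    for disk in root.iter("disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            pairs.append((disk.find("target").get("dev"), src.get("name")))
    return pairs

print(rbd_disks(domain_xml))
# [('vdb', 'volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9')]
```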
  • Create a floating IP for the VM and log in over SSH:

$ openstack console url show demo1
+-------+-------------------------------------------------------------------------------------+
| Field | Value                                                                               |
+-------+-------------------------------------------------------------------------------------+
| type  | novnc                                                                               |
| url   | http://172.29.101.167:6080/vnc_auto.html?token=9f835216-1c53-41ae-849a-44a85429a334 |
+-------+-------------------------------------------------------------------------------------+
$ openstack floating ip create public1
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-08-27T11:49:02Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address | 192.168.162.52                       |
| floating_network_id | ff69b3ff-c2c4-4474-a7ba-952fa99df919 |
| id                  | 2aa86075-9c62-49f5-84ac-e7b6353c9591 |
| name                | 192.168.162.52                       |
| port_id             | None                                 |
| project_id          | 68ada1726a864e2081a56be0a2dca3a0     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| tags                | []                                   |
| updated_at          | 2018-08-27T11:49:02Z                 |
+---------------------+--------------------------------------+
$ openstack server add floating ip demo1 192.168.162.52
$ openstack server list
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| ID                                   | Name  | Status | Networks                           | Image  | Flavor  |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
| 309f1364-4d58-413d-a865-dfc37ff04308 | demo1 | ACTIVE | demo-net=10.0.0.11, 192.168.162.52 | cirros | m1.tiny |
+--------------------------------------+-------+--------+------------------------------------+--------+---------+
$ ssh root@osdev02
$ ip netns
qrouter-65759e60-6e20-41cc-a79c-fc492232b127 (id: 1)
qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 (id: 0)
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ping 192.168.162.50
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ping 10.0.0.9
(username "cirros", password "gocubsgo")
$ ip netns exec qrouter-65759e60-6e20-41cc-a79c-fc492232b127 ssh cirros@192.168.162.52
$ ip netns exec qdhcp-9aa15b3e-7084-450f-b0a4-7c905e6bb7c0 ssh cirros@10.0.0.11
$ sudo passwd root
Changing password for root
New password: 
Bad password: too weak
Retype password: 
Password for root changed by root
$ su -
Password:
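Because the tenant network only exists inside Neutron's qrouter/qdhcp namespaces, every probe above has to be wrapped in `ip netns exec`. A tiny sketch of a helper that assembles such argv lists (the helper is hypothetical; nothing here actually executes a command):

```python
def netns_cmd(namespace, *command):
    """Build the argv list for running *command* inside a network namespace.

    The result can be passed to subprocess.run() (requires root on the
    network node, since `ip netns exec` enters the namespace).
    """
    return ["ip", "netns", "exec", namespace, *command]


router_ns = "qrouter-65759e60-6e20-41cc-a79c-fc492232b127"
print(netns_cmd(router_ns, "ssh", "cirros@192.168.162.52"))
# ['ip', 'netns', 'exec', 'qrouter-65759e60-6e20-41cc-a79c-fc492232b127', 'ssh', 'cirros@192.168.162.52']
```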
  • Create a filesystem on the new disk, write a test file, and finally unmount it:

# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part 
vdb     253:16   0    1G  0 disk 
# mkfs.ext4 /dev/vdb
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: ede8d366-bfbc-4b9a-9d3f-306104f410d7
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mount /dev/vdb /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
/dev/vdb                975.9M      1.3M    907.4M   0% /mnt
# echo "hello openstack, volume test." > /mnt/ceph_rbd_test
# umount /mnt
# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev                    240.1M         0    240.1M   0% /dev
/dev/vda1               978.9M     23.9M    914.1M   3% /
tmpfs                   244.2M         0    244.2M   0% /dev/shm
tmpfs                   244.2M     92.0K    244.1M   0% /run
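As a quick sanity check, the `mkfs.ext4` figures agree with the 1 GiB size requested from Cinder: 262144 blocks of 4 KiB each is exactly 1 GiB, and the ~976 MiB that `df` reports afterwards is simply what remains once ext4 sets aside space for its own metadata:

```python
# mkfs.ext4 reported 262144 4k blocks for the 1 GiB Cinder volume.
blocks, block_size = 262144, 4096
total_bytes = blocks * block_size

print(total_bytes == 1 * 1024**3)  # True: exactly 1 GiB
print(total_bytes // 2**20)        # 1024 MiB before ext4 metadata overhead
```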

Detaching the Volume

  • Detach the volume and watch the change inside the VM:

$ openstack server remove volume demo1 volume1
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part 
  • Map and mount the RBD volume on the host, then read the file created earlier inside the VM; the contents are identical:

$ rbd showmapped
id pool image    snap device    
0  rbd  rbd_test -    /dev/rbd0 
$ rbd feature disable volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9 object-map fast-diff deep-flatten
$ rbd map volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
/dev/rbd1
$ mkdir /mnt/volume1
$ mount /dev/rbd1 /mnt/volume1/
$ ls /mnt/volume1/
ceph_rbd_test  lost+found/
$ cat /mnt/volume1/ceph_rbd_test 
hello openstack, volume test.
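The `rbd map` above works because Cinder's RBD driver stores each volume as an image named `volume-<volume ID>` in the configured pool. A one-line helper capturing that convention (the function name is mine, and the `volume-` prefix is the driver's default, so customized deployments may differ):

```python
def cinder_rbd_image(volume_id, pool="volumes"):
    """Return the pool/image spec Cinder's RBD driver uses for a volume."""
    return f"{pool}/volume-{volume_id}"


spec = cinder_rbd_image("3ccca300-bee3-4b5a-b89b-32e6b8b806d9")
print(spec)
# volumes/volume-3ccca300-bee3-4b5a-b89b-32e6b8b806d9
```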

Thanks for reading! That concludes this walkthrough of using a Ceph storage backend with Kolla-Ansible. Hopefully it helps you learn something new; if you found it useful, feel free to share it so more people can see it.
