Private Cloud OpenStack Deployment (Liberty on CentOS 7.1)
Environment preparation: two machines (control node and compute node), both running CentOS 7.1.

Control node:
  external NIC: Linux-node0.openstack 192.168.31.151
  internal NIC: Linux-node0.openstack 192.168.1.17
Compute node:
  external NIC: linux-node1.openstack 192.168.31.219
  internal NIC: linux-node1.openstack 192.168.1.8

Disable firewalld and disable SELinux on both nodes.

/etc/hosts  # Set the hostnames correctly from the start and never change them afterwards, otherwise things will break. Map IPs to hostnames here:
192.168.1.17 linux-node0.openstack
192.168.1.8 linux-node1.openstack

Packages on the control node:

# Base repository packages
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient
## MySQL
yum install -y mariadb mariadb-server MySQL-python
## RabbitMQ
yum install -y rabbitmq-server
## Keystone
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
## Glance
yum install -y openstack-glance python-glance python-glanceclient
## Nova
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
## Neutron
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
## Dashboard
yum install -y openstack-dashboard
## Cinder
yum install -y openstack-cinder python-cinderclient

*************************************************************************************
Packages on the compute node:

## Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-openstack-liberty
yum install python-openstackclient
## Nova (linux-node1.openstack)
yum install -y openstack-nova-compute sysfsutils
## Neutron (linux-node1.openstack)
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset
## Cinder
yum install -y openstack-cinder python-cinderclient targetcli python-oslo-policy
*************************************************************************************

Set up time synchronization and make sure SELinux and iptables/firewalld stay off.

Configure linux-node0 (chrony only works on CentOS 7; CentOS 6 still uses ntp):
[root@linux-node0 ~]# yum install -y chrony
vim /etc/chrony.conf
allow 192.168/16            # which hosts are allowed to synchronize time from this server
[root@linux-node0 ~]# systemctl enable chronyd.service   # start at boot
systemctl start chronyd.service
timedatectl set-timezone Asia/Shanghai                   # set the time zone
timedatectl status

Configure linux-node1:
[root@linux-node1 ~]# yum install -y chrony
vim /etc/chrony.conf
server 192.168.1.17 iburst    # keep only this one server line
[root@linux-node1 ~]# systemctl enable chronyd.service
systemctl start chronyd.service
timedatectl set-timezone Asia/Shanghai
chronyc sources

MariaDB on the control node:
[root@linux-node0 ~]# cp /usr/share/mysql/my-medium.cnf /etc/my.cnf    # or /usr/share/mariadb/my-medium.cnf
Add to the [mysqld] section of /etc/my.cnf:
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
[root@linux-node0 ~]# systemctl enable mariadb.service
mysql_install_db --datadir="/var/lib/mysql" --user="mysql"   # initialize the database
systemctl start mariadb.service
mysql_secure_installation    # set the root password (123456 here) and answer y to the prompts
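As a quick sanity check (not in the original walkthrough, and assuming the root password 123456 chosen above), you can confirm that MariaDB is running and picked up the UTF-8 and InnoDB settings:

[root@linux-node0 ~]# systemctl status mariadb.service
[root@linux-node0 ~]# mysql -uroot -p123456 -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'default_storage_engine';"
# expect utf8 and InnoDB respectively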
Create the databases and grants (in the MariaDB client):

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
flush privileges;    # reload the grant tables

RabbitMQ:
[root@linux-node0 ~]# systemctl enable rabbitmq-server.service
[root@linux-node0 ~]# systemctl start rabbitmq-server.service
Create the openstack user and password:
[root@linux-node0 ~]# rabbitmqctl add_user openstack openstack
Creating user "openstack" ...
...done.
Grant the user permissions:
[root@linux-node0 ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
...done.
List the RabbitMQ plugins:
[root@linux-node0 ~]# rabbitmq-plugins list
[ ] amqp_client 3.3.5
[ ] cowboy 0.5.0-rmq3.3.5-git4b93c2d
[ ] eldap 3.3.5-gite309de4
[ ] mochiweb 2.7.0-rmq3.3.5-git680dba8
[ ] rabbitmq_amqp1_0 3.3.5
[ ] rabbitmq_auth_backend_ldap 3.3.5
[ ] rabbitmq_auth_mechanism_ssl 3.3.5
[ ] rabbitmq_consistent_hash_exchange 3.3.5
[ ] rabbitmq_federation 3.3.5
[ ] rabbitmq_federation_management 3.3.5
[ ] rabbitmq_management 3.3.5
[ ] rabbitmq_management_agent 3.3.5
[ ] rabbitmq_management_visualiser 3.3.5
[ ] rabbitmq_mqtt 3.3.5
[ ] rabbitmq_shovel 3.3.5
[ ] rabbitmq_shovel_management 3.3.5
[ ] rabbitmq_stomp 3.3.5
[ ] rabbitmq_test 3.3.5
[ ] rabbitmq_tracing 3.3.5
[ ] rabbitmq_web_dispatch 3.3.5
[ ] rabbitmq_web_stomp 3.3.5
[ ] rabbitmq_web_stomp_examples 3.3.5
[ ] sockjs 0.3.4-rmq3.3.5-git3132eb9
[ ] webmachine 1.10.3-rmq3.3.5-gite9359c7
Enable the management plugin:
[root@linux-node0 ~]# rabbitmq-plugins enable rabbitmq_management
Restart RabbitMQ:
[root@linux-node0 ~]# systemctl restart rabbitmq-server.service
Check the listening ports again. The web management port is 15672; lsof -i:15672 shows which process owns it:
[root@linux-node0 ~]# netstat -lntup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 38649/beam
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 38154/mysqld
Open http://192.168.31.151:15672 and log in with username guest / password guest. Once logged in: Admin -> copy "administrator" -> click the openstack user -> Update this user -> paste administrator into Tags -> set the password to openstack -> log out. Then log back in with username openstack / password openstack.

Keystone. Generate a random admin token:
[root@linux-node0 ~]# openssl rand -hex 10
8097f01ca96d056655cf      # the generated random value
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/keystone/keystone.conf
12:admin_token = 8097f01ca96d056655cf
107:verbose = true
495:connection = mysql://keystone:keystone@192.168.1.17/keystone
1313:servers = 192.168.1.17:11211
1349:driver = sql
1911:provider = uuid
1916:driver = memcache
Sync the database. Because of file permissions, switch to the keystone user with su -s to run it:
[root@linux-node0 ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
No handlers could be found for logger "oslo_config.cfg"
Verify that the tables were created:
[root@linux-node0 ~]# mysql -ukeystone -pkeystone
MariaDB [(none)]> use keystone
Database changed
MariaDB [keystone]> show tables;
[root@linux-node0 ~]# systemctl enable memcached
[root@linux-node0 ~]# systemctl start memcached.service
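For reference, the keystone.conf keys shown in the grep output above live roughly in the following sections on Liberty (a sketch only; the line numbers and section ordering depend on the packaged sample file):

[DEFAULT]
admin_token = 8097f01ca96d056655cf
verbose = true

[database]
connection = mysql://keystone:keystone@192.168.1.17/keystone

[memcache]
servers = 192.168.1.17:11211

[revoke]
driver = sql

[token]
provider = uuid
driver = memcache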
You must set ServerName in httpd.conf, otherwise the keystone service will not start:
[root@linux-node0 ~]# vi /etc/httpd/conf/httpd.conf
ServerName 192.168.1.17:80
[root@linux-node0 ~]# grep -n '^ServerName' /etc/httpd/conf/httpd.conf
95:ServerName 192.168.1.17:80
Create a new keystone config file and let Apache front it. Port 5000 serves the normal API, port 35357 serves the admin API:
[root@linux-node0 ~]# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Start the httpd service (memcached was already enabled and started above):
[root@linux-node0 ~]# systemctl enable httpd
[root@linux-node0 ~]# systemctl start httpd
Check the ports:
[root@linux-node0 ~]# netstat -lntup|grep httpd
tcp6 0 0 :::5000 :::* LISTEN 39324/httpd
tcp6 0 0 :::80 :::* LISTEN 39324/httpd
tcp6 0 0 :::35357 :::* LISTEN 39324/httpd
Create the initial users and endpoints. First export the admin token, the API URL, and the API version:
[root@linux-node0 ~]# grep -n '^admin_token' /etc/keystone/keystone.conf
12:admin_token = 8097f01ca96d056655cf
[root@linux-node0 ~]# export OS_TOKEN=8097f01ca96d056655cf
[root@linux-node0 ~]# export OS_URL=http://192.168.1.17:35357/v3
[root@linux-node0 ~]# export OS_IDENTITY_API_VERSION=3
[root@linux-node0 ~]# env
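Before creating any projects, it can help to confirm that both WSGI endpoints answer (an optional check, not in the original walkthrough; it assumes curl and the system python are available, and the exact JSON depends on the keystone version):

[root@linux-node0 ~]# curl -s http://192.168.1.17:5000/v3 | python -m json.tool
[root@linux-node0 ~]# curl -s http://192.168.1.17:35357/v3 | python -m json.tool
# both should return a small JSON document describing the v3 identity API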
Create the admin project, an admin user (password admin; do not do this in production), and an admin role, then add the admin user to the admin project with the admin role (three admins: project, user, role).
Create the project:
[root@linux-node0 ~]# openstack project create --domain default --description "Admin Project" admin
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Admin Project                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | b5a578cfdb4848dba2b91dd38d1e2b93 |
| is_domain   | False                            |
| name        | admin                            |
| parent_id   | None                             |
+-------------+----------------------------------+
Create the admin user:
[root@linux-node0 ~]# openstack user create --domain default --password-prompt admin
User Password: admin
Repeat User Password: admin
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | ad4f6c3d88a047d6802a05735a03ba8f |
| name      | admin                            |
+-----------+----------------------------------+
Create the admin role:
[root@linux-node0 ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 0b546d54ed7f467fa90f18bb899452d3 |
| name  | admin                            |
+-------+----------------------------------+
Add the admin user to the admin project and grant it the admin role:
[root@linux-node0 ~]# openstack role add --project admin --user admin admin
Create a regular (demo) project, user, and role:
[root@linux-node0 ~]# openstack project create --domain default --description "Demo Project" demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 5f4aaeb328f049ddbfe2717ded103c67 |
| is_domain   | False                            |
| name        | demo                             |
| parent_id   | None                             |
+-------------+----------------------------------+
[root@linux-node0 ~]# openstack user create --domain default --password=demo demo
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 46dc3686bc0a4ea6b8d09505603ccecc |
| name      | demo                             |
+-----------+----------------------------------+
[root@linux-node0 ~]# openstack role create user
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 314a22500bf042ba9a970701e2c39998 |
| name  | user                             |
+-------+----------------------------------+
[root@linux-node0 ~]# openstack role add --project demo --user demo user
Create a service project, used to hold the other services' users:
[root@linux-node0 ~]# openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | de068df7bbad42379c0c6050fa306fbb |
| is_domain   | False                            |
| name        | service                          |
| parent_id   | None                             |
+-------------+----------------------------------+
List the users, roles, and projects that were created:
[root@linux-node0 ~]# openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 46dc3686bc0a4ea6b8d09505603ccecc | demo  |
| ad4f6c3d88a047d6802a05735a03ba8f | admin |
+----------------------------------+-------+
[root@linux-node0 ~]# openstack role list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 0b546d54ed7f467fa90f18bb899452d3 | admin |
| 314a22500bf042ba9a970701e2c39998 | user  |
+----------------------------------+-------+
[root@linux-node0 ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 5f4aaeb328f049ddbfe2717ded103c67 | demo    |
| b5a578cfdb4848dba2b91dd38d1e2b93 | admin   |
| de068df7bbad42379c0c6050fa306fbb | service |
+----------------------------------+---------+
Keystone itself also has to be registered as a service:
[root@linux-node0 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Identity               |
| enabled     | True                             |
| id          | d632e3036b974943978631b9cabcafe0 |
| name        | keystone                         |
| type        | identity                         |
+-------------+----------------------------------+
The public API endpoint:
[root@linux-node0 ~]# openstack endpoint create --region RegionOne identity public http://192.168.1.17:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1a8eb7b97ff64c56886942a38054b9bb |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d632e3036b974943978631b9cabcafe0 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.1.17:5000/v2.0    |
+--------------+----------------------------------+
The internal API endpoint:
[root@linux-node0 ~]# openstack endpoint create --region RegionOne identity internal http://192.168.1.17:5000/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4caf182c26dd457ba86d9974dfb00c1b |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d632e3036b974943978631b9cabcafe0 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.1.17:5000/v2.0    |
+--------------+----------------------------------+
The admin API endpoint:
[root@linux-node0 ~]# openstack endpoint create --region RegionOne identity admin http://192.168.1.17:35357/v2.0
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 34c8185306c340a0bb4efbfc9da21003 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | d632e3036b974943978631b9cabcafe0 |
| service_name | keystone                         |
| service_type | identity                         |
| url          | http://192.168.1.17:35357/v2.0   |
+--------------+----------------------------------+
List the API endpoints:
[root@linux-node0 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                            |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
| 1a8eb7b97ff64c56886942a38054b9bb | RegionOne | keystone     | identity     | True    | public    | http://192.168.1.17:5000/v2.0  |
| 34c8185306c340a0bb4efbfc9da21003 | RegionOne | keystone     | identity     | True    | admin     | http://192.168.1.17:35357/v2.0 |
| 4caf182c26dd457ba86d9974dfb00c1b | RegionOne | keystone     | identity     | True    | internal  | http://192.168.1.17:5000/v2.0  |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------------+
To delete an endpoint: openstack endpoint delete <ID>.
Log in with username and password instead of the token. The token environment variables must be unset first:
[root@linux-node0 ~]# unset OS_TOKEN
[root@linux-node0 ~]# unset OS_URL
[root@linux-node0 ~]# openstack --os-auth-url http://192.168.1.17:35357/v3 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
Password:      # enter the admin password (admin)
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-05-27T05:25:30.193235Z      |
| id         | 4e8c0c1e0f20481d959c977db7f689b6 |
| project_id | b5a578cfdb4848dba2b91dd38d1e2b93 |
| user_id    | ad4f6c3d88a047d6802a05735a03ba8f |
+------------+----------------------------------+
To make the keystone CLI convenient to use, put the credentials into two environment files:
[root@linux-node0 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.1.17:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@linux-node0 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.1.17:5000/v3
export OS_IDENTITY_API_VERSION=3
Make them executable:
[root@linux-node0 ~]# chmod +x admin-openrc.sh demo-openrc.sh
Test fetching a token:
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-05-27T05:30:03.600977Z      |
| id         | 409443b07f5948f2a437443090927621 |
| project_id | b5a578cfdb4848dba2b91dd38d1e2b93 |
| user_id    | ad4f6c3d88a047d6802a05735a03ba8f |
+------------+----------------------------------+
Glance. Add the database connection to glance-api.conf and glance-registry.conf:
[root@linux-node0 ~]# vim /etc/glance/glance-api.conf
[root@linux-node0 ~]# vim /etc/glance/glance-registry.conf
[root@linux-node0 ~]# grep -n '^connection' /etc/glance/glance-api.conf
538:connection=mysql://glance:glance@192.168.1.17/glance
[root@linux-node0 ~]# grep -n '^connection' /etc/glance/glance-registry.conf
363:connection=mysql://glance:glance@192.168.1.17/glance
Sync the database:
[root@linux-node0 ~]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"
Check that the sync worked:
[root@linux-node0 ~]# mysql -uglance -pglance -h 192.168.1.17
MariaDB [(none)]> use glance;
Database changed
MariaDB [glance]> show tables;
Create the glance user:
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# openstack user create --domain default --password=glance glance
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 9009c0e0431646d193744d445411a0ab |
| name      | glance                           |
+-----------+----------------------------------+
Add the user to the service project and give it the admin role:
[root@linux-node0 ~]# openstack role add --project service --user glance admin
[root@linux-node0 ~]# vim /etc/glance/glance-api.conf
[root@linux-node0 ~]# grep -n ^[a-z] /etc/glance/glance-api.conf
363:verbose=True
491:notification_driver = noop
538:connection=mysql://glance:glance@192.168.1.17/glance
642:default_store=file
701:filesystem_store_datadir=/var/lib/glance/images/
974:auth_uri = http://192.168.1.17:5000
975:auth_url = http://192.168.1.17:35357
976:auth_plugin = password
977:project_domain_id = default
978:user_domain_id = default
979:project_name = service
980:username = glance
981:password = glance
1484:flavor= keystone
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/glance/glance-registry.conf
363:connection=mysql://glance:glance@192.168.1.17/glance
767:auth_uri = http://192.168.1.17:5000
768:auth_url = http://192.168.1.17:35357
769:auth_plugin = password
770:project_domain_id = default
771:user_domain_id = default
772:project_name = service
773:username = glance
774:password = glance
1256:flavor=keystone
Start the glance services and enable them at boot:
[root@linux-node0 ~]# systemctl enable openstack-glance-api
[root@linux-node0 ~]# systemctl enable openstack-glance-registry
[root@linux-node0 ~]# systemctl start openstack-glance-api
[root@linux-node0 ~]# systemctl start openstack-glance-registry
Listening ports: registry 9191, api 9292:
[root@linux-node0 ~]# netstat -antup
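As an optional check (not in the original walkthrough), the Glance API answers an unauthenticated version request, which confirms the service is really listening on 9292:

[root@linux-node0 ~]# curl -s http://192.168.1.17:9292/versions | python -m json.tool
# expect a JSON list of supported image API versions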
Register the Image service and its endpoints:
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image service          |
| enabled     | True                             |
| id          | 5ab719816a7f4294a7f843950fcd2e59 |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+
openstack endpoint create --region RegionOne image public http://192.168.1.17:9292
openstack endpoint create --region RegionOne image internal http://192.168.1.17:9292
openstack endpoint create --region RegionOne image admin http://192.168.1.17:9292
[root@linux-node0 ~]# openstack endpoint create --region RegionOne image public http://192.168.1.17:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | a181ddd3ee8b4d72be1a0fda87b542ef |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5ab719816a7f4294a7f843950fcd2e59 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.1.17:9292         |
+--------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne image internal http://192.168.1.17:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 4df72061901c40efa3905e95674fc5bc |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5ab719816a7f4294a7f843950fcd2e59 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.1.17:9292         |
+--------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne image admin http://192.168.1.17:9292
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f755b7c22ab04ea3857840086b7c7754 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 5ab719816a7f4294a7f843950fcd2e59 |
| service_name | glance                           |
| service_type | image                            |
| url          | http://192.168.1.17:9292         |
+--------------+----------------------------------+
Add export OS_IMAGE_API_VERSION=2 to both environment files:
[root@linux-node0 ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://192.168.1.17:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@linux-node0 ~]# cat demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.1.17:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
[root@linux-node0 ~]# glance image-list
Upload an image:
[root@linux-node0 ~]# glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2016-05-27T05:09:36Z                 |
| disk_format      | qcow2                                |
| id               | 07245ea1-5f76-453d-a320-f1b08433a10a |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | b5a578cfdb4848dba2b91dd38d1e2b93     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2016-05-27T05:09:36Z                 |
| virtual_size     | None                                 |
| visibility       | public                               |
+------------------+--------------------------------------+
Check the image list:
[root@linux-node0 ~]# glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 07245ea1-5f76-453d-a320-f1b08433a10a | cirros |
+--------------------------------------+--------+
Nova. Configure nova.conf.
1) Configure the database connection and create the tables:
[root@linux-node0 ~]# grep -n ^[a-z] /etc/nova/nova.conf
1740:connection=mysql://nova:nova@192.168.1.17/nova
Sync the database:
[root@linux-node0 ~]# su -s /bin/sh -c "nova-manage db sync" nova
Check the database:
[root@linux-node0 ~]# mysql -unova -pnova -h 192.168.1.17
MariaDB [(none)]> use nova
Database changed
MariaDB [nova]> show tables;
2) Keystone and RabbitMQ configuration:
[root@linux-node0 ~]# vim /etc/nova/nova.conf
[root@linux-node0 ~]# grep -n ^[a-z] /etc/nova/nova.conf
1420:rpc_backend=rabbit
1740:connection=mysql://nova:nova@192.168.1.17/nova
2922:rabbit_host=192.168.1.17
2926:rabbit_port=5672
2938:rabbit_userid=openstack
2942:rabbit_password=openstack
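For orientation, those keys sit in the following nova.conf sections on Liberty (a sketch; line numbers vary between package builds):

[DEFAULT]
rpc_backend=rabbit

[database]
connection=mysql://nova:nova@192.168.1.17/nova

[oslo_messaging_rabbit]
rabbit_host=192.168.1.17
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack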
Create the nova user and grant it the admin role in the service project:
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# openstack user create --domain default --password=nova nova
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 6b4986f51d7749fd8dc9668d92e21e01 |
| name      | nova                             |
+-----------+----------------------------------+
[root@linux-node0 ~]# openstack role add --project service --user nova admin
Full nova.conf settings on the control node:
[root@linux-node0 nova]# grep -n ^[a-z] nova.conf
61:rpc_backend=rabbit
124:my_ip=192.168.1.17
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1828:vncserver_listen=$my_ip
1832:vncserver_proxyclient_address=$my_ip
2213:connection=mysql://nova:nova@192.168.1.17/nova
2334:host=$my_ip
2542:auth_uri = http://192.168.1.17:5000
2543:auth_url = http://192.168.1.17:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
3033:url = http://192.168.1.17:9696
3034:auth_url = http://192.168.1.17:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3049:service_metadata_proxy=true
3053:metadata_proxy_shared_secret=neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.1.17
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack
Enable the services at boot:
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Start them all:
[root@linux-node0 ~]# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Register the service and endpoints:
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.1.17:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://192.168.1.17:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://192.168.1.17:8774/v2/%\(tenant_id\)s
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 47c979dc1312436fb912b8e8b842f293 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne compute public http://192.168.1.17:8774/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | b42b8696b4e84d0581228f8fef746ce2          |
| interface    | public                                    |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 47c979dc1312436fb912b8e8b842f293          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://192.168.1.17:8774/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne compute internal http://192.168.1.17:8774/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | b54df18a4c23471399858df476a98d5f          |
| interface    | internal                                  |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 47c979dc1312436fb912b8e8b842f293          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://192.168.1.17:8774/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne compute admin http://192.168.1.17:8774/v2/%\(tenant_id\)s
+--------------+-------------------------------------------+
| Field        | Value                                     |
+--------------+-------------------------------------------+
| enabled      | True                                      |
| id           | 71daf94628384f1e8315060f86542696          |
| interface    | admin                                     |
| region       | RegionOne                                 |
| region_id    | RegionOne                                 |
| service_id   | 47c979dc1312436fb912b8e8b842f293          |
| service_name | nova                                      |
| service_type | compute                                   |
| url          | http://192.168.1.17:8774/v2/%(tenant_id)s |
+--------------+-------------------------------------------+
Verify:
[root@linux-node0 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name               | Service     | Zone     |
+-------------------------+-------------+----------+
| control-node0.xiegh.com | conductor   | internal |
| control-node0.xiegh.com | consoleauth | internal |
| control-node0.xiegh.com | scheduler   | internal |
| control-node0.xiegh.com | cert        | internal |
+-------------------------+-------------+----------+
If these four services show up, the nova control services were set up successfully.
Nova on the compute node. nova-compute normally runs on the compute nodes; it receives requests over the message queue and manages the VM life cycle. nova-compute manages KVM through libvirt and Xen through the XenAPI.
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=10.0.0.81
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.1.17:6080/vnc_auto.html
1828:vncserver_listen=0.0.0.0
1832:vncserver_proxyclient_address=10.0.0.81
1835:vnc_enabled=true
1838:vnc_keymap=en-us
2213:connection=mysql://nova:nova@192.168.1.17/nova
2334:host=192.168.1.17
2542:auth_uri = http://192.168.1.17:5000
2543:auth_url = http://192.168.1.17:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
2727:virt_type=kvm
3033:url = http://192.168.1.17:9696
3034:auth_url = http://192.168.1.17:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.1.17
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack
[root@linux-node1 ~]# systemctl enable libvirtd openstack-nova-compute
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-compute.service to /usr/lib/systemd/system/openstack-nova-compute.service
[root@linux-node1 ~]# systemctl start libvirtd openstack-nova-compute
Check the registration status on the control node:
[root@linux-node0 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name               | Service     | Zone     |
+-------------------------+-------------+----------+
| control-node0.xiegh.com | conductor   | internal |
| control-node0.xiegh.com | consoleauth | internal |
| control-node0.xiegh.com | scheduler   | internal |
| control-node0.xiegh.com | cert        | internal |
| linux-node1.xiegh.com   | compute     | nova     |
+-------------------------+-------------+----------+
Nova on the compute node is installed and registered successfully.
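An equivalent check (optional, not in the original text) is nova service-list, which also shows whether each service reports state up:

[root@linux-node0 ~]# nova service-list
# expect nova-conductor, nova-consoleauth, nova-scheduler, and nova-cert on the control node
# plus nova-compute on linux-node1, all with State "up"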
The image is in the active state:
[root@linux-node0 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 07245ea1-5f76-453d-a320-f1b08433a10a | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
Verify the connection between nova and keystone; output like the following means it works:
[root@linux-node0 ~]# nova endpoints
WARNING: keystone has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 1a8eb7b97ff64c56886942a38054b9bb |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:5000/v2.0    |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 34c8185306c340a0bb4efbfc9da21003 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:35357/v2.0   |
+-----------+----------------------------------+
+-----------+----------------------------------+
| keystone  | Value                            |
+-----------+----------------------------------+
| id        | 4caf182c26dd457ba86d9974dfb00c1b |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:5000/v2.0    |
+-----------+----------------------------------+
WARNING: glance has no endpoint in ! Available endpoints for this service:
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | 4df72061901c40efa3905e95674fc5bc |
| interface | internal                         |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:9292         |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | a181ddd3ee8b4d72be1a0fda87b542ef |
| interface | public                           |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:9292         |
+-----------+----------------------------------+
+-----------+----------------------------------+
| glance    | Value                            |
+-----------+----------------------------------+
| id        | f755b7c22ab04ea3857840086b7c7754 |
| interface | admin                            |
| region    | RegionOne                        |
| region_id | RegionOne                        |
| url       | http://192.168.1.17:9292         |
+-----------+----------------------------------+
WARNING: nova has no endpoint in ! Available endpoints for this service:
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | 71daf94628384f1e8315060f86542696                              |
| interface | admin                                                         |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.1.17:8774/v2/b5a578cfdb4848dba2b91dd38d1e2b93  |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | b42b8696b4e84d0581228f8fef746ce2                              |
| interface | public                                                        |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.1.17:8774/v2/b5a578cfdb4848dba2b91dd38d1e2b93  |
+-----------+---------------------------------------------------------------+
+-----------+---------------------------------------------------------------+
| nova      | Value                                                         |
+-----------+---------------------------------------------------------------+
| id        | b54df18a4c23471399858df476a98d5f                              |
| interface | internal                                                      |
| region    | RegionOne                                                     |
| region_id | RegionOne                                                     |
| url       | http://192.168.1.17:8774/v2/b5a578cfdb4848dba2b91dd38d1e2b93  |
+-----------+---------------------------------------------------------------+
Neutron deployment. Register the network service:
source admin-openrc.sh
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://192.168.1.17:9696
openstack endpoint create --region RegionOne network internal http://192.168.1.17:9696
openstack endpoint create --region RegionOne network admin http://192.168.1.17:9696
[root@linux-node0 ~]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | eb5f03d85c774f48940654811a22b581 |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne network public http://192.168.1.17:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f782d738018a4dc5b80931f67f31d974 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eb5f03d85c774f48940654811a22b581 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.1.17:9696         |
+--------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne network internal http://192.168.1.17:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 21565236fb1b4bc8b0c37c040369d7d4 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eb5f03d85c774f48940654811a22b581 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.1.17:9696         |
+--------------+----------------------------------+
[root@linux-node0 ~]# openstack endpoint create --region RegionOne network admin http://192.168.1.17:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | f2c83846242d4443a7cd3f205cf3bb56 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | eb5f03d85c774f48940654811a22b581 |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://192.168.1.17:9696         |
+--------------+----------------------------------+
Neutron configuration on the control node:
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2
77:service_plugins = router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.1.17:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.1.17:5000
718:auth_url = http://192.168.1.17:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:neutron@192.168.1.17:3306/neutron
780:auth_url = http://192.168.1.17:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.1.17
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini
5:type_drivers = flat,vlan,gre,vxlan,geneve
12:tenant_network_types = vlan,gre,vxlan,geneve
18:mechanism_drivers = openvswitch,linuxbridge
27:extension_drivers = port_security
67:flat_networks = physnet1
120:enable_ipset = True
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0
16:enable_vxlan = false
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/neutron/dhcp_agent.ini
27:interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
31:dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
52:enable_isolated_metadata = true
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/neutron/metadata_agent.ini
4:auth_uri = http://192.168.1.17:5000
5:auth_url = http://192.168.1.17:35357
6:auth_region = RegionOne
7:auth_plugin = password
8:project_domain_id = default
9:user_domain_id = default
10:project_name = service
11:username = neutron
12:password = neutron
29:nova_metadata_ip = 192.168.1.17
52:metadata_proxy_shared_secret = neutron
[root@linux-node0 ~]# grep -n '^[a-z]' /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=192.168.1.17
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1828:vncserver_listen=$my_ip
1832:vncserver_proxyclient_address=$my_ip
2213:connection=mysql://nova:nova@192.168.1.17/nova
2334:host=$my_ip
2542:auth_uri = http://192.168.1.17:5000
2543:auth_url = http://192.168.1.17:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
3033:url = http://192.168.1.17:9696
3034:auth_url = http://192.168.1.17:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3049:service_metadata_proxy=true
3053:metadata_proxy_shared_secret=neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.1.17
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack
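The neutron-related keys in the nova.conf grep above (lines 3033-3053 in this build) all belong to the [neutron] section; as a sketch:

[neutron]
url = http://192.168.1.17:9696
auth_url = http://192.168.1.17:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron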
Link the ML2 plugin config and create the neutron user:
[root@linux-node0 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node0 ~]# openstack user create --domain default --password=neutron neutron
+-----------+----------------------------------+
| Field     | Value                            |
+-----------+----------------------------------+
| domain_id | default                          |
| enabled   | True                             |
| id        | 85c411a092354b29b58c7505a8905824 |
| name      | neutron                          |
+-----------+----------------------------------+
[root@linux-node0 ~]# openstack role add --project service --user neutron admin
Update the database:
[root@linux-node0 ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova API service:
[root@linux-node0 ~]# systemctl restart openstack-nova-api
Enable the neutron services at boot and start them:
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Execution result:
[root@linux-node0 ~]# systemctl restart openstack-nova-api
[root@linux-node0 ~]# systemctl enable neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
ln -s '/usr/lib/systemd/system/neutron-server.service' '/etc/systemd/system/multi-user.target.wants/neutron-server.service'
ln -s '/usr/lib/systemd/system/neutron-linuxbridge-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service'
ln -s '/usr/lib/systemd/system/neutron-dhcp-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service'
ln -s '/usr/lib/systemd/system/neutron-metadata-agent.service' '/etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service'
[root@linux-node0 ~]# systemctl restart neutron-server.service \
> neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
> neutron-metadata-agent.service
Check the agents:
[root@linux-node0 ~]# source admin-openrc.sh
[root@linux-node0 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| 4de08ae7-5699-47ea-986b-7c855d7eb7bd | Linux bridge agent | control-node0.xiegh.com | :-)   | True           | neutron-linuxbridge-agent |
| adf5abfc-2a74-4baa-b4cd-da7f7f05a378 | Metadata agent     | control-node0.xiegh.com | :-)   | True           | neutron-metadata-agent    |
| c1562203-c8ff-4189-a59b-bcf480ca70c1 | DHCP agent         | control-node0.xiegh.com | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
Copy the control node's configuration files to /etc/neutron/ on the compute node:
[root@linux-node0 ~]# scp -r /etc/neutron/neutron.conf 10.0.0.81:/etc/neutron/
[root@linux-node0 ~]# scp -r /etc/neutron/plugins/ml2/linuxbridge_agent.ini 10.0.0.81:/etc/neutron/plugins/ml2/
[root@linux-node0 ~]# scp -r /etc/neutron/plugins/ml2/ml2_conf.ini 10.0.0.81:/etc/neutron/plugins/ml2/
nova.conf on the compute node was already handled earlier, so it is not copied again here.
Configuration on the compute node:
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/neutron.conf
20:state_path = /var/lib/neutron
60:core_plugin = ml2
77:service_plugins = router
92:auth_strategy = keystone
360:notify_nova_on_port_status_changes = True
364:notify_nova_on_port_data_changes = True
367:nova_url = http://192.168.1.17:8774/v2
573:rpc_backend=rabbit
717:auth_uri = http://192.168.1.17:5000
718:auth_url = http://192.168.1.17:35357
719:auth_plugin = password
720:project_domain_id = default
721:user_domain_id = default
722:project_name = service
723:username = neutron
724:password = neutron
737:connection = mysql://neutron:neutron@192.168.1.17:3306/neutron
780:auth_url = http://192.168.1.17:35357
781:auth_plugin = password
782:project_domain_id = default
783:user_domain_id = default
784:region_name = RegionOne
785:project_name = service
786:username = nova
787:password = nova
818:lock_path = $state_path/lock
998:rabbit_host = 192.168.1.17
1002:rabbit_port = 5672
1014:rabbit_userid = openstack
1018:rabbit_password = openstack
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
9:physical_interface_mappings = physnet1:eth0
16:enable_vxlan = false
51:prevent_arp_spoofing = True
57:firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
61:enable_security_group = True
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/neutron/plugins/ml2/ml2_conf.ini
5:type_drivers = flat,vlan,gre,vxlan,geneve
12:tenant_network_types = vlan,gre,vxlan,geneve
18:mechanism_drivers = openvswitch,linuxbridge
27:extension_drivers = port_security
67:flat_networks = physnet1
120:enable_ipset = True
[root@linux-node1 ~]# grep -n '^[a-z]' /etc/nova/nova.conf
61:rpc_backend=rabbit
124:my_ip=10.0.0.81
268:enabled_apis=osapi_compute,metadata
425:auth_strategy=keystone
1053:network_api_class=nova.network.neutronv2.api.API
1171:linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
1331:security_group_api=neutron
1760:firewall_driver = nova.virt.firewall.NoopFirewallDriver
1820:novncproxy_base_url=http://192.168.1.17:6080/vnc_auto.html
1828:vncserver_listen=0.0.0.0
1832:vncserver_proxyclient_address=10.0.0.81
1835:vnc_enabled=true
1838:vnc_keymap=en-us
2213:connection=mysql://nova:nova@192.168.1.17/nova
2334:host=192.168.1.17
2542:auth_uri = http://192.168.1.17:5000
2543:auth_url = http://192.168.1.17:35357
2544:auth_plugin = password
2545:project_domain_id = default
2546:user_domain_id = default
2547:project_name = service
2548:username = nova
2549:password = nova
2727:virt_type=kvm
3033:url = http://192.168.1.17:9696
3034:auth_url = http://192.168.1.17:35357
3035:auth_plugin = password
3036:project_domain_id = default
3037:user_domain_id = default
3038:region_name = RegionOne
3039:project_name = service
3040:username = neutron
3041:password = neutron
3804:lock_path=/var/lib/nova/tmp
3967:rabbit_host=192.168.1.17
3971:rabbit_port=5672
3983:rabbit_userid=openstack
3987:rabbit_password=openstack
[root@linux-node1 ~]# systemctl restart openstack-nova-compute
[root@linux-node1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@linux-node1 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@linux-node1 ~]# systemctl restart neutron-linuxbridge-agent.service
Issue encountered: the control node could not see the compute node's neutron-linuxbridge-agent; restarting the agent on the compute node brought it back to normal.
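When that happens, it can also help to look at the agent log on the compute node and make sure it can reach RabbitMQ on the controller (the log path below is the usual one for RDO packages and may differ on other builds):

[root@linux-node1 ~]# tail -n 50 /var/log/neutron/linuxbridge-agent.log
# and confirm TCP port 5672 on 192.168.1.17 is reachable from the compute node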
At first only the control node's agents were listed:
[root@linux-node0 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| 4de08ae7-5699-47ea-986b-7c855d7eb7bd | Linux bridge agent | control-node0.xiegh.com | :-)   | True           | neutron-linuxbridge-agent |
| adf5abfc-2a74-4baa-b4cd-da7f7f05a378 | Metadata agent     | control-node0.xiegh.com | :-)   | True           | neutron-metadata-agent    |
| c1562203-c8ff-4189-a59b-bcf480ca70c1 | DHCP agent         | control-node0.xiegh.com | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
Check on the control node again:
[root@linux-node0 ~]# neutron agent-list
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                    | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
| 4de08ae7-5699-47ea-986b-7c855d7eb7bd | Linux bridge agent | control-node0.xiegh.com | :-)   | True           | neutron-linuxbridge-agent |
| a7b2c76e-2c9e-42a3-89ac-725716a0c370 | Linux bridge agent | linux-node1.xiegh.com   | :-)   | True           | neutron-linuxbridge-agent |
| adf5abfc-2a74-4baa-b4cd-da7f7f05a378 | Metadata agent     | control-node0.xiegh.com | :-)   | True           | neutron-metadata-agent    |
| c1562203-c8ff-4189-a59b-bcf480ca70c1 | DHCP agent         | control-node0.xiegh.com | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-------------------------+-------+----------------+---------------------------+
This shows the compute node's Linux bridge agent has successfully connected to the control node.
Create a network:
neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
[root@linux-node0 ~]# neutron net-create flat --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 516b5a4d-7fa5-43ae-8328-965c5e0e21d7 |
| mtu                       | 0                                    |
| name                      | flat                                 |
| port_security_enabled     | True                                 |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b5a578cfdb4848dba2b91dd38d1e2b93     |
+---------------------------+--------------------------------------+
Create a subnet:
neutron subnet-create flat 10.0.0.0/24 --name flat-subnet --allocation-pool start=10.0.0.100,end=10.0.0.200 --dns-nameserver 10.0.0.2 --gateway 10.0.0.2
[root@linux-node0 ~]# neutron subnet-create flat 10.0.0.0/24 --name flat-subnet --allocation-pool start=10.0.0.100,end=10.0.0.200 --dns-nameserver 10.0.0.2 --gateway 10.0.0.2
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "10.0.0.100", "end": "10.0.0.200"} |
| cidr              | 10.0.0.0/24                                  |
| dns_nameservers   | 10.0.0.2                                     |
| enable_dhcp       | True                                         |
| gateway_ip        | 10.0.0.2                                     |
| host_routes       |                                              |
| id                | 64ba9f36-3e3e-4988-a863-876759ad43c3         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | flat-subnet                                  |
| network_id        | 516b5a4d-7fa5-43ae-8328-965c5e0e21d7         |
| subnetpool_id     |                                              |
| tenant_id         | b5a578cfdb4848dba2b91dd38d1e2b93             |
+-------------------+----------------------------------------------+
View the network and subnet:
[root@linux-node0 ~]# neutron subnet-list
+--------------------------------------+-------------+-------------+----------------------------------------------+
| id                                   | name        | cidr        | allocation_pools                             |
+--------------------------------------+-------------+-------------+----------------------------------------------+
| 64ba9f36-3e3e-4988-a863-876759ad43c3 | flat-subnet | 10.0.0.0/24 | {"start": "10.0.0.100", "end": "10.0.0.200"} |
+--------------------------------------+-------------+-------------+----------------------------------------------+
Switch to the demo user and create a key pair:
[root@linux-node0 ~]# source demo-openrc.sh
[root@linux-node0 ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@linux-node0 ~]# ls .ssh/
id_rsa id_rsa.pub known_hosts
[root@linux-node0 ~]# nova keypair-add --pub-key .ssh/id_rsa.pub mykey
[root@linux-node0 ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name  | Fingerprint                                     |
+-------+-------------------------------------------------+
| mykey | ce:ad:3c:51:2a:db:dc:4c:d1:a5:22:e6:20:53:cf:65 |
+-------+-------------------------------------------------+
Open ICMP and SSH in the default security group:
[root@linux-node0 ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node0 ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@linux-node0 ~]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@linux-node0 ~]# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| 07245ea1-5f76-453d-a320-f1b08433a10a | cirros | ACTIVE |        |
+--------------------------------------+--------+--------+--------+
[root@linux-node0 ~]# neutron net-list
+--------------------------------------+------+--------------------------------------------------+
| id                                   | name | subnets                                          |
+--------------------------------------+------+--------------------------------------------------+
| 516b5a4d-7fa5-43ae-8328-965c5e0e21d7 | flat | 64ba9f36-3e3e-4988-a863-876759ad43c3 10.0.0.0/24 |
+--------------------------------------+------+--------------------------------------------------+
[root@linux-node0 ~]# nova secgroup-list
+--------------------------------------+---------+-------------------------+
| Id                                   | Name    | Description             |
+--------------------------------------+---------+-------------------------+
| ba83d14c-2516-427b-8e88-89a49270b8d7 | default | Default security group  |
+--------------------------------------+---------+-------------------------+
Boot an instance:
nova boot --flavor m1.tiny --image cirros --nic net-id=516b5a4d-7fa5-43ae-8328-965c5e0e21d7 --security-group default --key-name mykey hehe-instance
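The walkthrough stops at the boot command. A reasonable way to follow up (standard novaclient commands of that era, using the names created above) is to watch the instance reach ACTIVE and then grab its console address:

[root@linux-node0 ~]# nova list
# wait for hehe-instance to show Status ACTIVE and an address from 10.0.0.100-10.0.0.200
[root@linux-node0 ~]# nova get-vnc-console hehe-instance novnc
# the returned novnc URL points at http://192.168.1.17:6080/vnc_auto.html, matching novncproxy_base_url configured earlier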