Service Orchestration with Docker CE on Linux
Overview
Deploying a distributed application means handling each logical tier and the relationships between tiers: the front-end proxy, web applications, message queues, caches, databases, and so on. Containerized deployment introduces the concept of service orchestration for this purpose: centralized control over container life cycles and runtime parameters, including but not limited to:
Container deployment
Resource control
Load balancing
Health checks
Application configuration
Scaling
Relocation
Docker CE natively provides two such mechanisms, compose and stack. Both orchestrate services from container runtime parameters defined in a configuration file, which may be written in YAML or JSON. This article walks through both mechanisms using a front-end proxy (nginx) plus a web application (tomcat) as the example.
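As a minimal, hypothetical illustration of what such a configuration file looks like (the service name, image, and port values below are placeholders, not the ones used in this article's example), a compose file can be as small as:

```yaml
# Hypothetical minimal compose file: a single service built from a
# public image, publishing container port 80 on host port 8080.
version: '3.7'
services:
  web:
    image: nginx:latest
    ports:
      - '8080:80/tcp'
```

Everything that follows in this article is an elaboration of this same structure: services, their images, mounts, commands, networks, and deployment parameters.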
Example
Environment
Two hosts: docker_host_0 (192.168.9.168/24) and docker_host_1 (192.168.9.169/24), with identical system and software environments: fresh minimal installs, a single physical NIC, CentOS Linux release 7.6.1810 (Core) with kernel 3.10.0-957.12.2.el7.x86_64, and SELinux and the firewall disabled.
Docker is a default installation, version 18.09.6, with no additional settings.
The base image is the latest official CentOS 7 image.
The tomcat and JDK environments, as well as nginx's configuration files and logs, are mounted into the containers as directories.
The source tarballs jdk-8u212-linux-x64.tar.gz and apache-tomcat-8.5.40.tar.gz are located in /opt/ on the hosts.
The nginx used here is Tengine, compiled and installed from source.
The compose approach
Installing docker-compose
All docker-compose releases are available at https://github.com/docker/compose/releases/ ; this article uses 1.24.0.
Download the release binary:
[root@docker_host_0 ~]# ip addr show eth0 | sed -n '/inet /p' | awk '{print $2}'
192.168.9.168/24
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# uname -r
3.10.0-957.12.2.el7.x86_64
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# docker -v
Docker version 18.09.6, build 481bc77156
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0    540      0 --:--:--  0:00:01 --:--:--   541
100 15.4M  100 15.4M    0     0   261k       0  0:01:00  0:01:00 --:--:--  836k
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# ll /usr/local/bin/docker-compose
-rw-r--r-- 1 root root 16154160 May 30 23:23 /usr/local/bin/docker-compose
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# chmod u+x /usr/local/bin/docker-compose
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# which docker-compose
/usr/local/bin/docker-compose
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# docker-compose version
docker-compose version 1.24.0, build 0aa59064
docker-py version: 3.7.2
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018
[root@docker_host_0 ~]#
Add bash command-line completion for docker-compose:
[root@docker_host_0 ~]# curl -L https://raw.githubusercontent.com/docker/compose/1.24.0/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 13258  100 13258    0     0  14985      0 --:--:-- --:--:-- --:--:-- 14980
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# source /etc/bash_completion.d/docker-compose
[root@docker_host_0 ~]#
[root@docker_host_0 ~]# docker-compose
build    create   exec     kill     port     push     run      stop     up
bundle   down     help     logs     ps       restart  scale    top      version
config   events   images   pause    pull     rm       start    unpause
[root@docker_host_0 ~]#
Deploying the service
Create the source paths for the directory mounts:
In this example, the host-side mount paths for the tomcat and JDK environments are /opt/apps/app_0/source and /opt/jdks respectively.
The pattern field in server.xml sets the default access log format. It is changed to %A:%{local}p %a:%{remote}p, i.e. local IP:port followed by remote IP:port, to make the origin of each request distinguishable.
[root@docker_host_0 ~]# cd /opt/
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# ls
apache-tomcat-8.5.40.tar.gz  containerd  jdk-8u212-linux-x64.tar.gz
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# mkdir -p /opt/{apps/app_0/source,jdks}
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# tar axf apache-tomcat-8.5.40.tar.gz --strip-components=1 -C apps/app_0/source/
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# sed -i 's/pattern="%h %l %u %t/pattern="%A:%{local}p %a:%{remote}p %t/' apps/app_0/source/conf/server.xml
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# tar axf jdk-8u212-linux-x64.tar.gz -C jdks/
[root@docker_host_0 opt]#
Edit the dockerfile:
[root@docker_host_0 opt]# vi dockerfile-for-nginx
FROM centos:latest
ARG tmp_dir='/tmp'
ARG repo_key='http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7'
ARG repo_src='http://mirrors.163.com/.help/CentOS7-Base-163.repo'
ARG repo_dst='/etc/yum.repos.d/CentOS-Base.repo'
ARG tengine_ver='2.3.0'
ARG tengine_src="http://tengine.taobao.org/download/tengine-${tengine_ver}.tar.gz"
ARG tengine_dst="tengine-${tengine_ver}.tar.gz"
ARG tengine_cfg_opts='--prefix=/usr/local/nginx \
    --with-http_gzip_static_module \
    --with-http_stub_status_module \
    --with-http_ssl_module \
    --with-http_slice_module \
    --with-pcre'
ARG depend_rpms='gcc make openssl-devel pcre-devel'
RUN cd ${tmp_dir} \
    && cp -a ${repo_dst} ${repo_dst}.ori \
    && curl -L ${repo_src} -o ${repo_dst} \
    && curl -L ${tengine_src} -o ${tengine_dst} \
    && rpm --import ${repo_key} \
    && yum -y update --downloadonly --downloaddir=. \
    && yum -y install ${depend_rpms} --downloadonly --downloaddir=. \
    && yum -y install ./*.rpm \
    && useradd www -s /sbin/nologin \
    && tar axf ${tengine_dst} \
    && cd tengine-${tengine_ver} \
    && ./configure ${tengine_cfg_opts} \
    && make \
    && make install \
    && cd \
    && yum -y remove gcc make cpp \
    && yum clean all \
    && rm -rf ${tmp_dir}/*
EXPOSE 80/tcp 443/tcp
ENV PATH ${PATH}:/usr/local/nginx/sbin
CMD nginx -g "daemon off;"
Edit the orchestration configuration file:
YAML configuration files in docker conventionally use the yml or yaml suffix (a convention, not a requirement).
This example defines two services, named webapp and proxy:
The webapp service runs the centos:latest image (image), mounts volumes/directories (volumes), and specifies environment variables (environment), the working directory (working_dir), the command run inside the container (command), and a restart policy (restart) of restarting when the command fails (on-failure).
The proxy service runs the tengine_nginx:2.3.0 image and depends on the containers of the webapp service being started first (depends_on). The dockerfile used to build (build) the image is dockerfile-for-nginx, and the container's port 80 is published as port 80 on the host (ports).
Network parameters can be set in the configuration file through the top-level networks directive; if it is omitted, defaults apply. Containers attached to the same network can reach one another on all ports. This example uses the default network settings, so tomcat's default port 8080 is reachable by nginx and access to the web service is forwarded through nginx; explicitly exposing or publishing ports (expose/ports) for webapp is therefore optional.
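For reference, if explicit network settings were wanted instead of the defaults, a top-level networks block could be added to the compose file along these lines (a sketch only; the bridge driver shown is already the default, and the subnet value is an arbitrary assumption, not one used in this article):

```yaml
# Hypothetical explicit definition of the default compose network
# (the subnet here is illustrative and not used elsewhere in this article).
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16
```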
[root@docker_host_0 opt]# vi tomcat-with-nginx-compose.yml
version: '3.7'
services:
  webapp:
    image: centos:latest
    volumes:
      - /opt/jdks/jdk1.8.0_212:/opt/jdks/jdk1.8.0_212:ro
      - /opt/apps/app_0/source:/opt/apps/app_0
    environment:
      JAVA_HOME: /opt/jdks/jdk1.8.0_212
    working_dir: /opt/apps/app_0
    command: bin/catalina.sh run
    restart: on-failure
  proxy:
    build:
      context: .
      dockerfile: dockerfile-for-nginx
    depends_on:
      - webapp
    image: tengine_nginx:2.3.0
    volumes:
      - /opt/apps/app_0/nginx/conf:/usr/local/nginx/conf:ro
      - /opt/apps/app_0/nginx/logs:/usr/local/nginx/logs
    restart: on-failure
    ports:
      - '80:80/tcp'
Check the orchestration configuration file:
The docker-compose config command validates the syntax and directives of the configuration file and prints its full contents; with -q/--quiet it only validates, without printing.
docker-compose looks for a configuration file named docker-compose.yml or docker-compose.yaml by default; the -f option specifies a custom file.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml config -q
[root@docker_host_0 opt]#
Build the image:
The docker-compose build command builds images according to the parameters defined under services.<service>.build in the configuration file; if no build directive is specified, this step is skipped.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml build
...
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker image ls
REPOSITORY      TAG     IMAGE ID      CREATED         SIZE
tengine_nginx   2.3.0   9404e1b71b70  32 seconds ago  340MB
centos          latest  9f38484d220f  2 months ago    202MB
[root@docker_host_0 opt]#
Run nginx in non-daemon mode (nginx -g "daemon off;") for 60 seconds and copy out the files nginx needs:
[root@docker_host_0 opt]# docker run -dit --rm --name t_nginx tengine_nginx:2.3.0 bash -c 'timeout 60 nginx -g "daemon off;"'
3cc8de88de3fe295657fde08552165e69514c368689e2078ec89771e23cb16e8
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker container ls -a
CONTAINER ID  IMAGE                COMMAND                 CREATED        STATUS        PORTS            NAMES
3cc8de88de3f  tengine_nginx:2.3.0  "bash -c 'timeout 60…"  7 seconds ago  Up 6 seconds  80/tcp, 443/tcp  t_nginx
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker exec -it t_nginx ls -l /usr/local/nginx
total 0
drwx------ 2 nobody root   6 May 30 23:39 client_body_temp
drwxr-xr-x 2 root   root 333 May 30 23:37 conf
drwx------ 2 nobody root   6 May 30 23:39 fastcgi_temp
drwxr-xr-x 2 root   root  40 May 30 23:37 html
drwxr-xr-x 1 root   root  58 May 30 23:39 logs
drwx------ 2 nobody root   6 May 30 23:39 proxy_temp
drwxr-xr-x 2 root   root  19 May 30 23:37 sbin
drwx------ 2 nobody root   6 May 30 23:39 scgi_temp
drwx------ 2 nobody root   6 May 30 23:39 uwsgi_temp
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker cp t_nginx:/usr/local/nginx/ /opt/apps/app_0/
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# ll /opt/apps/app_0/nginx/
total 0
drwx------ 2 root root   6 May 30 23:39 client_body_temp
drwxr-xr-x 2 root root 333 May 30 23:37 conf
drwx------ 2 root root   6 May 30 23:39 fastcgi_temp
drwxr-xr-x 2 root root  40 May 30 23:37 html
drwxr-xr-x 2 root root  58 May 30 23:39 logs
drwx------ 2 root root   6 May 30 23:39 proxy_temp
drwxr-xr-x 2 root root  19 May 30 23:37 sbin
drwx------ 2 root root   6 May 30 23:39 scgi_temp
drwx------ 2 root root   6 May 30 23:39 uwsgi_temp
[root@docker_host_0 opt]#
Edit the nginx configuration file:
Docker implements service discovery internally, automatically providing name resolution for containers attached to the same network. In this example, once the webapp service has started, it is resolvable by the proxy service's nginx, so the service name or an alias can be used as the argument to proxy_pass or upstream in the nginx configuration.
user www www;
worker_processes auto;
pid logs/nginx.pid;
error_log logs/error.log warn;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 4096;
}
http {
    include mime.types;
    default_type application/octet-stream;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 16k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;
    access_log off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 30;
    proxy_cache_methods POST GET HEAD;
    open_file_cache max=655350 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 8 8k;
    gzip_http_version 1.0;
    gzip_comp_level 4;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php;
    gzip_vary on;
    server_tokens off;
    log_format main '$remote_addr\t$upstream_addr\t[$time_local]\t$request\t'
                    '$status\t$body_bytes_sent\t$http_user_agent\t$http_referer\t'
                    '$http_x_forwarded_for\t$request_time\t$upstream_response_time\t$remote_user\t'
                    '$request_body';
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    upstream tomcat-app-0 {
        server webapp:8080;
    }
    server {
        listen 80;
        server_name 127.0.0.1;
        charset utf-8;
        client_max_body_size 75M;
        location / {
            proxy_pass http://tomcat-app-0;
        }
        access_log logs/webapp-access.log main;
    }
}
Test the nginx configuration file:
[root@docker_host_0 opt]# docker run -it --rm --mount type=bind,src=/opt/apps/app_0/nginx/conf,dst=/usr/local/nginx/conf,ro --add-host webapp:127.0.0.1 tengine_nginx:2.3.0 bash -c 'nginx -t'
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@docker_host_0 opt]#
Start the services:
The docker-compose up command starts the services:
By default it starts all services defined in the configuration file; service names can be given explicitly to start specific services. If an image named in the configuration file does not exist, a build is performed first.
-d/--detach runs the containers in the background, equivalent to the -d/--detach option of docker run.
--scale sets the number of containers for a service, in the form service=count.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml up -d --scale webapp=3
Creating network "opt_default" with the default driver
Creating opt_webapp_1 ... done
Creating opt_webapp_2 ... done
Creating opt_webapp_3 ... done
Creating opt_proxy_1  ... done
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker container ls -a
CONTAINER ID  IMAGE                COMMAND                 CREATED         STATUS         PORTS                        NAMES
6b55fe98a99c  tengine_nginx:2.3.0  "/bin/sh -c 'nginx -…"  10 seconds ago  Up 9 seconds   0.0.0.0:80->80/tcp, 443/tcp  opt_proxy_1
0617d640c60a  centos:latest        "bin/catalina.sh run"   11 seconds ago  Up 9 seconds                                opt_webapp_2
c85f2de181cd  centos:latest        "bin/catalina.sh run"   11 seconds ago  Up 10 seconds                               opt_webapp_3
2517e03f11c9  centos:latest        "bin/catalina.sh run"   11 seconds ago  Up 10 seconds                               opt_webapp_1
[root@docker_host_0 opt]#
docker-compose creates a bridge-mode network by default:
[root@docker_host_0 opt]# docker network ls
NETWORK ID    NAME         DRIVER  SCOPE
cb90714e47b3  bridge       bridge  local
a019d8b63640  host         host    local
bb7095896ade  none         null    local
80ce8533b964  opt_default  bridge  local
[root@docker_host_0 opt]#
Inspect the processes running inside the containers:
docker-compose top accepts service names to show the processes of specific services only.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml top
opt_proxy_1
UID   PID    PPID   C  STIME  TTY  TIME      CMD
----------------------------------------------------------------------------------------------
root  13674  13657  0  00:28  ?    00:00:00  nginx: master process nginx -g daemon off;
1000  13738  13674  0  00:28  ?    00:00:00  nginx: worker process
1000  13739  13674  0  00:28  ?    00:00:00  nginx: worker process

opt_webapp_1
UID   PID    PPID   C  STIME  TTY  TIME      CMD
-------------------------------------------------------------------------------------------------
root  13367  13342  1  00:28  ?    00:00:02  /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.logging.config.file=/opt/apps/app_0/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /opt/apps/app_0/bin/bootstrap.jar:/opt/apps/app_0/bin/tomcat-juli.jar -Dcatalina.base=/opt/apps/app_0 -Dcatalina.home=/opt/apps/app_0 -Djava.io.tmpdir=/opt/apps/app_0/temp org.apache.catalina.startup.Bootstrap start

opt_webapp_2
UID   PID    PPID   C  STIME  TTY  TIME      CMD
-------------------------------------------------------------------------------------------------
root  13436  13388  1  00:28  ?    00:00:02  /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.logging.config.file=/opt/apps/app_0/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /opt/apps/app_0/bin/bootstrap.jar:/opt/apps/app_0/bin/tomcat-juli.jar -Dcatalina.base=/opt/apps/app_0 -Dcatalina.home=/opt/apps/app_0 -Djava.io.tmpdir=/opt/apps/app_0/temp org.apache.catalina.startup.Bootstrap start

opt_webapp_3
UID   PID    PPID   C  STIME  TTY  TIME      CMD
-------------------------------------------------------------------------------------------------
root  13425  13397  1  00:28  ?    00:00:02  /opt/jdks/jdk1.8.0_212/bin/java -Djava.util.logging.config.file=/opt/apps/app_0/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /opt/apps/app_0/bin/bootstrap.jar:/opt/apps/app_0/bin/tomcat-juli.jar -Dcatalina.base=/opt/apps/app_0 -Dcatalina.home=/opt/apps/app_0 -Djava.io.tmpdir=/opt/apps/app_0/temp org.apache.catalina.startup.Bootstrap start
[root@docker_host_0 opt]#
Access the web service; requests are dispatched across the containers in the service:
[root@docker_host_0 opt]# ss -atn | grep 80
LISTEN 0 128 :::80 :::*
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# for i in $(seq 6); do curl -s 127.0.0.1 -o /dev/null; done
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# cat /opt/apps/app_0/source/logs/localhost_access_log.$(date +%F).txt
172.20.0.3:80 172.20.0.5:42430 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
172.20.0.3:80 172.20.0.5:42436 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
172.20.0.4:80 172.20.0.5:45098 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
172.20.0.4:80 172.20.0.5:45122 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
172.20.0.2:80 172.20.0.5:59294 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
172.20.0.2:80 172.20.0.5:59306 [31/May/2019:00:32:16 +0000] "GET / HTTP/1.0" 200 11184
[root@docker_host_0 opt]#
Scale up the number of containers in a service:
When docker-compose up is run again against a running service, --scale dynamically increases or decreases the number of containers for that service.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml up -d --scale webapp=6
Starting opt_webapp_1 ... done
Starting opt_webapp_2 ... done
Starting opt_webapp_3 ... done
Creating opt_webapp_4 ... done
Creating opt_webapp_5 ... done
Creating opt_webapp_6 ... done
opt_proxy_1 is up-to-date
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker container ls -a
CONTAINER ID  IMAGE                COMMAND                 CREATED        STATUS        PORTS                        NAMES
b9fc74985a13  centos:latest        "bin/catalina.sh run"   9 seconds ago  Up 7 seconds                               opt_webapp_4
29e9837c7b4d  centos:latest        "bin/catalina.sh run"   9 seconds ago  Up 7 seconds                               opt_webapp_5
5e0a0611bb2f  centos:latest        "bin/catalina.sh run"   9 seconds ago  Up 8 seconds                               opt_webapp_6
6b55fe98a99c  tengine_nginx:2.3.0  "/bin/sh -c 'nginx -…"  3 minutes ago  Up 3 minutes  0.0.0.0:80->80/tcp, 443/tcp  opt_proxy_1
0617d640c60a  centos:latest        "bin/catalina.sh run"   3 minutes ago  Up 3 minutes                               opt_webapp_2
c85f2de181cd  centos:latest        "bin/catalina.sh run"   3 minutes ago  Up 3 minutes                               opt_webapp_3
2517e03f11c9  centos:latest        "bin/catalina.sh run"   3 minutes ago  Up 3 minutes                               opt_webapp_1
[root@docker_host_0 opt]#
Remove the services:
The docker-compose down command removes the services: it stops and removes the containers and networks associated with them. The --rmi and -v/--volumes options additionally remove the associated images and volumes.
[root@docker_host_0 opt]# docker-compose -f tomcat-with-nginx-compose.yml down
Stopping opt_webapp_4 ... done
Stopping opt_webapp_5 ... done
Stopping opt_webapp_6 ... done
Stopping opt_proxy_1  ... done
Stopping opt_webapp_2 ... done
Stopping opt_webapp_3 ... done
Stopping opt_webapp_1 ... done
Removing opt_webapp_4 ... done
Removing opt_webapp_5 ... done
Removing opt_webapp_6 ... done
Removing opt_proxy_1  ... done
Removing opt_webapp_2 ... done
Removing opt_webapp_3 ... done
Removing opt_webapp_1 ... done
Removing network opt_default
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker container ls -a
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# ss -atn | grep 80
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker network ls
NETWORK ID    NAME    DRIVER  SCOPE
cb90714e47b3  bridge  bridge  local
a019d8b63640  host    host    local
bb7095896ade  none    null    local
[root@docker_host_0 opt]#
The stack approach
Host docker_host_0 initializes the swarm; docker_host_1 joins it as a manager:
[root@docker_host_0 opt]# docker swarm init
Swarm initialized: current node (u9siv3gxc4px3xa85t5tybv68) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-3y28uwlmc5hcb61hw42oxe4j2 192.168.9.168:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-elvhukieu148f22dmimq914ki 192.168.9.168:2377

[root@docker_host_0 opt]#
[root@docker_host_1 ~]# ip addr show eth0 | sed -n '/inet /p' | awk '{print $2}'
192.168.9.169/24
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# uname -r
3.10.0-957.12.2.el7.x86_64
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker -v
Docker version 18.09.6, build 481bc77156
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker swarm join --token SWMTKN-1-5icsimlouv1ppt09fxovvlvn9pp3prevlu2vus6wvtdilv6w86-elvhukieu148f22dmimq914ki 192.168.9.168:2377
This node joined a swarm as a manager.
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
u9siv3gxc4px3xa85t5tybv68   docker_host_0  Ready   Active        Leader          18.09.6
qhgpqw9n5wwow1zfzji69eac0 * docker_host_1  Ready   Active        Reachable       18.09.6
[root@docker_host_1 ~]#
On the docker_host_0 node, export the tengine_nginx:2.3.0 image and transfer it to the docker_host_1 node:
[root@docker_host_0 opt]# docker image ls
REPOSITORY     TAG     IMAGE ID      CREATED            SIZE
tengine_nginx  2.3.0   9404e1b71b70  About an hour ago  340MB
centos         latest  9f38484d220f  2 months ago       202MB
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker image save tengine_nginx:2.3.0 -o nginx.tar
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# ll -h nginx.tar
-rw------- 1 root root 338M May 31 00:56 nginx.tar
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# scp nginx.tar root@192.168.9.169:/opt
nginx.tar                                     100%  337MB  83.1MB/s   00:04
[root@docker_host_0 opt]#
On the docker_host_1 node, import the tengine_nginx:2.3.0 image and create the same mount source paths as on docker_host_0:
[root@docker_host_1 ~]# cd /opt/
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# ls
apache-tomcat-8.5.40.tar.gz  containerd  jdk-8u212-linux-x64.tar.gz  nginx.tar
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# docker image load -i nginx.tar
d69483a6face: Loading layer  209.5MB/209.5MB
717661697400: Loading layer  144.3MB/144.3MB
Loaded image: tengine_nginx:2.3.0
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# docker image ls -a
REPOSITORY     TAG    IMAGE ID      CREATED            SIZE
tengine_nginx  2.3.0  9404e1b71b70  About an hour ago  340MB
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# mkdir -p /opt/{apps/app_0/source,jdks}
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# tar axf apache-tomcat-8.5.40.tar.gz --strip-components=1 -C apps/app_0/source/
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# sed -i 's/pattern="%h %l %u %t/pattern="%A:%{local}p %a:%{remote}p %t/' apps/app_0/source/conf/server.xml
[root@docker_host_1 opt]#
[root@docker_host_1 opt]# tar axf jdk-8u212-linux-x64.tar.gz -C jdks/
[root@docker_host_1 opt]#
On the docker_host_0 node, edit the orchestration configuration file:
volumes and ports use the long syntax to specify mount points and ports.
services.<service>.deploy specifies the service's run mode (mode), replica count (replicas), restart policy (restart_policy), and node placement (placement).
[root@docker_host_0 opt]# vi tomcat-with-nginx-stack.yml
version: "3.7"
services:
  webapp:
    image: centos:latest
    volumes:
      - type: bind
        source: /opt/jdks/jdk1.8.0_212
        target: /opt/jdks/jdk1.8.0_212
        read_only: true
      - type: bind
        source: /opt/apps/app_0/source
        target: /opt/apps/app_0
    environment:
      JAVA_HOME: /opt/jdks/jdk1.8.0_212
    working_dir: /opt/apps/app_0
    command: bin/catalina.sh run
    deploy:
      mode: replicated
      replicas: 3
      restart_policy:
        condition: on-failure
  proxy:
    image: tengine_nginx:2.3.0
    volumes:
      - type: bind
        source: /opt/apps/app_0/nginx/conf
        target: /usr/local/nginx/conf
        read_only: true
      - type: bind
        source: /opt/apps/app_0/nginx/logs
        target: /usr/local/nginx/logs
    deploy:
      placement:
        constraints:
          - node.hostname == docker_host_0
      mode: global
      restart_policy:
        condition: on-failure
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: ingress
Deploy the stack, named web-cluster:
[root@docker_host_0 opt]# docker stack deploy -c tomcat-with-nginx-stack.yml web-cluster
Creating network web-cluster_default
Creating service web-cluster_webapp
Creating service web-cluster_proxy
[root@docker_host_0 opt]#
The replicated service webapp is distributed across the two nodes; the global service proxy is placed on docker_host_0 according to its constraints:
[root@docker_host_0 opt]# docker stack ls
NAME         SERVICES  ORCHESTRATOR
web-cluster  2         Swarm
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ls
ID            NAME                MODE        REPLICAS  IMAGE                PORTS
njg5ngjjp9gi  web-cluster_proxy   global      1/1       tengine_nginx:2.3.0  *:80->80/tcp
wc2uv0zllneo  web-cluster_webapp  replicated  3/3       centos:latest
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ps web-cluster_webapp
ID            NAME                  IMAGE          NODE           DESIRED STATE  CURRENT STATE               ERROR  PORTS
nh0pei38ikf7  web-cluster_webapp.1  centos:latest  docker_host_1  Running        Running about a minute ago
otzusftmorjr  web-cluster_webapp.2  centos:latest  docker_host_1  Running        Running about a minute ago
tmjkmrmtbx9g  web-cluster_webapp.3  centos:latest  docker_host_0  Running        Running about a minute ago
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ps web-cluster_proxy
ID            NAME                                         IMAGE                NODE           DESIRED STATE  CURRENT STATE           ERROR  PORTS
tal0jywnmukv  web-cluster_proxy.u9siv3gxc4px3xa85t5tybv68  tengine_nginx:2.3.0  docker_host_0  Running        Running 53 seconds ago
[root@docker_host_0 opt]#
stack uses an overlay network created in swarm mode by default:
[root@docker_host_0 opt]# docker network ls
NETWORK ID    NAME                 DRIVER   SCOPE
b728df2e4b85  bridge               bridge   local
dfe3ba6e0df5  docker_gwbridge      bridge   local
a019d8b63640  host                 host     local
mxcmpb9uzjy2  ingress              overlay  swarm
bb7095896ade  none                 null     local
xxl3uk5r7s7v  web-cluster_default  overlay  swarm
[root@docker_host_0 opt]#
The proxied web service is reachable through both docker_host_0 and docker_host_1:
[root@docker_host_1 ~]# ss -atn | grep :80
LISTEN 0 128 :::80 :::*
[root@docker_host_1 ~]#
[root@docker_host_1 ~]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.168
200
[root@docker_host_1 ~]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.169
200
[root@docker_host_1 ~]#
[root@docker_host_0 opt]# ss -atn | grep :80
LISTEN 0 128 :::80 :::*
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.168
200
[root@docker_host_0 opt]# curl -I -o /dev/null -s -w %{http_code} 192.168.9.169
200
[root@docker_host_0 opt]#
Remove the stack:
For services orchestrated via swarm stack, deployment is docker stack deploy <stack-name> and removal is docker stack rm <stack-name>. When the stack is removed, its associated network is removed as well.
[root@docker_host_0 opt]# docker stack ls
NAME         SERVICES  ORCHESTRATOR
web-cluster  2         Swarm
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker stack rm web-cluster
Removing service web-cluster_proxy
Removing service web-cluster_webapp
Removing network web-cluster_default
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker stack ls
NAME  SERVICES  ORCHESTRATOR
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker service ls
ID  NAME  MODE  REPLICAS  IMAGE  PORTS
[root@docker_host_0 opt]#
[root@docker_host_0 opt]# docker network ls
NETWORK ID    NAME             DRIVER   SCOPE
b728df2e4b85  bridge           bridge   local
dfe3ba6e0df5  docker_gwbridge  bridge   local
a019d8b63640  host             host     local
mxcmpb9uzjy2  ingress          overlay  swarm
bb7095896ade  none             null     local
[root@docker_host_0 opt]#
The depends_on directive is only available in the compose approach, and it controls only image build order and container start/stop order. To handle dependencies between the applications running inside the containers, you must implement a slow start manually (for example, have a script inside the container probe the target port or URL for availability before starting the local service), or rely on a third-party tool such as dockerize.
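A minimal sketch of such a slow start in shell, assuming bash with its /dev/tcp feature is available inside the container; the wait_for_port helper, the host/port values, and the commented entrypoint line are all illustrative, not part of the configuration above:

```shell
#!/usr/bin/env bash
# Hypothetical slow-start helper: poll a dependency's TCP port until it
# accepts connections, then (and only then) start the local service.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30}
  local i
  for ((i = 0; i < retries; i++)); do
    # bash can open /dev/tcp/<host>/<port> only if the port is reachable;
    # the subshell closes fd 3 again on exit
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example use in a container entrypoint (illustrative):
# wait_for_port webapp 8080 && exec nginx -g 'daemon off;'
```

A wrapper like this would run as the service's command in place of the bare nginx or catalina.sh invocation, so the restart policy only sees a failure after the dependency has genuinely stayed unreachable.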
Differences and similarities
Both stack and compose orchestrate services from a YAML- or JSON-format configuration file. The main differences are:
Some directives are not compatible across the two approaches, e.g. build, deploy, depends_on, restart_policy.
stack is built into the docker engine but requires swarm mode; the containers composing one service may span multiple hosts, so the images must exist locally on each host or in a reachable registry. compose must be installed separately and does not require swarm mode; all containers run on the current single host.
stack can only deploy services from pre-built images; compose supports both building images and deploying services, which can be done together or separately.