How to deploy ZooKeeper + Kafka on Docker Swarm

This article explains how to set up and deploy ZooKeeper + Kafka on Docker Swarm. The walkthrough is kept simple and easy to follow, so let's work through it step by step.

1. Prepare the machines

Prepare three machines with IP addresses 192.168.10.51, 192.168.10.52 and 192.168.10.53, and hostnames centos51, centos52 and centos53. Docker Swarm has already been set up on all three; see the separate article on building a Docker Swarm cluster for the setup steps.
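Before moving on, it is worth confirming on a manager node that all three machines have joined the swarm, and that the external overlay network swarm-net referenced by the compose files below exists. A minimal check (this sketch assumes centos51 is a manager; skip the network create if your swarm setup already created swarm-net):

# all three nodes should be listed with STATUS Ready
docker node ls

# create the attachable overlay network the stacks attach to (only if it does not exist yet)
docker network create --driver overlay --attachable swarm-net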

2. Prepare the images

Pull the ZooKeeper, Kafka and Kafka Manager images from https://hub.docker.com/:

docker pull zookeeper:3.6.1
docker pull wurstmeister/kafka:2.12-2.5.0
docker pull kafkamanager/kafka-manager:3.0.0.4
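Because the stacks below are deployed with --resolve-image=never, every swarm node needs these images in its local cache, not just the manager. One way to do that in a single pass is a small SSH loop; this is only a sketch and assumes passwordless SSH as root to the three hosts from this article:

# pull all three images on every swarm node
for host in 192.168.10.51 192.168.10.52 192.168.10.53; do
  ssh root@$host "docker pull zookeeper:3.6.1; docker pull wurstmeister/kafka:2.12-2.5.0; docker pull kafkamanager/kafka-manager:3.0.0.4"
done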

3. Prepare the ZooKeeper compose file

File name: docker-stack-zookeeper.yml

version: "3.2"services:#zookeeper服务  zookeeper-server-a:    hostname: zookeeper-server-a    image: zookeeper:3.6.1    ports:      - "12181:2181"    networks:      swarm-net:        aliases:          - zookeeper-server-a    environment:      TZ: Asia/Shanghai      ZOO_MY_ID: 1      ZOO_SERVERS: server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181    volumes:      - /data/kafka_cluster/zookeeper/data:/data    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos51]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512M  zookeeper-server-b:    hostname: zookeeper-server-b    image: zookeeper:3.6.1    ports:      - "22181:2181"    networks:      swarm-net:        aliases:          - zookeeper-server-b    environment:      TZ: Asia/Shanghai      ZOO_MY_ID: 2      ZOO_SERVERS: server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181    volumes:      - /data/kafka_cluster/zookeeper/data:/data    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos52]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512M  zookeeper-server-c:    hostname: zookeeper-server-c    image: zookeeper:3.6.1    ports:      - "32181:2181"    networks:      swarm-net:        aliases:          - zookeeper-server-c    environment:      TZ: Asia/Shanghai      ZOO_MY_ID: 3      ZOO_SERVERS: server.1=zookeeper-server-a:2888:3888;2181 server.2=zookeeper-server-b:2888:3888;2181 server.3=zookeeper-server-c:2888:3888;2181    volumes:      - /data/kafka_cluster/zookeeper/data:/data    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos53]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512Mnetworks:  swarm-net:    external:      name: swarm-net

4. Prepare the Kafka compose file

File name: docker-stack-kafka.yml

version: "3.2"services:#kafka服务  kafka-server-a:    hostname: kafka-server-a    image: wurstmeister/kafka:2.12-2.5.0    ports:      - "19092:9092"    networks:      swarm-net:        aliases:          - kafka-server-a    environment:      - TZ=CST-8      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-a      - HOST_IP=kafka-server-a      - KAFKA_ADVERTISED_PORT=9092      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181      - KAFKA_BROKER_ID=0      - KAFKA_HEAP_OPTS="-Xmx512M -Xms16M"    volumes:      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-a      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos51]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512M  kafka-server-b:    hostname: kafka-server-b    image: wurstmeister/kafka:2.12-2.5.0    ports:      - "29092:9092"    networks:      swarm-net:        aliases:          - kafka-server-b    environment:      - TZ=CST-8      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-b      - HOST_IP=kafka-server-b      - KAFKA_ADVERTISED_PORT=9092      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181      - KAFKA_BROKER_ID=1      - KAFKA_HEAP_OPTS="-Xmx512M -Xms16M"    volumes:      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-b      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos52]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512M  kafka-server-c:    hostname: kafka-server-c    image: wurstmeister/kafka:2.12-2.5.0    ports:      - "39092:9092"    networks:      swarm-net:        aliases:          - kafka-server-c    environment:      - TZ=CST-8      - KAFKA_ADVERTISED_HOST_NAME=kafka-server-c      - HOST_IP=kafka-server-c      - KAFKA_ADVERTISED_PORT=9092      - KAFKA_ZOOKEEPER_CONNECT=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181      - KAFKA_BROKER_ID=2      - KAFKA_HEAP_OPTS="-Xmx512M -Xms16M"    volumes:      - /data/kafka_cluster/kafka/data:/kafka/kafka-logs-kafka-server-c      - /data/kafka_cluster/kafka/logs:/opt/kafka/logs    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos53]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512Mnetworks:  swarm-net:    external:      name: swarm-net

5. Prepare the Kafka Manager compose file

File name: docker-stack-kafka-manager.yml

version: "3.2"services:#kafka manager服务  kafka-manager:    hostname: kafka-manager    image: kafkamanager/kafka-manager:3.0.0.4    ports:      - "19000:9000"    networks:      swarm-net:        aliases:          - kafka-manager    environment:      - ZK_HOSTS=zookeeper-server-a:2181,zookeeper-server-b:2181,zookeeper-server-c:2181    deploy:      replicas: 1      restart_policy:        condition: on-failure      placement:        constraints: [node.hostname == centos51]      resources:        limits:#          cpus: '1'          memory: 1GB        reservations:#          cpus: '0.2'          memory: 512Mnetworks:  avatar-net:    external:      name: swarm-net

6. Create the bind-mount directories on all three machines

mkdir -p {/data/kafka_cluster/zookeeper/data,/data/kafka_cluster/kafka/data,/data/kafka_cluster/kafka/logs}
chmod -R 777 /data/kafka_cluster/

7. Deploy the stacks

The stacks must be deployed in this order; only run the next command after the previous one has completed successfully:

docker stack deploy -c docker-stack-zookeeper.yml zoo --resolve-image=never --with-registry-auth
docker stack deploy -c docker-stack-kafka.yml kafka --resolve-image=never --with-registry-auth
docker stack deploy -c docker-stack-kafka-manager.yml kafka_manager --resolve-image=never --with-registry-auth
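After all three deploys, confirm that every service has reached 1/1 replicas before using the cluster; the stack names zoo, kafka and kafka_manager come from the commands above:

# overview of the deployed stacks and their services
docker stack ls
docker service ls

# per-task detail, useful when a replica is stuck at 0/1
docker stack ps zoo --no-trunc
docker stack ps kafka --no-trunc
docker stack ps kafka_manager --no-trunc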

Thanks for reading. That covers how to set up and deploy ZooKeeper + Kafka on Docker Swarm. Hopefully working through this article has given you a better feel for the process; as always, the details still need to be verified in your own environment.
