Spark 2.2.0 High Availability Setup


I. Overview

1. The test environment builds on the Hadoop HA cluster set up earlier.

2. The ZooKeeper environment required for Spark HA was configured in the earlier articles and is not repeated here; a quick health check is sketched after the host plan below.

3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz

4. Host plan

Host             Role
bd1, bd2, bd3    Worker
bd4, bd5         Master, Worker
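As noted in item 2, this setup relies on the ZooKeeper ensemble from the earlier articles. A quick health check before continuing, assuming ZooKeeper was installed under /usr/local/zookeeper (that install path is an assumption, not taken from this article):

[root@bd4 ~]# /usr/local/zookeeper/bin/zkServer.sh status
[root@bd5 ~]# /usr/local/zookeeper/bin/zkServer.sh status

Each ensemble node should report Mode: leader or Mode: follower.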

II. Configure Scala

1. Extract and copy

[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Verify

[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.

III. Configure Spark

1. Extract and copy

[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Edit spark-env.sh (the file does not exist by default; create it from the template)
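The conf directory only ships a template, so the file can be created from it first, e.g.:

[root@bd1 ~]# cd /usr/local/spark/conf/
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh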

[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1

4. Edit spark-defaults.conf (the file does not exist by default; create it from the template)
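As above, create the file from its template first:

[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf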

[root@bd1 conf]# vim spark-defaults.conf
spark.master                     spark://master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:/user/spark/history
spark.serializer                 org.apache.spark.serializer.KryoSerializer
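Note that spark://master:7077 appears to be carried over from the template's example host name. With the Master daemons planned for bd4 and bd5, the master URL is usually written with both hosts so clients can fail over; a sketch matching the host plan above (adjust the HDFS URI to your own nameservice as well):

spark.master                     spark://bd4:7077,bd5:7077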

5. Create the log directory in HDFS

hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
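With event logging pointed at this directory, the Spark history server can replay completed applications from it. This step is not part of the original walkthrough; a minimal sketch, reusing the eventLog URI above:

[root@bd1 conf]# vim spark-defaults.conf
spark.history.fs.logDirectory    hdfs://master:/user/spark/history
[root@bd1 conf]# /usr/local/spark/sbin/start-history-server.sh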

6. Edit slaves
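This file also only exists as a template at first:

[root@bd1 conf]# cp slaves.template slaves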

[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5

IV. Sync to the other hosts

1. Use scp to sync Scala to bd2-bd5

scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/

2. Sync Spark to bd2-bd5

scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
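The /etc/profile changes made on bd1 also need to exist on the other nodes. One way to push them out, assuming the file can simply be overwritten there (check for host-specific customizations first):

scp /etc/profile root@bd2:/etc/profile
scp /etc/profile root@bd3:/etc/profile
scp /etc/profile root@bd4:/etc/profile
scp /etc/profile root@bd5:/etc/profile

Then run source /etc/profile on each host.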

V. Start the cluster and test HA

1. Start order: ZooKeeper --> Hadoop --> Spark
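For reference, a sketch of that order, assuming ZooKeeper is installed under /usr/local/zookeeper and the Hadoop scripts are on the PATH (both paths come from the earlier articles' layout and are assumptions here):

/usr/local/zookeeper/bin/zkServer.sh start    # on each ZooKeeper ensemble node (bd4, bd5, plus any others from the earlier setup)
start-dfs.sh                                  # on the Hadoop HA cluster
start-yarn.sh
# then start Spark as shown in step 2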

2. Start Spark

bd4:

[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out
[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain

bd5:

[root@bd5 sbin]# ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode
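At this point one Master should be ALIVE and the other STANDBY. The standalone Master web UI (port 8080 by default) shows the status; which node is ALIVE depends on which Master registered with ZooKeeper first, e.g.:

http://bd4:8080    Status: ALIVE
http://bd5:8080    Status: STANDBY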

3. Kill the Master process on bd4

[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
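After a short delay the Master on bd5 should take over. One way to confirm the failover, using the log path from the start-up output above (the exact log wording may vary by version):

[root@bd5 sbin]# grep -i "elected leader" /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out

The bd5 web UI (http://bd5:8080) should now report Status: ALIVE.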

VI. Summary

The original plan was to run the Masters on bd1 and bd2, but after starting Spark both of them stayed in Standby. After moving them to bd4 and bd5 in the configuration, the cluster ran normally. In other words, in this deployment the Spark HA Masters only ran properly on nodes belonging to the ZooKeeper cluster, i.e., nodes running the ZooKeeper process (QuorumPeerMain).
