
How to Deploy a hadoop2.7.3 + HA + YARN + zookeeper High-Availability Cluster

Published: 2025-01-31 · By: 千家信息网 editorial staff

This article walks through how to deploy a hadoop2.7.3 + HA + YARN + zookeeper high-availability cluster. Plenty of people run into trouble with this in real deployments, so let's work through how to handle those situations step by step. Read carefully and you should come away with something useful!

I. Installed versions:

JDK: 1.8.0_111-b14
Hadoop: hadoop-2.7.3
ZooKeeper: zookeeper-3.5.2

II. Installation steps:

JDK installation and the cluster's prerequisite environment setup are not covered here.

1. Hadoop configuration

The Hadoop configuration mainly involves four files: hdfs-site.xml, core-site.xml, mapred-site.xml, and yarn-site.xml. Each file's configuration is described in detail below.

  1. core-site.xml

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://cluster1</value>
        <description>Logical name of the HDFS namenode (namenode HA); must match dfs.nameservices in hdfs-site.xml</description>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/hadoop/tmp</value>
        <description>Default location for namenode and datanode data in HDFS; can also be set separately in hdfs-site.xml</description>
      </property>
      <property>
        <name>ha.zookeeper.quorum</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
        <description>Addresses and ports of the ZooKeeper ensemble; the number of ZooKeeper nodes must be odd</description>
      </property>
    </configuration>


  2. hdfs-site.xml (the key configuration)

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/usr/hadoop/hdfs/name</value>
        <description>Where the namenode stores its data</description>
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop/hdfs/data</value>
        <description>Where the datanode stores its data</description>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>4</value>
        <description>Number of block replicas; the default is 3</description>
      </property>
      <property>
        <name>dfs.nameservices</name>
        <value>cluster1</value>
        <description>Logical name of the HDFS namenode (namenode HA)</description>
      </property>
      <property>
        <name>dfs.ha.namenodes.cluster1</name>
        <value>ns1,ns2</value>
        <description>Logical names of the namenodes under this nameservice</description>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.cluster1.ns1</name>
        <value>master:9000</value>
        <description>RPC address and port of namenode ns1</description>
      </property>
      <property>
        <name>dfs.namenode.http-address.cluster1.ns1</name>
        <value>master:50070</value>
        <description>Web address and port of namenode ns1</description>
      </property>
      <property>
        <name>dfs.namenode.rpc-address.cluster1.ns2</name>
        <value>slave1:9000</value>
        <description>RPC address and port of namenode ns2</description>
      </property>
      <property>
        <name>dfs.namenode.http-address.cluster1.ns2</name>
        <value>slave1:50070</value>
        <description>Web address and port of namenode ns2</description>
      </property>
      <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://master:8485;slave1:8485;slave2:8485/cluster1</value>
        <description>URI of the JournalNode group the namenodes read from and write to: the active NN writes its edit log to these JournalNodes, and the standby NN reads those edits and applies them to its in-memory directory tree</description>
      </property>
      <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/hadoop/journal</value>
        <description>Local directory on each JournalNode for storing edit logs and other state</description>
      </property>
      <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic failover. It relies on the ZooKeeper ensemble and the ZKFailoverController (ZKFC), a ZooKeeper client that monitors NN state; every node running a NN must also run a ZKFC</description>
      </property>
      <property>
        <name>dfs.client.failover.proxy.provider.cluster1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>Java class HDFS clients use to locate the active NameNode</description>
      </property>
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
        <description>Guards against split brain in an HA cluster (two masters serving at once, leaving the system in an inconsistent state). In HDFS HA the JournalNodes only allow one NameNode to write, so two active NameNodes cannot both commit edits; but during a failover the previous active NameNode may still be serving client RPC requests, so a fencing mechanism is needed to kill it. The common fencing method is sshfence, which requires the SSH key in dfs.ha.fencing.ssh.private-key-files and a connect timeout</description>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
        <description>SSH private key used for fencing</description>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
        <description>SSH connect timeout</description>
      </property>
    </configuration>
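To sanity-check that these HA settings are actually being picked up, you can query them with hdfs getconf once the files are in place (a quick sketch; the expected output follows from the values above):

    bin/hdfs getconf -confKey dfs.nameservices            # expected: cluster1
    bin/hdfs getconf -confKey dfs.ha.namenodes.cluster1   # expected: ns1,ns2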


  3. mapred-site.xml

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>Run MapReduce on YARN, a sharp break from hadoop1</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
        <description>Address of the MR JobHistory Server</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
        <description>Web address for browsing records of completed MapReduce jobs; the JobHistory service must be running for this to work</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/data/hadoop/done</value>
        <description>Where the MR JobHistory Server stores logs of finished jobs; default: /mr-history/done</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>hdfs://mycluster-pha/mapred/tmp</value>
        <description>Where logs produced by running MapReduce jobs are staged; default: /mr-history/tmp</description>
      </property>
    </configuration>
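As the descriptions above note, the web UI on master:19888 only answers once the JobHistory server is running. In Hadoop 2.7.3 it is started separately from HDFS and YARN, on master:

    sbin/mr-jobhistory-daemon.sh start historyserver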


  4. yarn-site.xml的配置
            yarn.nodemanager.aux-services        mapreduce_shuffle        默认                     yarn.nodemanager.auxservices.mapreduce.shuffle.class        org.apache.hadoop.mapred.ShuffleHandler                yarn.resourcemanager.address        master:8032                yarn.resourcemanager.scheduler.address        master:8030                yarn.resourcemanager.resource-tracker.address        master:8031                yarn.resourcemanager.admin.address        master:8033                yarn.resourcemanager.webapp.address        master:8088        yarn.nodemanager.resource.memory-mb    1024    该值配置小于1024时,NM是无法启动的!会报错:NodeManager from  slavenode2 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.  
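Once YARN is up (see section III), one way to confirm that every NodeManager has registered with the ResourceManager and reports the memory configured above is:

    bin/yarn node -list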

2. ZooKeeper configuration

The ZooKeeper configuration mainly involves two files: zoo.cfg and myid.

  1. conf/zoo.cfg: first copy zoo_sample.cfg to zoo.cfg
    cp zoo_sample.cfg zoo.cfg


  2. vi zoo.cfg
    dataDir: path where the data is stored; dataLogDir: path where the logs are stored


    initLimit=10
    syncLimit=5
    clientPort=2181
    tickTime=2000
    dataDir=/usr/zookeeper/tmp/data
    dataLogDir=/usr/zookeeper/tmp/log
    server.1=master:2888:3888
    server.2=slave1:2888:3888
    server.3=slave2:2888:3888
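Note that initLimit and syncLimit are measured in ticks, so with tickTime=2000 the initLimit=10 above gives followers 10 x 2000 ms = 20 s to connect and sync with the leader. The dataDir and dataLogDir paths must also exist before ZooKeeper starts; a minimal sketch, assuming the same /usr/zookeeper/tmp layout on all three nodes:

    mkdir -p /usr/zookeeper/tmp/data /usr/zookeeper/tmp/log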


  3. On each of the [master, slave1, slave2] nodes, create a file named myid in the dataDir directory
vi myid

On master, write: 1

On slave1, write: 2

On slave2, write: 3

For example:

[hadoop@master data]$ vi myid
1
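Instead of editing with vi, the myid files can also be written non-interactively; a small sketch, assuming the dataDir configured above:

    echo 1 > /usr/zookeeper/tmp/data/myid   # on master
    echo 2 > /usr/zookeeper/tmp/data/myid   # on slave1
    echo 3 > /usr/zookeeper/tmp/data/myid   # on slave2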

III. Starting the Cluster

1. Starting the ZooKeeper cluster

1. Start ZooKeeper on all three nodes:
bin/zkServer.sh start
2. Check the ensemble state with zkServer.sh status; there should be one leader and two followers.
[hadoop@master hadoop-2.7.3]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

[hadoop@slave1 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader

[hadoop@slave2 root]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.2-alpha/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
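If passwordless SSH between the nodes is already set up (sshfence needs it anyway), the three checks can be run from one node; a sketch, assuming zkServer.sh is on the PATH of the remote shells:

    for h in master slave1 slave2; do ssh $h "zkServer.sh status"; done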
3. Verify ZooKeeper (optional): run zkCli.sh
[hadoop@slave1 root]$ zkCli.sh
Connecting to localhost:2181
2016-12-18 02:05:03,115 [myid:] - INFO  [main:Environment@109] - Client environment:zookeeper.version=3.5.2-alpha-1750793, built on 06/30/2016 13:15 GMT
(... further client-environment lines omitted ...)
2016-12-18 02:05:03,146 [myid:] - INFO  [main:ZooKeeper@855] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@593634ad
Welcome to ZooKeeper!
2016-12-18 02:05:03,171 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1113] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2016-12-18 02:05:03,243 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@948] - Socket connection established, initiating session, client: /127.0.0.1:56184, server: localhost/127.0.0.1:2181
2016-12-18 02:05:03,252 [myid:localhost:2181] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1381] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x200220f5fe30060, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0]

2. Starting the Hadoop cluster

1. First-time startup

1.1 Start the JournalNode daemons on all three nodes, then run jps; a JournalNode process should appear.

sbin/hadoop-daemon.sh start journalnode
jps
JournalNode

1.2 Format the namenode on master (either namenode host will do), then start the namenode on that node.

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

1.3 On the other namenode host, slave1, sync the metadata from master

bin/hdfs namenode -bootstrapStandby

1.4 Stop all HDFS services

sbin/stop-dfs.sh

1.5 Initialize ZKFC

bin/hdfs zkfc -formatZK
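formatZK creates a znode under /hadoop-ha in ZooKeeper that the ZKFCs later use for leader election; you can confirm it from zkCli.sh (the child znode name matches the nameservice cluster1 from hdfs-site.xml):

    [zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
    [cluster1]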

1.6 Start HDFS

sbin/start-dfs.sh
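With HDFS up, you can check which namenode became active and which is standby (ns1 and ns2 are the logical names from hdfs-site.xml; which one wins the election can vary):

    bin/hdfs haadmin -getServiceState ns1   # e.g. active
    bin/hdfs haadmin -getServiceState ns2   # e.g. standby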

1.7 Start YARN

sbin/start-yarn.sh
2. Subsequent startups

2.1 Just start HDFS and YARN; the namenode, datanode, journalnode, and DFSZKFailoverController processes all start automatically.

sbin/start-dfs.sh

2.2 Start YARN

sbin/start-yarn.sh
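To verify that automatic failover works end to end, you can stop the active namenode and watch the standby take over; a hedged sketch, assuming ns1 on master is currently active:

    # on master: stop the active namenode
    sbin/hadoop-daemon.sh stop namenode
    # within a few seconds the ZKFC should promote the standby
    bin/hdfs haadmin -getServiceState ns2   # expected: active
    # restart the stopped namenode; it rejoins as standby
    sbin/hadoop-daemon.sh start namenode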

IV. Checking the Processes on Each Node

4.1 master

[hadoop@master hadoop-2.7.3]$ jps
26544 QuorumPeerMain
25509 JournalNode
25704 DFSZKFailoverController
26360 Jps
25306 DataNode
25195 NameNode
25886 ResourceManager
25999 NodeManager

4.2 slave1

[hadoop@slave1 root]$ jps
2289 DFSZKFailoverController
9400 QuorumPeerMain
2601 Jps
2060 DataNode
2413 NodeManager
2159 JournalNode
1983 NameNode

4.3 slave2

[hadoop@slave2 root]$ jps
11984 DataNode
12370 Jps
2514 QuorumPeerMain
12083 JournalNode
12188 NodeManager

"hadoop2.7.3+HA+YARN+zookeeper高可用集群如何部署"的内容就介绍到这里了,感谢大家的阅读。如果想了解更多行业相关的知识可以关注网站,小编将为大家输出更多高质量的实用文章!
