
Hadoop 2.6 HA Deployment


Since I needed to set up a Spark environment, I reinstalled a test Hadoop cluster from scratch. The steps are recorded below:


Hardware environment: four virtual machines, hadoop1 through hadoop4, each with 3 GB RAM, a 60 GB disk, and 2 CPU cores

Software environment: CentOS 6.5, hadoop-2.6.0-cdh5.8.2, JDK 1.7


Deployment plan:

hadoop1 (192.168.0.3): namenode (active), resourcemanager

hadoop2 (192.168.0.4): namenode (standby), journalnode, datanode, nodemanager, historyserver

hadoop3 (192.168.0.5): journalnode, datanode, nodemanager

hadoop4 (192.168.0.6): journalnode, datanode, nodemanager


HDFS HA uses the QJM (journalnode) approach:

I. System Preparation


1. Disable SELinux on every node

#vi /etc/selinux/config

SELINUX=disabled
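
The change in /etc/selinux/config only takes effect after a reboot; to also disable SELinux in the running session without rebooting, the standard command is:

#setenforce 0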


2. Turn off the firewall on every node (this is essential; otherwise formatting HDFS will fail with errors about being unable to connect to the journalnodes)

#chkconfig iptables off

#service iptables stop


3. Install JDK 1.7 on every node

#cd /software

#tar -zxf jdk-7u65-linux-x64.gz -C /opt/

#cd /opt

#ln -s jdk1.7.0_65 java

#vi /etc/profile

export JAVA_HOME=/opt/java

export PATH=$PATH:$JAVA_HOME/bin
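
After reloading the profile, a quick sanity check confirms the JDK is on the PATH (it should report version 1.7.0_65):

#source /etc/profile

#java -version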


4. Create the hadoop user on every node and set up SSH mutual trust

#useradd grid

#passwd grid

(The mutual-trust setup steps were omitted in the original; a minimal sketch follows.)
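
A minimal sketch of the passwordless-SSH setup, run as the grid user on each of the four nodes, accepting the key-generation defaults (ssh-copy-id ships with openssh-clients on CentOS 6.5):

$ssh-keygen -t rsa

$ssh-copy-id grid@hadoop1

$ssh-copy-id grid@hadoop2

$ssh-copy-id grid@hadoop3

$ssh-copy-id grid@hadoop4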


5. Create the required directories on every node

#mkdir -p /hadoop_data/hdfs/name

#mkdir -p /hadoop_data/hdfs/data

#mkdir -p /hadoop_data/hdfs/journal

#mkdir -p /hadoop_data/yarn/local

#chown -R grid:grid /hadoop_data


II. Hadoop Deployment

HDFS HA mainly consists of defining a nameservice (without HDFS federation there is only one nameservice ID) and then declaring the two namenodes under that nameservice ID along with their addresses. Here the nameservice is named hadoop-spark.


1. Unpack the hadoop tarball on every node

#cd /software

#tar -zxf hadoop-2.6.0-cdh5.8.2.tar.gz -C /opt/

#cd /opt

#chown -R grid:grid hadoop-2.6.0-cdh5.8.2

#ln -s hadoop-2.6.0-cdh5.8.2 hadoop


2. Switch to the grid user for the remaining steps

#su - grid

$cd /opt/hadoop/etc/hadoop


3. Configure hadoop-env.sh (only JAVA_HOME actually needs to be set)

$vi hadoop-env.sh

# The java implementation to use.

export JAVA_HOME=/opt/java


4. Configure hdfs-site.xml

<configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-spark</value>
    <description>Comma-separated list of nameservices.</description>
  </property>

  <property>
    <name>dfs.ha.namenodes.hadoop-spark</name>
    <value>nn1,nn2</value>
    <description>The prefix for a given nameservice, contains a comma-separated list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).</description>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.hadoop-spark.nn1</name>
    <value>hadoop1:8020</value>
    <description>RPC address for namenode1 of hadoop-spark</description>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.hadoop-spark.nn2</name>
    <value>hadoop2:8020</value>
    <description>RPC address for namenode2 of hadoop-spark</description>
  </property>

  <property>
    <name>dfs.namenode.http-address.hadoop-spark.nn1</name>
    <value>hadoop1:50070</value>
    <description>The address and the base port where the dfs namenode1 web ui will listen on.</description>
  </property>

  <property>
    <name>dfs.namenode.http-address.hadoop-spark.nn2</name>
    <value>hadoop2:50070</value>
    <description>The address and the base port where the dfs namenode2 web ui will listen on.</description>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///hadoop_data/hdfs/name</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop2:8485;hadoop3:8485;hadoop4:8485/hadoop-spark</value>
    <description>A directory on shared storage between the multiple namenodes in an HA cluster. This directory will be written by the active and read by the standby in order to keep the namespaces synchronized. This directory does not need to be listed in dfs.namenode.edits.dir above. It should be left empty in a non-HA cluster.</description>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///hadoop_data/hdfs/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.hadoop-spark</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>false</value>
    <description>Whether automatic failover is enabled. See the HDFS High Availability documentation for details on automatic HA configuration.</description>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/hadoop_data/hdfs/journal</value>
  </property>

</configuration>


5. Configure core-site.xml (set fs.defaultFS to the HA nameservice name)

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-spark</value>
    <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
  </property>

</configuration>


6. Configure mapred-site.xml

<configuration>

  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
  </property>

  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop2:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>

  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop2:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>

</configuration>


7. Configure yarn-site.xml

<configuration>

  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
  </property>

  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>

  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>

  <property>
    <description>The https address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>

  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>

  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>

  <property>
    <description>fair-scheduler conf location</description>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  </property>

  <property>
    <description>List of directories to store localized files in. An application's localized file directory will be found in: ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}. Individual containers' work directories, called container_${contid}, will be subdirectories of this.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/hadoop_data/yarn/local</value>
  </property>

  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>

  <property>
    <description>Number of CPU cores that can be allocated for containers.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>

  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

</configuration>


8. Configure slaves

hadoop2

hadoop3

hadoop4


9. Configure fairscheduler.xml

<?xml version="1.0"?>
<allocations>
  <!-- the queue name was lost in the source; "default" is used here as a placeholder -->
  <queue name="default">
    <minResources>0mb, 0 vcores</minResources>
    <maxResources>6144 mb, 6 vcores</maxResources>
    <maxRunningApps>50</maxRunningApps>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
    <weight>1.0</weight>
    <aclSubmitApps>grid</aclSubmitApps>
  </queue>
</allocations>


10. Sync the configuration files to all nodes

$cd /opt/hadoop/etc

$scp -r hadoop hadoop2:/opt/hadoop/etc/

$scp -r hadoop hadoop3:/opt/hadoop/etc/

$scp -r hadoop hadoop4:/opt/hadoop/etc/


III. Starting the Cluster (Formatting the Filesystem)


1. Set up environment variables

$vi ~/.bash_profile

export HADOOP_HOME=/opt/hadoop

export HADOOP_YARN_HOME=/opt/hadoop

export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop

export YARN_CONF_DIR=/opt/hadoop/etc/hadoop


2. Start HDFS

Start the journalnodes first. On hadoop2 through hadoop4:

$cd /opt/hadoop/

$sbin/hadoop-daemon.sh start journalnode


Format HDFS, then start the first namenode. On hadoop1:

$bin/hdfs namenode -format

$sbin/hadoop-daemon.sh start namenode


Bootstrap the second namenode from the first, then start it. On hadoop2:

$bin/hdfs namenode -bootstrapStandby

$sbin/hadoop-daemon.sh start namenode


At this point both namenodes are in standby state. Switch hadoop1 to active (hadoop1 corresponds to nn1 in hdfs-site.xml):

$bin/hdfs haadmin -transitionToActive nn1
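
To confirm the switch took effect, the state of each namenode can be queried with the standard haadmin subcommand:

$bin/hdfs haadmin -getServiceState nn1

$bin/hdfs haadmin -getServiceState nn2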

Start the datanodes. On hadoop1 (the active namenode):

$sbin/hadoop-daemons.sh start datanode

Note: for subsequent startups, sbin/start-dfs.sh is all that is needed. But because ZooKeeper-based failover is not configured, HA failover can only be done manually, so every time HDFS is started you must run $bin/hdfs haadmin -transitionToActive nn1 to bring the namenode on hadoop1 into the active state.


3. Start YARN

On hadoop1 (the resourcemanager):

$sbin/start-yarn.sh
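
The deployment plan also places the MapReduce historyserver on hadoop2; start-yarn.sh does not start it, so it is launched separately there with the daemon script shipped in the hadoop 2.6 distribution:

$sbin/mr-jobhistory-daemon.sh start historyserver

A quick smoke test of the whole stack is the bundled pi example (the exact path and name of the examples jar depend on the build):

$bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10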

--------------------------------------------

The HDFS HA configured above does not fail over automatically. To enable automatic failover, add the following steps (stop the cluster first):

1. Deploy ZooKeeper (steps omitted) on hadoop2, hadoop3, and hadoop4, and start it

2. Add to hdfs-site.xml:

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/exampleuser/.ssh/id_rsa</value>
</property>

See the official documentation for details. This configuration sets the fencing method to SSH into the previous active node and kill the process listening on the service port. The prerequisite is that the two namenodes can SSH to each other, so the private-key path above should point to the key of the user actually running the namenodes (here, grid).


There is an alternative fencing configuration:

<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/path/to/my/script.sh arg1 arg2 ...)</value>
</property>

This variant uses a shell script to do the fencing, i.e. to cut off the old active node's port or process. If no real fencing action is desired, dfs.ha.fencing.methods can be set to shell(/bin/true).


3. Add to core-site.xml:

<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
</property>


4. Initialize ZKFC (run on one of the namenodes)

$bin/hdfs zkfc -formatZK


5. Start the cluster
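
With automatic failover enabled, start-dfs.sh also brings up a zkfc daemon on each namenode, so a full start should just be (on hadoop1):

$sbin/start-dfs.sh

$sbin/start-yarn.sh

No manual transitionToActive is needed any more; ZooKeeper elects the active namenode.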



___________________________________________________________________________________________________

zkfc: runs on each namenode; a ZooKeeper client responsible for automatic failover

zk: an odd number of nodes; maintains the consistency lock and elects the active node

journalnode: an odd number of nodes; used to synchronize data between the active and standby namenodes. The active namenode writes its edits to these nodes and the standby reads them

--------------------------------------------

Converting to ResourceManager HA:

hadoop2 is chosen as the second RM node

1. Set up SSH trust from hadoop2 to the other nodes

2. Edit yarn-site.xml (see the sketch after this list) and sync it to the other machines

3. Copy fairscheduler.xml to hadoop2

4. Start the RM

5. Start the other RM
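
A minimal sketch of the RM HA additions to yarn-site.xml, assuming hadoop1/hadoop2 as rm1/rm2 and the ZooKeeper quorum configured above (these are the standard YARN RM HA property names; the cluster-id value is illustrative):

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-ha</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>hadoop1</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>hadoop2</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
</property>

Note that start-yarn.sh only starts the resourcemanager on the local machine, so the second RM is started by hand on hadoop2:

$sbin/yarn-daemon.sh start resourcemanager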

