Hadoop Distributed Deployment
Roles in a Hadoop cluster:
HDFS:
    NameNode (NN)
    SecondaryNameNode (SNN)
    DataNode (DN)
YARN:
    ResourceManager
    NodeManager
Things to watch for when deploying Hadoop in production:
HDFS cluster:
    The NameNode and SecondaryNameNode should run on separate machines, so that a single host failure cannot take out both and leave the metadata unrecoverable.
    There should be at least 3 DataNodes, because HDFS keeps 3 replicas of each block by default.
YARN cluster:
    The ResourceManager should run on a dedicated node.
    NodeManagers run on the DataNode machines.
The cluster architecture used here is as follows: in my test deployment the NameNode, SecondaryNameNode, and ResourceManager all run together on one master node, and each of the three worker nodes runs a DataNode and a NodeManager.
1. Configure the hosts file
Append the following to /etc/hosts on node1, node2, node3, and node4:
172.16.2.3    node1.hadooptest.com node1 master
172.16.2.14   node2.hadooptest.com node2
172.16.2.60   node3.hadooptest.com node3
172.16.2.61   node4.hadooptest.com node4
2. Create the hadoop user and group
If you want to start or stop the whole cluster from the master node, the user that runs the services (a single hadoop user here; production setups often use separate hdfs and yarn users) must also be able to SSH from the master to each worker node with key-based authentication.
Run the following on node1, node2, node3, and node4:
useradd hadoop
echo 'p@ssw0rd' | passwd --stdin hadoop
Log in to node1 and generate a key pair:
su - hadoop
ssh-keygen -t rsa
Copy the public key from node1 to node2, node3, and node4:
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node2
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node3
ssh-copy-id -i .ssh/id_rsa.pub hadoop@node4
Note: the master node must also copy the public key to its own hadoop account, otherwise you will be prompted for a password when the SecondaryNameNode is started (start-dfs.sh connects to it via 0.0.0.0):
[hadoop@node1 hadoop]$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@0.0.0.0
Test logging in from node1 to node2, node3, and node4:
[hadoop@OPS01-LINTEST01 ~]$ ssh node2 'date'
Tue Mar 27 14:26:10 CST 2018
[hadoop@OPS01-LINTEST01 ~]$ ssh node3 'date'
Tue Mar 27 14:26:13 CST 2018
[hadoop@OPS01-LINTEST01 ~]$ ssh node4 'date'
Tue Mar 27 14:26:17 CST 2018
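The same check can also be done in one pass with a small loop (a convenience sketch; run it as the hadoop user on node1):

for n in node2 node3 node4; do ssh "$n" 'hostname; date'; done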
3. Configure the Hadoop environment
Run the following on node1, node2, node3, and node4:
vim /etc/profile.d/hadoop.sh
export HADOOP_PREFIX=/bdapps/hadoop
export PATH=$PATH:${HADOOP_PREFIX}/bin:${HADOOP_PREFIX}/sbin
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_YARN_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
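The new variables only take effect in fresh login shells; to pick them up in the current session, source the file and verify:

source /etc/profile.d/hadoop.sh
echo $HADOOP_PREFIX    # should print /bdapps/hadoop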
Configure node1
Create the directories:
[root@OPS01-LINTEST01 ~]# mkdir -pv /bdapps /data/hadoop/hdfs/{nn,snn,dn}
mkdir: created directory `/bdapps'
mkdir: created directory `/data/hadoop'
mkdir: created directory `/data/hadoop/hdfs'
mkdir: created directory `/data/hadoop/hdfs/nn'
mkdir: created directory `/data/hadoop/hdfs/snn'
mkdir: created directory `/data/hadoop/hdfs/dn'
Set permissions and lay out the installation directory (the hadoop-2.7.5 tarball has already been unpacked into /bdapps):
chown -R hadoop:hadoop /data/hadoop/hdfs/
cd /bdapps/
[root@OPS01-LINTEST01 bdapps]# ls
hadoop-2.7.5
[root@OPS01-LINTEST01 bdapps]# ln -sv hadoop-2.7.5 hadoop
[root@OPS01-LINTEST01 bdapps]# cd hadoop
[root@OPS01-LINTEST01 hadoop]# mkdir logs
Change the owner and group of everything under the hadoop directory to hadoop, and make the logs directory group-writable:
[root@OPS01-LINTEST01 hadoop]# chown -R hadoop:hadoop ./*
[root@OPS01-LINTEST01 hadoop]# ll
total 140
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 bin
drwxr-xr-x 3 hadoop hadoop  4096 Dec 16 09:12 etc
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 include
drwxr-xr-x 3 hadoop hadoop  4096 Dec 16 09:12 lib
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 libexec
-rw-r--r-- 1 hadoop hadoop 86424 Dec 16 09:12 LICENSE.txt
drwxr-xr-x 2 hadoop hadoop  4096 Mar 27 14:51 logs
-rw-r--r-- 1 hadoop hadoop 14978 Dec 16 09:12 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop  1366 Dec 16 09:12 README.txt
drwxr-xr-x 2 hadoop hadoop  4096 Dec 16 09:12 sbin
drwxr-xr-x 4 hadoop hadoop  4096 Dec 16 09:12 share
[root@OPS01-LINTEST01 hadoop]# chmod g+w logs
Configure core-site.xml (the configuration files live under /bdapps/hadoop/etc/hadoop/):
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
        <final>true</final>
    </property>
</configuration>
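With the environment from step 3 loaded, you can sanity-check that this value is picked up:

hdfs getconf -confKey fs.defaultFS    # should print hdfs://master:8020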
Configure yarn-site.xml
Note: yarn-site.xml holds the ResourceManager-related configuration. In production this role should be deployed separately from the NameNode, so the master in this file would normally not be the same machine as the master in core-site.xml. Because I am simulating a distributed deployment in a test environment with the NameNode and ResourceManager on one machine, this file is configured on the NameNode server as well.
<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>
</configuration>
Configure hdfs-site.xml
Set dfs.replication, the number of replicas to keep:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/hdfs/dn</value>
    </property>
    <property>
        <name>fs.checkpoint.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
    <property>
        <name>fs.checkpoint.edits.dir</name>
        <value>file:///data/hadoop/hdfs/snn</value>
    </property>
</configuration>
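A side note: fs.checkpoint.dir and fs.checkpoint.edits.dir are the old Hadoop 1.x property names; Hadoop 2.x still honors them through its deprecation mapping, but the current names are dfs.namenode.checkpoint.dir and dfs.namenode.checkpoint.edits.dir.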
Configure mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Configure the slaves file. This file lists the worker nodes; start-dfs.sh and start-yarn.sh read it on the master to decide where to start DataNodes and NodeManagers:
node2
node3
node4
Configure hadoop-env.sh, setting JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.8.0_151
Configure node2, node3, and node4
### Create the Hadoop install directory, data directories, and logs directory, and fix ownership
mkdir -pv /bdapps /data/hadoop/hdfs/{nn,snn,dn}
chown -R hadoop:hadoop /data/hadoop/hdfs/
tar zxf hadoop-2.7.5.tar.gz -C /bdapps/
cd /bdapps
ln -sv hadoop-2.7.5 hadoop
cd hadoop
mkdir logs
chmod g+w logs
chown -R hadoop:hadoop ./*
Update the configuration files
Since the Hadoop configuration files were already modified on the master node (node1), they can be copied straight from the master to node2, node3, and node4:
scp /bdapps/hadoop/etc/hadoop/* node2:/bdapps/hadoop/etc/hadoop/
scp /bdapps/hadoop/etc/hadoop/* node3:/bdapps/hadoop/etc/hadoop/
scp /bdapps/hadoop/etc/hadoop/* node4:/bdapps/hadoop/etc/hadoop/
Start the Hadoop services
On the master node
As in pseudo-distributed mode, the directory the NameNode uses to store its data must be initialized before the HDFS cluster's NN starts for the first time. If the directory named by dfs.namenode.name.dir in hdfs-site.xml does not exist, the format command will create it; if it already exists, make sure its permissions are set correctly, because the format operation wipes everything inside it and builds a fresh file system. Run the following on the master node as the hadoop user:
hdfs namenode -format
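If the format succeeds, the output should include a line similar to the following (exact wording varies by version):

INFO common.Storage: Storage directory /data/hadoop/hdfs/nn has been successfully formatted.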
There are two ways to start the cluster nodes:
1. Log in to each node and start its services by hand
2. Start the whole cluster from the master node
When the cluster is large, starting every service on every node separately gets tedious, so Hadoop provides start-dfs.sh and stop-dfs.sh to start and stop the entire HDFS cluster, and start-yarn.sh and stop-yarn.sh to do the same for the YARN cluster.
[hadoop@node1 hadoop]$ start-dfs.sh
Starting namenodes on [master]
hadoop@master's password:
master: starting namenode, logging to /bdapps/hadoop/logs/hadoop-hadoop-namenode-node1.out
node4: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node4.out
node2: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node2.out
node3: starting datanode, logging to /bdapps/hadoop/logs/hadoop-hadoop-datanode-node3.out
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /bdapps/hadoop/logs/hadoop-hadoop-secondarynamenode-node1.out
[hadoop@node1 hadoop]$ jps
69127 NameNode
69691 Jps
69566 SecondaryNameNode
Log in to the DataNode machines and check the processes:
[root@node2 ~]# jps
66968 DataNode
67436 Jps
[root@node3 ~]# jps
109281 DataNode
109991 Jps
[root@node4 ~]# jps
108753 DataNode
109674 Jps
Stop all services in the cluster:
[hadoop@node1 hadoop]$ stop-dfs.sh
Stopping namenodes on [master]
master: stopping namenode
node4: stopping datanode
node2: stopping datanode
node3: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
Testing
Upload a file from the master node:
[hadoop@node1 ~]$ hdfs dfs -mkdir /test
[hadoop@node1 ~]$ hdfs dfs -put /etc/fstab /test/fstab
[hadoop@node1 ~]$ hdfs dfs -ls -R /test
-rw-r--r--   2 hadoop supergroup        223 2018-03-27 16:48 /test/fstab
Log in to node2:
[hadoop@node2 ~]$ ls /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/
[hadoop@node2 ~]$
The fstab file is not here.
Log in to node3; the fstab file is there:
[hadoop@node3 ~]$ cat /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/subdir0/subdir0/blk_1073741825
UUID=dbcbab6c-2836-4ecd-8d1b-2da8fd160694 /    ext4    defaults    1 1
tmpfs     /dev/shm    tmpfs    defaults          0 0
devpts    /dev/pts    devpts   gid=5,mode=620    0 0
sysfs     /sys        sysfs    defaults          0 0
proc      /proc       proc     defaults          0 0
dev/vdb1  none        swap     sw                0 0
Log in to node4; the fstab file is there as well:
[hadoop@node4 root]$ cat /data/hadoop/hdfs/dn/current/BP-1194588190-172.16.2.3-1522138946011/current/finalized/subdir0/subdir0/blk_1073741825
UUID=dbcbab6c-2836-4ecd-8d1b-2da8fd160694 /    ext4    defaults    1 1
tmpfs     /dev/shm    tmpfs    defaults          0 0
devpts    /dev/pts    devpts   gid=5,mode=620    0 0
sysfs     /sys        sysfs    defaults          0 0
proc      /proc       proc     defaults          0 0
dev/vdb1  none        swap     sw                0 0
Conclusion: because dfs.replication is set to 2, only two copies of the data are kept, so the file's block exists on node3 and node4 but not on node2.
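Instead of digging through the DataNode directories by hand, you can also ask HDFS directly which DataNodes hold each block replica:

hdfs fsck /test/fstab -files -blocks -locations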
Start the YARN cluster
Log in to node1 (master) and run start-yarn.sh:
[hadoop@node1 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-resourcemanager-node1.out
node4: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node4.out
node2: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /bdapps/hadoop/logs/yarn-hadoop-nodemanager-node3.out
[hadoop@node1 ~]$ jps
78115 ResourceManager
71574 NameNode
71820 SecondaryNameNode
78382 Jps
Log in to node2 and run jps; the NodeManager service is now running:
[ansible@node2 ~]$ sudo su - hadoop
[hadoop@node2 ~]$ jps
68800 DataNode
75400 Jps
74856 NodeManager
Check the web UI consoles
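In Hadoop 2.x the NameNode web UI listens on port 50070 by default, and the ResourceManager web UI is the yarn.resourcemanager.webapp.address configured above, so in this setup the consoles should be reachable at http://master:50070 (HDFS) and http://master:8088 (YARN).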
Other references:
http://www.codeceo.com/understand-hadoop-hbase-hive-spark-distributed-system-architecture.html