千家信息网

Redeploying HDFS as the hadoop User

Posted on 2025-01-31 | Author: 千家信息网 editor

Preface:
In the earlier article https://www.jianshu.com/p/eeae2f37a48c we deployed HDFS as the root user. In production, each component is normally started by its own dedicated user, so this article shows how to redeploy pseudo-distributed HDFS as the hadoop user.

1. Preparation

Create the hadoop user and configure passwordless SSH login.
See: https://www.jianshu.com/p/589bb43e0282
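The linked article is the authoritative procedure; as a rough sketch (assuming a stock Linux host with OpenSSH, run as root), the preparation step looks like this:

```shell
# Sketch only -- see the linked article for the full procedure. Run as root.
if [ "$(id -u)" -eq 0 ]; then
  useradd -m hadoop                  # create the hadoop user with a home directory
  su - hadoop -c '
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q           # key with empty passphrase
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    ssh -o StrictHostKeyChecking=no localhost true        # should not prompt for a password
  '
else
  echo "run this as root" >&2
fi
```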

2. Stop the HDFS processes started by root and delete the storage files under /tmp
[root@hadoop000 hadoop-2.8.1]# pwd
/opt/software/hadoop-2.8.1
[root@hadoop000 hadoop-2.8.1]# jps
32244 NameNode
32350 DataNode
32558 SecondaryNameNode
1791 Jps
[root@hadoop000 hadoop-2.8.1]# sbin/stop-dfs.sh
Stopping namenodes on [hadoop000]
hadoop000: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@hadoop000 hadoop-2.8.1]# jps
2288 Jps
[root@hadoop000 hadoop-2.8.1]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
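Deleting /tmp/hadoop-* while the daemons are still up would pull the storage out from under a running filesystem, so it is worth guarding the delete. A sketch (the helper name is mine, not part of Hadoop):

```shell
# Hypothetical helper: refuse to wipe HDFS storage while any HDFS daemon is alive.
cleanup_hdfs_tmp() {
  if ps -ef | grep -v grep | grep -qE 'NameNode|DataNode|SecondaryNameNode'; then
    echo "HDFS daemons still running; run sbin/stop-dfs.sh first" >&2
    return 1
  fi
  rm -rf "$@"    # only reached once the daemons are down
}

# Typical call for the default layout used in this article:
# cleanup_hdfs_tmp /tmp/hadoop-* /tmp/hsperfdata_*
```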
3. Change the file owner
[root@hadoop000 software]# pwd
/opt/software
[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1
4. Switch to the hadoop user and edit the relevant configuration files
# Step 1:
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.6.217:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>192.168.6.217:50091</value>
    </property>
</configuration>

# Step 2:
[hadoop@hadoop000 hadoop]$ vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.6.217:9000</value>
    </property>
</configuration>

# Step 3:
[hadoop@hadoop000 hadoop]$ vi slaves
192.168.6.217
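A stray angle bracket in these files makes the NameNode fail at startup with a parse error, so after editing it is worth confirming each file is still well-formed XML. A quick check using python3's standard-library parser (xmllint works just as well, if installed; the config path matches the layout above):

```shell
# Check the edited config files parse as XML before starting HDFS.
HADOOP_CONF="${HADOOP_CONF:-/opt/software/hadoop-2.8.1/etc/hadoop}"
for f in "$HADOOP_CONF"/core-site.xml "$HADOOP_CONF"/hdfs-site.xml; do
  if python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' "$f" 2>/dev/null; then
    echo "$f: OK"
  else
    echo "$f: not well-formed XML" >&2
  fi
done
```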
5. Format and start
[hadoop@hadoop000 hadoop-2.8.1]$ pwd
/opt/software/hadoop-2.8.1
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs namenode -format
[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out
192.168.6.217: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out
Starting secondary namenodes [hadoop000]
hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out
[hadoop@hadoop000 hadoop-2.8.1]$ jps
3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode
# All three HDFS processes are now started by the hadoop user.
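Beyond jps, a quick way to confirm the daemons now belong to the hadoop user and that the new filesystem accepts writes is a small smoke test, run from the hadoop-2.8.1 directory as the hadoop user (paths below are illustrative):

```shell
# Run from the hadoop-2.8.1 directory as the hadoop user.
if [ -x bin/hdfs ]; then
  # Each daemon line should show "hadoop" in the first (UID) column.
  ps -ef | grep -E 'NameNode|DataNode|SecondaryNameNode' | grep -v grep || true

  # Write/read round trip through the new filesystem (-put - reads stdin).
  bin/hdfs dfs -mkdir -p /user/hadoop
  echo smoke-test | bin/hdfs dfs -put - /user/hadoop/smoke.txt
  bin/hdfs dfs -cat /user/hadoop/smoke.txt
else
  echo "bin/hdfs not found; run from the hadoop-2.8.1 directory" >&2
fi
```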