
Hadoop 2.x Installation Steps

Published: 2024-10-21 By the 千家信息网 editorial team

This article walks through the installation steps for Hadoop 2.x. Many people run into questions about this in day-to-day work, so the editor has gone through a range of sources and put together a simple, practical procedure. Hopefully it helps clear up your questions about installing Hadoop 2.x. Now, let's work through it together!

I. Installation and Configuration

1. Create the hadoop user (I added it to the root group; you could also create a separate hadoop group)

[root@hftclclw0001 ~]# useradd hadoop
[root@hftclclw0001 ~]# usermod -g root hadoop
[root@hftclclw0001 ~]# cat /etc/passwd
......
hadoop:x:50295:0::/home/hadoop:/bin/bash
[root@hftclclw0001 ~]# chmod 644 /etc/sudoers
[root@hftclclw0001 ~]# vi /etc/sudoers
......
root    ALL=(ALL)  ALL
hadoop  ALL=(ALL)  ALL
...

2. Passwordless SSH login

[hadoop@hftclclw0001 hadoop]$ ssh-keygen -t rsa
[hadoop@hftclclw0001 hadoop]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hftclclw0001 hadoop]$ tree ~/.ssh/
/home/hadoop/.ssh/
├── authorized_keys
├── id_rsa
├── id_rsa.pub
└── known_hosts
0 directories, 4 files

Do the same on each of the other machines, and append the public key (id_rsa.pub) to the authorized_keys file on every machine. I used scp to copy the key to the other machines, then cat to append it to their authorized_keys files.
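
The copy-and-append step can be scripted. Below is a minimal sketch, assuming the other two nodes are reachable as hftclclw0002 and hftclclw0003 (hypothetical hostnames; substitute your own):

# Run on the master as the hadoop user; the hostnames are placeholders.
for host in hftclclw0002 hftclclw0003; do
    # Copy the master's public key over (asks for the hadoop password once per host).
    scp ~/.ssh/id_rsa.pub hadoop@${host}:/tmp/master_id_rsa.pub
    # Append it to that host's authorized_keys and tighten permissions,
    # since sshd ignores authorized_keys files that are group/world writable.
    ssh hadoop@${host} 'mkdir -p ~/.ssh && cat /tmp/master_id_rsa.pub >> ~/.ssh/authorized_keys \
        && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys && rm /tmp/master_id_rsa.pub'
done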

3. Download hadoop-2.x.y.tar.gz

[root@hftclclw0001 hadoop]# pwd
/home/hadoop
[root@hftclclw0001 hadoop]# tar -zxvf hadoop-2.7.1.tar.gz
[root@hftclclw0001 hadoop]# ll
total 546584
drwx------ 11 hadoop root      4096 Oct 20 09:05 hadoop-2.7.1
-rw-------  1 hadoop root 210606807 Oct 20 09:00 hadoop-2.7.1.tar.gz
drwx------ 13 hadoop root      4096 Oct 20 09:22 spark-1.5.1-bin-hadoop2.6
-rw-------  1 hadoop root 280901736 Oct 20 09:19 spark-1.5.1-bin-hadoop2.6.tgz
drwx------ 22 hadoop root      4096 Oct 21 00:07 sqoop-1.99.6-bin-hadoop200
-rw-------  1 hadoop root  68177818 May  5 22:34 sqoop-1.99.6-bin-hadoop200.tar.gz

4. Configure hadoop-2.x.y

[hadoop@hftclclw0001 hadoop]$ pwd
/home/hadoop/hadoop-2.7.1/etc/hadoop
[hadoop@hftclclw0001 hadoop]$ vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/latest                  => set JAVA_HOME

[hadoop@hftclclw0001 hadoop]$ vi core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop-2.7.1/tmp</value>   => must be created by hand; defaults to /tmp
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://{master:IP}:9000</value>
    </property>
</configuration>

[hadoop@hftclclw0001 hadoop]$ vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.http.address</name>
        <value>{master:ip}:50070</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>                               => I have 3 machines here: 2 datanodes and 1 namenode
    </property>
</configuration>

[hadoop@hftclclw0001 hadoop]$ vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

[hadoop@hftclclw0001 hadoop]$ vi yarn-env.sh
...
export JAVA_HOME=/usr/java/latest
...

[hadoop@hftclclw0001 hadoop]$ vi yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>     => must be set; the nodemanagers contact the resourcemanager at startup
        <value>{master:ip}</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

[hadoop@hftclclw0001 hadoop]$ vi masters               => determines which node runs the secondary namenode
{master:ip}
[hadoop@hftclclw0001 hadoop]$ vi slaves                => determines which nodes run the datanodes
{slave-1:ip}
{slave-2:ip}
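
Since hadoop.tmp.dir points at a directory that does not exist yet, create it now, before formatting the namenode; the scp in step 5 will carry it to the other nodes. A minimal sketch:

[hadoop@hftclclw0001 hadoop]$ mkdir -p /home/hadoop/hadoop-2.7.1/tmp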

5. Copy to the other machines

[hadoop@hftclclw0001 ~]$ pwd
/home/hadoop
[hadoop@hftclclw0001 ~]$ scp -r hadoop-2.7.1 hadoop@{ip}:/home/hadoop
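
Run the scp once per node; with the same hypothetical hostnames as in the SSH sketch above, this can be a short loop:

# Ship the fully configured installation to each node (hostnames are placeholders).
for host in hftclclw0002 hftclclw0003; do
    scp -r /home/hadoop/hadoop-2.7.1 hadoop@${host}:/home/hadoop
done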

6. Start the cluster

[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop namenode -format
[hadoop@hftclclw0001 hadoop-2.7.1]$ pwd
/home/hadoop/hadoop-2.7.1
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./sbin/start-dfs.sh    => start HDFS; check the processes with jps:
                                                              master: NameNode, SecondaryNameNode
                                                              slaves: DataNode
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./sbin/start-yarn.sh   => start YARN
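
For reference, after both scripts finish, jps on the master should show something like the following (the PIDs are illustrative, not from the source):

[hadoop@hftclclw0001 hadoop-2.7.1]$ jps
12001 NameNode
12203 SecondaryNameNode
12450 ResourceManager
12789 Jps

On each slave you would expect DataNode and NodeManager entries instead.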

7. Verification

a. jps => check that all the expected daemons are running

b. netstat => check the listening ports (see the sketch after the listing in step d below)

c. web UI => check the overall state of the cluster (covered by the same sketch)

d. You can also operate on HDFS directly, or submit an MR job:

[hadoop@hftclclw0001 hadoop-2.7.1]$ pwd
/home/hadoop/hadoop-2.7.1
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hdfs dfs -ls /
......
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hdfs dfs -mkdir /test
......
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount {in} {out}
[hadoop@hftclclw0001 hadoop-2.7.1]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 10
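
As a concrete sketch of checks b and c, assuming the ports configured above plus YARN's default ResourceManager web port 8088:

[hadoop@hftclclw0001 hadoop-2.7.1]$ netstat -tlnp | grep -E '9000|50070|8088'
[hadoop@hftclclw0001 hadoop-2.7.1]$ curl -s http://{master:ip}:50070/ | head          => NameNode web UI
[hadoop@hftclclw0001 hadoop-2.7.1]$ curl -s http://{master:ip}:8088/cluster | head    => ResourceManager web UI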

II. Troubleshooting

1. File write permission issues

When an external program writes to HDFS, the user is subject to permission checks by default. With the configuration above, only the hadoop account can write to HDFS.

dfs.permissions.enabled=true turns on these permission checks; change it to false.

dfs.datanode.data.dir.perm=700 sets the permissions of the datanode's local data directories; change it to 755.
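
A minimal sketch of the corresponding hdfs-site.xml entries (add them on every node, then restart HDFS):

[hadoop@hftclclw0001 hadoop]$ vi hdfs-site.xml
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>              => disable HDFS permission checking
    </property>
    <property>
        <name>dfs.datanode.data.dir.perm</name>
        <value>755</value>                => permissions for the datanode's local data directories
    </property>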

That wraps up our look at the Hadoop 2.x installation steps; hopefully it has cleared up any confusion. Pairing theory with practice is the best way to learn, so go give it a try! For more articles like this, keep following the site; the editor will keep working to bring you more practical content!
