
Hadoop & Spark Installation (Part 1)


Hardware environment:

hddcluster1 10.0.0.197 redhat7

hddcluster2 10.0.0.228 centos7 (this node serves as the master)

hddcluster3 10.0.0.202 redhat7

hddcluster4 10.0.0.181 centos7

Software environment:

Disable the firewall (firewalld) on all nodes; see the commands after this list

openssh-clients

openssh-server

java-1.8.0-openjdk

java-1.8.0-openjdk-devel

hadoop-2.7.3.tar.gz
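The firewall note above can be handled as follows; a minimal sketch for CentOS 7 / RHEL 7, assuming firewalld is the active firewall (run on every node with root privileges):

sudo systemctl stop firewalld      # stop the firewall for the current session
sudo systemctl disable firewalld   # keep it disabled across reboots
sudo systemctl status firewalld    # should report "inactive (dead)"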

Workflow:

  1. Pick one machine to act as the Master

  2. On the Master node, create the hadoop user, install the SSH server, and install the Java environment

  3. Install Hadoop on the Master node and finish its configuration

  4. On the other Slave nodes, create the hadoop user, install the SSH server, and install the Java environment

  5. Copy the /usr/local/hadoop directory from the Master node to the other Slave nodes

  6. Start Hadoop on the Master node

# Mapping between node names and their IP addresses
[hadoop@hddcluster2 ~]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.228      hddcluster2
10.0.0.197      hddcluster1
10.0.0.202      hddcluster3
10.0.0.181      hddcluster4
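A quick sanity check (not in the original write-up) to confirm that every hostname resolves to the IP listed above; run it from any node once /etc/hosts is in place:

for h in hddcluster1 hddcluster2 hddcluster3 hddcluster4; do
    ping -c 1 "$h"     # each name should answer from the matching 10.0.0.x address
done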
# Create the hadoop user
su                                # log in as root, as mentioned above
useradd -m hadoop -s /bin/bash    # create the new user hadoop
passwd hadoop                     # set the password for hadoop
visudo                            # add "hadoop ALL=(ALL) ALL" below the line "root ALL=(ALL) ALL"
# Log in as the hadoop user, install SSH, and configure passwordless SSH login
[hadoop@hddcluster2 ~]$ rpm -qa | grep ssh
[hadoop@hddcluster2 ~]$ sudo yum install openssh-clients
[hadoop@hddcluster2 ~]$ sudo yum install openssh-server
[hadoop@hddcluster2 ~]$ cd ~/.ssh/                       # if this directory does not exist, run "ssh localhost" once first
[hadoop@hddcluster2 ~]$ ssh-keygen -t rsa                # press Enter at every prompt
[hadoop@hddcluster2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub localhost   # authorize the key
[hadoop@hddcluster2 ~]$ chmod 600 ./authorized_keys      # fix the file permissions
[hadoop@hddcluster2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hddcluster1
[hadoop@hddcluster2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hddcluster3
[hadoop@hddcluster2 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hddcluster4
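As a quick check that passwordless login really works from the Master, a small loop like the following should print each hostname without prompting for a password (assuming the hadoop user exists on every node):

for h in hddcluster1 hddcluster2 hddcluster3 hddcluster4; do
    ssh hadoop@"$h" hostname    # no password prompt expected
done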
# Extract Hadoop into /usr/local/hadoop
[hadoop@hddcluster2 ~]$ sudo tar -zxf hadoop-2.7.3.tar.gz -C /usr/local/
[hadoop@hddcluster2 ~]$ sudo mv /usr/local/hadoop-2.7.3 /usr/local/hadoop
[hadoop@hddcluster2 ~]$ sudo chown -R hadoop:hadoop /usr/local/hadoop
cd /usr/local/hadoop
./bin/hadoop version

# Install the Java environment
[hadoop@hddcluster2 ~]$ sudo yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel
[hadoop@hddcluster2 ~]$ rpm -ql java-1.8.0-openjdk-devel | grep '/bin/javac'
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/bin/javac
[hadoop@hddcluster2 ~]$ vim ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib:$HADOOP_PREFIX/lib/native"

# Test the Java environment
source ~/.bashrc
java -version
$JAVA_HOME/bin/java -version   # should print the same output as "java -version"
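To confirm that the environment variables from ~/.bashrc are actually in effect, a quick check such as the following should point at the paths configured above:

source ~/.bashrc
echo $JAVA_HOME                 # expected: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64
echo $HADOOP_HOME               # expected: /usr/local/hadoop
which hadoop                    # expected: /usr/local/hadoop/bin/hadoop
hadoop version                  # should report Hadoop 2.7.3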
# Edit the Hadoop configuration files
[hadoop@hddcluster2 hadoop]$ pwd
/usr/local/hadoop/etc/hadoop
[hadoop@hddcluster2 hadoop]$ cat core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hddcluster2:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/usr/local/hadoop/tmp</value>
                <description>Abase for other temporary directories.</description>
        </property>
</configuration>
[hadoop@hddcluster2 hadoop]$ cat hdfs-site.xml
<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hddcluster2:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/local/hadoop/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/usr/local/hadoop/tmp/dfs/data</value>
        </property>
</configuration>
[hadoop@hddcluster2 hadoop]$ cat mapred-site.xml
<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hddcluster2:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hddcluster2:19888</value>
        </property>
</configuration>
[hadoop@hddcluster2 hadoop]$ cat yarn-site.xml
<configuration>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hddcluster2</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>
[hadoop@hddcluster2 hadoop]$ cat slaves
hddcluster1
hddcluster2
hddcluster3
hddcluster4
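Before copying the configuration to the other nodes, an optional way to confirm that Hadoop picks up these values is the hdfs getconf utility shipped with Hadoop 2.x; this is a sanity check, not a step from the original procedure:

hdfs getconf -confKey fs.defaultFS        # expected: hdfs://hddcluster2:9000
hdfs getconf -confKey dfs.replication     # expected: 3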
$ cd /usr/local
$ sudo rm -r ./hadoop/tmp       # remove Hadoop temporary files
$ sudo rm -r ./hadoop/logs/*    # remove the log files
$ tar -zcf ~/hadoop.master.tar.gz ./hadoop   # compress first, then transfer
$ cd ~
$ scp ./hadoop.master.tar.gz hddcluster1:/home/hadoop
$ scp ./hadoop.master.tar.gz hddcluster3:/home/hadoop
$ scp ./hadoop.master.tar.gz hddcluster4:/home/hadoop
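An optional integrity check (not part of the original steps): compare checksums on the Master and on each Slave after the scp finishes, for example:

md5sum ~/hadoop.master.tar.gz    # run on the Master and on every Slave; the hashes should match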
On each Slave node, install the same software environment, configure ~/.bashrc as above, and then unpack the Hadoop archive:

sudo tar -zxf ~/hadoop.master.tar.gz -C /usr/local
sudo chown -R hadoop /usr/local/hadoop
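Instead of logging in to every Slave by hand, the extraction can also be driven from the Master over SSH; a sketch assuming the hadoop user may run sudo on the Slaves (the -t flag allocates a terminal so sudo can prompt for a password):

for h in hddcluster1 hddcluster3 hddcluster4; do
    ssh -t hadoop@"$h" "sudo tar -zxf ~/hadoop.master.tar.gz -C /usr/local && sudo chown -R hadoop /usr/local/hadoop"
done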
[hadoop@hddcluster2 ~]$ hdfs namenode -format   # required only for the first run; skip it afterwards

Hadoop can now be started. The startup commands must be run on the Master node:

$ start-dfs.sh
$ start-yarn.sh
$ mr-jobhistory-daemon.sh start historyserver

The jps command shows which daemons are running on each node. If everything is correct, the Master node shows the NameNode, ResourceManager, SecondaryNameNode and JobHistoryServer processes. In addition, run hdfs dfsadmin -report on the Master node to check whether the DataNodes started correctly; if "Live datanodes" is not 0, the cluster started successfully.

[hadoop@hddcluster2 ~]$ hdfs dfsadmin -report
Configured Capacity: 2125104381952 (1.93 TB)
Present Capacity: 1975826509824 (1.80 TB)
DFS Remaining: 1975824982016 (1.80 TB)
DFS Used: 1527808 (1.46 MB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (4):

You can also check the status of the NameNode and DataNodes through the web UI at http://hddcluster2:50070/. If startup fails, check the startup logs to find the cause.
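To inspect all four nodes at once instead of logging in to each one, a small loop over SSH works as well (assuming jps is on the PATH of every node, which the openjdk-devel package normally provides):

for h in hddcluster1 hddcluster2 hddcluster3 hddcluster4; do
    echo "== $h =="
    ssh hadoop@"$h" jps    # Slaves should list DataNode and NodeManager
done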
Running jps on a Slave node shows the DataNode and NodeManager processes.
Test a distributed Hadoop job. First create the user directory on HDFS:

hdfs dfs -mkdir -p /user/hadoop

Copy the configuration files in /usr/local/hadoop/etc/hadoop into the distributed file system as the input files:

hdfs dfs -mkdir input
hdfs dfs -put /usr/local/hadoop/etc/hadoop/*.xml input

Checking the DataNode status (the used capacity changes) confirms that the input files were indeed copied to the DataNodes. Now the MapReduce job can be run:

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'

Wait for the job to finish and then check the output.
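The results of the grep example can be read straight from HDFS, for instance:

hdfs dfs -cat output/*           # print the matched dfs[a-z.]+ terms and their counts
hdfs dfs -get output ./output    # or copy the result directory back to the local file system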
Hadoop start commands:

start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver

Hadoop stop commands:

stop-dfs.sh
stop-yarn.sh
mr-jobhistory-daemon.sh stop historyserver


PS: If one or two nodes in the cluster fail to start, first try deleting the Hadoop temporary files:

cd /usr/local

sudo rm -r ./hadoop/tmp

sudo rm -r ./hadoop/logs/*

Then run

hdfs namenode -format

and start Hadoop again.
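Since every node uses the same layout, this cleanup can also be scripted from the Master; a sketch assuming passwordless SSH and sudo rights for the hadoop user on each node:

for h in hddcluster1 hddcluster2 hddcluster3 hddcluster4; do
    ssh -t hadoop@"$h" "sudo rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/logs/*"
done

Keep in mind that running hdfs namenode -format afterwards erases any data already stored in HDFS.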


This article follows the tutorial at the site below, and the steps were verified to work:

http://www.powerxing.com/install-hadoop-cluster/
