
Building a Hadoop Cluster from Scratch

I. Introduction

These days, any mention of big data immediately brings Hadoop to mind. Hadoop used to be just a piece of software, a system; today the name more often refers to an entire big data ecosystem. In this article, Hadoop means the software itself, one of the Apache Foundation's top-level projects. At its core it solves the two central problems of big data: how to store it (HDFS) and how to compute over it (MapReduce).

Official documentation: http://hadoop.apache.org

Enough preamble, on to the real work.

II. Preparing the deployment environment

1. Servers

Four physical machines are used for this Hadoop cluster:

Master     CPU (24 cores)    MEM (48 GB)    DISK (2 TB * 1 -> RAID10)
Slave01    CPU (24 cores)    MEM (12 GB)    DISK (1 TB * 4)
Slave02    CPU (12 cores)    MEM (24 GB)    DISK (1 TB * 4)
Slave03    CPU (24 cores)    MEM (16 GB)    DISK (1 TB * 4)
# Still pretty modest hardware~

2. Operating system

CentOS 6.9 x86_64

III. System initialization

1. Configure hostnames, disable unneeded services, and turn off SELinux and iptables.

2. Raise the limit on open file descriptors

shell > tail -4 /etc/security/limits.conf
* - nofile 65536
# End of file

3. Disable transparent huge pages

shell > tail -1 /etc/rc.local
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag

4. Create the Unix user account

shell > useradd hadoop && echo hadoop | passwd --stdin hadoop
# The account must be created on every server

5. Format and mount the data disks

shell > mkdir /dfs && chown -R hadoop.hadoop /dfs
# Run on the Master

shell > mkdir -p /dfs/{disk1,disk2,disk3}
shell > mkfs.ext4 /dev/sdb            # repeat for /dev/sdc and /dev/sdd
shell > tail -3 /etc/fstab
/dev/sdb                /dfs/disk1              ext4    defaults,noatime 0 0
/dev/sdc                /dfs/disk2              ext4    defaults,noatime 0 0
/dev/sdd                /dfs/disk3              ext4    defaults,noatime 0 0
shell > mount -a
shell > chown -R hadoop.hadoop /dfs
# Run on every Slave

6. Time synchronization

shell > crontab -l
0 */2 * * * /usr/sbin/ntpdate us.pool.ntp.org && /sbin/hwclock -w > /dev/null
# Time synchronization matters: we once had HBase RegionServers refuse to start because the clocks had drifted apart.
# init 6 to reboot and apply everything above

IV. Building the Hadoop cluster

1. Configure passwordless SSH login

root   shell > ssh-keygen                                    # generate a key pair
root   shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave01
root   shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave02
root   shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave03

hadoop shell > ssh-keygen                                    # generate a key pair
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub master
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave01
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave02
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub slave03

# Hadoop's control scripts (not the Hadoop daemons themselves) rely on SSH to start and stop the services.
# Tip: ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@slave01

2. Install the JDK

shell > rpm -ivh jdk-8u161-linux-x64.rpm
shell > ansible slave -m shell -a 'rpm -ivh jdk-8u161-linux-x64.rpm'
shell > vim /etc/profile
export JAVA_HOME=/usr/java/default
shell > source /etc/profile
shell > java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
# The JDK must be installed, and JAVA_HOME configured, on every machine.

3. Download and install Hadoop

shell > wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.9.1/hadoop-2.9.1.tar.gz
shell > tar xf hadoop-2.9.1.tar.gz -C /usr/local
shell > chown -R hadoop.hadoop /usr/local/hadoop-2.9.1
shell > tail -2 /etc/profile
export HADOOP_HOME=/usr/local/hadoop-2.9.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
shell > source /etc/profile

4. Configure Hadoop

1> hadoop-env.sh: the global environment file; values set here can be overridden by yarn-env.sh and mapred-env.sh

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/default          # JAVA_HOME
export HADOOP_HEAPSIZE=1000                 # daemon heap size (MB)

# Hadoop allocates 1000 MB of memory to each daemon by default. A conservative rule of thumb is that the NameNode needs about 1000 MB per million blocks.
# This cluster has 3 data nodes with 3 TB of disk each, a 256 MB block size and 3 replicas, i.e. roughly 12,000 blocks -> 3*3,000,000 MB/(256 MB*3), so the default is more than enough.
# Tip: each daemon's heap can also be sized individually, as sketched below.
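For reference, here is a minimal sketch of what per-daemon heap sizing could look like with the environment variables found in the stock Hadoop 2.x hadoop-env.sh and yarn-env.sh (HADOOP_NAMENODE_OPTS, HADOOP_DATANODE_OPTS, YARN_RESOURCEMANAGER_HEAPSIZE, YARN_NODEMANAGER_HEAPSIZE); the -Xmx and MB values are placeholders to adapt to your hardware, not recommendations.

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/hadoop-env.sh
export HADOOP_NAMENODE_OPTS="-Xmx2048m $HADOOP_NAMENODE_OPTS"        # NameNode heap only (placeholder value)
export HADOOP_DATANODE_OPTS="-Xmx1024m $HADOOP_DATANODE_OPTS"        # DataNode heap only (placeholder value)

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/yarn-env.sh
export YARN_RESOURCEMANAGER_HEAPSIZE=2048                            # ResourceManager heap in MB (placeholder value)
export YARN_NODEMANAGER_HEAPSIZE=1024                                # NodeManager heap in MB (placeholder value)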
2> core-site.xml

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.10.50</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///dfs/tmp/hadoop-${user.name}</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>

# Template: HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml

3> hdfs-site.xml

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///dfs/disk1/data,/dfs/disk2/data,/dfs/disk3/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///dfs/namesecondary</value>
    </property>
</configuration>

# Template: HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

4> yarn-site.xml

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.10.50</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:///dfs/disk1/nm-local-dir,/dfs/disk2/nm-local-dir,/dfs/disk3/nm-local-dir</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>10240</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>10</value>
    </property>
</configuration>

# Template: HADOOP_HOME/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

5> mapred-site.xml

shell > vim /usr/local/hadoop-2.9.1/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
    </property>
</configuration>

# Template: HADOOP_HOME/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml

6> slaves

shell > tail /usr/local/hadoop-2.9.1/etc/hadoop/slaves
192.168.10.51
192.168.10.52
192.168.10.53

5. Sync the installation to the Slaves

shell > ansible slave -m synchronize -a 'src=/usr/local/hadoop-2.9.1 dest=/usr/local/'

6. Start Hadoop

shell > su - hadoop -c "hdfs namenode -format"        # format the filesystem
18/06/03 16:24:39 INFO common.Storage: Storage directory /dfs/name has been successfully formatted.
# this line indicates success

shell > su - hadoop
hadoop shell > start-dfs.sh           # start HDFS
hadoop shell > start-yarn.sh          # start YARN
hadoop shell > jps                    # the Master's services are up
14032 ResourceManager
13745 SecondaryNameNode
14364 Jps
13406 NameNode
hadoop shell > ansible slave -m shell -a 'jps'        # the Slaves' services are up
slave02 | SUCCESS | rc=0 >>
4324 DataNode
4936 Jps
4572 NodeManager

slave01 | SUCCESS | rc=0 >>
4807 DataNode
5065 NodeManager
5455 Jps

slave03 | SUCCESS | rc=0 >>
4720 DataNode
5365 Jps
4975 NodeManager

V. Appendix

1. hdfs dfs -ls                # list the current user's home directory

18/06/03 17:16:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ls: `.': No such file or directory

Fix:
1> Download the hadoop-native-64 package that matches your Hadoop version from http://dl.bintray.com/sequenceiq/sequenceiq-bin
2> Unpack it over the HADOOP_HOME/lib/native/ directory.

shell > tar xf hadoop-native-64-2.7.0.tar -C /usr/local/hadoop-2.9.1/lib/native/
hadoop shell > hdfs dfs -ls
ls: `.': No such file or directory            # just create the user's home directory: hdfs dfs -mkdir -p /user/hadoop
# If there is no package for your exact version, the closest one will do; this cluster runs Hadoop 2.9.1 and the 2.7.0 package works fine.
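Once the home directory exists, a quick put/ls/cat round trip confirms that HDFS is actually usable. This is only a sketch: /tmp/test.txt and its "hello" content are arbitrary examples, not anything the cluster requires.

hadoop shell > hdfs dfs -mkdir -p /user/hadoop                 # create the home directory once
hadoop shell > echo hello > /tmp/test.txt                      # make a small local test file
hadoop shell > hdfs dfs -put /tmp/test.txt /user/hadoop/       # upload it into the home directory
hadoop shell > hdfs dfs -ls                                    # the file should now be listed
hadoop shell > hdfs dfs -cat test.txt                          # should print "hello"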


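Finally, to check that YARN and MapReduce are wired together end to end, one option is to submit one of the example jobs that ships with the Hadoop tarball; the jar path below assumes the 2.9.1 layout used in this article, so adjust it for other versions.

hadoop shell > hadoop jar /usr/local/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 10 100
# Runs a small pi-estimation job on YARN; it should appear in the ResourceManager web UI (port 8088 by default) and finish by printing an estimated value of Pi.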