千家信息网

Hadoop Cluster (Part 2): HDFS Setup

Published: 2025-01-23  Author: 千家信息网 editors

HDFS is the most basic Hadoop service; many other services are built on top of it. Deploying an HDFS cluster is therefore a core step, and the starting point of a big-data platform.

Installing a Hadoop cluster requires a running Zookeeper ensemble; if you do not have one, deploy Zookeeper first. A JDK and some host-level settings are also needed. See:

Hadoop Cluster (Part 1): Zookeeper Setup

Hadoop Cluster (Part 3): HBase Setup

Hadoop Cluster (Part 4): Hadoop Upgrade

Now let's install HDFS.

HDFS host allocation

192.168.67.101 c6701 --Namenode+datanode
192.168.67.102 c6702 --datanode
192.168.67.103 c6703 --datanode
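All three hostnames must be resolvable from every node. A minimal sketch, assuming no DNS is available, is to append the entries below to /etc/hosts on each host (as root):

```shell
# Host entries for the three nodes (addresses taken from the allocation table
# above). Appending them to /etc/hosts on every node is one way to make the
# names resolve without DNS.
cluster_hosts='192.168.67.101 c6701
192.168.67.102 c6702
192.168.67.103 c6703'
echo "$cluster_hosts"
# On each node, as root:  echo "$cluster_hosts" >> /etc/hosts
```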

1. Install HDFS: unpack hadoop-2.6.0-EDH-0u2.tar.gz

I downloaded both the 2.6 and 2.7 releases; 2.6 is installed first, and the 2.6-to-2.7 upgrade steps are performed later.

useradd hdfs
echo "hdfs:hdfs" | chpasswd
su - hdfs
cd /tmp/software
tar -zxvf hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs/
mkdir -p /data/hadoop/temp
mkdir -p /data/hadoop/journal
mkdir -p /data/hadoop/hdfs/name
mkdir -p /data/hadoop/hdfs/data
chown -R hdfs:hdfs /data/hadoop
chown -R hdfs:hdfs /data/hadoop/temp
chown -R hdfs:hdfs /data/hadoop/journal
chown -R hdfs:hdfs /data/hadoop/hdfs/name
chown -R hdfs:hdfs /data/hadoop/hdfs/data
$ pwd
/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop

2. Edit the parameters in core-site.xml

$ cat core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop/temp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>c6701:2181,c6702:2181,c6703:2181</value>
    </property>
</configuration>
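After editing, it is worth double-checking a value programmatically rather than by eye. On a node with Hadoop installed, `hdfs getconf -confKey fs.defaultFS` is the robust way; the helper below is only a rough sed sketch (`get_prop` is a made-up name, and it assumes `<name>` and `<value>` each sit on their own line, as in the file above):

```shell
# Pull one property value out of a Hadoop-style *-site.xml.
# Works only when <name> and <value> are on consecutive lines.
get_prop() {
  sed -n "/<name>$2<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}" "$1"
}
# Usage sketch:  get_prop etc/hadoop/core-site.xml fs.defaultFS
```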

3. Edit the parameters in hdfs-site.xml

cat hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>c6701:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>c6701:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>c6702:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>c6702:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://c6701:8485;c6702:8485;c6703:8485/ns</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hadoop/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hdfs/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

4. Add the slaves file

$ more slaves
c6701
c6702
c6703

--- Installing HDFS on c6702 ---

5. Create the hdfs user on c6702 and set up passwordless ssh for it

ssh c6702 "useradd hdfs"
ssh c6702 'echo "hdfs:hdfs" | chpasswd'
ssh-copy-id hdfs@c6702

6. Copy the software

scp -r /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6702:/tmp/software/.
ssh c6702 "chmod 777 /tmp/software/*"

7. Create directories and unpack the software

ssh hdfs@c6702 "mkdir hdfs"
ssh hdfs@c6702 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
ssh hdfs@c6702 "ls -al hdfs"
ssh hdfs@c6702 "ls -al hdfs/hadoop*"

Copy the configuration files

ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"
ssh hdfs@c6702 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6702:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves

Create the directories HDFS needs

ssh root@c6702 "mkdir -p /data/hadoop"
ssh root@c6702 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6702 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6702 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6702 "mkdir -p /data/hadoop/hdfs/data"

--- Installing HDFS on c6703 ---

8. Create the hdfs user on c6703 and set up passwordless ssh for it

ssh c6703 "useradd hdfs"
ssh c6703 'echo "hdfs:hdfs" | chpasswd'
ssh-copy-id hdfs@c6703

9. Copy the software

scp -r /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@c6703:/tmp/software/.
ssh c6703 "chmod 777 /tmp/software/*"

10. Create directories and unpack the software

ssh hdfs@c6703 "mkdir hdfs"
ssh hdfs@c6703 "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
ssh hdfs@c6703 "ls -al hdfs"
ssh hdfs@c6703 "ls -al hdfs/hadoop*"

Copy the configuration files

ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml"
ssh hdfs@c6703 "rm -rf /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml"
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/core-site.xml
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/hdfs-site.xml
scp -r /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves hdfs@c6703:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/slaves

Create the directories HDFS needs

ssh root@c6703 "mkdir -p /data/hadoop"
ssh root@c6703 "chown -R hdfs:hdfs /data/hadoop"
ssh hdfs@c6703 "mkdir -p /data/hadoop/temp"
ssh hdfs@c6703 "mkdir -p /data/hadoop/journal"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/name"
ssh hdfs@c6703 "mkdir -p /data/hadoop/hdfs/data"
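The per-node steps for c6702 and c6703 are identical, so they can be collapsed into one loop. This is only a sketch: it assumes the hdfs user and passwordless ssh (for root and hdfs) are already in place, and the `run`/`DRY_RUN` helper is an invention of this note. DRY_RUN defaults to 1, so the commands are printed rather than executed; set DRY_RUN=0 to really run them:

```shell
# Print commands instead of running them while DRY_RUN=1 (the default here,
# so the loop can be previewed safely); DRY_RUN=0 executes for real.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# All per-node steps from sections 6-10 above, for one node.
setup_node() {
  node=$1
  run scp /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz root@$node:/tmp/software/.
  run ssh hdfs@$node "tar -zxvf /tmp/software/hadoop-2.6.0-EDH-0u2.tar.gz -C /home/hdfs"
  for f in core-site.xml hdfs-site.xml slaves; do
    run scp /home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/$f hdfs@$node:/home/hdfs/hadoop-2.6.0-EDH-0u2/etc/hadoop/$f
  done
  run ssh root@$node "mkdir -p /data/hadoop && chown -R hdfs:hdfs /data/hadoop"
  run ssh hdfs@$node "mkdir -p /data/hadoop/temp /data/hadoop/journal /data/hadoop/hdfs/name /data/hadoop/hdfs/data"
}

for node in c6702 c6703; do
  setup_node "$node"
done
```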

11. Start HDFS: first start the journalnode on all three nodes

/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start journalnode

Check the status

$ jps
3958 Jps
3868 JournalNode
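The `jps` checks repeated throughout this walkthrough can be wrapped in a small helper that fails loudly when an expected daemon is missing. A sketch only: `check_daemons` is a made-up name, not a Hadoop tool.

```shell
# Verify that every expected daemon name appears in a jps listing.
check_daemons() {
  listing=$1; shift
  for d in "$@"; do
    if ! echo "$listing" | grep -qw "$d"; then
      echo "MISSING: $d"
      return 1
    fi
  done
  echo "OK"
}

# On a real node you would call:  check_daemons "$(jps)" JournalNode
check_daemons "3958 Jps
3868 JournalNode" JournalNode
```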

12. Then start the namenode. Before the very first start, format the namenode metadata on one node (the primary); the metadata is written to the path given by dfs.namenode.name.dir.

<property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/hdfs/name</value>
</property>
$ ./hdfs namenode -format
17/09/26 07:52:17 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = c6701.python279.org/192.168.67.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0-EDH-0u2
STARTUP_MSG:   classpath = /home/hdfs/hadoop-2.6.0-EDHxxxxxxxxxx
STARTUP_MSG:   build = http://gitlab-xxxxx
STARTUP_MSG:   java = 1.8.0_144
************************************************************/
17/09/26 07:52:17 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/26 07:52:17 INFO namenode.NameNode: createNameNode [-format]
17/09/26 07:52:18 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 07:52:18 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
17/09/26 07:52:18 INFO namenode.FSNamesystem: No KeyProvider found.
17/09/26 07:52:18 INFO namenode.FSNamesystem: fsLock is fair:true
17/09/26 07:52:18 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/09/26 07:52:18 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/09/26 07:52:18 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/09/26 07:52:18 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Sep 26 07:52:18
17/09/26 07:52:18 INFO util.GSet: Computing capacity for map BlocksMap
17/09/26 07:52:18 INFO util.GSet: VM type       = 64-bit
17/09/26 07:52:18 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/09/26 07:52:18 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/09/26 07:52:18 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/09/26 07:52:18 INFO blockmanagement.BlockManager: defaultReplication         = 2
17/09/26 07:52:18 INFO blockmanagement.BlockManager: maxReplication             = 512
17/09/26 07:52:18 INFO blockmanagement.BlockManager: minReplication             = 1
17/09/26 07:52:18 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/09/26 07:52:18 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/09/26 07:52:18 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/09/26 07:52:18 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/09/26 07:52:18 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/09/26 07:52:18 INFO namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
17/09/26 07:52:18 INFO namenode.FSNamesystem: supergroup          = supergroup
17/09/26 07:52:18 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/09/26 07:52:18 INFO namenode.FSNamesystem: Determined nameservice ID: ns
17/09/26 07:52:18 INFO namenode.FSNamesystem: HA Enabled: true
17/09/26 07:52:18 INFO namenode.FSNamesystem: Append Enabled: true
17/09/26 07:52:18 INFO util.GSet: Computing capacity for map INodeMap
17/09/26 07:52:18 INFO util.GSet: VM type       = 64-bit
17/09/26 07:52:18 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/09/26 07:52:18 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/09/26 07:52:18 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/09/26 07:52:18 INFO util.GSet: Computing capacity for map cachedBlocks
17/09/26 07:52:18 INFO util.GSet: VM type       = 64-bit
17/09/26 07:52:18 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/09/26 07:52:18 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/09/26 07:52:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/09/26 07:52:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/09/26 07:52:18 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/09/26 07:52:18 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/09/26 07:52:18 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/09/26 07:52:18 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/09/26 07:52:18 INFO util.GSet: VM type       = 64-bit
17/09/26 07:52:18 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/09/26 07:52:18 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/09/26 07:52:18 INFO namenode.NNConf: ACLs enabled? false
17/09/26 07:52:18 INFO namenode.NNConf: XAttrs enabled? true
17/09/26 07:52:18 INFO namenode.NNConf: Maximum size of an xattr: 16384
17/09/26 07:52:19 INFO namenode.FSImage: Allocated new BlockPoolId: BP-144216011-192.168.67.101-1506412339757
17/09/26 07:52:19 INFO common.Storage: Storage directory /data/hadoop/hdfs/name has been successfully formatted.
17/09/26 07:52:20 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/26 07:52:20 INFO util.ExitUtil: Exiting with status 0
17/09/26 07:52:20 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at c6701.python279.org/192.168.67.101
************************************************************/

13. The standby namenode must first run bootstrapStandby; the output is shown below

[hdfs@c6702 sbin]$ ../bin/hdfs namenode -bootstrapstandby
17/09/26 09:44:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = c6702.python279.org/192.168.67.102
STARTUP_MSG:   args = [-bootstrapstandby]
STARTUP_MSG:   version = 2.6.0-EDH-0u2
STARTUP_MSG:   classpath = /home/hdfs/haxxx
STARTUP_MSG:   build = http://gitlab-xxxx
STARTUP_MSG:   java = 1.8.0_144
************************************************************/
17/09/26 09:44:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/09/26 09:44:58 INFO namenode.NameNode: createNameNode [-bootstrapstandby]
17/09/26 09:44:59 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:44:59 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: ns
        Other Namenode ID: nn1
  Other NN's HTTP address: http://c6701:50070
  Other NN's IPC  address: c6701/192.168.67.101:9000
             Namespace ID: 793662207
            Block pool ID: BP-144216011-192.168.67.101-1506412339757
               Cluster ID: CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
           Layout version: -60
=====================================================
Re-format filesystem in Storage Directory /data/hadoop/hdfs/name ? (Y or N) y
17/09/26 09:45:16 INFO common.Storage: Storage directory /data/hadoop/hdfs/name has been successfully formatted.
17/09/26 09:45:16 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:45:16 WARN common.Util: Path /data/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/09/26 09:45:17 INFO namenode.TransferFsImage: Opening connection to http://c6701:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:793662207:0:CID-b2f01411-862f-44b2-a6dc-7d17bd48d522
17/09/26 09:45:17 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/09/26 09:45:17 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
17/09/26 09:45:17 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 351 bytes.
17/09/26 09:45:17 INFO util.ExitUtil: Exiting with status 0
17/09/26 09:45:17 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at c6702.python279.org/192.168.67.102
************************************************************/

14. Check the status; the namenode is not running yet

[hdfs@c6702 sbin]$ jps
4539 Jps
3868 JournalNode

15. Start the standby namenode, using the same command as on the master

[hdfs@c6702 sbin]$ ./hadoop-daemon.sh start namenode
starting namenode, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/hadoop-hdfs-namenode-c6702.python279.org.out

16. Check again; the namenode is now running

[hdfs@c6702 sbin]$ jps
4640 Jps
4570 NameNode
3868 JournalNode

17. Format zkfc so the HA node is created in zookeeper; run the following on the master to complete the format

[hdfs@c6701 bin]$ ./hdfs zkfc -formatZK
17/09/26 09:59:20 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at c6701/192.168.67.101:9000
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:host.name=c6701.python279.org
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_144
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/local/jdk1.8.0_144/jre
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hdfs/hadoop-2.6.0-EDH-0u2/exxxx
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hdfs/hadoop-2.6.0-EDH-0u2/lib/native
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:java.compiler=
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.el6.x86_64
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:user.name=hdfs
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hdfs
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hdfs/hadoop-2.6.0-EDH-0u2/bin
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=c6701:2181,c6702:2181,c6703:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@20deea7f
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Opening socket connection to server c6703.python279.org/192.168.67.103:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Socket connection established to c6703.python279.org/192.168.67.103:2181, initiating session
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: Session establishment complete on server c6703.python279.org/192.168.67.103:2181, sessionid = 0x35ebc5163710000, negotiated timeout = 5000
17/09/26 09:59:20 INFO ha.ActiveStandbyElector: Session connected.
17/09/26 09:59:20 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns in ZK.
17/09/26 09:59:20 INFO zookeeper.ZooKeeper: Session: 0x35ebc5163710000 closed
17/09/26 09:59:20 INFO zookeeper.ClientCnxn: EventThread shut down

18. Check that the format succeeded

After a successful format, the node can be seen in zookeeper (author's note: this command was not confirmed):

[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha

19. Start zkfc, which serves the namenode

./hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hdfs/hadoop-2.6.0-EDH-0u2/logs/hadoop-hdfs-zkfc-c6701.python279.org.out
$ jps
4272 DataNode
4402 JournalNode
6339 Jps
6277 DFSZKFailoverController
4952 NameNode

20. Start zkfc on the other node

ssh hdfs@c6702 /home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start zkfc
$ jps
4981 Jps
4935 DFSZKFailoverController
4570 NameNode
3868 JournalNode

21. Note: the ZK ensemble must already be running before the initialization steps.

1) Create the znode in ZK that stores the automatic-failover data; running this on any one NN is enough:

sh bin/hdfs zkfc -formatZK

2) Start zkfc on all NN nodes with the following command:

sh sbin/hadoop-daemon.sh start zkfc

22. Start the datanodes

Finally, bring up the cluster

/home/hdfs/hadoop-2.6.0-EDH-0u2/sbin/hadoop-daemon.sh start zkfc
sh sbin/start-dfs.sh


The key part of the HDFS installation is the sequence of initialization operations at first start-up:

1. Start the journalnode on all nodes

2. On namenode1, run: hdfs namenode -format

3. On namenode1, start the namenode: hadoop-daemon.sh start namenode

4. On namenode2, run: hdfs namenode -bootstrapstandby

5. On namenode1, format zkfc to create the HA node in zookeeper: hdfs zkfc -formatZK

6. Start zkfc: hadoop-daemon.sh start zkfc. Every node that runs a namenode must also run ZKFC.

7. Start the datanodes
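Once all seven steps are done, it is worth confirming that exactly one namenode is active and the other standby. The real states come from `hdfs haadmin -getServiceState nn1` and `hdfs haadmin -getServiceState nn2`; the `check_ha_states` helper below is a hypothetical sketch around those two outputs:

```shell
# Given the reported states of the two namenodes, confirm that exactly one
# is "active" and the other "standby".
check_ha_states() {
  case "$1 $2" in
    "active standby"|"standby active") echo "HA OK" ;;
    *) echo "HA BROKEN: $1 / $2"; return 1 ;;
  esac
}

# On the cluster:
# check_ha_states "$(bin/hdfs haadmin -getServiceState nn1)" "$(bin/hdfs haadmin -getServiceState nn2)"
check_ha_states active standby
```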


HDFS is only the most basic Hadoop module. With its installation complete, it can now serve the HBase deployment covered in the next part.
