
1. Environment

OS: CentOS 7.0, 64-bit

namenode01 192.168.0.220

namenode02 192.168.0.221

datanode01 192.168.0.222

datanode02 192.168.0.223

datanode03 192.168.0.224

2. Configure the base environment

Add local /etc/hosts resolution entries on all of the machines:

[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220   namenode01
192.168.0.221   namenode02
192.168.0.222   datanode01
192.168.0.223   datanode02
192.168.0.224   datanode03
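Rather than editing /etc/hosts by hand on each node, the same five entries can be appended remotely once namenode01 is done. A minimal sketch; the file name hosts.append is a hypothetical helper, and each ssh will still prompt for the root password at this stage:

# Collect the five entries once, then append them on every other node
[root@namenode01 ~]# tail -5 /etc/hosts > hosts.append
[root@namenode01 ~]# for h in 192.168.0.221 192.168.0.222 192.168.0.223 192.168.0.224; do
    ssh root@$h "cat >> /etc/hosts" < hosts.append
done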

Create a hadoop user on all 5 machines and set its password to hadoop; only namenode01 is shown as the example:

[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
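The same two steps can also be pushed to the remaining nodes from namenode01 instead of logging in to each one. A minimal sketch, assuming root SSH access (still password-prompted at this stage) and using chpasswd to set the password non-interactively:

# Create the hadoop user with password "hadoop" on the other four machines
for h in namenode02 datanode01 datanode02 datanode03; do
    ssh root@$h 'useradd hadoop && echo "hadoop:hadoop" | chpasswd'
done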

Configure passwordless SSH logins for the hadoop user between all 5 machines.

# On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
The key's randomart image is:
+--[ RSA 2048]----+
|     .o.E++=.    |
|      ...o++o    |
|       .+ooo     |
|       o== o     |
|       oS.=      |
|        ..       |
|                 |
|                 |
|                 |
+-----------------+
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
# Verify the result
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03

Repeat exactly the same steps as the hadoop user on namenode02, datanode01, datanode02 and datanode03: run ssh-keygen, copy the public key to all five hosts with ssh-copy-id, and verify each login with ssh <host> hostname as above.
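If retyping the five ssh-copy-id commands on every host gets tedious, the key distribution can be scripted. A minimal sketch, run as the hadoop user on each node after its ssh-keygen; it assumes the sshpass package has been installed (an extra, not part of the original procedure) so the still-identical password "hadoop" can be supplied non-interactively:

# Copy this node's public key to all five hosts; sshpass is an assumed extra package
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    sshpass -p hadoop ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub hadoop@$h
done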

3. Install the JDK

# Download the JDK
[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/
# Create the environment variable file
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=/usr/local/jdk1.8.0_74/bin
JRE_HOME=/usr/local/jdk1.8.0_74/jre
PATH=$PATH:/usr/local/jdk1.8.0_74/bin:/usr/local/jdk1.8.0_74/jre/bin
CLASSPATH=/usr/local/jdk1.8.0_74/jre/lib:/usr/local/jdk1.8.0_74/lib:/usr/local/jdk1.8.0_74/jre/lib/charsets.jar
export JAVA_HOME PATH
# Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java
# Test the result
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Copy the unpacked JDK and the environment variable file to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/
# Test the result, using namenode02 as the example
[root@namenode02 ~]# source /etc/profile.d/java.sh
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
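To confirm the JDK landed correctly on every node without logging in to each one, the check can be looped from namenode01. A minimal sketch (java -version prints to stderr, hence the 2>&1):

for h in namenode02 datanode01 datanode02 datanode03; do
    echo "== $h =="
    ssh $h 'source /etc/profile.d/java.sh; java -version 2>&1 | head -1'
done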

4. Install Hadoop

# Download Hadoop
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
'/usr/local/hadoop' -> '/usr/local/hadoop-2.5.2/'
# Add the Hadoop environment variable file
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
# Switch to the hadoop user and check that the JDK works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
# Edit the Hadoop configuration files
# hadoop-env.sh: point JAVA_HOME at the JDK
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74
# core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/temp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
# hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hdfs/dfs/name</value>    <!-- namenode directory -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hdfs/data</value>        <!-- datanode directory -->
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>              <!-- must match fs.defaultFS in core-site.xml -->
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>namenode01,namenode02</value>  <!-- the two namenodes -->
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode01</name>
        <value>namenode01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode02</name>
        <value>namenode02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode01</name>
        <value>namenode01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode02</name>
        <value>namenode02:50070</value>
    </property>
    <!-- the namenodes write their edits to the journalnodes; list every journalnode -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hdfs/journal</value>     <!-- journalnode directory -->
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>               <!-- how fencing is performed -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>    <!-- key used for inter-host authentication -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>6000</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>false</value>                  <!-- manual failover for now; ZooKeeper takes over later -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>                      <!-- number of replicas; the default is 3 -->
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
# yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode01:8088</value>        <!-- web UI; must not reuse the admin port 8033 -->
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>15360</value>
    </property>
</configuration>
# mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>namenode01:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode01:19888</value>
    </property>
</configuration>
# slaves
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves 
datanode01
datanode02
datanode03
# Back as root on namenode01, create the data directory
[root@namenode01 ~]# mkdir /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/
# Copy the hadoop environment variable file to the other 4 machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/
# Copy the Hadoop installation to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
# Fix ownership and create the directories on each machine; namenode02 as the example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
'/usr/local/hadoop' -> '/usr/local/hadoop-2.5.2/'
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx  1 root   root     24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x  9 hadoop hadoop  139 Apr 28 17:16 hadoop-2.5.2
[root@namenode02 ~]# mkdir /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/
# Check the environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop
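The configuration files usually get tweaked a few more times before the cluster settles down, and then only etc/hadoop needs to be re-synced, not the whole installation. A minimal sketch, run as the hadoop user on namenode01:

# Push the edited config files to the other four nodes
for h in namenode02 datanode01 datanode02 datanode03; do
    scp /usr/local/hadoop/etc/hadoop/{core-site,hdfs-site,yarn-site,mapred-site}.xml \
        /usr/local/hadoop/etc/hadoop/{hadoop-env.sh,slaves} \
        $h:/usr/local/hadoop/etc/hadoop/
done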

5. Start Hadoop

# Run hadoop-daemon.sh start journalnode on all 5 servers as the hadoop user;
# only the namenode01 output is shown
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
# On namenode01, format HDFS
[hadoop@namenode01 ~]$ hadoop namenode -format

Note: hadoop namenode -format is only needed on the very first start; for a non-first start run hdfs namenode -initializeSharedEdits instead. "First start" means HA was configured at install time and HDFS holds no data yet, so namenode01 has to be formatted. "Non-first start" means a non-HA HDFS is already running with data on it and a second namenode is being added for HA; in that case namenode01 uses the initializeSharedEdits command to initialize the journalnodes and share its existing edits files with them.

# Start the namenode on namenode01
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
# On namenode02, copy the formatted metadata over, then start its namenode
[hadoop@namenode02 ~]$ hdfs namenode -bootstrapStandby
[hadoop@namenode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
# Start the datanodes
[hadoop@datanode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode03 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
# Verify the results
# namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode        # namenode role
2270 JournalNode
2702 Jps
# namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2680 Jps
# datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2466 Jps
2358 DataNode        # datanode role
2267 JournalNode
# datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2691 Jps
2612 DataNode        # datanode role
2265 JournalNode
# datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode        # datanode role
12067 Jps
11895 JournalNode
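Since dfs.ha.automatic-failover.enabled is still false, both namenodes come up in standby and the active role has to be assigned by hand. A minimal sketch using hdfs haadmin (the IDs namenode01 and namenode02 come from dfs.ha.namenodes.mycluster); output is omitted:

# Show the current role of each namenode
hdfs haadmin -getServiceState namenode01
hdfs haadmin -getServiceState namenode02
# Promote namenode01 to active
hdfs haadmin -transitionToActive namenode01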

6. Set up the ZooKeeper high-availability environment

# Download the software; install it as root
[root@namenode01 ~]# wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
# Unpack it under /usr/local and fix the ownership
[root@namenode01 ~]# tar xf zookeeper-3.4.6.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
# Edit the zookeeper configuration file
[root@namenode01 ~]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.6/conf/zoo.cfg
[root@namenode01 ~]# egrep -v "^#|^$" /usr/local/zookeeper-3.4.6/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/hdfs/zookeeper/data
dataLogDir=/data/hdfs/zookeeper/logs
clientPort=2181
server.1=namenode01:2888:3888
server.2=namenode02:2888:3888
server.3=datanode01:2888:3888
server.4=datanode02:2888:3888
server.5=datanode03:2888:3888
# Configure the zookeeper environment variables
[root@namenode01 ~]# cat /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
# Create the directories and the myid file on namenode01
[root@namenode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode01 ~]# tree /data/hdfs/zookeeper
/data/hdfs/zookeeper
├── data
└── logs
[root@namenode01 ~]# echo "1" >/data/hdfs/zookeeper/data/myid
[root@namenode01 ~]# cat /data/hdfs/zookeeper/data/myid
1
[root@namenode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode01 ~]# ll /data/hdfs/
total 0
drwxrwxr-x 3 hadoop hadoop 17 Apr 29 10:05 dfs
drwxrwxr-x 3 hadoop hadoop 22 Apr 29 10:05 journal
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:42 zookeeper
# Copy the zookeeper installation and environment variable file to the other machines;
# copying to namenode02 shown as the example
[root@namenode01 ~]# scp -r /usr/local/zookeeper-3.4.6 namenode02:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/zookeeper.sh namenode02:/etc/profile.d/
# On each of the remaining machines, fix the ownership, create the same directories,
# and write the matching myid: 2 on namenode02, 3 on datanode01, 4 on datanode02,
# 5 on datanode03. namenode02 as the example:
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@namenode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode02 ~]# echo "2" >/data/hdfs/zookeeper/data/myid
[root@namenode02 ~]# cat /data/hdfs/zookeeper/data/myid
2
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
# Start zookeeper as the hadoop user on all 5 machines; only namenode01 is shown,
# the other four are identical
[hadoop@namenode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
# Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode
3348 QuorumPeerMain    # zookeeper process
3483 Jps
2270 JournalNode
[hadoop@namenode01 ~]$ zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2888 QuorumPeerMain
2936 Jps
[hadoop@namenode01 ~]$ ssh namenode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2881 QuorumPeerMain
2358 DataNode
2267 JournalNode
2955 Jps
[hadoop@namenode01 ~]$ ssh datanode01 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2849 QuorumPeerMain
2612 DataNode
2885 Jps
2265 JournalNode
[hadoop@namenode01 ~]$ ssh datanode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
# Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode
12276 Jps
12213 QuorumPeerMain
11895 JournalNode
[hadoop@namenode01 ~]$ ssh datanode03 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
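The five per-host status checks above can be collapsed into a single loop. A minimal sketch, run from namenode01 as the hadoop user, relying on the passwordless SSH set up in section 2:

# Ask every node for its zookeeper role (one should report leader, the rest follower)
for h in namenode01 namenode02 datanode01 datanode02 datanode03; do
    echo "== $h =="
    ssh $h '/usr/local/zookeeper-3.4.6/bin/zkServer.sh status'
done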

