
Removing and Adding Hadoop + HBase Nodes


This document is organized in four parts: installing and deploying Hadoop + HBase, basic HBase commands, removing a Hadoop + HBase node, and adding a Hadoop + HBase node.

1. Install and configure Hadoop 1.0.3 + HBase 0.92.1

Environment overview:

Hostname                              Role
sht-sgmhadoopcm-01 (172.16.101.54)    NameNode, ZK, HMaster
sht-sgmhadoopdn-01 (172.16.101.58)    DataNode, ZK, HRegionServer
sht-sgmhadoopdn-02 (172.16.101.59)    DataNode, ZK, HRegionServer
sht-sgmhadoopdn-03 (172.16.101.60)    DataNode, HRegionServer
sht-sgmhadoopdn-04 (172.16.101.66)    DataNode, HRegionServer


Everything runs as the tnuser account; passwordless SSH trust is required between every pair of nodes.
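If the trust is not in place yet, one common way to set it up is the following sketch (assumes ssh-copy-id is available; repeat on every node for full mutual trust):

[tnuser@sht-sgmhadoopcm-01 ~]$ ssh-keygen -t rsa    # accept the defaults, empty passphrase
# push the public key to every node (including this one) so ssh works without a password
[tnuser@sht-sgmhadoopcm-01 ~]$ for h in sht-sgmhadoopcm-01 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do ssh-copy-id tnuser@$h; done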

Every node needs JDK 1.6.0_12 installed, with the environment variables configured accordingly.

[tnuser@sht-sgmhadoopcm-01 ~]$ cat .bash_profile

export JAVA_HOME=/usr/java/jdk1.6.0_12

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export HADOOP_HOME=/usr/local/contentplatform/hadoop-1.0.3

export HBASE_HOME=/usr/local/contentplatform/hbase-0.92.1

export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin

[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-01:~/

[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-02:~/

[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-03:~/

[tnuser@sht-sgmhadoopcm-01 ~]$ rsync -avz --progress ~/.bash_profile sht-sgmhadoopdn-04:~/
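These variables only take effect in a new login shell; to apply them to the current session on each node, re-read the profile:

[tnuser@sht-sgmhadoopcm-01 ~]$ source ~/.bash_profile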


Create the directories

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/data/dfs/{name,data}

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/temp

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ mkdir -p /usr/local/contentplatform/logs/{hadoop,hbase}
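The directory tree is copied to the other nodes later together with the whole installation; if you prefer to create it on each node up front, a loop such as this works (a sketch; assumes the remote login shell is bash so that brace expansion applies):

[tnuser@sht-sgmhadoopcm-01 ~]$ for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do
    ssh $h 'mkdir -p /usr/local/contentplatform/data/dfs/{name,data} /usr/local/contentplatform/temp /usr/local/contentplatform/logs/{hadoop,hbase}'
done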


Edit the Hadoop configuration files

[tnuser@sht-sgmhadoopcm-01 conf]$ vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_12
export HADOOP_HEAPSIZE=3072
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_LOG_DIR=/usr/local/contentplatform/logs/hadoop

[tnuser@sht-sgmhadoopcm-01 conf]$ cat core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/contentplatform/temp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://sht-sgmhadoopcm-01:9000</value>
  </property>
  <property>
    <name>hadoop.proxyuser.tnuser.hosts</name>
    <value>sht-sgmhadoopdn-01.telenav.cn</value>
  </property>
  <property>
    <name>hadoop.proxyuser.tnuser.groups</name>
    <value>appuser</value>
  </property>
</configuration>

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/contentplatform/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/contentplatform/data/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.datanode.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>mapred.min.split.size</name>
    <value>100663296</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3000000</value>
  </property>
  <property>
    <name>dfs.socket.timeout</name>
    <value>3000000</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>0.0.0.0:50070</value>
  </property>
</configuration>

[tnuser@sht-sgmhadoopcm-01 conf]$ cat mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>sht-sgmhadoopcm-01:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/usr/local/contentplatform/data/mapred/system/</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/usr/local/contentplatform/data/mapred/local/</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>io.sort.mb</name>
    <value>200</value>
    <final>true</final>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.task.timeout</name>
    <value>7200000</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>


Edit the HBase configuration files

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_12
export HBASE_HEAPSIZE=5120
export HBASE_LOG_DIR=/usr/local/contentplatform/logs/hbase
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"
export HBASE_OPTS="-server -Djava.net.preferIPv4Stack=true -XX:+UseParallelGC -XX:ParallelGCThreads=4 -XX:+AggressiveHeap -XX:+HeapDumpOnOutOfMemoryError"
export HBASE_MANAGES_ZK=true  # true means HBase manages its own bundled ZooKeeper; no separate ZK installation is needed

[tnuser@sht-sgmhadoopcm-01 conf]$ cat regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ cat hbase-site.xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>sht-sgmhadoopcm-01,sht-sgmhadoopdn-01,sht-sgmhadoopdn-02</value>
  </property>
  <property>
    <name>hbase.zookeeper.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>10.224.0.102</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/contentplatform/data/zookeeper</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://sht-sgmhadoopcm-01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
      false: standalone and pseudo-distributed setups with managed Zookeeper
      true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>536870912</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.upperLimit</name>
    <value>0.2</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.lowerLimit</name>
    <value>0.1</value>
  </property>
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.5</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.lease.period</name>
    <value>1800000</value>
  </property>
  <property>
    <name>hbase.rpc.timeout</name>
    <value>1800000</value>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>40</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>900000</value>
  </property>
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>hbase.hstore.compaction.max</name>
    <value>30</value>
  </property>
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>10</value>
  </property>
</configuration>

Copy the whole installation directory to the other nodes

[tnuser@sht-sgmhadoopcm-01 contentplatform]$ ll /usr/local/contentplatform/

total 103572

drwxr-xr-x 4 tnuser appuser 34 Apr 6 21:41 data

drwxr-xr-x 14 tnuser appuser 4096 May 9 2012 hadoop-1.0.3

-rw-r--r-- 1 tnuser appuser 62428860 Apr 5 14:59 hadoop-1.0.3.tar.gz

drwxr-xr-x 10 tnuser appuser 255 Apr 6 21:36 hbase-0.92.1

-rw-r--r-- 1 tnuser appuser 43621631 Apr 5 15:00 hbase-0.92.1.tar.gz

drwxr-xr-x 4 tnuser appuser 33 Apr 6 22:43 logs

drwxr-xr-x 3 tnuser appuser 17 Apr 6 20:44 temp


[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-01:/usr/local/

[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-02:/usr/local/

[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-03:/usr/local/

[root@sht-sgmhadoopcm-01 local]# rsync -avz --progress /usr/local/contentplatform sht-sgmhadoopdn-04:/usr/local/


Start Hadoop

[tnuser@sht-sgmhadoopcm-01 data]$ hadoop namenode -format

[tnuser@sht-sgmhadoopcm-01 bin]$ start-all.sh

[tnuser@sht-sgmhadoopcm-01 conf]$ jps

6008 NameNode

6392 Jps

6191 SecondaryNameNode

6279 JobTracker


Access the HDFS web UI:

http://172.16.101.54:50070

http://172.16.101.59:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
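The filesystem can also be checked from the command line with the Hadoop 1.x client used throughout this document:

[tnuser@sht-sgmhadoopcm-01 ~]$ hadoop dfs -ls /         # list the root of the new filesystem
[tnuser@sht-sgmhadoopcm-01 ~]$ hadoop dfsadmin -report  # all four DataNodes should report in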



Start HBase

[tnuser@sht-sgmhadoopcm-01 ~]$ start-hbase.sh

[tnuser@sht-sgmhadoopcm-01 ~]$ jps

3792 HQuorumPeer

4103 Jps

3876 HMaster

3142 NameNode

3323 SecondaryNameNode

3408 JobTracker

http://172.16.101.54:60010
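Besides the master web UI, a quick loop over the slave nodes confirms that every expected daemon is up (a sketch; jps is called by its full JDK path because a non-interactive ssh session does not source .bash_profile):

[tnuser@sht-sgmhadoopcm-01 ~]$ for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03 sht-sgmhadoopdn-04; do echo "== $h =="; ssh $h /usr/java/jdk1.6.0_12/bin/jps; done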



2. Basic HBase commands

Check the HBase status:
hbase(main):001:0> status
4 servers, 0 dead, 0.7500 average load

Create table t1:
hbase(main):008:0> create 't1','info'

List all tables:
hbase(main):009:0> list
TABLE
t1

View the HDFS files behind the tables:
[tnuser@sht-sgmhadoopcm-01 hbase-0.92.1]$ hadoop dfs -ls /hbase/
Found 7 items
drwxr-xr-x   - tnuser supergroup          0 2019-04-06 22:41 /hbase/-ROOT-
drwxr-xr-x   - tnuser supergroup          0 2019-04-06 22:41 /hbase/.META.
drwxr-xr-x   - tnuser supergroup          0 2019-04-06 23:14 /hbase/.logs
drwxr-xr-x   - tnuser supergroup          0 2019-04-06 22:41 /hbase/.oldlogs
-rw-r--r--   3 tnuser supergroup         38 2019-04-06 22:41 /hbase/hbase.id
-rw-r--r--   3 tnuser supergroup          3 2019-04-06 22:41 /hbase/hbase.version
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 16:53 /hbase/t1

Describe the table:
hbase(main):017:0> describe 't1'
DESCRIPTION                                                                               ENABLED
{NAME => 't1', FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE =>  true
'0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647',
BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}

Check whether a table exists:
hbase(main):018:0> exists 't1'
Table t1 does exist

Disable and enable a table:
hbase(main):019:0> is_disabled 't1'
false
(to disable: disable 't1')
hbase(main):020:0> is_enabled 't1'
true
(to enable: enable 't1')

Insert records: put '<table>','<rowkey>','<family:qualifier>','<value>'
hbase(main):010:0> put 't1','row1','info:name','xiaoming'
hbase(main):011:0> put 't1','row2','info:age','18'
hbase(main):012:0> put 't1','row3','info:sex','male'

Query records: get '<table>','<rowkey>'[,'<family:qualifier>' ...]
hbase(main):014:0> get 't1','row1'
COLUMN                  CELL
info:name               timestamp=1554621994538, value=xiaoming
hbase(main):015:0> get 't1','row2','info:age'
COLUMN                  CELL
info:age                timestamp=1554623754957, value=18
hbase(main):017:0> get 't1','row2',{COLUMN => 'info:age'}
COLUMN                  CELL
info:age                timestamp=1554623754957, value=18

Range scans:
hbase(main):026:0* scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554621994538, value=xiaoming
row2                    column=info:age, timestamp=1554625223482, value=18
row3                    column=info:sex, timestamp=1554625229782, value=male
hbase(main):027:0> scan 't1',{LIMIT => 2}
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554621994538, value=xiaoming
row2                    column=info:age, timestamp=1554625223482, value=18
hbase(main):034:0> scan 't1',{STARTROW => 'row2'}
ROW                     COLUMN+CELL
row2                    column=info:age, timestamp=1554625223482, value=18
row3                    column=info:sex, timestamp=1554625229782, value=male
hbase(main):038:0> scan 't1',{STARTROW => 'row2',ENDROW => 'row3'}
ROW                     COLUMN+CELL
row2                    column=info:age, timestamp=1554625223482, value=18

Count rows:
hbase(main):042:0> count 't1'
3 row(s) in 0.0200 seconds

Delete one column of a row:
hbase(main):013:0> delete 't1','row3','info:sex'

Delete an entire row:
hbase(main):047:0> deleteall 't1','row2'

Truncate a table:
hbase(main):049:0> truncate 't1'
Truncating 't1' table (it may take a while):
- Disabling table...
- Dropping table...
- Creating table...
0 row(s) in 4.8050 seconds

Drop a table (the table must be disabled first):
hbase(main):058:0> disable 't1'
hbase(main):059:0> drop 't1'

Create HBase test data

create 't1','info'
put 't1','row1','info:name','xiaoming'
put 't1','row2','info:age','18'
put 't1','row3','info:sex','male'

create 'emp','personal','professional'
put 'emp','1','personal:name','raju'
put 'emp','1','personal:city','hyderabad'
put 'emp','1','professional:designation','manager'
put 'emp','1','professional:salary','50000'
put 'emp','2','personal:name','ravi'
put 'emp','2','personal:city','chennai'
put 'emp','2','professional:designation','sr.engineer'
put 'emp','2','professional:salary','30000'
put 'emp','3','personal:name','rajesh'
put 'emp','3','personal:city','delhi'
put 'emp','3','professional:designation','jr.engineer'
put 'emp','3','professional:salary','25000'

hbase(main):040:0> scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554634306493, value=xiaoming
row2                    column=info:age, timestamp=1554634306540, value=18
row3                    column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds

hbase(main):041:0> scan 'emp'
ROW                     COLUMN+CELL
1                       column=personal:city, timestamp=1554634236024, value=hyderabad
1                       column=personal:name, timestamp=1554634235959, value=raju
1                       column=professional:designation, timestamp=1554634236063, value=manager
1                       column=professional:salary, timestamp=1554634237419, value=50000
2                       column=personal:city, timestamp=1554634241879, value=chennai
2                       column=personal:name, timestamp=1554634241782, value=ravi
2                       column=professional:designation, timestamp=1554634241920, value=sr.engineer
2                       column=professional:salary, timestamp=1554634242923, value=30000
3                       column=personal:city, timestamp=1554634246842, value=delhi
3                       column=personal:name, timestamp=1554634246784, value=rajesh
3                       column=professional:designation, timestamp=1554634246879, value=jr.engineer
3                       column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds


3. Remove a Hadoop + HBase node

First remove the HBase role from sht-sgmhadoopdn-04

The graceful_stop.sh script automatically disables the balancer and moves the node's regions to other RegionServers; with a large amount of data this step can take quite a long time.
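The balancer can also be toggled by hand from the HBase shell; this matches the balance_switch false call visible in the script output below:

[tnuser@sht-sgmhadoopcm-01 ~]$ echo "balance_switch false" | hbase shell   # prints the previous setting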

[tnuser@sht-sgmhadoopcm-01 bin]$ graceful_stop.sh sht-sgmhadoopdn-04
Disabling balancer!
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.1, r1298924, Fri Mar  9 16:58:34 UTC 2012

balance_switch false
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
true
0 row(s) in 0.7580 seconds

Unloading sht-sgmhadoopdn-04 region(s)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopcm-01
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_12
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_12/jre
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/usr/local/contentplatform/hbase-0.92.1/lib/native/Linux-amd64-64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1/bin
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopcm-01:2181,sht-sgmhadoopdn-02:2181 sessionTimeout=900000 watcher=hconnection
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.58:2181
19/04/07 20:11:14 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24569@sht-sgmhadoopcm-01.telenav.cn
19/04/07 20:11:14 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/07 20:11:14 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2181, initiating session
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2181, sessionid = 0x169f7b052050003, negotiated timeout = 900000
19/04/07 20:11:15 INFO region_mover: Moving 2 region(s) from sht-sgmhadoopdn-04,60020,1554638724252 during this cycle
19/04/07 20:11:15 INFO region_mover: Moving region 1028785192 (0 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:16 INFO region_mover: Moving region d3a10ae012afde8e1e401a2e400accc8 (1 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:17 INFO region_mover: Wrote list of moved regions to /tmp/sht-sgmhadoopdn-04
Unloaded sht-sgmhadoopdn-04 region(s)
sht-sgmhadoopdn-04: ********************************************************************
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: * This system is for the use of authorized users only.  Usage of  *
sht-sgmhadoopdn-04: * this system may be monitored and recorded by system personnel.  *
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: * Anyone using this system expressly consents to such monitoring  *
sht-sgmhadoopdn-04: * and they are advised that if such monitoring reveals possible   *
sht-sgmhadoopdn-04: * evidence of criminal activity, system personnel may provide the *
sht-sgmhadoopdn-04: * evidence from such monitoring to law enforcement officials.     *
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: ********************************************************************
sht-sgmhadoopdn-04: stopping regionserver....

Re-enable the balancer once the unload is done:
[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "balance_switch true" | hbase shell

Check the HBase node status:
[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "status" | hbase shell
3 servers, 1 dead, 1.3333 average load

[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23940 Jps
23375 DataNode
23487 TaskTracker


http://172.16.101.54:60010

Then remove the DataNode and TaskTracker roles from sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/include
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
  <property>
    <name>dfs.hosts</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
    <final>true</final>
  </property>

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml
  <property>
    <name>mapred.hosts</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.hosts.exclude</name>
    <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
    <final>true</final>
  </property>

Reload the configuration. The NameNode will copy the blocks that were on sht-sgmhadoopdn-04 to other nodes to restore the replication factor, but it will not delete the data already stored on sht-sgmhadoopdn-04; with a large amount of data this process takes a while.

-refreshNodes: Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned.

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -refreshNodes
[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop mradmin -refreshNodes

If the DataNode and TaskTracker processes are still alive on sht-sgmhadoopdn-04, stop them with the commands below (normally they were already stopped in the previous step):

[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop datanode
[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop tasktracker

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 246328578048 (229.41 GB)
Present Capacity: 93446351917 (87.03 GB)
DFS Remaining: 93445607424 (87.03 GB)
DFS Used: 744493 (727.04 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (4 total, 1 dead)

Name: 172.16.101.58:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 259087 (253.01 KB)
Non DFS Used: 57951808497 (53.97 GB)
DFS Remaining: 24157458432(22.5 GB)
DFS Used%: 0%
DFS Remaining%: 29.42%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.60:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 246799 (241.01 KB)
Non DFS Used: 45172382705 (42.07 GB)
DFS Remaining: 36936896512(34.4 GB)
DFS Used%: 0%
DFS Remaining%: 44.98%
Last contact: Sun Apr 07 20:45:43 CST 2019

Name: 172.16.101.59:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 238607 (233.01 KB)
Non DFS Used: 49758034929 (46.34 GB)
DFS Remaining: 32351252480(30.13 GB)
DFS Used%: 0%
DFS Remaining%: 39.4%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.66:50010
Decommission Status : Decommissioned
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jan 01 08:00:00 CST 1970

At this point there are no Hadoop processes left on sht-sgmhadoopdn-04:

[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23973 Jps

The data on sht-sgmhadoopdn-04 is still retained:

[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop dfs -ls /hbase
Warning: $HADOOP_HOME is deprecated.

Found 8 items
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 17:46 /hbase/-ROOT-
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:23 /hbase/.META.
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 20:11 /hbase/.logs
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 20:45 /hbase/.oldlogs
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:50 /hbase/emp
-rw-r--r--   3 tnuser supergroup         38 2019-04-06 22:41 /hbase/hbase.id
-rw-r--r--   3 tnuser supergroup          3 2019-04-06 22:41 /hbase/hbase.version
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:51 /hbase/t1

Rebalance the data across the remaining nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10
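Note: decommissioning runs in the background; one way to follow its progress is to poll the report until the node shows Decommissioned (a sketch; assumes the watch utility is installed):

[tnuser@sht-sgmhadoopcm-01 ~]$ watch -n 30 'hadoop dfsadmin -report | grep -A 1 "Name: 172.16.101.66"'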


Finally, update a few configuration files

Remove the sht-sgmhadoopdn-04 line from the regionservers file:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Remove the sht-sgmhadoopdn-04 line from the slaves file:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Comment out (or remove) the dfs.hosts.exclude / mapred.hosts.exclude properties and delete the excludes file:
[tnuser@sht-sgmhadoopcm-01 conf]$ rm -rf /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml
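Since the same files go to every remaining node, a small loop is less error-prone than repeating each rsync by hand (a sketch):

[tnuser@sht-sgmhadoopcm-01 conf]$ for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
    rsync -avz /usr/local/contentplatform/hbase-0.92.1/conf/regionservers $h:/usr/local/contentplatform/hbase-0.92.1/conf/
    rsync -avz /usr/local/contentplatform/hadoop-1.0.3/conf/slaves $h:/usr/local/contentplatform/hadoop-1.0.3/conf/
done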

Restart Hadoop and HBase

[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ stop-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-hbase.sh

Check the data:
hbase(main):040:0> scan 't1'
ROW                     COLUMN+CELL
row1                    column=info:name, timestamp=1554634306493, value=xiaoming
row2                    column=info:age, timestamp=1554634306540, value=18
row3                    column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds

hbase(main):041:0> scan 'emp'
ROW                     COLUMN+CELL
1                       column=personal:city, timestamp=1554634236024, value=hyderabad
1                       column=personal:name, timestamp=1554634235959, value=raju
1                       column=professional:designation, timestamp=1554634236063, value=manager
1                       column=professional:salary, timestamp=1554634237419, value=50000
2                       column=personal:city, timestamp=1554634241879, value=chennai
2                       column=personal:name, timestamp=1554634241782, value=ravi
2                       column=professional:designation, timestamp=1554634241920, value=sr.engineer
2                       column=professional:salary, timestamp=1554634242923, value=30000
3                       column=personal:city, timestamp=1554634246842, value=delhi
3                       column=personal:name, timestamp=1554634246784, value=rajesh
3                       column=professional:designation, timestamp=1554634246879, value=jr.engineer
3                       column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds


4. Add a Hadoop + HBase node

First add the Hadoop roles back on sht-sgmhadoopdn-04

Preparation:

Java environment, passwordless SSH trust, and /etc/hosts entries.
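A quick way to verify the Java and name-resolution prerequisites from the master (a sketch; the JDK binary is called by full path because a non-interactive ssh session does not source .bash_profile):

[tnuser@sht-sgmhadoopcm-01 ~]$ ssh sht-sgmhadoopdn-04 'hostname; /usr/java/jdk1.6.0_12/bin/java -version'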

Add the sht-sgmhadoopdn-04 line to the slaves file and sync it to the other nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Add the sht-sgmhadoopdn-04 line to the regionservers file and sync it to the other nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

On sht-sgmhadoopdn-04, delete the data files that already exist:
rm -rf /usr/local/contentplatform/data/dfs/name/*
rm -rf /usr/local/contentplatform/data/dfs/data/*
rm -rf /usr/local/contentplatform/data/mapred/local/*
rm -rf /usr/local/contentplatform/data/zookeeper/*
rm -rf /usr/local/contentplatform/logs/hadoop/*
rm -rf /usr/local/contentplatform/logs/hbase/*

Start the DataNode and TaskTracker on sht-sgmhadoopdn-04:
[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start datanode
[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start tasktracker

Check the live nodes:
[tnuser@sht-sgmhadoopcm-01 contentplatform]$ hadoop dfsadmin -report
http://172.16.101.54:50070

Run the balancer on the NameNode:
[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10


Then add the HBase role back on sht-sgmhadoopdn-04

Start the RegionServer on sht-sgmhadoopdn-04:
[tnuser@sht-sgmhadoopdn-04 conf]$ hbase-daemon.sh start regionserver

Check the HBase node status:
http://172.16.101.54:60010
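The shell gives the same confirmation as the web UI; once the RegionServer has joined, status should report four live servers again:

[tnuser@sht-sgmhadoopcm-01 ~]$ echo "status" | hbase shell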


Add a backup HMaster

Create the backup-masters file and sync it to all nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters
sht-sgmhadoopdn-01
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Start HBase; if the cluster is already running, restart it:
[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 conf]$ start-hbase.sh

[tnuser@sht-sgmhadoopdn-01 conf]$ vim /usr/local/contentplatform/logs/hbase/hbase-tnuser-master-sht-sgmhadoopdn-01.log
2019-04-12 13:58:50,893 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2019-04-12 13:58:50,899 DEBUG org.apache.hadoop.hbase.master.HMaster: HMaster started in backup mode.  Stalling until master znode is written.
2019-04-12 13:58:50,924 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and this is not a retry
2019-04-12 13:58:50,925 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Adding ZNode for /hbase/backup-masters/sht-sgmhadoopdn-01,60000,1555048730644 in backup master directory
2019-04-12 13:58:50,941 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Another master is the active master, sht-sgmhadoopcm-01,60000,1555048728172; waiting to become the next active master

[tnuser@sht-sgmhadoopcm-01 hbase-0.92.1]$ jps
2913 JobTracker
2823 SecondaryNameNode
3667 Jps
3410 HMaster
3332 HQuorumPeer
2639 NameNode

[tnuser@sht-sgmhadoopdn-01 conf]$ jps
7539 HQuorumPeer
7140 DataNode
7893 HMaster
8054 Jps
7719 HRegionServer
7337 TaskTracker

Failover test:
[tnuser@sht-sgmhadoopcm-01 hbase]$ cat /tmp/hbase-tnuser-master.pid | xargs kill -9
http://172.16.101.58:60010
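To watch the takeover happen, tail the backup master's log (the same file inspected above) while killing the active master; the ActiveMasterManager messages show it being promoted:

[tnuser@sht-sgmhadoopdn-01 ~]$ tail -f /usr/local/contentplatform/logs/hbase/hbase-tnuser-master-sht-sgmhadoopdn-01.log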

















