hbase(main):014:0> get 't1','row1'
COLUMN                         CELL
 info:name                     timestamp=1554621994538, value=xiaoming
hbase(main):015:0> get 't1','row2','info:age'
COLUMN                         CELL
 info:age                      timestamp=1554623754957, value=18
hbase(main):017:0> get 't1','row2',{COLUMN => 'info:age'}
COLUMN                         CELL
 info:age                      timestamp=1554623754957, value=18
Range scan:
hbase(main):026:0* scan 't1'
ROW                            COLUMN+CELL
 row1                          column=info:name, timestamp=1554621994538, value=xiaoming
 row2                          column=info:age, timestamp=1554625223482, value=18
 row3                          column=info:sex, timestamp=1554625229782, value=male
hbase(main):027:0> scan 't1',{LIMIT => 2}
ROW                            COLUMN+CELL
 row1                          column=info:name, timestamp=1554621994538, value=xiaoming
 row2                          column=info:age, timestamp=1554625223482, value=18
hbase(main):034:0> scan 't1',{STARTROW => 'row2'}
ROW                            COLUMN+CELL
 row2                          column=info:age, timestamp=1554625223482, value=18
 row3                          column=info:sex, timestamp=1554625229782, value=male
hbase(main):038:0> scan 't1',{STARTROW => 'row2',ENDROW => 'row3'}
ROW                            COLUMN+CELL
 row2                          column=info:age, timestamp=1554625223482, value=18
Count the rows in a table:
hbase(main):042:0> count 't1'
3 row(s) in 0.0200 seconds
Delete a column from a given row:
hbase(main):013:0> delete 't1','row3','info:sex'
Delete an entire row:
hbase(main):047:0> deleteall 't1','row2'
Truncate a table:
hbase(main):049:0> truncate 't1'
Truncating 't1' table (it may take a while):
- Disabling table...
- Dropping table...
- Creating table...
0 row(s) in 4.8050 seconds
Drop a table (the table must be disabled first):
hbase(main):058:0> disable 't1'
hbase(main):059:0> drop 't1'
Create HBase test data
create 't1','info'
put 't1','row1','info:name','xiaoming'
put 't1','row2','info:age','18'
put 't1','row3','info:sex','male'

create 'emp','personal','professional'
put 'emp','1','personal:name','raju'
put 'emp','1','personal:city','hyderabad'
put 'emp','1','professional:designation','manager'
put 'emp','1','professional:salary','50000'
put 'emp','2','personal:name','ravi'
put 'emp','2','personal:city','chennai'
put 'emp','2','professional:designation','sr.engineer'
put 'emp','2','professional:salary','30000'
put 'emp','3','personal:name','rajesh'
put 'emp','3','personal:city','delhi'
put 'emp','3','professional:designation','jr.engineer'
put 'emp','3','professional:salary','25000'

hbase(main):040:0> scan 't1'
ROW                            COLUMN+CELL
 row1                          column=info:name, timestamp=1554634306493, value=xiaoming
 row2                          column=info:age, timestamp=1554634306540, value=18
 row3                          column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds

hbase(main):041:0> scan 'emp'
ROW                            COLUMN+CELL
 1                             column=personal:city, timestamp=1554634236024, value=hyderabad
 1                             column=personal:name, timestamp=1554634235959, value=raju
 1                             column=professional:designation, timestamp=1554634236063, value=manager
 1                             column=professional:salary, timestamp=1554634237419, value=50000
 2                             column=personal:city, timestamp=1554634241879, value=chennai
 2                             column=personal:name, timestamp=1554634241782, value=ravi
 2                             column=professional:designation, timestamp=1554634241920, value=sr.engineer
 2                             column=professional:salary, timestamp=1554634242923, value=30000
 3                             column=personal:city, timestamp=1554634246842, value=delhi
 3                             column=personal:name, timestamp=1554634246784, value=rajesh
 3                             column=professional:designation, timestamp=1554634246879, value=jr.engineer
 3                             column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds
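The same test data can also be loaded non-interactively by piping a command file into the HBase shell, the same echo-pipe pattern this article uses for status checks. A minimal sketch (the file path /tmp/load_testdata.txt is hypothetical; the emp rows can be appended the same way):

# Minimal sketch: load the t1 test data without an interactive session.
cat > /tmp/load_testdata.txt <<'EOF'
create 't1','info'
put 't1','row1','info:name','xiaoming'
put 't1','row2','info:age','18'
put 't1','row3','info:sex','male'
EOF
cat /tmp/load_testdata.txt | hbase shell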
3. Remove a Hadoop + HBase node
First remove the HBase regionserver on sht-sgmhadoopdn-04.
The graceful_stop.sh script automatically disables the balancer and moves the node's regions to the other regionservers; with a large data volume this step can take quite a long time.
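Roughly, the script is equivalent to the following steps. This is an illustrative sketch assuming the HBase 0.92 bin/ layout, not the script's exact source:

# Sketch of what graceful_stop.sh does internally (illustrative only).
HOST=sht-sgmhadoopdn-04
# 1. Disable the balancer so regions are not moved back while unloading.
echo "balance_switch false" | hbase shell
# 2. Move all regions off the target regionserver; region_mover.rb writes
#    the list of moved regions under /tmp/$HOST for a possible later reload.
hbase org.jruby.Main /usr/local/contentplatform/hbase-0.92.1/bin/region_mover.rb unload $HOST
# 3. Stop the now-empty regionserver on the target host.
ssh $HOST "/usr/local/contentplatform/hbase-0.92.1/bin/hbase-daemon.sh stop regionserver"
# 4. Re-enabling the balancer is NOT done by the script; it is run
#    manually after the stop, as shown below.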
[tnuser@sht-sgmhadoopcm-01 bin]$ graceful_stop.sh sht-sgmhadoopdn-04
Disabling balancer!
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.92.1, r1298924, Fri Mar 9 16:58:34 UTC 2012
balance_switch false
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
true
0 row(s) in 0.7580 seconds
Unloading sht-sgmhadoopdn-04 region(s)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/contentplatform/hadoop-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:host.name=sht-sgmhadoopcm-01
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_12
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_12/jre
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/contentplatform/hadoop-1.0.3/libexec/../lib/native/Linux-amd64-64:/usr/local/contentplatform/hbase-0.92.1/lib/native/Linux-amd64-64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.el7.x86_64
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.name=tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/tnuser
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Client environment:user.dir=/usr/local/contentplatform/hbase-0.92.1/bin
19/04/07 20:11:14 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopcm-01:2181,sht-sgmhadoopdn-02:2181 sessionTimeout=900000 watcher=hconnection
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Opening socket connection to server /172.16.101.58:2181
19/04/07 20:11:14 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 24569@sht-sgmhadoopcm-01.telenav.cn
19/04/07 20:11:14 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
19/04/07 20:11:14 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01/172.16.101.58:2181, initiating session
19/04/07 20:11:14 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01/172.16.101.58:2181, sessionid = 0x169f7b052050003, negotiated timeout = 900000
19/04/07 20:11:15 INFO region_mover: Moving 2 region(s) from sht-sgmhadoopdn-04,60020,1554638724252 during this cycle
19/04/07 20:11:15 INFO region_mover: Moving region 1028785192 (0 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:16 INFO region_mover: Moving region d3a10ae012afde8e1e401a2e400accc8 (1 of 2) to server=sht-sgmhadoopdn-01,60020,1554638723581
19/04/07 20:11:17 INFO region_mover: Wrote list of moved regions to /tmp/sht-sgmhadoopdn-04
Unloaded sht-sgmhadoopdn-04 region(s)
sht-sgmhadoopdn-04: ********************************************************************
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: * This system is for the use of authorized users only. Usage of    *
sht-sgmhadoopdn-04: * this system may be monitored and recorded by system personnel.   *
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: * Anyone using this system expressly consents to such monitoring   *
sht-sgmhadoopdn-04: * and they are advised that if such monitoring reveals possible    *
sht-sgmhadoopdn-04: * evidence of criminal activity, system personnel may provide the  *
sht-sgmhadoopdn-04: * evidence from such monitoring to law enforcement officials.      *
sht-sgmhadoopdn-04: *                                                                  *
sht-sgmhadoopdn-04: ********************************************************************
sht-sgmhadoopdn-04: stopping regionserver....

Re-enable the balancer:
[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "balance_switch true" | hbase shell

Check the HBase cluster status:
[tnuser@sht-sgmhadoopcm-01 hbase]$ echo "status" | hbase shell
3 servers, 1 dead, 1.3333 average load

[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23940 Jps
23375 DataNode
23487 TaskTracker
HBase Master web UI: http://172.16.101.54:60010
Then remove the DataNode and TaskTracker on sht-sgmhadoopdn-04.
[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/include
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
sht-sgmhadoopdn-04

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
<property>
  <name>dfs.hosts</name>
  <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
  <final>true</final>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
  <final>true</final>
</property>

[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml
<property>
  <name>mapred.hosts</name>
  <value>/usr/local/contentplatform/hadoop-1.0.3/conf/include</value>
  <final>true</final>
</property>
<property>
  <name>mapred.hosts.exclude</name>
  <value>/usr/local/contentplatform/hadoop-1.0.3/conf/excludes</value>
  <final>true</final>
</property>

Reload the configuration. The NameNode will re-replicate the blocks stored on sht-sgmhadoopdn-04 to the other nodes to restore the replication factor, but it will not delete the original data on sht-sgmhadoopdn-04. With a large data volume this process takes a long time.
-refreshNodes: Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned.
[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -refreshNodes
[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop mradmin -refreshNodes

If the DataNode and TaskTracker processes on sht-sgmhadoopdn-04 are still alive, stop them with the commands below (normally they were already stopped in the previous step):
[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop datanode
[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop-daemon.sh stop tasktracker

[tnuser@sht-sgmhadoopcm-01 hadoop-1.0.3]$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.

Configured Capacity: 246328578048 (229.41 GB)
Present Capacity: 93446351917 (87.03 GB)
DFS Remaining: 93445607424 (87.03 GB)
DFS Used: 744493 (727.04 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (4 total, 1 dead)

Name: 172.16.101.58:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 259087 (253.01 KB)
Non DFS Used: 57951808497 (53.97 GB)
DFS Remaining: 24157458432(22.5 GB)
DFS Used%: 0%
DFS Remaining%: 29.42%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.60:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 246799 (241.01 KB)
Non DFS Used: 45172382705 (42.07 GB)
DFS Remaining: 36936896512(34.4 GB)
DFS Used%: 0%
DFS Remaining%: 44.98%
Last contact: Sun Apr 07 20:45:43 CST 2019

Name: 172.16.101.59:50010
Decommission Status : Normal
Configured Capacity: 82109526016 (76.47 GB)
DFS Used: 238607 (233.01 KB)
Non DFS Used: 49758034929 (46.34 GB)
DFS Remaining: 32351252480(30.13 GB)
DFS Used%: 0%
DFS Remaining%: 39.4%
Last contact: Sun Apr 07 20:45:42 CST 2019

Name: 172.16.101.66:50010
Decommission Status : Decommissioned
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jan 01 08:00:00 CST 1970

At this point no Hadoop/HBase processes remain on sht-sgmhadoopdn-04:
[tnuser@sht-sgmhadoopdn-04 hbase]$ jps
23973 Jps

The data on sht-sgmhadoopdn-04 is still retained:
[tnuser@sht-sgmhadoopdn-04 hbase]$ hadoop dfs -ls /hbase
Warning: $HADOOP_HOME is deprecated.
Found 8 items
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 17:46 /hbase/-ROOT-
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:23 /hbase/.META.
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 20:11 /hbase/.logs
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 20:45 /hbase/.oldlogs
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:50 /hbase/emp
-rw-r--r--   3 tnuser supergroup         38 2019-04-06 22:41 /hbase/hbase.id
-rw-r--r--   3 tnuser supergroup          3 2019-04-06 22:41 /hbase/hbase.version
drwxr-xr-x   - tnuser supergroup          0 2019-04-07 18:51 /hbase/t1

Rebalance data blocks across the remaining DataNodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10
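Note that decommissioning is finished only once the node's entry in the report above flips to "Decommissioned". A small polling sketch, assuming the report format shown above:

# Sketch: poll the decommission status of a node until it completes.
NODE_IP=172.16.101.66
while true; do
  STATUS=$(hadoop dfsadmin -report | grep -A1 "Name: ${NODE_IP}" \
    | grep "Decommission Status" | sed 's/^.*: *//')
  echo "$(date) ${NODE_IP} -> ${STATUS}"
  [ "$STATUS" = "Decommissioned" ] && break
  sleep 60
done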
Finally, update the related configuration files.
Remove the sht-sgmhadoopdn-04 line from the regionservers file:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Remove the sht-sgmhadoopdn-04 line from the slaves file:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Delete the excludes file and comment out the dfs.hosts.exclude / mapred.hosts.exclude properties:
[tnuser@sht-sgmhadoopcm-01 conf]$ rm -rf /usr/local/contentplatform/hadoop-1.0.3/conf/excludes
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/hdfs-site.xml
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/mapred-site.xml
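The three per-file rsync invocations can be collapsed into one loop; a small sketch using the host names from this article:

# Sketch: push changed config files to all remaining slave nodes in one loop.
for h in sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers \
    ${h}:/usr/local/contentplatform/hbase-0.92.1/conf/
  rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves \
    ${h}:/usr/local/contentplatform/hadoop-1.0.3/conf/
done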
Restart Hadoop and HBase.
[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ stop-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-all.sh
[tnuser@sht-sgmhadoopcm-01 hbase]$ start-hbase.sh

Check the data:
hbase(main):040:0> scan 't1'
ROW                            COLUMN+CELL
 row1                          column=info:name, timestamp=1554634306493, value=xiaoming
 row2                          column=info:age, timestamp=1554634306540, value=18
 row3                          column=info:sex, timestamp=1554634307409, value=male
3 row(s) in 0.0290 seconds

hbase(main):041:0> scan 'emp'
ROW                            COLUMN+CELL
 1                             column=personal:city, timestamp=1554634236024, value=hyderabad
 1                             column=personal:name, timestamp=1554634235959, value=raju
 1                             column=professional:designation, timestamp=1554634236063, value=manager
 1                             column=professional:salary, timestamp=1554634237419, value=50000
 2                             column=personal:city, timestamp=1554634241879, value=chennai
 2                             column=personal:name, timestamp=1554634241782, value=ravi
 2                             column=professional:designation, timestamp=1554634241920, value=sr.engineer
 2                             column=professional:salary, timestamp=1554634242923, value=30000
 3                             column=personal:city, timestamp=1554634246842, value=delhi
 3                             column=personal:name, timestamp=1554634246784, value=rajesh
 3                             column=professional:designation, timestamp=1554634246879, value=jr.engineer
 3                             column=professional:salary, timestamp=1554634247692, value=25000
3 row(s) in 0.0330 seconds
4. Add a Hadoop + HBase node
First add the Hadoop node on sht-sgmhadoopdn-04.
Prerequisites:
Java environment, passwordless SSH trust between the nodes, and /etc/hosts entries.
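A quick sketch for verifying these prerequisites from the management node; illustrative only, assuming an RSA key pair already exists for tnuser on sht-sgmhadoopcm-01:

# Sketch: verify the prerequisites for the new node from sht-sgmhadoopcm-01.
NEW_NODE=sht-sgmhadoopdn-04
ssh-copy-id ${NEW_NODE}                          # establish passwordless SSH trust
ssh ${NEW_NODE} "java -version"                  # confirm the Java environment
ssh ${NEW_NODE} "grep sht-sgmhadoop /etc/hosts"  # confirm the hosts entries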
Add the sht-sgmhadoopdn-04 line to the slaves file and sync it to the other nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hadoop-1.0.3/conf/slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-01:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-02:/usr/local/contentplatform/hadoop-1.0.3/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hadoop-1.0.3/conf/slaves sht-sgmhadoopdn-03:/usr/local/contentplatform/hadoop-1.0.3/conf/

Add the sht-sgmhadoopdn-04 line to the regionservers file and sync it to the other nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/regionservers
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
sht-sgmhadoopdn-04
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
[tnuser@sht-sgmhadoopcm-01 conf]$ rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/regionservers sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

On sht-sgmhadoopdn-04, delete any data files left over from before:
rm -rf /usr/local/contentplatform/data/dfs/name/*
rm -rf /usr/local/contentplatform/data/dfs/data/*
rm -rf /usr/local/contentplatform/data/mapred/local/*
rm -rf /usr/local/contentplatform/data/zookeeper/*
rm -rf /usr/local/contentplatform/logs/hadoop/*
rm -rf /usr/local/contentplatform/logs/hbase/*

Start the DataNode and TaskTracker on sht-sgmhadoopdn-04:
[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start datanode
[tnuser@sht-sgmhadoopdn-04 conf]$ hadoop-daemon.sh start tasktracker

Check the live nodes:
[tnuser@sht-sgmhadoopcm-01 contentplatform]$ hadoop dfsadmin -report
http://172.16.101.54:50070

Rebalance the data from the NameNode:
[tnuser@sht-sgmhadoopcm-01 conf]$ start-balancer.sh -threshold 10
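To confirm the new node registered without opening the web UI, the report header and tracker list can be checked directly; a minimal sketch:

# Sketch: confirm the new DataNode registered with the NameNode, e.g.
# "Datanodes available: 4 (4 total, 0 dead)" in the report header.
hadoop dfsadmin -report | grep "Datanodes available"
# MapReduce side: the JobTracker should now list four trackers.
hadoop job -list-active-trackers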
Then add the HBase node on sht-sgmhadoopdn-04.
Start the regionserver on sht-sgmhadoopdn-04:
[tnuser@sht-sgmhadoopdn-04 conf]$ hbase-daemon.sh start regionserver

Check the HBase cluster status:
http://172.16.101.54:60010
Add a backup master
Create the backup-masters file and sync it to all nodes:
[tnuser@sht-sgmhadoopcm-01 conf]$ vim /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters
sht-sgmhadoopdn-01
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-01:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-02:/usr/local/contentplatform/hbase-0.92.1/conf/
rsync -avz --progress /usr/local/contentplatform/hbase-0.92.1/conf/backup-masters sht-sgmhadoopdn-03:/usr/local/contentplatform/hbase-0.92.1/conf/

Start HBase; if the HBase cluster is already running, restart it:
[tnuser@sht-sgmhadoopcm-01 conf]$ stop-hbase.sh
[tnuser@sht-sgmhadoopcm-01 conf]$ start-hbase.sh

[tnuser@sht-sgmhadoopdn-01 conf]$ vim /usr/local/contentplatform/logs/hbase/hbase-tnuser-master-sht-sgmhadoopdn-01.log
2019-04-12 13:58:50,893 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized
2019-04-12 13:58:50,899 DEBUG org.apache.hadoop.hbase.master.HMaster: HMaster started in backup mode. Stalling until master znode is written.
2019-04-12 13:58:50,924 ERROR org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: Node /hbase/master already exists and this is not a retry
2019-04-12 13:58:50,925 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Adding ZNode for /hbase/backup-masters/sht-sgmhadoopdn-01,60000,1555048730644 in backup master directory
2019-04-12 13:58:50,941 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Another master is the active master, sht-sgmhadoopcm-01,60000,1555048728172; waiting to become the next active master

[tnuser@sht-sgmhadoopcm-01 hbase-0.92.1]$ jps
2913 JobTracker
2823 SecondaryNameNode
3667 Jps
3410 HMaster
3332 HQuorumPeer
2639 NameNode

[tnuser@sht-sgmhadoopdn-01 conf]$ jps
7539 HQuorumPeer
7140 DataNode
7893 HMaster
8054 Jps
7719 HRegionServer
7337 TaskTracker

Failover test (kill the active master on sht-sgmhadoopcm-01):
[tnuser@sht-sgmhadoopcm-01 hbase]$ cat /tmp/hbase-tnuser-master.pid | xargs kill -9
http://172.16.101.58:60010
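After killing the active master, the takeover can be confirmed from the backup master's log and web UI. A sketch using the log path shown above (the grep pattern is an assumption about the log wording; adjust as needed):

# Sketch: confirm the backup master on sht-sgmhadoopdn-01 took over.
ssh sht-sgmhadoopdn-01 \
  "tail -20 /usr/local/contentplatform/logs/hbase/hbase-tnuser-master-sht-sgmhadoopdn-01.log | grep -i master"
# The new active master's web UI should now respond with HTTP 200:
curl -s -o /dev/null -w '%{http_code}\n' http://172.16.101.58:60010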