Lessons from a failed DataNode start when dynamically adding a node to Hadoop
Published: 2025-01-31 | Author: 千家信息网 editor
Dynamically add a DataNode to the cluster; the new host's name is node14.cn.
shell>hadoop-daemon.sh start datanode
shell>jps    # check whether the DataNode process has started
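Besides jps, the NameNode's own view of the cluster tells you whether the new node actually registered. A quick check, assuming the HDFS client on this host is configured for the cluster:
shell>hdfs dfsadmin -report | grep -A 3 node14.cn    # the report lists live datanodes; node14.cn should show up once it has registered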
The DataNode process disappears right after it starts. Checking the logs turns up the following entries:
2018-04-15 00:08:43,158 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43,168 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is node11.cn:9000
2018-04-15 00:08:44,138 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44,196 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node11.cn:9001
2018-04-15 00:08:44,266 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-15 00:08:44,273 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44,293 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44,426 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/
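Both exceptions point at the HTTP address node11.cn:9001: a process on node14.cn cannot bind a socket to an address that resolves to a different machine. A couple of quick checks make this visible (a sketch; the relative config path below is an assumption about this particular install):
shell>hostname -f                                    # should print node14.cn, not node11.cn
shell>ping -c 1 node11.cn                            # resolves to the NameNode host's IP, i.e. not an address of this machine
shell>grep -A 1 defaultFS etc/hadoop/core-site.xml   # assumed config location; shows the namenode address that came along with the copied installation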
Judging from the log, the process on node14.cn is trying to come up as a NameNode and to bind the HTTP address node11.cn:9001, which belongs to another host, so the bind fails with "Cannot assign requested address". This is what typically happens when the Hadoop directory, including the old dfs data, was copied wholesale from node11.cn and reused unchanged on the new host.
Solution:
Delete the contents of the dfs directory on node14.cn and then re-run the following commands:
shell>rm -rf dfs/
shell>hadoop-daemon.sh start datanode
shell>yarn-daemon.sh start nodemanager
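After the restart it is worth confirming that the DataNode actually stays up this time. A small sketch; the log file name below follows the usual hadoop-<user>-datanode-<host>.log pattern and is an assumption for this install:
shell>jps                                                    # DataNode and NodeManager should now remain in the list
shell>tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log   # assumed log path; expect block pool registration messages instead of a BindException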
Refresh the NameNode's node list (see the note after these commands on when the include file matters):
shell>hdfs dfsadmin -refreshNodes
shell>start-balancer.sh
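hdfs dfsadmin -refreshNodes only has an effect when the cluster restricts membership through include/exclude files (dfs.hosts / dfs.hosts.exclude); in that case the new host must be listed before the refresh. A minimal sketch, run on the NameNode host, where both file paths are assumptions:
shell>echo node14.cn >> /etc/hadoop/conf/dfs.include     # assumed file referenced by dfs.hosts
shell>echo node14.cn >> $HADOOP_HOME/etc/hadoop/slaves   # so the start-dfs.sh / start-yarn.sh scripts also manage the new node
shell>hdfs dfsadmin -refreshNodes                        # make the NameNode re-read the include/exclude lists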
The new DataNode has now been added successfully.
Rebalance existing data onto the new DataNode host:
shell>hadoop balancer -threshold 10    # threshold controls how far a node's disk usage may deviate from the cluster average, in percent; the smaller the value, the more evenly the nodes are balanced
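Balancing moves a lot of blocks, so its network usage can be capped per DataNode; the 50 MB/s figure below is only an illustration:
shell>hdfs dfsadmin -setBalancerBandwidth 52428800   # bytes per second, roughly 50 MB/s per datanode; applies to subsequent balancer runs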