Lessons from a failed dynamic DataNode addition in Hadoop
Published: 2024-11-14 · Author: 千家信息网 editor
Goal: dynamically add a DataNode, hostname node14.cn.

shell>hadoop-daemon.sh start datanode
shell>jps    # check whether the DataNode process has started

The DataNode process disappeared immediately after starting. The daemon log contained the following records:
2018-04-15 00:08:43,158 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43,168 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is node11.cn:9000
2018-04-15 00:08:44,138 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44,196 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node11.cn:9001
2018-04-15 00:08:44,266 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-15 00:08:44,273 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44,293 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
    ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
    ... 8 more
2018-04-15 00:08:44,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44,426 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/
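The "Caused by: java.net.BindException: Cannot assign requested address" line is the key detail: it typically means the daemon tried to bind a socket to an IP address that is not configured on any local interface, which fits a host binding node11.cn:9001 while running on node14.cn. A quick sanity check for whether a configured hostname resolves to a local address can be sketched as follows (the hostnames are this cluster's; the `ip` tool from iproute2 is assumed, substitute `ifconfig -a` if needed):

```shell
# Report whether a hostname resolves, and if so whether its address
# belongs to one of this machine's interfaces.
binds_here() {
  local host=$1 addr
  addr=$(getent hosts "$host" | awk '{print $1; exit}')
  if [ -z "$addr" ]; then
    echo "unresolved"
    return
  fi
  # compare against the addresses configured on this machine
  if ip -o addr 2>/dev/null | grep -qF "$addr"; then
    echo "local"
  else
    echo "remote"
  fi
}

binds_here node11.cn   # "remote" on node14.cn would explain the BindException
```

A "remote" (or "unresolved") result on the node that logs the error points at a copied configuration that still names another host's address.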
Solution:

Delete the contents of the dfs directory on the new node, then rerun the following commands:

shell>rm -rf dfs/
shell>hadoop-daemon.sh start datanode
shell>yarn-daemon.sh start nodemanager
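Restarting alone does not prove the daemon survived; in the failure above it exited seconds after launch. One way to confirm the restart took is to scan the freshly written daemon log for fatal errors (a sketch; a live log normally sits under $HADOOP_HOME/logs, the path below exists only for the demo):

```shell
# Return "errors-found" if a daemon log contains a fatal startup error,
# "clean" otherwise.
log_ok() {
  if grep -Eq 'FATAL|java\.net\.BindException' "$1"; then
    echo "errors-found"
  else
    echo "clean"
  fi
}

# demo on a captured snippet rather than a live log:
printf '%s\n' \
  '2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.' \
  > /tmp/daemon-snippet.log
log_ok /tmp/daemon-snippet.log   # prints "errors-found"
```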
Refresh the NameNode's node list:

shell>hdfs dfsadmin -refreshNodes
shell>start-balancer.sh

The new DataNode is now registered. To redistribute existing data onto the new DataNode host:

shell>hadoop balancer -threshold 10    # the threshold parameter controls disk-usage balance; the smaller the value, the more even the utilization across nodes
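What -threshold means can be sketched in a few lines: after balancing, each DataNode's disk utilization should end up within the given number of percentage points of the cluster-wide average, so a smaller threshold forces a more even spread. A toy illustration (the utilization figures are made up):

```shell
# Decide whether a node deviates from the cluster average by more than
# the balancer threshold (all values in whole percentage points).
off_balance() {
  local node_pct=$1 avg_pct=$2 threshold=$3
  local diff=$(( node_pct - avg_pct ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  if [ "$diff" -gt "$threshold" ]; then
    echo "rebalance"
  else
    echo "ok"
  fi
}

off_balance 75 60 10   # 15-point deviation, threshold 10 -> "rebalance"
off_balance 65 60 10   # 5-point deviation, within 10     -> "ok"
```

With -threshold 10, the balancer keeps moving blocks until no node sits more than 10 points from the average; lowering it tightens the band at the cost of more data movement.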