Lessons from a failed dynamic DataNode addition in Hadoop

Dynamically adding a DataNode on host node14.cn:
shell>hadoop-daemon.sh start datanode
shell>jps #check whether the DataNode process is running
The DataNode process appeared briefly and then exited. The logs contained the following records (note that this trace is from a NameNode startup attempt on the new host node14.cn, which tried and failed to bind node11.cn:9001):

2018-04-15 00:08:43,158 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-04-15 00:08:43,168 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2018-04-15 00:08:43,673 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-04-15 00:08:43,837 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2018-04-15 00:08:43,839 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is node11.cn:9000
2018-04-15 00:08:44,138 WARN org.apache.hadoop.fs.FileSystem: "node11.cn:9000" is a deprecated filesystem name. Use "hdfs://node11.cn:9000/" instead.
2018-04-15 00:08:44,196 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://node11.cn:9001
2018-04-15 00:08:44,266 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-04-15 00:08:44,273 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2018-04-15 00:08:44,293 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2018-04-15 00:08:44,298 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2018-04-15 00:08:44,374 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2018-04-15 00:08:44,377 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2018-04-15 00:08:44,411 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,414 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-04-15 00:08:44,415 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-04-15 00:08:44,415 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: node11.cn:9001
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:892)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:828)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:706)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:593)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:765)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:749)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1446)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1512)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:887)
        ... 8 more
2018-04-15 00:08:44,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2018-04-15 00:08:44,426 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node14.cn/192.168.74.114
************************************************************/
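The nested "Cannot assign requested address" is the key clue. The log shows the process using node11.cn's addresses (fs.defaultFS is node11.cn:9000, the web server at node11.cn:9001), so the new host node14.cn tried to bind an IP it does not own — a typical symptom of a configuration copied verbatim from the NameNode host. A minimal reproduction of that error, independent of Hadoop (assumes python3 is on the PATH; 192.0.2.1 is a documentation-only TEST-NET-1 address that no real host owns):

```shell
# Binding an IP the host does not own raises EADDRNOTAVAIL -- the same
# "Cannot assign requested address" nested in the NameNode trace above.
python3 -c '
import errno, socket
s = socket.socket()
try:
    s.bind(("192.0.2.1", 9001))
    print("bound")
except OSError as e:
    print("EADDRNOTAVAIL" if e.errno == errno.EADDRNOTAVAIL else e)
'
```

By contrast, a plain "Address already in use" (without the nested cause) would point to another process already listening on the port.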

Solution:
Delete the contents of the dfs directory on the new host and rerun the following commands:
shell>rm -rf dfs/
shell>hadoop-daemon.sh start datanode
shell>yarn-daemon.sh start nodemanager
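A common reason wiping dfs/ fixes a DataNode that dies right after start: when the Hadoop directory is copied wholesale from an existing node, the data directory carries that node's VERSION file, and a DataNode whose clusterID does not match the NameNode's shuts itself down on registration. The sketch below mimics the check with throwaway files; on a real cluster, compare `current/VERSION` under dfs.datanode.data.dir with the one under dfs.namenode.name.dir (the clusterID values here are hypothetical):

```shell
# Simulated VERSION files; real ones live in
# <dfs.datanode.data.dir>/current/VERSION and <dfs.namenode.name.dir>/current/VERSION.
tmp=$(mktemp -d)
printf 'clusterID=CID-1111\n' > "$tmp/datanode_VERSION"
printf 'clusterID=CID-2222\n' > "$tmp/namenode_VERSION"
dn_id=$(grep '^clusterID=' "$tmp/datanode_VERSION")
nn_id=$(grep '^clusterID=' "$tmp/namenode_VERSION")
if [ "$dn_id" != "$nn_id" ]; then
  echo "clusterID mismatch: the DataNode will exit shortly after start"
fi
rm -r "$tmp"
```

Deleting the copied dfs/ contents lets the restarted DataNode create fresh storage directories stamped with the cluster's own clusterID.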

Refresh the NameNode's view of the cluster:
shell>hdfs dfsadmin -refreshNodes
shell>start-balancer.sh
The new DataNode has been added successfully (you can confirm it appears in the output of hdfs dfsadmin -report).
To distribute existing data onto the new DataNode host:
shell>hadoop balancer -threshold 10 #threshold controls how evenly disk usage is spread across nodes; the smaller the value, the more even the utilization
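What `-threshold 10` means: the balancer moves blocks until every DataNode's disk utilization is within 10 percentage points of the cluster-wide average. A sketch of that check with hypothetical figures:

```shell
cluster_avg=60   # percent: hypothetical cluster-wide average utilization
node_usage=75    # percent: hypothetical usage on one DataNode
threshold=10     # the value passed as -threshold
diff=$((node_usage - cluster_avg))
abs_diff=${diff#-}   # strip a leading minus sign to get the absolute deviation
if [ "$abs_diff" -gt "$threshold" ]; then
  echo "over threshold: the balancer will move blocks off this node"
else
  echo "within threshold: no rebalancing needed for this node"
fi
```

With these numbers the node deviates by 15 points, so the balancer would move blocks off it; with `-threshold 20` the same node would be left alone. Smaller thresholds give a more even cluster at the cost of more block movement.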

Reposted from blog.51cto.com/maoxiaoxiong/2103543