No DataNode after starting Hadoop

Reposted from: blog.csdn.net/feng_zhiyu/article/details/81042760

Checking the DataNode log (by default under $HADOOP_HOME/logs, in a file named like hadoop-&lt;user&gt;-datanode-&lt;hostname&gt;.log) reveals:

java.io.IOException: Incompatible clusterIDs in /home/storm/hadoop/dfs/data: namenode clusterID = CID-bee17bb7-308b-4e4d-b059-3c73519a9d0e; datanode clusterID = CID-7fca37f0-a600-4e89-a012-aec2a3499151
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
    at java.lang.Thread.run(Thread.java:745)
2017-10-06 12:01:35,752 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1) service to h1/172.18.18.189:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
    at java.lang.Thread.run(Thread.java:745)
2017-10-06 12:01:35,752 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1) service to h1/172.18.18.189:9000
2017-10-06 12:01:35,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1)
2017-10-06 12:01:37,754 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-10-06 12:01:37,757 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-10-06 12:01:37,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at h3/172.18.18.173
************************************************************/
    
    

The log shows that the datanode's clusterID and the namenode's clusterID do not match.

Solution 1:

Following the path in the log, cd into the dfs directory, here /home/storm/hadoop/tmp/dfs (the log above actually reports /home/storm/hadoop/dfs; use whatever path your own log shows). Inside it you will see two directories, data and name.

Copy the clusterID from name/current/VERSION into data/current/VERSION, overwriting the datanode's old clusterID, as in the sketch below.
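For example, run the following from inside the dfs directory (a minimal shell sketch assuming the layout above; editing data/current/VERSION by hand in a text editor works just as well):

# Compare the two IDs first
cat name/current/VERSION
cat data/current/VERSION

# Overwrite the datanode's clusterID line with the namenode's
NN_CID=$(grep '^clusterID=' name/current/VERSION)
sed -i "s/^clusterID=.*/${NN_CID}/" data/current/VERSION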

With the two IDs now identical, restart Hadoop. After startup, run jps to check the processes; the DataNode should be back:

20131 SecondaryNameNode
20449 NodeManager
19776 NameNode
21123 Jps
19918 DataNode
20305 ResourceManager

Solution 2:

Simply delete everything inside the name and data directories under dfs, then restart (see the sketch below). Note that this throws away all data stored in HDFS.
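A sketch of this approach (assumes the same dfs path as above; on a multi-node cluster like the h1/h3 setup in the log, the data directory must be wiped on every datanode host):

# Stop HDFS first
stop-dfs.sh

# Wipe both storage directories (this deletes ALL HDFS data)
rm -rf /home/storm/hadoop/tmp/dfs/name/* /home/storm/hadoop/tmp/dfs/data/*

# The name directory is now empty, so format it again before starting
hdfs namenode -format
start-dfs.sh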

Why the problem occurs

After dfs was formatted the first time, Hadoop was started and used; later the format command (hdfs namenode -format) was run again. Re-formatting generates a new clusterID for the namenode, while the datanode's clusterID stays unchanged, so the two no longer match.
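Concretely, after the second format the two VERSION files disagree; with the IDs from the log above they would contain (other fields omitted):

# name/current/VERSION (regenerated by the re-format)
clusterID=CID-bee17bb7-308b-4e4d-b059-3c73519a9d0e

# data/current/VERSION (still the original ID)
clusterID=CID-7fca37f0-a600-4e89-a012-aec2a3499151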

        <link rel="stylesheet" href="https://csdnimg.cn/release/phoenix/template/css/markdown_views-ea0013b516.css">
            </div>

查看日志后发现:

java.io.IOException: Incompatible clusterIDs in /home/storm/hadoop/dfs/data: namenode clusterID = CID-bee17bb7-308b-4e4d-b059-3c73519a9d0e; datanode clusterID = CID-7fca37f0-a600-4e89-a012-aec2a3499151
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:760)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:293)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
    at java.lang.Thread.run(Thread.java:745)
2017-10-06 12:01:35,752 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1) service to h1/172.18.18.189:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
    at java.lang.Thread.run(Thread.java:745)
2017-10-06 12:01:35,752 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1) service to h1/172.18.18.189:9000
2017-10-06 12:01:35,754 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid 7044cfaa-9868-44d8-a50d-15acd0a7fff1)
2017-10-06 12:01:37,754 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-10-06 12:01:37,757 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-10-06 12:01:37,760 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at h3/172.18.18.173
************************************************************/
  
  
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16
  • 17
  • 18
  • 19
  • 20
  • 21
  • 22
  • 23
  • 24
  • 25
  • 26
  • 27
  • 28
  • 29

从日志上看,datanode的clusterID 和 namenode的clusterID 不匹配。

解决办法一:

根据日志中的路径,cd /home/storm/hadoop/tmp/dfs,能看到 data和name两个文件夹,

将name/current下的VERSION中的clusterID复制到data/current下的VERSION中,覆盖掉原来的clusterID。

让两个保持一致,然后重启,启动后执行jps,查看进程:

20131 SecondaryNameNode
20449 NodeManager
19776 NameNode
21123 Jps
19918 DataNode
20305 ResourceManager

解决办法二:

直接删除掉dfs文件夹中name和data文件夹里的所有内容,重启。

出现该问题的原因

在第一次格式化dfs后,启动并使用了hadoop,后来又重新执行了格式化命令(hdfs namenode -format),这时namenode的clusterID会重新生成,而datanode的clusterID 保持不变。

        <link rel="stylesheet" href="https://csdnimg.cn/release/phoenix/template/css/markdown_views-ea0013b516.css">
            </div>

猜你喜欢

转载自blog.csdn.net/feng_zhiyu/article/details/81042760