Debugging a DataNode startup failure during a Hadoop setup!

Copyright notice: this is an original post by the author and may not be reposted without permission (to request reposting, contact QQ: 787824374)!!! https://blog.csdn.net/qq_19107011/article/details/86065455

Downloading Hadoop

I used the Hadoop 2.6.0-cdh5.15.0 release.
Before setting this up I had read plenty of articles online, but as soon as I tried it myself, a strange problem appeared:
after I formatted HDFS, the DataNode failed to start!

Below is my troubleshooting process; I hope it saves anyone reading this some detours.

First, my configuration files. (Two side notes on them: fs.default.name is the deprecated alias of fs.defaultFS, so setting both to the same value is redundant; likewise dfs.name.dir and dfs.data.dir are the old names for dfs.namenode.name.dir and dfs.datanode.data.dir, which still work in Hadoop 2.x but log deprecation warnings.)

[root@miv hadoop]# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
                <property>
                        <name>hadoop.tmp.dir</name>
                        <value>/apps/data/hadoop/tmp</value>
                </property>
                <property>
                        <name>fs.default.name</name>
                        <value>hdfs://hadoop:9000</value>
                </property>
                <property>
                        <name>fs.defaultFS</name>
                        <value>hdfs://hadoop:9000</value>
                </property>
</configuration>
[root@miv hadoop]# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
        <property>
        <name>dfs.name.dir</name>
        <value>/apps/data/hadoop/name</value>
        </property>

        <property>
        <name>dfs.data.dir</name>
        <value>/apps/data/hadoop/data</value>
        </property>

        <property>
        <name>dfs.replication</name>
        <value>1</value>
        </property>
</configuration>
[root@miv hadoop]# cat slaves
hadoop
[root@miv hadoop]#
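Before formatting anything, it is worth making sure the directories these files point at actually exist and are writable by the user that starts the daemons; a missing or unwritable dfs.data.dir is another common way a DataNode dies. A minimal sketch (BASE here is a throwaway demo path; on my machine the real prefix is /apps/data/hadoop):

```shell
# Pre-create the storage directories named in core-site.xml and hdfs-site.xml.
# BASE is a throwaway path for the demo; substitute /apps/data/hadoop for real use.
BASE="${BASE:-/tmp/hadoop-dirs-demo}"
mkdir -p "$BASE/tmp" "$BASE/name" "$BASE/data"
# The user that starts the daemons must own these (I run everything as root here).
ls -d "$BASE/tmp" "$BASE/name" "$BASE/data"
```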


The problem appears

I formatted the filesystem with hdfs namenode -format.
The format itself went smoothly, with no errors.
But after restarting Hadoop with start-all.sh, the DataNode still failed to come up.
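When a daemon dies like this, the first stop should be its log file: the DataNode writes the actual stack trace under $HADOOP_HOME/logs. A sketch of the kind of search I mean, run here against a fabricated sample log since the real path depends on your install:

```shell
# Grep a DataNode log for ERROR lines and exceptions.
# The sample below is fabricated; the real log lives under $HADOOP_HOME/logs
# with a name like hadoop-<user>-datanode-<host>.log.
LOG=/tmp/datanode-demo.log
cat > "$LOG" <<'EOF'
2019-01-07 10:00:01,000 INFO  datanode.DataNode: registered UNIX signal handlers
2019-01-07 10:00:02,000 ERROR datanode.DataNode: java.net.BindException: Port in use: localhost:0
EOF
grep -E 'ERROR|Exception' "$LOG"
```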

At this point I tried a method suggested online: copy the VERSION file from under data/current up into the data directory itself. I did, but it was no use at all!
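That tip was not the answer in my case, but for the record, here is why it circulates: every `hdfs namenode -format` writes a fresh clusterID into &lt;dfs.name.dir&gt;/current/VERSION, and a DataNode whose &lt;dfs.data.dir&gt;/current/VERSION still holds the old clusterID refuses to join. A sketch of the check, using fabricated VERSION files (the real ones sit under the /apps/data/hadoop/name and /apps/data/hadoop/data paths configured above):

```shell
# Compare NameNode vs DataNode clusterIDs; a mismatch after re-formatting
# is the classic reason a DataNode fails to start.
# Fabricated files in a demo directory; real ones live under dfs.name.dir/current
# and dfs.data.dir/current.
DIR=/tmp/hadoop-version-demo
mkdir -p "$DIR/name/current" "$DIR/data/current"
echo 'clusterID=CID-new-after-format' > "$DIR/name/current/VERSION"
echo 'clusterID=CID-old-datanode'     > "$DIR/data/current/VERSION"
NN_ID=$(grep clusterID "$DIR/name/current/VERSION")
DN_ID=$(grep clusterID "$DIR/data/current/VERSION")
if [ "$NN_ID" != "$DN_ID" ]; then
  echo "clusterID mismatch: wipe dfs.data.dir or copy the NameNode clusterID over"
fi
```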

Following the trail: start from the failing DataNode

I noticed there is an hdfs command:

[root@miv sbin]# hdfs
Usage: hdfs [--config confdir] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  diskbalancer         Distributes data evenly among disks on a given node
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across
                       storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                                                Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version

Most commands print help when invoked w/o parameters.
[root@miv sbin]#

From the help output, hdfs datanode starts a DataNode directly in the foreground, so I ran it.
It threw an exception:

java.net.BindException: Port in use: localhost:0
Caused by: java.net.BindException: Cannot assign requested address

The fix: /etc/hosts was the culprit all along

A Baidu search for this exception turned up the solution:
the hosts file was what was keeping the DataNode from starting.
I edited /etc/hosts:

[root@miv sbin]# cat /etc/hosts
192.168.0.119 hadoop
127.0.0.1       localhost localhost.localdomain
::1             localhost localhost.localdomain
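The "Port in use: localhost:0" wording is misleading: "Cannot assign requested address" means the DataNode could not even bind the name it was given, which in my case traced back to the hosts entries above (presumably the localhost lines were missing or wrong before the edit). With the file fixed, a quick sanity check that every name Hadoop will bind to actually resolves, assuming a glibc system with getent (the hostname "hadoop" comes from my slaves file and core-site.xml, so it may not resolve on your machine):

```shell
# Confirm each hostname Hadoop will bind to resolves to an address.
# 'hadoop' comes from my slaves file / core-site.xml; swap in your own names.
for h in localhost hadoop; do
  if getent hosts "$h" > /dev/null 2>&1; then
    echo "$h resolves"
  else
    echo "$h does NOT resolve -> expect a BindException"
  fi
done
```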

Restarting

I restarted Hadoop and everything came up cleanly. Perfect! A round of applause for myself, haha.

[root@miv sbin]# jps
9456 SecondaryNameNode
9126 NameNode
9737 NodeManager
12651 Jps
9276 DataNode
9628 ResourceManager
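Rather than eyeballing the jps listing, the presence check can be scripted. The sketch below runs against the captured output above; in practice you would pipe jps straight in:

```shell
# Assert that all five expected daemons appear in the (captured) jps output.
# In practice: JPS_OUT=$(jps)
JPS_OUT="9456 SecondaryNameNode
9126 NameNode
9737 NodeManager
9276 DataNode
9628 ResourceManager"
MISSING=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  echo "$JPS_OUT" | grep -qw "$d" || { echo "missing: $d"; MISSING=1; }
done
[ "$MISSING" -eq 0 ] && echo "all daemons up"
```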
