hadoop01:8020 failed on connection exception: java.net.ConnectException: Connection refused

While using HDFS in the lab environment, commands kept failing with an error; the client could not connect to Hadoop at all, and the error persisted after a restart, so a careful investigation was needed.

[root@hadoop01 ~]# hadoop fs -ls /

ls: Call From hadoop01/172.16.18.133 to hadoop01:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
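A "Connection refused" on port 8020 usually means nothing is listening on that port, i.e. the NameNode is not actually up. Before digging into processes, a quick probe can confirm this. The sketch below uses bash's built-in /dev/tcp redirection and the coreutils `timeout` command (both assumptions about the shell environment; the host and port come from the error message above):

```shell
# Probe host:port; prints "open" if something accepts the connection,
# "closed" otherwise (connection refused, timeout, or DNS failure).
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port hadoop01 8020
```

If this prints "closed" while `jps` claims a NameNode is running, the pid/state files and the actual process list have diverged, which is exactly what the rest of this walkthrough uncovers.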
[root@hadoop01 ~]# jps
10211 DataNode
10475 SecondaryNameNode
10871 NodeManager
8524 -- process information unavailable    ---- process not available
10740 ResourceManager
9206 -- process information unavailable
11878 Jps
[root@hadoop01 ~]# ps -ef |grep 8524
hive      8524  7849  0 May01 ?        00:12:00 /usr/java/jdk1.7.0_79/bin/java -Xmx1000m -Dwebhcat.log.dir=/var/log/hcatalog -Dlog4j.configuration=file:/opt/cloudera-manager/cm-5.9.3/run/cloudera-scm-agent/process/182-hive-WEBHCAT/webhcat-log4j.properties -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Xms268435456 -Xmx268435456 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hive_hive-WEBHCAT-10fa72533c89501f894de40757964c0f_pid8524.hprof -XX:OnOutOfMemoryError=/opt/cloudera-manager/cm-5.9.3/lib64/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.1.0-cdh5.9.0.jar org.apache.hive.hcatalog.templeton.Main

root     11911  8872  0 14:28 pts/1    00:00:00 grep 8524

Process 8524 is running under the hive user.
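Rather than scanning the full `ps -ef` output, you can ask `ps` for just the owning user of a PID. A small sketch (the demo call uses the current shell's PID, since 8524 only exists on the box from the transcript):

```shell
# Print only the user owning a given PID: "-o user=" selects the user
# column with no header; tr strips the padding ps adds.
pid_owner() {
  ps -o user= -p "$1" | tr -d '[:space:]'
}

pid_owner $$   # prints the user running this shell
```

On the machine above, `pid_owner 8524` would have printed `hive`, confirming the stray WebHCat process belongs to the CDH installation, not the standalone Hadoop.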

Kill the process and delete its PID file.

[root@hadoop01 hsperfdata_hive]# pwd
/tmp/hsperfdata_hive
[root@hadoop01 hsperfdata_hive]# ls
8524
[root@hadoop01 hsperfdata_hive]# rm 8524 
rm: remove regular file '8524'? y

[root@hadoop01 ~]# ps -ef |grep 9206 

root     11997  8872  0 14:30 pts/1    00:00:00 grep 9206

Process 9206 no longer exists; delete its stale PID file as well.

[root@hadoop01 hsperfdata_hadoop]# pwd
/tmp/hsperfdata_hadoop
[root@hadoop01 hsperfdata_hadoop]# ls
9206
[root@hadoop01 hsperfdata_hadoop]# rm 9206 
rm: remove regular file '9206'? y
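The files jps reads live under /tmp/hsperfdata_&lt;user&gt;/&lt;pid&gt;; when a JVM dies but its file survives, jps shows "process information unavailable". Instead of checking each PID by hand as above, a small sketch can list every stale entry in one pass (Linux-specific: it checks for a /proc/&lt;pid&gt; directory, which works regardless of which user owns the process):

```shell
# List hsperfdata files whose PID is no longer alive, i.e. exactly the
# entries that make jps print "process information unavailable".
find_stale_hsperf() {
  local base=$1
  for f in "$base"/hsperfdata_*/[0-9]*; do
    [ -e "$f" ] || continue              # glob matched nothing
    local pid=${f##*/}                   # filename is the PID
    [ -d "/proc/$pid" ] || echo "stale: $f"
  done
}

find_stale_hsperf /tmp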

Stop process 8524:

[root@hadoop01 ~]# kill -9 8524

Restart HDFS:

[root@hadoop01 ~]# start-dfs.sh 
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-root-namenode-hadoop1.out
hadoop01: datanode running as process 10211. Stop it first.
Starting secondary namenodes [hadoop01]
hadoop01: secondarynamenode running as process 10475. Stop it first.

[root@hadoop01 ~]# jps
12560 Jps
10211 DataNode
10475 SecondaryNameNode
10871 NodeManager
8524 -- process information unavailable
10740 ResourceManager
9206 -- process information unavailable

The output tells us the DataNode (PID 10211) is still running, so stop all the processes and start over.

With the stale process files cleaned up, restart everything as the hadoop user:

[hadoop@hadoop01 ~]$ jps
2488 NameNode
2614 DataNode
3339 Jps
2848 SecondaryNameNode
3036 ResourceManager
3158 NodeManager

[hadoop@hadoop01 ~]$ hadoop fs -ls /
Found 4 items
-rw-r--r--   3 root   supergroup    8617253 2018-04-26 16:11 /apache-maven-3.3.9-bin.zip
drwxr-xr-x   - root   supergroup          0 2018-04-26 16:16 /dir
drwx------   - hadoop supergroup          0 2018-04-27 12:58 /tmp
drwxr-xr-x   - hadoop supergroup          0 2018-04-27 12:58 /user

Summary: a standalone Hadoop installation and a CDH cluster were both deployed on the same host, and the two environments interfered with each other (the hostname even kept changing). The approach: first inspect the running processes with jps, then check which user each process belongs to; if a process listed by jps no longer exists, delete the corresponding PID file under /tmp/hsperfdata_&lt;user&gt; for that user.


Reposted from blog.csdn.net/ycwyong/article/details/80483393