Hadoop 2.2.0 cluster error messages


1. When putting a local file into HDFS with # hadoop fs -put in.txt /test, the following error message appears:

   hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException

Solution: shut down the firewall on all nodes (non-secure mode only; if you run in secure mode, you should configure the firewall rules manually). After disabling it, you can verify that the DataNode port is reachable, as shown after the commands below.

   CentOS:
   # service iptables save
   # service iptables stop
   # chkconfig iptables off

   Ubuntu 12.04:
   # sudo ufw disable

   Fedora 20:
   # sudo systemctl status firewalld.service
   # sudo systemctl stop firewalld.service
   # sudo systemctl disable firewalld.service
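
If the exception persists after the firewall is down, a quick reachability check against the DataNode data-transfer port usually confirms whether the fix took. A minimal sketch, assuming the Hadoop 2.2 default port 50010 and a DataNode host named f2.zhj (the node that appears in the logs further down); nc (netcat) is assumed to be installed, telnet works just as well:

   # from the client, check that the DataNode data-transfer port is reachable
   $ nc -zv f2.zhj 50010
   # a "succeeded"/"open" reply means block writes should no longer hit NoRouteToHostException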


2. "Could only be replicated to ..." errors.

    see: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
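
Before working through that wiki page, it is worth confirming that the NameNode actually sees live DataNodes with free DFS space, since that is the most common cause of this error; this check is not from the original post, just a standard first step:

   # list live/dead DataNodes and their remaining DFS capacity
   $ hdfs dfsadmin -report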

                  

3. When using Sqoop to import data from MySQL into HDFS,

sqoop> start job -j 3

the job fails with the following errors:

2014-03-20 02:31:14,695 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Cannot assign container Container: [ContainerId:
container_1395399417464_0002_01_000012, NodeId: f2.zhj:40543,
NodeHttpAddress: f2.zhj:8042, Resource: ,
Priority: 20, Token: Token { kind: ContainerToken,
service: 192.168.122.3:40543 }, ] for a map as either  container memory
less than required 1024 or no pending map tasks - maps.isEmpty=true



2014-03-20 02:33:49,930 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter:
Could not delete hdfs://192.168.122.1:2014/test/actor/_temporary/1
/_temporary/attempt_1395399417464_0002_m_000002


The above errors were all resolved by changing /etc/hosts on all nodes: comment out every line that starts with 127.0.0.1 or 127.0.1.1, so that the cluster hostnames resolve only to the nodes' real IP addresses.
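
A sketch of what /etc/hosts could look like on each node after the change; the 192.168.122.x addresses and the f2.zhj hostname come from the logs above, while master.zhj is a hypothetical name for the NameNode host:

   # /etc/hosts (identical on every node)
   # 127.0.0.1   localhost      <- commented out
   # 127.0.1.1   f2.zhj         <- commented out
   192.168.122.1   master.zhj
   192.168.122.3   f2.zhj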


Reprinted from ylzhj02.iteye.com/blog/2011492