Big Data: Analyzing a Hadoop Deployment Error

The Hadoop MapReduce2 service check (Check MapReduce2) fails with the following error:

stderr: /var/lib/ambari-agent/data/errors-50.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 168, in <module>
    MapReduce2ServiceCheck().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/mapred_service_check.py", line 154, in service_check
    logoutput=True)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/execute_hadoop.py", line 44, in action_run
    environment = self.resource.environment,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'hadoop --config /usr/hdp/2.6.1.0-129/hadoop/conf jar /usr/hdp/2.6.1.0-129/hadoop-mapreduce/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput' returned 1. 19/01/10 05:43:39 INFO client.RMProxy: Connecting to ResourceManager at slaver1.hadoop/192.168.200.5:8050
19/01/10 05:43:39 INFO client.AHSProxy: Connecting to Application History server at slaver1.hadoop/192.168.200.5:10200
19/01/10 05:43:40 INFO input.FileInputFormat: Total input paths to process : 1
19/01/10 05:43:40 INFO mapreduce.JobSubmitter: number of splits:1
19/01/10 05:43:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1547070063184_0002
19/01/10 05:43:41 INFO impl.YarnClientImpl: Submitted application application_1547070063184_0002
19/01/10 05:43:41 INFO mapreduce.Job: The url to track the job: http://slaver1.hadoop:8088/proxy/application_1547070063184_0002/
19/01/10 05:43:41 INFO mapreduce.Job: Running job: job_1547070063184_0002
19/01/10 05:45:09 INFO mapreduce.Job: Job job_1547070063184_0002 running in uber mode : false
19/01/10 05:45:09 INFO mapreduce.Job: map 0% reduce 0%
19/01/10 05:45:09 INFO mapreduce.Job: Job job_1547070063184_0002 failed with state FAILED due to: Application application_1547070063184_0002 failed 2 times due to AM Container for appattempt_1547070063184_0002_000002 exited with exitCode: -104
For more detailed output, check the application tracking page: http://slaver1.hadoop:8088/cluster/app/application_1547070063184_0002 Then click on links to logs of each attempt.
Diagnostics: Container [pid=27282,containerID=container_1547070063184_0002_02_000001] is running beyond physical memory limits. Current usage: 139.9 MB of 128 MB physical memory used; 1.9 GB of 268.8 MB virtual memory used. Killing container.
Dump of the process-tree for container_1547070063184_0002_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27296 27282 27282 27282 (java) 560 111 1884561408 35497 /usr/jdk64/jdk1.8.0_77/bin/java -Djava.io.tmpdir=/home/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1547070063184_0002/container_1547070063184_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.6.1.0-129 -Xmx102m -Dhdp.version=2.6.1.0-129 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
|- 27282 27280 27282 27282 (bash) 0 0 115838976 306 /bin/bash -c /usr/jdk64/jdk1.8.0_77/bin/java -Djava.io.tmpdir=/home/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1547070063184_0002/container_1547070063184_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.6.1.0-129 -Xmx102m -Dhdp.version=2.6.1.0-129 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001/stdout 2>/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
19/01/10 05:45:09 INFO mapreduce.Job: Counters: 0
stdout: /var/lib/ambari-agent/data/output-50.txt
2019-01-10 05:43:36,916 - Using hadoop conf dir: /usr/hdp/2.6.1.0-129/hadoop/conf
2019-01-10 05:43:36,916 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.1.0-129 -> 2.6.1.0-129
2019-01-10 05:43:36,917 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2019-01-10 05:43:36,943 - call returned (0, 'hadoop-yarn-resourcemanager - 2.6.1.0-129')
2019-01-10 05:43:36,965 - Using hadoop conf dir: /usr/hdp/2.6.1.0-129/hadoop/conf
2019-01-10 05:43:36,974 - HdfsResource['/user/ambari-qa'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.1.0-129/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://master.hadoop:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/2.6.1.0-129/hadoop/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0770}
2019-01-10 05:43:36,977 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.hadoop:50070/webhdfs/v1/user/ambari-qa?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpNqtdYe 2>/tmp/tmpoTu4ki''] {'logoutput': None, 'quiet': False}
2019-01-10 05:43:37,040 - call returned (0, '')
2019-01-10 05:43:37,041 - HdfsResource['/user/ambari-qa/mapredsmokeoutput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.1.0-129/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://master.hadoop:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['delete_on_execute'], 'hadoop_conf_dir': '/usr/hdp/2.6.1.0-129/hadoop/conf', 'type': 'directory', 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2019-01-10 05:43:37,042 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.hadoop:50070/webhdfs/v1/user/ambari-qa/mapredsmokeoutput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpCRPWb8 2>/tmp/tmppTGB5w''] {'logoutput': None, 'quiet': False}
2019-01-10 05:43:37,104 - call returned (0, '')
2019-01-10 05:43:37,105 - HdfsResource['/user/ambari-qa/mapredsmokeinput'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.1.0-129/hadoop/bin', 'keytab': [EMPTY], 'source': '/etc/passwd', 'dfs_type': '', 'default_fs': 'hdfs://master.hadoop:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['create_on_execute'], 'hadoop_conf_dir': '/usr/hdp/2.6.1.0-129/hadoop/conf', 'type': 'file', 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2019-01-10 05:43:37,106 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://master.hadoop:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp6uMcX9 2>/tmp/tmpHiE7pg''] {'logoutput': None, 'quiet': False}
2019-01-10 05:43:37,170 - call returned (0, '')
2019-01-10 05:43:37,171 - Creating new file /user/ambari-qa/mapredsmokeinput in DFS
2019-01-10 05:43:37,172 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT --data-binary @/etc/passwd -H '"'"'Content-Type: application/octet-stream'"'"' '"'"'http://master.hadoop:50070/webhdfs/v1/user/ambari-qa/mapredsmokeinput?op=CREATE&user.name=hdfs&overwrite=True'"'"' 1>/tmp/tmpkIpSOs 2>/tmp/tmpuA79fx''] {'logoutput': None, 'quiet': False}
2019-01-10 05:43:37,704 - call returned (0, '')
2019-01-10 05:43:37,705 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/2.6.1.0-129/hadoop/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://master.hadoop:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/2.6.1.0-129/hadoop/conf', 'immutable_paths': [u'/mr-history/done', u'/app-logs', u'/tmp']}
2019-01-10 05:43:37,705 - ExecuteHadoop['jar /usr/hdp/2.6.1.0-129/hadoop-mapreduce/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'bin_dir': '/usr/sbin:/sbin:/usr/lib/ambari-server/:/usr/jdk64/jdk1.8.0_77/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/usr/hdp/2.6.1.0-129/hadoop/bin:/usr/hdp/2.6.1.0-129/hadoop-yarn/bin', 'conf_dir': '/usr/hdp/2.6.1.0-129/hadoop/conf', 'logoutput': True, 'try_sleep': 5, 'tries': 1, 'user': 'ambari-qa'}
2019-01-10 05:43:37,756 - Execute['hadoop --config /usr/hdp/2.6.1.0-129/hadoop/conf jar /usr/hdp/2.6.1.0-129/hadoop-mapreduce/hadoop-mapreduce-examples-2.*.jar wordcount /user/ambari-qa/mapredsmokeinput /user/ambari-qa/mapredsmokeoutput'] {'logoutput': True, 'try_sleep': 5, 'environment': {}, 'tries': 1, 'user': 'ambari-qa', 'path': [u'/usr/sbin:/sbin:/usr/lib/ambari-server/:/usr/jdk64/jdk1.8.0_77/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/var/lib/ambari-agent:/usr/hdp/2.6.1.0-129/hadoop/bin:/usr/hdp/2.6.1.0-129/hadoop-yarn/bin']}
19/01/10 05:43:39 INFO client.RMProxy: Connecting to ResourceManager at slaver1.hadoop/192.168.200.5:8050
19/01/10 05:43:39 INFO client.AHSProxy: Connecting to Application History server at slaver1.hadoop/192.168.200.5:10200
19/01/10 05:43:40 INFO input.FileInputFormat: Total input paths to process : 1
19/01/10 05:43:40 INFO mapreduce.JobSubmitter: number of splits:1
19/01/10 05:43:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1547070063184_0002
19/01/10 05:43:41 INFO impl.YarnClientImpl: Submitted application application_1547070063184_0002
19/01/10 05:43:41 INFO mapreduce.Job: The url to track the job: http://slaver1.hadoop:8088/proxy/application_1547070063184_0002/
19/01/10 05:43:41 INFO mapreduce.Job: Running job: job_1547070063184_0002
19/01/10 05:45:09 INFO mapreduce.Job: Job job_1547070063184_0002 running in uber mode : false
19/01/10 05:45:09 INFO mapreduce.Job: map 0% reduce 0%
19/01/10 05:45:09 INFO mapreduce.Job: Job job_1547070063184_0002 failed with state FAILED due to: Application application_1547070063184_0002 failed 2 times due to AM Container for appattempt_1547070063184_0002_000002 exited with exitCode: -104
For more detailed output, check the application tracking page: http://slaver1.hadoop:8088/cluster/app/application_1547070063184_0002 Then click on links to logs of each attempt.
Diagnostics: Container [pid=27282,containerID=container_1547070063184_0002_02_000001] is running beyond physical memory limits. Current usage: 139.9 MB of 128 MB physical memory used; 1.9 GB of 268.8 MB virtual memory used. Killing container.
Dump of the process-tree for container_1547070063184_0002_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27296 27282 27282 27282 (java) 560 111 1884561408 35497 /usr/jdk64/jdk1.8.0_77/bin/java -Djava.io.tmpdir=/home/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1547070063184_0002/container_1547070063184_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.6.1.0-129 -Xmx102m -Dhdp.version=2.6.1.0-129 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
|- 27282 27280 27282 27282 (bash) 0 0 115838976 306 /bin/bash -c /usr/jdk64/jdk1.8.0_77/bin/java -Djava.io.tmpdir=/home/hadoop/yarn/local/usercache/ambari-qa/appcache/application_1547070063184_0002/container_1547070063184_0002_02_000001/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Dhadoop.root.logfile=syslog -Dhdp.version=2.6.1.0-129 -Xmx102m -Dhdp.version=2.6.1.0-129 org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001/stdout 2>/hadoop/yarn/log/application_1547070063184_0002/container_1547070063184_0002_02_000001/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
19/01/10 05:45:09 INFO mapreduce.Job: Counters: 0
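
Error analysis: the wordcount smoke job never gets to run its tasks; YARN kills its ApplicationMaster. The diagnostics above show the AM container (container_1547070063184_0002_02_000001) using 139.9 MB of physical memory against a 128 MB allocation, so the NodeManager kills it (exit code -104 is YARN's code for exceeding the physical-memory limit; 143 is the SIGTERM the JVM receives). After two failed AM attempts the application, and with it the MapReduce2 service check, is marked FAILED. The fix is to give the AM (and task) containers more memory than the current 128 MB.

On an Ambari-managed HDP cluster these settings should be changed under MapReduce2 > Configs and the affected services restarted (hand edits to mapred-site.xml are overwritten by Ambari), but the underlying properties are the ones below. This is a minimal sketch: the 512 MB / -Xmx410m values are illustrative assumptions and must fit within yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb on the nodes.

```xml
<!-- mapred-site.xml (illustrative values; set via the Ambari UI on a managed cluster) -->
<property>
  <!-- Size of the MR ApplicationMaster container; effectively 128 MB in the failing run. -->
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>512</value>
</property>
<property>
  <!-- AM JVM heap; keep it at roughly 80% of the container size
       (the failing attempt ran with -Xmx102m inside the 128 MB container). -->
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx410m</value>
</property>
<property>
  <!-- Map and reduce task containers, sized the same way. -->
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>
```

After saving the new configuration and restarting the YARN/MapReduce2 components from Ambari, re-running the MapReduce2 service check should allow the wordcount job's AM to start within its container and the job to complete.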

Reprinted from blog.csdn.net/qq_41809929/article/details/86211882