Hadoop error: Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access ...: No such file or directory

I. Preface

        While running a simple API test against a freshly installed Hadoop cluster, IDEA threw an exception: a file path could not be accessed, no such file or directory. I had hit the same exception earlier while importing HDFS data into HBase. It is tempting to write this off as a simple permissions problem, but the causes behind it can be quite different.

II. Exceptions

1. Exception 1

        This exception was raised while using IDEA to write a small API that reads a file from HDFS and performs a simple count.

23/04/15 01:35:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/04/15 01:35:22 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
23/04/15 01:35:22 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
23/04/15 01:35:22 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
23/04/15 01:35:22 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/export/servers/hadoop-2.7.4/tmpha/mapred/staging/root2047142323/.staging/job_local2047142323_0001
Exception in thread "main" ExitCodeException exitCode=1: chmod: 无法访问 '/export/servers/hadoop-2.7.4/tmpha/mapred/staging/root2047142323/.staging/job_local2047142323_0001': 没有那个文件或目录

	at org.apache.hadoop.util.Shell.runCommand(Shell.java:585)
	at org.apache.hadoop.util.Shell.run(Shell.java:482)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:776)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:869)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:852)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
	at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:505)
	at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:486)
	at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:508)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:602)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:94)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:95)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:190)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
	at com.itcast.hadoop.example.dedup.DedupDriver.main(DedupDriver.java:48)

Process finished with exit code 1

        My first suspicion here was that the imported configuration files were triggering the exception, that is, placing HBase's hbase-site.xml together with Hadoop's core-site.xml and log4j.properties under IDEA's resources directory; with those files removed, the program ran normally. Note that the staging path in the log is a file:/ URI under /export/servers/hadoop-2.7.4/tmpha, a value presumably taken from hadoop.tmp.dir in the imported core-site.xml. That directory exists on the cluster nodes but not on the development machine, which would explain why chmod reports 'No such file or directory'.
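        If that reading is right, one workaround is to point the local job runner's working directories at a path that does exist on the development machine. Below is a minimal driver sketch under that assumption; the class name, paths, and the identity mapper/reducer setup are hypothetical and not the original DedupDriver:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class StagingDirSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hedged workaround: keep the local job runner's staging area under a
        // directory that exists on the development machine, overriding any
        // cluster-only hadoop.tmp.dir picked up from an imported core-site.xml.
        conf.set("hadoop.tmp.dir", "/tmp/hadoop-local");  // assumption: any writable local dir
        conf.set("mapreduce.framework.name", "local");    // run with the LocalJobRunner explicitly

        Job job = Job.getInstance(conf, "staging-dir-sketch");
        // Mapper/Reducer omitted: the defaults pass records through unchanged.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}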

2. Exception 2

        Earlier, on a cluster running Hadoop 2.10.1 and HBase 2.4.11, the same exception appeared while I was writing a job that reads file data from HDFS and imports it into HBase.

23/04/12 02:51:22 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/export/servers/hadoop-2.10.1/tmpha/mapred/staging/huanganchi1077728205/.staging/job_local1077728205_0001
Exception in thread "main" ExitCodeException exitCode=1: chmod: 无法访问 '/export/servers/hadoop-2.10.1/tmpha/mapred/staging/huanganchi1077728205/.staging/job_local1077728205_0001': 没有那个文件或目录

	at org.apache.hadoop.util.Shell.runCommand(Shell.java:998)
	at org.apache.hadoop.util.Shell.run(Shell.java:884)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1216)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1310)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1292)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:770)
	at org.apache.hadoop.fs.ChecksumFileSystem$1.apply(ChecksumFileSystem.java:506)
	at org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:487)
	at org.apache.hadoop.fs.ChecksumFileSystem.setPermission(ChecksumFileSystem.java:503)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:725)
	at org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:648)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:167)
	at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:128)
	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:101)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hbase.hbase_hdfs_data.hdfs_data_Runner.run(hdfs_data_Runner.java:41)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at com.itcast.hbase.hbase_hdfs_data.hdfs_data_Runner.main(hdfs_data_Runner.java:50)

        Comparing the two exceptions: both fail to access a file path, yet the underlying problems differ considerably. The articles I read on this exception fall roughly into two camps, modifying Hadoop's configuration files or adding an authorized user, because most of them attribute the problem to missing permissions blocking access to the path.

        I then ran three tests in turn, which produced two other kinds of exceptions.

(1) Test 1

        Test 1 places the configuration files that were modified while setting up the Hadoop cluster into IDEA's resources directory; at runtime the program fails with a permission-denied exception.

23/04/13 12:55:22 INFO common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
23/04/13 12:55:22 INFO zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop02/192.168.8.202:2181.
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.8.117:37152, server: hadoop02/192.168.8.202:2181
23/04/13 12:55:22 INFO zookeeper.ClientCnxn: Session establishment complete on server hadoop02/192.168.8.202:2181, session id = 0x200006cfbc50006, negotiated timeout = 40000
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=huanganchi, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:541)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1723)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:642)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2966)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:880)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1675)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1524)
	at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1521)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1521)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:135)
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:112)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:150)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hbase.hbase_hdfs_Mapreduce.hdfs_data_Driver.main(hdfs_data_Driver.java:31)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=huanganchi, access=EXECUTE, inode="/tmp":root:supergroup:drwx------
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:541)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1705)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1723)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:642)
	at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2966)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1160)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:880)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:507)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1034)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1003)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:931)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2854)

	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1549)
	at org.apache.hadoop.ipc.Client.call(Client.java:1495)
	at org.apache.hadoop.ipc.Client.call(Client.java:1394)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:800)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
	at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1673)
	... 15 more

Process finished with exit code 1
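        Note the shape of this failure: the client runs as the OS user huanganchi, while /tmp on HDFS is owned by root with mode drwx------. Tests 2 and 3 below attack exactly that mismatch. A third option, not one of the tests but a minimal sketch of an equivalent approach, is to wrap the client calls in a UserGroupInformation.doAs block so they execute as the owning user; the NameNode address here is assumed to match the one used in Test 2:

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsSketch {
    public static void main(String[] args) throws Exception {
        // Run the HDFS calls as "root", the owner of /tmp in the exception above.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("root");
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            // Assumption: the same NameNode address used in Test 2 below.
            conf.set("fs.defaultFS", "hdfs://spark01:9000");
            try (FileSystem fs = FileSystem.get(conf)) {
                // The EXECUTE check on /tmp is now evaluated against root.
                System.out.println(fs.getFileStatus(new Path("/tmp")));
            }
            return null;
        });
    }
}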

(2)测试二&&测试三

       

To clarify: Tests 2 and 3 were both carried out on top of Test 1.
Test 2 adds the following to the code:
        // Configuration and FileSystem client objects, declared here so the fragment is self-contained
        Configuration conf = new Configuration();
        // Point the client at the target file system: HDFS on the cluster's NameNode
        conf.set("fs.defaultFS", "hdfs://spark01:9000");
        // Set the client's access identity: access HDFS as root
        System.setProperty("HADOOP_USER_NAME", "root");
        // Obtain a file system client object via FileSystem's static factory method
        FileSystem fs = FileSystem.get(conf);

Test 3 instead opens up permissions on the directory that was denied access:
hadoop fs -chmod 777 /xxxx
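
The same change can also be made from the Java client. A one-line sketch, assuming the fs handle from Test 2 and targeting the /tmp inode named in the Test 1 exception (777 here is octal):

        // Programmatic equivalent of `hadoop fs -chmod 777 ...`, assuming the
        // fs client from Test 2; /tmp is the inode reported in the exception.
        // Requires org.apache.hadoop.fs.permission.FsPermission.
        fs.setPermission(new Path("/tmp"), new FsPermission((short) 0777));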

Both tests then reported the same exception:
23/04/13 19:56:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/04/13 19:56:31 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
23/04/13 19:56:32 WARN mapreduce.JobResourceUploader: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
23/04/13 19:56:32 INFO input.FileInputFormat: Total input files to process : 1
23/04/13 19:56:33 INFO mapreduce.JobSubmitter: number of splits:1
23/04/13 19:56:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1681322847291_0001
23/04/13 19:56:35 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
23/04/13 19:56:35 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1681322847291_0001
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.yarn.util.Apps.setEnvFromInputProperty(Ljava/util/Map;Ljava/lang/String;Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)V
	at org.apache.hadoop.mapreduce.v2.util.MRApps.setEnvFromInputProperty(MRApps.java:716)
	at org.apache.hadoop.mapred.YARNRunner.setupContainerLaunchContextForAM(YARNRunner.java:536)
	at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:583)
	at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:324)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:253)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588)
	at com.itcast.hadoop.wordcount.WordCountDriver.main(WordCountDriver.java:48)

        For this exception, the articles I consulted offered no good explanation, so for now I can only put it down to a version problem; a NoSuchMethodError thrown at job submission generally means the client classpath mixes Hadoop jars of different versions (here, org.apache.hadoop.yarn.util.Apps.setEnvFromInputProperty is missing from whichever hadoop-yarn jar was actually loaded).
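        If mixed jar versions really are the cause (an assumption on my part), one quick check is to print where the two classes named in the stack trace were loaded from; two different Hadoop version numbers in the printed paths would confirm the conflict:

        // Hedged diagnostic, run anywhere in the driver before job submission:
        // locate the jars that supplied the two classes named in the
        // NoSuchMethodError; differing Hadoop versions in the two paths point
        // to mixed hadoop-yarn-* / hadoop-mapreduce-client-* artifacts.
        System.out.println(org.apache.hadoop.yarn.util.Apps.class
                .getProtectionDomain().getCodeSource().getLocation());
        System.out.println(org.apache.hadoop.mapreduce.v2.util.MRApps.class
                .getProtectionDomain().getCodeSource().getLocation());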

Reposted from blog.csdn.net/weixin_63507910/article/details/130164883