Exception log for the risk-control project:

1. java.lang.NoClassDefFoundError: org/apache/spark/api/java/function/Function0
    at java.lang.Class.getDeclaredMethods0(Native Method)
    at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
    at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
    at java.lang.Class.getMethod0(Class.java:3018)
    at java.lang.Class.getMethod(Class.java:1784)
    at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
    at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.api.java.function.Function0
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 7 more
Disconnected from the target VM, address: '127.0.0.1:52278', transport: 'socket'
Error: A JNI error has occurred, please check your installation and try again

Solution:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
The dependency was declared with <scope>provided</scope>. A provided dependency is only on the classpath during compilation and testing, so the class is missing at runtime; change provided to compile, or simply delete the <scope> element.
The meaning of each <scope> value (a runnable illustration of the provided pitfall follows the list):
compile: the default; valid in every phase (the jar is visible on the compile, test, and runtime classpaths) and is packaged and shipped with the project.
provided: on the classpath at compile and test time only; at runtime the jar is expected to be supplied by the container, e.g. servlet-api.
runtime: needed for testing and running but not for compilation, e.g. a JDBC driver.
test: used only for tests; plays no part in compilation or normal runs and is never packaged.
system: not resolved from a Maven repository; an explicit path to the jar must be specified, which makes the project awkward to port.
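
A minimal Scala sketch of the pitfall (the class name, socket source, and port are all illustrative, not from the original project): with spark-streaming_2.11 marked provided and the job launched straight from the IDE, the Spark classes referenced by the main class are missing from the runtime classpath, and the JVM dies with the NoClassDefFoundError above before main() even runs.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ScopeCheck {
  def main(args: Array[String]): Unit = {
    // With spark-streaming_2.11 at compile scope this runs; at provided
    // scope (IDE launch) class loading fails before this line is reached.
    val conf = new SparkConf().setMaster("local[2]").setAppName("scope-check")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.socketTextStream("localhost", 9999).print()
    ssc.start()
    ssc.awaitTermination()
  }
}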

2. java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtils

The cause is again a dependency declared with <scope>provided</scope>:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>

Solution: remove (or comment out) the <scope>provided</scope> element, as in issue 1.

3. Reconnect due to error: java.lang.NoSuchMethodError: org.apache.kafka.common.network.NetworkSend.<init>(Ljava/lang/String;[Ljava/nio/ByteBuffer;)V

    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:41) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.network.RequestOrResponseSend.<init>(RequestOrResponseSend.scala:44) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:112) ~[kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:85) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149) [kafka_2.11-0.10.0.1.jar:?]
    at kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79) [kafka_2.11-0.10.0.1.jar:?]

Cause: a kafka dependency was added by hand, which presumably conflicts with the kafka client pulled in by the Spark integration; commenting it out is enough. (Running mvn dependency:tree -Dincludes=org.apache.kafka shows exactly which kafka artifacts and versions end up on the classpath.)

<!--<dependency>-->
    <!--<groupId>org.apache.kafka</groupId>-->
    <!--<artifactId>kafka_2.11</artifactId>-->
    <!--<version>0.10.2.0</version>-->
<!--</dependency>-->
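
When jars collide like this, a quick way to see which jar a suspect class actually comes from is to print its code source. A small diagnostic sketch (the class name is the one from this NoSuchMethodError, but any contested class works):

object WhichJar {
  def main(args: Array[String]): Unit = {
    // Prints the jar the class was loaded from, e.g.
    // .../kafka_2.11-0.10.0.1.jar, which makes the conflict visible.
    val cls = Class.forName("org.apache.kafka.common.network.NetworkSend")
    println(cls.getProtectionDomain.getCodeSource.getLocation)
  }
}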

4. 10-26 10:57:17 [org.apache.hadoop.util.Shell-303][pool-19-thread-1][315990] - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:783)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:772)
    at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:234)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Cause: winutils.exe, the helper binary Hadoop needs on Windows, is not installed locally.

Download: https://github.com/srccodes/hadoop-common-2.2.0-bin
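
Besides setting the HADOOP_HOME environment variable, Hadoop's Shell class also reads the hadoop.home.dir system property, so a per-program workaround is to set it before any Hadoop/Spark class loads. A sketch, assuming winutils.exe has been unpacked under C:\hadoop\bin (the path is a placeholder):

object WinutilsFix {
  def main(args: Array[String]): Unit = {
    // Must happen before the first Hadoop class initializes: Shell's static
    // initializer resolves hadoop.home.dir (falling back to HADOOP_HOME)
    // to locate bin\winutils.exe.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")
    // ... build the SparkConf / StreamingContext after this point ...
  }
}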

6. Exception in thread "pool-19-thread-5" java.lang.NullPointerException
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:404)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:678)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:468)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:456)
    at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:905)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:886)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:783)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:772)
    at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:234)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Cause: Hadoop itself is not installed on the local Windows machine. Download hadoop-2.7.3.tar.gz from the official site, unpack it, and set HADOOP_HOME. Once that is configured, copy hadoop.dll and winutils.exe from the hadoop-common-2.2.0-bin package downloaded for issue 4 into Hadoop's bin directory; it is best to also copy hadoop.dll into C:\Windows\System32. Restart the machine and rerun the program; the exception is resolved.

7. Calling KafkaUtils.createDirectStream from Scala fails with (1) no matching method found, or (2) a demand to supply the type parameter R

Cause: the code was copied verbatim from the Java version, and IDEA does not convert it when switching languages. The intent was to call the Scala createDirectStream, but the copied call targets the Java one; the two are overloads that take different parameter types. The copied java.util.Map and java.util.Set do not convert to Scala's Map and Set, and the Scala createDirectStream requires Scala's Map and Set.

Solution: change the copied Map and Set to Scala's Map and Set; see the sketch below.
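
A sketch of the Scala-side call against the 0.8 integration (the broker address and topic name are made-up placeholders):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DirectStreamDemo {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(
      new SparkConf().setMaster("local[2]").setAppName("direct-stream"), Seconds(5))
    // Scala (immutable) Map and Set, not java.util.Map / java.util.Set:
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("risk-events")
    // The Scala overload takes Map[String, String] and Set[String]; only the
    // key/value types and their decoders are given as type parameters.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)
    stream.map(_._2).print()
    ssc.start()
    ssc.awaitTermination()
  }
}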

8. [Java exception] ERROR: JDWP Unable to get JNI 1.2 environment, jvm->GetEnv() return code = -2 JDWP exit error

Cause:
1. JDK 1.8.1.
2. The code from the previous debug session had an error and its process never terminated, so it kept the console output occupied; starting the next debug session then triggers this error.

Solutions:
1. Add System.exit(0); at the end of main(), as in the sketch below.
   System.exit(0) terminates the program immediately; if any threads are still executing tasks, their remaining work will not run.
2. Kill the leftover background java process and rerun.
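
A minimal sketch of option 1 (the job body is a placeholder):

object Main {
  def main(args: Array[String]): Unit = {
    // ... the actual job would run here ...

    // Forces the JVM down even if non-daemon threads are still alive, so the
    // console/debugger port is released; any in-flight work is abandoned.
    System.exit(0)
  }
}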

9. Exception in thread "main" java.lang.ClassCastException: kafka.cluster.BrokerEndPoint cannot be cast to kafka.cluster.Broker

The root cause is a Kafka version mismatch: the cluster runs Kafka 0.10, while Kafka 0.8 was what development actually targeted.

The pom before the fix:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>

After swapping the order of the 0-10 and 0-8 artifacts and removing <scope>provided</scope> from the 0-10 one, the problem was solved. (Both integrations pull in kafka client artifacts transitively; when the same coordinates appear at equal depth, Maven keeps the first declaration it encounters, so listing the 0-8 artifact first puts the 0.8 client classes on the classpath.)

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.1</version>
    <!--<scope>provided</scope>-->
</dependency>


Reposted from blog.csdn.net/fengfengchen95/article/details/83380245