Fixes for errors when reading HBase with Spark 2.3.1
1. Error: java.lang.IllegalStateException: unread block data
Solution:
Set both spark.driver.extraClassPath and spark.executor.extraClassPath to /usr/cwgis/app/spark/jars/lib/*, the directory of dependency jars generated for the application:
sparkConf.set("spark.driver.extraClassPath","/usr/cwgis/app/spark/jars/lib/*");
sparkConf.set("spark.executor.extraClassPath","/usr/cwgis/app/spark/jars/lib/*");
Other SparkConf parameters used by the job:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

String master = "spark://mycluster:7077";
SparkConf sparkConf = new SparkConf();
sparkConf.setMaster(master);
//sparkConf.setMaster("local"); // or yarn-cluster
sparkConf.setJars(JavaSparkContext.jarOfClass(this.getClass()));
sparkConf.setAppName("geowaveSpark");
sparkConf.set("spark.dynamicAllocation.enabled", "false");
sparkConf.set("spark.driver.extraClassPath", "/usr/cwgis/app/spark/jars/lib/*");
sparkConf.set("spark.executor.extraClassPath", "/usr/cwgis/app/spark/jars/lib/*");
2. Error: SparkException: Could not find CoarseGrainedScheduler or it has been stopped
Solution: disable dynamic allocation. With it enabled, idle executors can be torn down mid-job, and tasks that still reference them fail with this scheduler error:
sparkConf.set("spark.dynamicAllocation.enabled","false");
3. Error: java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream
Solution: exclude the conflicting lz4 dependency from the project (it is usually pulled in transitively by the Kafka jars), then rebuild the dependency library and redistribute it to /usr/cwgis/app/spark/jars/lib on every node in the cluster, as in the exclusion below (a verification command follows the snippet):
<dependency>
    <groupId>mil.nga.giat</groupId>
    <artifactId>geowave-adapter-vector</artifactId>
    <version>0.9.7</version>
    <exclusions>
        <exclusion>
            <groupId>net.jpountz.lz4</groupId>
            <artifactId>lz4</artifactId>
        </exclusion>
    </exclusions>
</dependency>
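To confirm which dependency is dragging in the old lz4 jar, and that the exclusion actually removed it, the Maven dependency tree can be filtered by group id (standard maven-dependency-plugin usage):
mvn dependency:tree -Dincludes=net.jpountz.lz4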
—the—end—