Errors encountered with Kylin

0. Common problems

0.1

java.net.ConnectException: Call From MyDis/192.168.182.86 to MyDis:10020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Port 10020 is the configured MapReduce JobHistory port: clearly the history server is not running, or something is wrong with it. Start it with:
	${HADOOP_HOME}/sbin/mr-jobhistory-daemon.sh start historyserver
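To confirm the fix took effect, you can check that something is actually listening on 10020 after starting the daemon (a minimal sketch; the script path assumes a default Hadoop 2.x layout, and 10020 is the default `mapreduce.jobhistory.address` port):

```shell
# Start the MapReduce JobHistory server
${HADOOP_HOME}/sbin/mr-jobhistory-daemon.sh start historyserver

# Verify that the JobHistory IPC port is now open
ss -tlnp | grep 10020 || echo "history server is NOT listening on 10020"
```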

0.2

The job shows ERROR; clicking through to the log reveals: java.io.IOException: OS command error exit with return code: 64, error message: SLF4J: Class path contains multiple SLF4J bindings.
Cause 1: the Hive metastore service is not running; starting the metastore service fixes it.
Cause 2: conflicting SLF4J jars on the classpath.
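A minimal sketch of checking both causes, assuming a standard Hive/Kylin install (the env variables and log file name are illustrative):

```shell
# Cause 1: start the Hive metastore service in the background
nohup hive --service metastore > metastore.log 2>&1 &

# Cause 2: look for duplicate SLF4J bindings on the classpath;
# if both Hadoop and Kylin ship a binding jar, remove one copy
find ${HADOOP_HOME} ${KYLIN_HOME} -name "slf4j-log4j*.jar" 2>/dev/null
```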

1. Kylin MapReduce mode

1.1 org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Exception while unregistering

This error appears in Kylin's log files.

Troubleshooting path: Kylin UI log -> aggregated logs -> YARN logs (${HADOOP_HOME}/logs/userlogs/<directory named after the application ID>):

In my case: Hadoop's execution engine had earlier been switched from MR to Tez (Tez also causes problems under Kylin, see 1.2 below), then switched back from Tez to MR, and Kylin's MR job failed while building the intermediate Hive table.

(Fix: get plain MapReduce jobs running on the Hadoop cluster again. This kind of error is almost always a configuration-file problem.)
ERROR [Thread-65] org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Exception while unregistering
java.lang.NullPointerException
        at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme(MRWebAppUtil.java:140)
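Once the configuration is fixed, a quick way to confirm that plain MapReduce works again (before retrying the Kylin build) is to run the example job that ships with Hadoop (jar path assumes a default layout):

```shell
# Smoke-test MapReduce on the cluster with the bundled pi example:
# 2 mappers, 10 samples each -- it should complete and print an estimate of pi
hadoop jar ${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10
```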

1.2

Problems when running with Tez

Note: if this problem had not come up, I would never have noticed that I had configured yarn-tez in yarn-site.xml earlier (left over from the previous MR-to-Tez switch).

The tricky part: when executing under yarn-tez, the output directory is hdfs:/kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns

which is one directory level short (the statistics subdirectory is never created).

That is why Tez had to be switched back to MR.

error: java.io.IOException: fail to find the statistics file in base dir: hdfs:/kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns/statistics

Fix: switch Tez back to MR directly:
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
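After switching back to MR and re-running the build, the missing directory can be checked directly on HDFS (the kylin-6faa47ea... job ID below is the one from the error above; yours will differ):

```shell
# With mapreduce.framework.name=yarn, the statistics/ subdirectory
# should now appear under fact_distinct_columns
hdfs dfs -ls /kylin/kylin_metadata/kylin-6faa47ea-3a5b-4020-976f-c9fcf9d93bd2/kylin_sales_cube/fact_distinct_columns
```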

2. Kylin Spark mode

YARN configuration file (yarn-site.xml):
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>spark_shuffle,mapreduce_shuffle</value>
</property>
<property>
	<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
	<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
At the same time, place spark-2.4.4-yarn-shuffle.jar into ${HADOOP_HOME}/share/hadoop/yarn/lib.
Location of spark-2.4.4-yarn-shuffle.jar: ${SPARK_HOME}/yarn/
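The two steps above can be sketched as shell commands (to be run on every NodeManager host; the restart is needed because aux services are only loaded at NodeManager startup):

```shell
# Copy the Spark YARN shuffle jar into Hadoop's YARN classpath
cp ${SPARK_HOME}/yarn/spark-2.4.4-yarn-shuffle.jar ${HADOOP_HOME}/share/hadoop/yarn/lib/

# Restart the NodeManager so it picks up the spark_shuffle aux service
${HADOOP_HOME}/sbin/yarn-daemon.sh stop nodemanager
${HADOOP_HOME}/sbin/yarn-daemon.sh start nodemanager
```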
org.apache.spark.SparkException: Exception while starting container container_1575360418223_0002_02_000005 on host MyDis
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:125)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.run(ExecutorRunnable.scala:65)
	at org.apache.spark.deploy.yarn.YarnAllocator$$anonfun$runAllocatedContainers$1$$anon$1.run(YarnAllocator.scala:534)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


<-------------------------------- note this line -------------------------------->
Caused by: org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:spark_shuffle does not exist
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
	at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
	at org.apache.hadoop.yarn.client.api.impl.NMClientImpl.startContainer(NMClientImpl.java:205)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.startContainer(ExecutorRunnable.scala:122)
	... 5 more
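If the InvalidAuxServiceException persists after the config change, jar copy, and restart, the NodeManager log can confirm whether the aux service was actually registered (log file name/location assumes the default ${HADOOP_HOME}/logs layout):

```shell
# A successfully loaded shuffle service logs a line mentioning spark_shuffle
# (e.g. that the YarnShuffleService was initialized) at NodeManager startup
grep -i "spark_shuffle" ${HADOOP_HOME}/logs/*nodemanager*.log
```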

Reference: https://blog.csdn.net/qq_43008162/article/details/103355122


Reposted from blog.csdn.net/weixin_44273391/article/details/103374540