Pitfalls encountered with Hive (Update)

Error: org.apache.spark.sql.AnalysisException: java.lang.IllegalArgumentException: Wrong FS: hdfs://slave1:8020/user/hadoop-jrq/<filename>, expected: hdfs://mycluster; (state=,code=0)

// Reason: HDFS high availability (HA) is configured, so the filesystem must be addressed by its nameservice ID, hdfs://mycluster, not by a single NameNode address such as hdfs://slave1:8020.
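
A minimal sketch of the fix, assuming the path came from a LOAD DATA statement (the table name t and file name data.txt are placeholders, and dfs.nameservices is assumed to be set to mycluster in hdfs-site.xml):

-- Fails under HA: hard-codes one NameNode, triggering "Wrong FS"
-- LOAD DATA INPATH 'hdfs://slave1:8020/user/hadoop-jrq/data.txt' INTO TABLE t;

-- Works: reference the HA nameservice ID instead
LOAD DATA INPATH 'hdfs://mycluster/user/hadoop-jrq/data.txt' INTO TABLE t;

The same substitution applies anywhere a full hdfs:// URI is written out; paths relative to the default filesystem avoid the problem entirely.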

Error: java.lang.OutOfMemoryError: GC overhead limit exceeded (state=,code=0)
This happens when executing: select * from <table name>;
Simply put, the table holds too much data and the client runs out of memory; you can't query it that way. Preview the data with a LIMIT clause instead, as in the sketch below.
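
A short example, assuming a table named t (a placeholder), showing the safe way to inspect a large table:

-- Pulls the entire table back to the client; can exhaust the heap
-- SELECT * FROM t;

-- Fetches only the first 100 rows, enough to inspect the data
SELECT * FROM t LIMIT 100;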
