Installing Hive 3.0.0 (one guide is all you need)

Prerequisites: Java and Hadoop are already installed.

1. Download the Hive package

Download URL: http://mirrors.shu.edu.cn/apache/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz

[root@master usr]# wget http://mirrors.shu.edu.cn/apache/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz
--2018-08-26 01:56:00--  http://mirrors.shu.edu.cn/apache/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz
Resolving mirrors.shu.edu.cn (mirrors.shu.edu.cn)... 202.121.199.235
Connecting to mirrors.shu.edu.cn (mirrors.shu.edu.cn)|202.121.199.235|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 307672459 (293M) [application/x-gzip]
Saving to: ‘apache-hive-3.0.0-bin.tar.gz’

100%[========================================================================================================================>] 307,672,459 1.38MB/s   in 5m 15s 

2018-08-26 02:01:15 (955 KB/s) - ‘apache-hive-3.0.0-bin.tar.gz’ saved [307672459/307672459]
 

2. Extract the Hive package

[root@master usr]# tar zxvf apache-hive-3.0.0-bin.tar.gz

Rename the extracted directory to hive-3.0.0:

[root@master usr]# mv apache-hive-3.0.0-bin hive-3.0.0
[root@master usr]# ls
apache-hive-3.0.0-bin.tar.gz  etc    hadoop  hive-3.0.0  java  lib64    local  sbin   solr-7.4.0      src
bin                           games  hbase   include     lib   libexec  redis  share  solr-7.4.0.zip  tmp

3. Configure Hive environment variables

[root@master usr]# vi /etc/profile

Append the following to the end of the file:

# hive environment
export HIVE_HOME=/usr/hive-3.0.0
export PATH=$PATH:$HIVE_HOME/bin
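The same two lines can be appended non-interactively instead of editing with vi. This is a hedged sketch shown against a scratch file so it is safe to try; on a real node the target would be /etc/profile:

```shell
# Sketch: append the Hive environment block without an editor.
# /tmp/profile-demo stands in for /etc/profile here.
profile=/tmp/profile-demo
: > "$profile"
cat >> "$profile" <<'EOF'
# hive environment
export HIVE_HOME=/usr/hive-3.0.0
export PATH=$PATH:$HIVE_HOME/bin
EOF
# Two lines should now mention HIVE_HOME:
grep -c HIVE_HOME "$profile"
```

The quoted heredoc delimiter ('EOF') keeps $PATH and $HIVE_HOME from being expanded at write time, so the file receives the literal export lines.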

 

Verify:

[root@master usr]# source /etc/profile
[root@master usr]# hive --version
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive 3.0.0
Git git://vgargwork.local/Users/vgarg/repos/hive.apache.master.latest -r ce61711a5fa54ab34fc74d86d521ecaeea6b072a
Compiled by vgarg on Fri May 18 11:38:33 PDT 2018
From source with checksum 81fcb93b608965ed7ac968bae1187fab

4. Configure Hive (in the conf directory)

[root@master conf]# cp hive-env.sh.template hive-env.sh
[root@master conf]# vi hive-env.sh

# Set HADOOP_HOME to point to a specific hadoop install directory
# HADOOP_HOME=${bin}/../../hadoop
 HADOOP_HOME=/usr/hadoop/hadoop-2.9.0

# Hive Configuration Directory can be controlled by:
# export HIVE_CONF_DIR=
 export HIVE_CONF_DIR=/usr/hive-3.0.0/conf

[root@master conf]# cp hive-default.xml.template hive-site.xml

5. Start Hive

[root@master hive-3.0.0]# hive 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
 at [row,col,system-id]: [3213,96,"file:/usr/hive-3.0.0/conf/hive-site.xml"]

    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2964)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2733)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2605)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:1362)
    at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4967)
    at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5040)
    at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5127)
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5070)
    at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:97)
    at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
 at [row,col,system-id]: [3213,96,"file:/usr/hive-3.0.0/conf/hive-site.xml"]
    at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
    at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
    at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
    at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
    at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
    at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
    at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
    at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2799)
    ... 17 more
 

Solution:

Before (hive-site.xml, around line 3213):

   3212     <description>
   3213       Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for&#8;transactional tables.  This ensures that inserts (w/o overwrite) running concurrently
   3214       are not hidden by the INSERT OVERWRITE.
   3215     </description>
After (the invalid &#8; character entity replaced with a plain space):

   3212     <description>
   3213       Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for transactional tables.  This ensures that inserts (w/o overwrite) running concurrently
   3214       are not hidden by the INSERT OVERWRITE.
   3215     </description>
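The edit can also be made with a single sed substitution instead of opening the file. This is a hedged sketch demonstrated on a scratch file; the real target would be /usr/hive-3.0.0/conf/hive-site.xml (back it up first):

```shell
# Sketch: strip the invalid XML character entity &#8; that ships in
# hive-default.xml.template (and is copied into hive-site.xml).
# Demonstrated on a scratch file, not the real config.
printf 'Exclusive locks for&#8;transactional tables' > /tmp/hive-site-demo.xml
sed -i 's/&#8;/ /g' /tmp/hive-site-demo.xml   # replace the entity with a space
cat /tmp/hive-site-demo.xml
# expected: Exclusive locks for transactional tables
```

Replacing with a space (rather than deleting) preserves the word boundary in the description text, matching the "After" snippet above.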

Start Hive again:

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 50498504-b385-43e7-bf20-57cb74238499

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at org.apache.hadoop.fs.Path.initialize(Path.java:254)
    at org.apache.hadoop.fs.Path.<init>(Path.java:212)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:703)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:620)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:585)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at java.net.URI.checkPath(URI.java:1823)
    at java.net.URI.<init>(URI.java:745)
    at org.apache.hadoop.fs.Path.initialize(Path.java:251)
    ... 12 more
 

Solution:

Create a local tmp directory for Hive (mkdir /usr/hive-3.0.0/tmp), then in hive-site.xml replace every occurrence of ${system:java.io.tmpdir} with /usr/hive-3.0.0/tmp, and every occurrence of ${system:user.name} with ${user.name}. The affected properties include:

    141   <property>
    142     <name>hive.exec.local.scratchdir</name>
    143     <value>${system:java.io.tmpdir}/${system:user.name}</value>
    144     <description>Local scratch space for Hive jobs</description>
    145   </property>
    146   <property>
    147     <name>hive.downloaded.resources.dir</name>
    148     <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
    149     <description>Temporary local directory for added resources in the remote file system.</description>
    150   </property>
 

   1861   <property>
   1862     <name>hive.querylog.location</name>
   1863     <value>${system:java.io.tmpdir}/${system:user.name}</value>
   1864     <description>Location of Hive run time structured log file</description>
   1865   </property>

   3522   <property>
   3523     <name>hive.druid.basePersistDirectory</name>
   3524     <value/>
   3525     <description>Local temporary directory used to persist intermediate indexing state, will default to JVM system property java.io.tmpdir.</description>
   3526   </property>

   4395   <property>
   4396     <name>hive.server2.logging.operation.log.location</name>
   4397     <value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value>
   4398     <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
   4399   </property>
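Both global substitutions can be scripted with sed rather than edited by hand. This is a hedged sketch shown on a scratch file; run the same two sed expressions against /usr/hive-3.0.0/conf/hive-site.xml (paths are the ones used in this guide, adjust to your layout):

```shell
# Sketch: the two substitutions described above, on a scratch file.
mkdir -p /tmp/hive-demo   # stands in for: mkdir /usr/hive-3.0.0/tmp
printf '<value>${system:java.io.tmpdir}/${system:user.name}</value>\n' > /tmp/hive-demo/site.xml
# Use '#' as the sed delimiter so the '/' in the replacement path
# does not need escaping.
sed -i -e 's#${system:java.io.tmpdir}#/usr/hive-3.0.0/tmp#g' \
       -e 's#${system:user.name}#${user.name}#g' /tmp/hive-demo/site.xml
cat /tmp/hive-demo/site.xml
# expected: <value>/usr/hive-3.0.0/tmp/${user.name}</value>
```

Note that hive.druid.basePersistDirectory above has an empty value, so only the properties that actually contain ${system:java.io.tmpdir} are changed.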
 

Start Hive again:

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = d4cac70f-e7d5-42d0-a8e7-0093c00ef7f1

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
FAILED: HiveException java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
hive> 

Solution:

Run schematool -initSchema -dbType derby to initialize the metastore schema:

[root@master hive-3.0.0]# schematool -initSchema -dbType derby
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:     jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :     org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:     APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.derby.sql

 
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
*** schemaTool failed ***

This fails because the first run of the hive command already created a metastore_db directory in the current working directory:

[root@master hive-3.0.0]# ls -lrt
total 216
-rw-r--r--. 1 root root    230 May 15 17:42 NOTICE
-rw-r--r--. 1 root root  20798 May 15 17:42 LICENSE
-rw-r--r--. 1 root root 143769 May 15 18:32 RELEASE_NOTES.txt
drwxr-xr-x. 2 root root     44 Aug 26 10:16 jdbc
drwxr-xr-x. 2 root root   4096 Aug 26 10:17 binary-package-licenses
drwxr-xr-x. 4 root root     34 Aug 26 10:17 examples
drwxr-xr-x. 3 root root    157 Aug 26 10:17 bin
drwxr-xr-x. 4 root root     35 Aug 26 10:17 scripts
drwxr-xr-x. 4 root root  12288 Aug 26 10:17 lib
drwxr-xr-x. 7 root root     68 Aug 26 10:17 hcatalog
drwxr-xr-x. 3 root root     42 Aug 26 10:27 ${system:java.io.tmpdir}
drwxr-xr-x. 2 root root   4096 Aug 26 10:39 conf
drwxr-xr-x. 3 root root     33 Aug 26 10:44 tmp
drwxr-xr-x. 5 root root    133 Aug 26 10:44 metastore_db
-rw-r--r--. 1 root root  19964 Aug 26 10:44 derby.log

 

By default, metastore_db is created in whatever directory the hive command is launched from. Edit hive-site.xml to point it at a fixed location instead:

    577   <property>
    578     <name>javax.jdo.option.ConnectionURL</name>
    579     <value>jdbc:derby:;databaseName=/usr/hive-3.0.0/derby_db/metastore_db;create=true</value>
    580     <description>
    581       JDBC connect string for a JDBC metastore.
    582       To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    583       For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    584     </description>
    585   </property>
 

Delete the stale metastore_db directory (rm -rf metastore_db), then rerun the initialization:

[root@master hive-3.0.0]# schematool -initSchema -dbType derby
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:     jdbc:derby:;databaseName=metastore_db;create=true
Metastore Connection Driver :     org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User:     APP
Starting metastore schema initialization to 3.0.0
Initialization script hive-schema-3.0.0.derby.sql
Initialization script completed
schemaTool completed

 

Start Hive again:

[root@master hive-3.0.0]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hbase/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hive-3.0.0/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.9.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = 28e290e1-3e2e-42d4-be61-5314ccc70f99

Logging initialized using configuration in jar:file:/usr/hive-3.0.0/lib/hive-common-3.0.0.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
OK
default
Time taken: 1.936 seconds, Fetched: 1 row(s)
hive> create table student(stu_id int,stu_name string);
OK
Time taken: 3.705 seconds
hive> insert into student values(1,'tom');
Query ID = root_20180826111007_d1f4e8e0-ca95-456e-a497-652591b0855e
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1535252886720_0001, Tracking URL = http://master:8088/proxy/application_1535252886720_0001/
Kill Command = /usr/hadoop/hadoop-2.9.0/bin/mapred job  -kill job_1535252886720_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-08-26 11:11:18,864 Stage-1 map = 0%,  reduce = 0%
2018-08-26 11:11:53,149 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 8.07 sec
2018-08-26 11:12:25,559 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 10.25 sec
2018-08-26 11:12:33,224 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 14.02 sec
MapReduce Total cumulative CPU time: 14 seconds 20 msec
Ended Job = job_1535252886720_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://master:9000/user/hive/warehouse/student/.hive-staging_hive_2018-08-26_11-10-07_127_6657782755309499529-1/-ext-10000
Loading data to table default.student
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 14.02 sec   HDFS Read: 15278 HDFS Write: 240 SUCCESS
Total MapReduce CPU Time Spent: 14 seconds 20 msec
OK
Time taken: 155.381 seconds
hive> insert into student values(2,'bob');
Query ID = root_20180826111257_e89f212c-4a4a-4005-a649-14ed76aa8e69
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1535252886720_0002, Tracking URL = http://master:8088/proxy/application_1535252886720_0002/
Kill Command = /usr/hadoop/hadoop-2.9.0/bin/mapred job  -kill job_1535252886720_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-08-26 11:13:34,223 Stage-1 map = 0%,  reduce = 0%
2018-08-26 11:13:54,804 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.96 sec
2018-08-26 11:14:16,059 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 11.71 sec
MapReduce Total cumulative CPU time: 11 seconds 710 msec
Ended Job = job_1535252886720_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://master:9000/user/hive/warehouse/student/.hive-staging_hive_2018-08-26_11-12-57_895_8590756589454066874-1/-ext-10000
Loading data to table default.student
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 11.71 sec   HDFS Read: 15308 HDFS Write: 240 SUCCESS
Total MapReduce CPU Time Spent: 11 seconds 710 msec
OK
Time taken: 81.864 seconds
hive> select * from student;
OK
1    tom
2    bob

Time taken: 0.824 seconds, Fetched: 2 row(s)
hive> 
 

Done!

Source: blog.csdn.net/sjmz30071360/article/details/82080189