Hive Basics: Advanced Queries

All of Hive's query syntax is documented in the Select Syntax section of the official language manual; everything query-related can be found there. This article walks through some of the more advanced query constructs.

1. Grouped Queries

Requirement 1: what is the average salary of each department? Here we use the avg function to compute the average and group by to do the grouping.

select t.deptno, avg(t.sal) avg_sal from emp t group by t.deptno ;

hive (default)> select t.deptno, avg(t.sal) avg_sal from emp t group by t.deptno ;
Query ID = hive_20190218215454_b7b04a8b-68a5-4e19-a705-bae2a955e735
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0014, Tracking URL = http://node1:8088/proxy/application_1550060164760_0014/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0014
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-18 21:57:13,216 Stage-1 map = 0%,  reduce = 0%
2019-02-18 21:58:13,382 Stage-1 map = 0%,  reduce = 0%
2019-02-18 21:59:14,190 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:00:15,212 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:01:15,984 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:01:36,590 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 8.27 sec
2019-02-18 22:01:53,749 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 10.17 sec
MapReduce Total cumulative CPU time: 10 seconds 170 msec
Ended Job = job_1550060164760_0014
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 10.17 sec   HDFS Read: 9252 HDFS Write: 54 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 170 msec
OK
deptno  avg_sal
10      2916.6666666666665
20      2175.0
30      1566.6666666666667
Time taken: 417.131 seconds, Fetched: 3 row(s)

There is one pitfall to be aware of with group by: every column in the select list that is not wrapped in an aggregate function must also appear in the group by clause, otherwise the query fails with an error.
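A minimal sketch of this pitfall on the same emp table (the statements below are illustrative and were not run in the session above):

-- Fails: ename is in the select list but is neither aggregated nor listed in group by
select t.deptno, t.ename, avg(t.sal) avg_sal from emp t group by t.deptno;
-- Works: add the column to the group by clause (or wrap it in an aggregate function)
select t.deptno, t.ename, avg(t.sal) avg_sal from emp t group by t.deptno, t.ename;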

Requirement 2: find the highest salary for each job within each department. Here we use the max function for the maximum and group by for the grouping, except this time we group first by department and then by job.

select t.deptno, t.job, max(t.sal) max_sal from emp t group by t.deptno, t.job ;

hive (default)> select t.deptno, t.job, max(t.sal) max_sal from emp t group by t.deptno, t.job ;
Query ID = hive_20190218220404_16d4aaa8-8ef9-4352-98d9-a435f16d5685
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0015, Tracking URL = http://node1:8088/proxy/application_1550060164760_0015/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0015
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-18 22:04:34,412 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:05:35,189 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:05:47,790 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.29 sec
2019-02-18 22:06:16,167 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.01 sec
MapReduce Total cumulative CPU time: 5 seconds 350 msec
Ended Job = job_1550060164760_0015
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 5.35 sec   HDFS Read: 9411 HDFS Write: 158 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 350 msec
OK
deptno  job     max_sal
10      CLERK   1300.0
10      MANAGER 2450.0
10      PRESIDENT       5000.0
20      ANALYST 3000.0
20      CLERK   1100.0
20      MANAGER 2975.0
30      CLERK   950.0
30      MANAGER 2850.0
30      SALESMAN        1600.0
Time taken: 138.878 seconds, Fetched: 9 row(s)

For complex statements, it is generally best to break the query down into smaller queries first and then combine the pieces; writing everything in one shot makes errors much more likely.
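As a sketch of this decompose-then-combine approach (illustrative, not taken from the session above), the grouped query from requirement 2 can be wrapped as a subquery and then filtered:

-- Inner query: the small, easily verified piece (max salary per department and job)
-- Outer query: combines it with a filter on the aggregated result
select a.deptno, a.job, a.max_sal
from (select t.deptno, t.job, max(t.sal) max_sal from emp t group by t.deptno, t.job) a
where a.max_sal > 2000;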

2. Having

At first glance having looks similar to where, but the two are not interchangeable. So what exactly is the difference between having and where?

where filters individual records, while having filters grouped results. In other words, the conditions after where apply to single rows and are evaluated before grouping, whereas the conditions after having apply to aggregated groups and are evaluated after grouping; having is always used together with group by, and the two clauses can appear in the same statement.
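A sketch showing both clauses in one statement on the same emp table (the thresholds here are made up for illustration):

-- where prunes individual rows before grouping; having prunes groups after aggregation
select t.deptno, avg(t.sal) avg_sal
from emp t
where t.sal > 1000      -- row-level filter, evaluated before group by
group by t.deptno
having avg_sal > 2000;  -- group-level filter, evaluated after avg()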

Requirement 3: find the departments whose average salary is greater than 2000.

First compute each department's average salary:
select t.deptno, avg(t.sal) from emp t group by t.deptno ;
Then keep only the departments whose average salary is greater than 2000:
select t.deptno, avg(t.sal) avg_sal from emp t group by t.deptno having avg_sal > 2000;
(In the first attempt in the transcript below, the table alias t was omitted after emp, so the reference t.deptno fails; the corrected statement follows.)

hive (default)> select t.deptno, avg(t.sal) avg_sal from emp group by t.deptno having avg_sal > 2000;
FAILED: SemanticException [Error 10004]: Line 1:54 Invalid table alias or column reference 't': (possible column names are: empno, ename, job, mgr, hiredate, sal, comm, deptno)
hive (default)> select t.deptno, avg(t.sal) avg_sal from emp t group by t.deptno having avg_sal > 2000;
Query ID = hive_20190218220909_8d6fbb04-a9c9-4867-997d-72ee3e74615f
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0016, Tracking URL = http://node1:8088/proxy/application_1550060164760_0016/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0016
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-02-18 22:10:00,061 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:10:21,344 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.91 sec
2019-02-18 22:10:33,958 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 3.91 sec
MapReduce Total cumulative CPU time: 3 seconds 910 msec
Ended Job = job_1550060164760_0016
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.91 sec   HDFS Read: 9762 HDFS Write: 32 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 910 msec
OK
deptno  avg_sal
10      2916.6666666666665
20      2175.0
Time taken: 62.428 seconds, Fetched: 2 row(s)

3. Joins

join is a keyword you will use constantly; its job is to connect two tables. In practice it is rare to query just one table; usually several tables are joined together, and that is where join comes in. The common variants are the equi-join (inner join), the left join, the right join, and the full join.

Equi-join: rows are connected only when the join column exists in both tables and the values on both sides are equal.
select e.empno, e.ename, d.deptno, d.dname from emp e join dept d on e.deptno = d.deptno ;

hive (default)> select e.empno, e.ename, d.deptno, d.dname from emp e join dept d on e.deptno = d.deptno ;
Query ID = hive_20190218221515_a7af602e-ff0b-4a4c-8b5b-07552129f669
Total jobs = 1
2019-02-18 10:16:33     Starting to launch local task to process map join;     maximum memory = 2075918336
2019-02-18 10:16:39     Dump the side-table for tag: 1 with group count: 4 into file: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-15-54_561_1601014947394604948-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
2019-02-18 10:16:39     Uploaded 1 File to: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-15-54_561_1601014947394604948-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (373 bytes)
2019-02-18 10:16:39     End of local task; Time Taken: 5.838 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0017, Tracking URL = http://node1:8088/proxy/application_1550060164760_0017/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0017
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2019-02-18 22:17:26,356 Stage-3 map = 0%,  reduce = 0%
2019-02-18 22:17:35,716 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.58 sec
MapReduce Total cumulative CPU time: 1 seconds 580 msec
Ended Job = job_1550060164760_0017
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 1.58 sec   HDFS Read: 7433 HDFS Write: 310 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 580 msec
OK
empno   ename   deptno  dname
7369    SMITH   20      RESEARCH
7499    ALLEN   30      SALES
7521    WARD    30      SALES
7566    JONES   20      RESEARCH
7654    MARTIN  30      SALES
7698    BLAKE   30      SALES
7782    CLARK   10      ACCOUNTING
7788    SCOTT   20      RESEARCH
7839    KING    10      ACCOUNTING
7844    TURNER  30      SALES
7876    ADAMS   20      RESEARCH
7900    JAMES   30      SALES
7902    FORD    20      RESEARCH
7934    MILLER  10      ACCOUNTING
Time taken: 103.319 seconds, Fetched: 14 row(s)

Left join: the left table is the base. Whether the right table has a matching row does not matter; if there is no match, the right table's columns come back as NULL. Note that in this dataset every employee has a department, so the result below happens to be identical to the equi-join result.
select e.empno, e.ename, d.deptno, d.dname from emp e left join dept d on e.deptno = d.deptno ;

hive (default)>  select e.empno, e.ename, d.deptno, d.dname  from emp e left join dept d on e.deptno = d.deptno ;
Query ID = hive_20190218222121_788ee5b3-1308-4f88-a5cc-49841904bd76
Total jobs = 1
2019-02-18 10:22:09     Starting to launch local task to process map join;     maximum memory = 2075918336
2019-02-18 10:22:14     Dump the side-table for tag: 1 with group count: 4 into file: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-21-49_821_3844784912080550586-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile11--.hashtable
2019-02-18 10:22:14     Uploaded 1 File to: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-21-49_821_3844784912080550586-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile11--.hashtable (373 bytes)
2019-02-18 10:22:14     End of local task; Time Taken: 4.636 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0018, Tracking URL = http://node1:8088/proxy/application_1550060164760_0018/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0018
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2019-02-18 22:23:31,507 Stage-3 map = 0%,  reduce = 0%
2019-02-18 22:24:05,008 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.72 sec
MapReduce Total cumulative CPU time: 1 seconds 720 msec
Ended Job = job_1550060164760_0018
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 1.72 sec   HDFS Read: 7324 HDFS Write: 310 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 720 msec
OK
empno   ename   deptno  dname
7369    SMITH   20      RESEARCH
7499    ALLEN   30      SALES
7521    WARD    30      SALES
7566    JONES   20      RESEARCH
7654    MARTIN  30      SALES
7698    BLAKE   30      SALES
7782    CLARK   10      ACCOUNTING
7788    SCOTT   20      RESEARCH
7839    KING    10      ACCOUNTING
7844    TURNER  30      SALES
7876    ADAMS   20      RESEARCH
7900    JAMES   30      SALES
7902    FORD    20      RESEARCH
7934    MILLER  10      ACCOUNTING
Time taken: 137.514 seconds, Fetched: 14 row(s)

Right join: the right table is the base. Whether the left table has a matching row does not matter; if there is no match, the left table's columns come back as NULL.
select e.empno, e.ename, e.deptno, d.dname from emp e right join dept d on e.deptno = d.deptno ;

hive (default)> select e.empno, e.ename, e.deptno, d.dname  from emp e right join dept d on e.deptno = d.deptno ;
Query ID = hive_20190218222525_d405f256-2ae9-455f-94f1-a352292c47a3
Total jobs = 1
2019-02-18 10:25:47     Starting to launch local task to process map join;     maximum memory = 2075918336
2019-02-18 10:25:52     Dump the side-table for tag: 0 with group count: 3 into file: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-25-32_225_2805393753238241240-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile20--.hashtable
2019-02-18 10:25:52     Uploaded 1 File to: file:/tmp/hive/3250ee76-3ba2-4d98-8139-3b6f575c5999/hive_2019-02-18_22-25-32_225_2805393753238241240-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile20--.hashtable (498 bytes)
2019-02-18 10:25:52     End of local task; Time Taken: 5.719 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1550060164760_0019, Tracking URL = http://node1:8088/proxy/application_1550060164760_0019/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0019
Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
2019-02-18 22:26:16,286 Stage-3 map = 0%,  reduce = 0%
2019-02-18 22:26:34,175 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.5 sec
MapReduce Total cumulative CPU time: 1 seconds 500 msec
Ended Job = job_1550060164760_0019
MapReduce Jobs Launched: 
Stage-Stage-3: Map: 1   Cumulative CPU: 1.5 sec   HDFS Read: 6599 HDFS Write: 330 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 500 msec
OK
empno   ename   deptno  dname
7782    CLARK   10      ACCOUNTING
7839    KING    10      ACCOUNTING
7934    MILLER  10      ACCOUNTING
7369    SMITH   20      RESEARCH
7566    JONES   20      RESEARCH
7788    SCOTT   20      RESEARCH
7876    ADAMS   20      RESEARCH
7902    FORD    20      RESEARCH
7499    ALLEN   30      SALES
7521    WARD    30      SALES
7654    MARTIN  30      SALES
7698    BLAKE   30      SALES
7844    TURNER  30      SALES
7900    JAMES   30      SALES
NULL    NULL    NULL    OPERATIONS
Time taken: 63.074 seconds, Fetched: 15 row(s)

As the result above shows, the left table's columns appear as NULL for the OPERATIONS department, which has no employees; this confirms that a right join keeps every row of the right table.

Full join: a full join combines the effects of the left join and the right join, keeping all rows from both tables. It is not used all that often in practice.
select e.empno, e.ename, e.deptno, d.dname from emp e full join dept d on e.deptno = d.deptno ;

hive (default)> select e.empno, e.ename, e.deptno, d.dname  from emp e full join dept d on e.deptno = d.deptno ;
Query ID = hive_20190218222929_a0196f09-cd96-43be-a17b-7fac012b3074
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1550060164760_0020, Tracking URL = http://node1:8088/proxy/application_1550060164760_0020/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job  -kill job_1550060164760_0020
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2019-02-18 22:30:00,266 Stage-1 map = 0%,  reduce = 0%
2019-02-18 22:30:38,705 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 5.75 sec
2019-02-18 22:31:07,049 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 7.6 sec
2019-02-18 22:31:20,754 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 9.36 sec
MapReduce Total cumulative CPU time: 9 seconds 360 msec
Ended Job = job_1550060164760_0020
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 9.36 sec   HDFS Read: 14555 HDFS Write: 330 SUCCESS
Total MapReduce CPU Time Spent: 9 seconds 360 msec
OK
empno   ename   deptno  dname
7934    MILLER  10      ACCOUNTING
7839    KING    10      ACCOUNTING
7782    CLARK   10      ACCOUNTING
7876    ADAMS   20      RESEARCH
7788    SCOTT   20      RESEARCH
7369    SMITH   20      RESEARCH
7566    JONES   20      RESEARCH
7902    FORD    20      RESEARCH
7844    TURNER  30      SALES
7499    ALLEN   30      SALES
7698    BLAKE   30      SALES
7654    MARTIN  30      SALES
7521    WARD    30      SALES
7900    JAMES   30      SALES
NULL    NULL    NULL    OPERATIONS
Time taken: 105.25 seconds, Fetched: 15 row(s)

A full join is the union of the left-join and right-join results: every row from both tables appears, with NULL filling in the unmatched side (it is not a Cartesian product). It is used less often in practice; the left join is the most common.
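One illustrative use (a sketch, not from the session above): a full join makes unmatched rows on either side easy to find, since the missing side comes back as NULL. On this dataset the query below would return only the OPERATIONS row:

-- Keep only rows that have no match in the other table
select e.empno, e.ename, d.deptno, d.dname
from emp e full join dept d on e.deptno = d.deptno
where e.empno is null or d.deptno is null;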

For more big data content, follow the WeChat official account: 大数据与人工智能初学者


Reposted from blog.csdn.net/xjjdlut/article/details/88068973