Hive tables: creating partitions, dropping partitions, and loading partition data

I. Static Partitions

1. A SELECT query scans the entire table, which can take a lot of time. Since in many cases people only care about part of the data in a table, the concept of partitions was introduced at table-creation time.


2. A Hive partitioned table declares its partition space when the table is created. To create a partitioned table, use the optional PARTITIONED BY clause in the CREATE TABLE statement; see the table-creation syntax below.


II. Creating and Dropping Partitioned Tables


Notes:
1. A table can have one or more partitions; each partition is stored as its own subdirectory under the table's directory.

2. Hive table and column names are not case-sensitive (so they are conventionally written in lowercase).

3. Partition columns exist as columns in the table structure and show up in the output of "desc table_name", but such a column only identifies the partition (a DESCRIBE sketch follows the examples in item 5).

4. CREATE TABLE syntax (for partitions, see the PARTITIONED BY clause):


CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name [(col_name data_type [COMMENT col_comment], ...)] [COMMENT table_comment] 
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)] 
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS] 
[ROW FORMAT row_format]
[STORED AS file_format] 
[LOCATION hdfs_path]


5. Partitioned tables come in two forms: single-partition tables, where the table directory contains only one level of subdirectories, and multi-partition tables, where the subdirectories under the table directory are nested.


a. Single-partition table: create table test_table (id int, content string) partitioned by (dt string);
   A single-partition table, partitioned by day; the table structure contains the three columns id, content, and dt.

b. Two-level partition table: create table test_table_2 (id int, content string) partitioned by (dt string, hour string);
   A two-level partition table, partitioned by day and hour; dt and hour are added to the table structure as two extra columns.
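
As note 3 says, the partition columns appear in DESCRIBE output. A rough sketch for test_table_2 (the exact layout varies by Hive version; older versions simply list the four columns):

hive> desc test_table_2;
OK
id                      int
content                 string
dt                      string
hour                    string

# Partition Information
# col_name              data_type
dt                      string
hour                    string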


6. Syntax for adding partitions (the table already exists; partitions are added to it):

ALTER TABLE table_name ADD partition_spec [ LOCATION 'location1' ] partition_spec [ LOCATION 'location2' ] ...

partition_spec:
  : PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)

Use ALTER TABLE ... ADD PARTITION to add partitions to a table. Quote the partition value when it is a string. Example:

ALTER TABLE test_table ADD PARTITION (dt='2016-08-08', hour='10') location '/path/uv1.txt' PARTITION (dt='2017-08-08', hour='12') location '/path/uv2.txt';

7. Syntax for dropping partitions:

ALTER TABLE table_name DROP partition_spec, partition_spec,...

Use ALTER TABLE ... DROP PARTITION to drop partitions. The partition's metadata and data are deleted together. Example:

ALTER TABLE test_table DROP PARTITION (dt='2016-08-08', hour='10');
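
Hive also accepts IF EXISTS here, which avoids an error when the partition does not exist:

ALTER TABLE test_table DROP IF EXISTS PARTITION (dt='2016-08-08', hour='10');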



8. Syntax for loading data into a partitioned table:

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]

Examples:
LOAD DATA INPATH '/user/uv.txt' INTO TABLE test_table_2 PARTITION(dt='2016-08-08', hour='08');

LOAD DATA local INPATH '/user/hh/' INTO TABLE test_table partition(dt='2013-02-07');


When data is loaded into a table, no transformation is applied to it. LOAD simply copies (with LOCAL) or moves (from HDFS) the files to the location that corresponds to the Hive table. Loading into a partition automatically creates a directory for that partition under the table directory, and the files are stored under that partition.
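
As a sketch, assuming the default warehouse location /user/hive/warehouse and the default database, the first LOAD above would leave the file at roughly this path:

/user/hive/warehouse/test_table_2/dt=2016-08-08/hour=08/uv.txt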


9. Querying based on partitions:

SELECT test_table.* FROM test_table WHERE test_table.dt>= '2008-08-08';
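
Restricting the WHERE clause to partition columns lets Hive read only the matching partition directories (partition pruning). For the two-level table above, for example:

SELECT * FROM test_table_2 WHERE dt='2016-08-08' AND hour='08';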

10. Viewing the partitions of a two-level partitioned table:
hive> show partitions test_table_2; 
OK 
dt=2016-08-08/hour=10 
dt=2016-08-09/hour=10
dt=2008-08-09/hour=10

Example:

CREATE TABLE `incr_test_2`(
  `ord_id` string, 
  `ord_no` string, 
  `creat_date` string, 
  `creat_time` string, 
  `time_stamp` string)
COMMENT 'Imported by sqoop on 2016/08/08 14:53:43'
PARTITIONED BY ( 
  `log_time` string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
WITH SERDEPROPERTIES ( 
  'field.delim'='\u0001', 
  'line.delim'='\n', 
  'serialization.format'='\u0001') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
;

View the corresponding table DDL:

hive (origin_test)> show create table incr_test_2;
OK
CREATE TABLE `incr_test_2`(
  `ord_id` string, 
  `ord_no` string, 
  `creat_date` string, 
  `creat_time` string, 
  `time_stamp` string)
COMMENT 'Imported by sqoop on 2016/08/04 14:53:43'
PARTITIONED BY ( 
  `log_time` string)
ROW FORMAT SERDE 
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
WITH SERDEPROPERTIES ( 
  'field.delim'='\u0001', 
  'line.delim'='\n', 
  'serialization.format'='\u0001') 
STORED AS INPUTFORMAT 
  'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://nameservice/user/hive/warehouse/origin_test.db/incr_test_2'
TBLPROPERTIES (
  'transient_lastDdlTime'='1470293625')

View the table's partitions:

-- View the partitions (single partition column):
hive (origin_test)>show partitions incr_test_2;
OK
log_time=20160917182510
log_time=20160917192512
log_time=20160917202512
log_time=20160917212512
log_time=20160917222510
log_time=20160917232511
log_time=20160918002525
log_time=20160918012514
log_time=20160918022513
log_time=20160918032510
log_time=20160918042510
log_time=20160918052511
log_time=20160918062513
log_time=20160918072510
log_time=20160918082510
log_time=20160918092511
log_time=20160918102510
log_time=20160918112511
log_time=20160918122512
log_time=20160918132511
Time taken: 0.264 seconds, Fetched: 20 row(s)

Partition columns: a partition column must not be one of the table's existing columns. For example:
create table person4 (
id int,
name string,
likes ARRAY<string>,
address MAP<string,string>
)
PARTITIONED BY (sex string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '-'
MAP KEYS TERMINATED BY ':';
load data local inpath '/tmp/data/person0_data' into table person4 PARTITION (sex='1');
load data local inpath '/tmp/data/person0_data' into table person4 PARTITION (sex='2');
hive> select * from person4;
1  Jed ["book","lol"] {"beijing":"chaoyang"} 1
2  Tom ["book","lol","sports"] {"beijing":"chaoyang","shanghai":"pudong"} 1
3  Cat ["book","lol","sports","music"] {"beijing":"chaoyang","shanghai":"pudong","shanxi":"taiyuan"} 1
1  Jed ["book","lol"] {"beijing":"chaoyang"} 2
2  Tom ["book","lol","sports"] {"beijing":"chaoyang","shanghai":"pudong"} 2
3  Cat ["book","lol","sports","music"] {"beijing":"chaoyang","shanghai":"pudong","shanxi":"taiyuan"} 2
hive> select * from person4 where sex="1";
1  Jed ["book","lol"] {"beijing":"chaoyang"} 1
2  Tom ["book","lol","sports"] {"beijing":"chaoyang","shanghai":"pudong"} 1
3  Cat ["book","lol","sports","music"] {"beijing":"chaoyang","shanghai":"pudong","shanxi":"taiyuan"} 1
create table person6 (
id int,
name string,
likes ARRAY<string>,
address MAP<string,string>
)
PARTITIONED BY (sex string, age int)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '-'
MAP KEYS TERMINATED BY ':';
hive> load data local inpath '/tmp/data/person6_data' into table person6 PARTITION (sex='1',age=18);
hive> load data local inpath '/tmp/data/person6_data' into table person6 PARTITION (sex='2',age=18);
hive> load data local inpath '/tmp/data/person6_data' into table person6 PARTITION (sex='3',age=20);
Adding/dropping partitions: for a managed (internal) table, dropping a partition also deletes the data under that partition; for an external table, dropping a partition does not delete the data (a sketch of the external-table case follows the statements below).
ALTER TABLE person4 add PARTITION (sex='3');
ALTER TABLE person6 drop PARTITION (sex='1',age=18);
ALTER TABLE person6 drop PARTITION (sex='2');  -- drops every partition with sex='2', here sex='2'/age=18
ALTER TABLE person6 drop PARTITION (age=20);   -- drops every partition with age=20, here sex='3'/age=20
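
The examples above all use managed tables. A minimal sketch of the external-table case (the table name person4_ext and the path /tmp/data/person4_ext are hypothetical); dropping a partition here removes only the metastore entry and leaves the files on HDFS:

create EXTERNAL table person4_ext (
id int,
name string,
likes ARRAY<string>,
address MAP<string,string>
)
PARTITIONED BY (sex string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
COLLECTION ITEMS TERMINATED BY '-'
MAP KEYS TERMINATED BY ':'
LOCATION '/tmp/data/person4_ext';

-- register an existing directory as a partition
ALTER TABLE person4_ext ADD PARTITION (sex='1') LOCATION '/tmp/data/person4_ext/sex=1';
-- removes the partition from the metastore; the files under /tmp/data/person4_ext/sex=1 remain
ALTER TABLE person4_ext DROP PARTITION (sex='1');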

III. Dynamic Partitions
Requirement: load data into a target table partitioned by department.
If you tried to do this with static partitions, you would have to spell out every department partition by hand, which is impractical when the data volume is large, so this kind of requirement is usually implemented with dynamic partitions.
1. Create the target table

hive (default)> create table emp_dynamic_partition(
              > empno int, 
              > ename string, 
              > job string, 
              > mgr int, 
              > hiredate string, 
              > sal double, 
              > comm double)
              > PARTITIONED BY(deptno int)
              > row format delimited fields terminated by '\t';

2. Load data into the target table with dynamic partitioning.
Before loading, set the following parameter:

hive (default)> set hive.exec.dynamic.partition.mode=nonstrict;
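
Depending on the Hive version, dynamic partitioning itself may also need to be enabled (it defaults to true in recent releases):

hive (default)> set hive.exec.dynamic.partition=true;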

Then start loading the data:

insert into table emp_dynamic_partition partition(deptno)
select empno , ename , job , mgr , hiredate , sal , comm, deptno from emp;

The load above does not specify a concrete partition value; it only names the partition column. The last column of the SELECT must be the partition column, and Hive then partitions the rows automatically by the value of deptno.

The difference between static and dynamic partitioning is that with dynamic partitions you do not have to specify the partition (directory) yourself; the system assigns it, which makes loading data more convenient than with static partitions.
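
For comparison, a minimal sketch of the same load with a static partition (assuming department 10 exists in emp); every partition value has to be written out explicitly:

insert into table emp_dynamic_partition partition(deptno=10)
select empno, ename, job, mgr, hiredate, sal, comm from emp where deptno=10;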


Reposted from blog.csdn.net/wyqwilliam/article/details/82531967