16 - Hive DDL, DML, and Built-in Functions

Create a table: the syntax is as follows.

CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];

Copy the table structure only; the data is not copied:
CREATE TABLE ruozedata_emp2 LIKE ruozedata_emp;

Create Table As Select (CTAS) copies both the structure and the query result data:
create table ruozedata_emp3 as select empno,ename,deptno from ruozedata_emp;

Offline jobs usually run at day granularity: today's run computes yesterday's statistics and writes them into a tmp table.
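
For example, yesterday's numbers could be materialized with the CTAS form just shown (a sketch; the tmp table name and the chosen statistics are made up for illustration):

create table tmp_emp_stat as
select deptno, count(*) as cnt, avg(sal) as avg_sal
from ruozedata_emp
group by deptno;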

ALTER TABLE old_table RENAME TO new_table;
ALTER TABLE ruozedata_emp3 rename to ruozedata_emp3_bak;

DROP TABLE deletes a table:
DROP TABLE ruozedata_emp3_bak;   -- the table and its data are both deleted

TRUNCATE TABLE deletes the data, but the table itself remains:
TRUNCATE TABLE table_name;

The difference between DROP TABLE and TRUNCATE TABLE comes up frequently in interviews; a quick demonstration:
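
A minimal sketch of the difference, using ruozedata_emp2 created above:

-- TRUNCATE: the data is gone, but the table definition survives
TRUNCATE TABLE ruozedata_emp2;
SELECT COUNT(*) FROM ruozedata_emp2;   -- returns 0, the query still works
DESC ruozedata_emp2;                   -- still works

-- DROP: the table definition is gone as well
DROP TABLE ruozedata_emp2;
DESC ruozedata_emp2;                   -- fails: table not found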

#################################################################

Managed (internal) tables vs. external tables
metadata: the table type is stored in the TBL_TYPE column of the metastore
data: DESC FORMATTED shows it as Table Type

Hive has two major categories of tables: managed tables and external tables. ('External' means the data is usually not kept under the default warehouse path.)

By default, CREATE TABLE table_name creates a managed (internal) table.

CREATE EXTERNAL TABLE table_name creates an external table.
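
You can confirm the type of an existing table with DESC FORMATTED, for example:

DESC FORMATTED ruozedata_emp;
-- the output contains a line like:  Table Type:  MANAGED_TABLE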
After creating a managed table, you can verify that the metadata exists in MySQL (the metastore) and the data exists on HDFS.
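
For example, in MySQL (the metastore database name varies by installation; TBLS is the standard metastore table that holds table definitions):

-- run inside the metastore database in MySQL
SELECT TBL_NAME, TBL_TYPE FROM TBLS;
-- ruozedata_emp shows up with TBL_TYPE = MANAGED_TABLE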
So for a managed (internal) table, DROP TABLE table_name deletes both the metadata and the data on HDFS.
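
You can watch this from the Hive CLI with a throwaway table (emp_scratch is a hypothetical name; dfs passes through to hadoop fs, and the warehouse path below is the common default and may differ on your cluster):

create table emp_scratch like ruozedata_emp;
dfs -ls /user/hive/warehouse/;   -- the emp_scratch directory appears
drop table emp_scratch;
dfs -ls /user/hive/warehouse/;   -- the emp_scratch directory is gone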

Now let's look at external tables. In production, an explicit path is usually specified for an external table, i.e. the data is not kept in the default location:

CREATE EXTERNAL TABLE emp_external(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int
)ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/d7_externel/emp/' ;

At this point the metadata exists in the metastore, and the data lives under the LOCATION path on HDFS.
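
One way to place data under that LOCATION, from the Hive CLI (dfs passes through to hadoop fs; the local file path is an assumption):

dfs -put /home/hadoop/data/emp.txt /d7_externel/emp/;
dfs -ls /d7_externel/emp/;
select count(*) from emp_external;   -- the rows are queryable right away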

If you now drop this external table, the metadata is deleted, but the data on HDFS is still there.
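
Concretely (the same dfs pass-through as above):

drop table emp_external;
dfs -ls /d7_externel/emp/;   -- the data files are still listed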

Summary:

MANAGED_TABLE
DROP: both the data and the metadata are deleted

EXTERNAL_TABLE
DROP: the metadata is deleted, but the data on HDFS is not

This difference between the two is a frequent interview question.
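
A handy consequence: re-create the external table with the same schema and LOCATION, and the old data is queryable again (a sketch reusing the definition above):

CREATE EXTERNAL TABLE emp_external(
empno int,
ename string,
job string,
mgr int,
hiredate string,
sal double,
comm double,
deptno int
)ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/d7_externel/emp/';

select count(*) from emp_external;   -- the old data is back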

##############################################################################

create table ruozedata_dept(
deptno int,
dname string,
loc string
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

load data local inpath '/home/hadoop/data/dept.txt' into table ruozedata_dept;

If the same file is loaded again, the rows are duplicated (LOAD appends by default). Use OVERWRITE to replace the existing data instead.
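
For example, reloading the same file with OVERWRITE:

load data local inpath '/home/hadoop/data/dept.txt' overwrite into table ruozedata_dept;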

load data inpath 'hdfs://hadoop000:8020/wc/dept/dept.txt' into table ruozedata_dept;

Note: when loading from HDFS (no LOCAL), the source file is moved into the table's directory, not copied.

LOAD DATA [LOCAL] INPATH '' [OVERWRITE] INTO TABLE XXX;
LOCAL: load from the local (Linux) filesystem
without LOCAL: load from the Hadoop filesystem (HDFS)

OVERWRITE: overwrite the existing data
without OVERWRITE: append

#############################################################

Now let's export data from a table:

INSERT OVERWRITE LOCAL DIRECTORY '/home/hadoop/tmp/empout'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT empno,ename FROM ruozedata_emp;
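
To inspect the exported files you can use the Hive CLI's ! shell escape (the part-file name 000000_0 is typical but depends on the job):

!ls /home/hadoop/tmp/empout;
!cat /home/hadoop/tmp/empout/000000_0;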

select ename,sal,
case
when sal>1 and sal<=1000 then "lower"
when sal>1000 and sal<=2000 then "just so so"
when sal>2000 and sal<=4000 then "ok"
else "high"
end
from ruozedata_emp;   -- each row is labeled with its salary range
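
The bare CASE ... END expression gets an auto-generated column name (e.g. _c2); giving it an alias reads better. The same query with an alias added:

select ename,sal,
case
when sal>1 and sal<=1000 then "lower"
when sal>1000 and sal<=2000 then "just so so"
when sal>2000 and sal<=4000 then "ok"
else "high"
end as sal_level
from ruozedata_emp;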