Hive (Part 6): Hive User-Defined Functions and Transform

Write a UDF to convert the English country names in the previously created buck_ip_test table into Chinese.

Contents of the iptest.txt file:

1   张三  192.168.1.1 china
2   李四  192.168.1.2 china
3   王五  192.168.1.3 china
4   makjon  192.168.1.4 china
1   aa  192.168.1.1 japan
2   bb  192.168.1.2 japan
3   cc  192.168.1.3 japan
4   makjon  192.168.1.4 japan

Create the table:

0: jdbc:hive2://localhost:10000> create table buck_ip_test(id int, name string, ip string, country string)
row format delimited fields terminated by '\t';
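
The table structure can be double-checked with describe:

0: jdbc:hive2://localhost:10000> describe buck_ip_test;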

Load the data into the table:

0: jdbc:hive2://localhost:10000> load data local inpath '/home/hadoop/iptest.txt' into table buck_ip_test;
INFO  : Loading data to table default.buck_ip_test from file:/home/hadoop/iptest.txt
INFO  : Table default.buck_ip_test stats: [numFiles=1, totalSize=204]
No rows affected (0.829 seconds)

View the table data:

0: jdbc:hive2://localhost:10000> select * from buck_ip_test;


Write the Java code; Lower.java is as follows:
(Why does the class extend org.apache.hadoop.hive.ql.exec.UDF? See the official documentation.)

package com.ghq.hive;

import java.util.HashMap;

import org.apache.hadoop.hive.ql.exec.UDF;

public class Lower extends UDF {

    private static HashMap<String, String> countryMap = new HashMap<>();

    static {
        countryMap.put("china", "中国");
        countryMap.put("japan", "日本");
    }

    // Translate an English country name into Chinese
    public String evaluate(String str) {
        String country = countryMap.get(str);
        if (country == null) {
            return "其他";  // "other" for unmapped countries
        } else {
            return country;
        }
    }

    // Multiple evaluate methods may be defined to overload the function.
    // This one concatenates country and IP; it is used to test overloading.
    public String evaluate(String country, String ip) {
        return country + "_" + ip;
    }
}

After testing it in Eclipse, export the project as utftest.jar and upload it to the current user's home directory (~) on the server,
here /home/hadoop.
Next, register the jar with Hive:

0: jdbc:hive2://localhost:10000> add jar /home/hadoop/utftest.jar;
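
Depending on the Hive version, the jars registered for the session can be listed to confirm this step (support for list jars over HiveServer2 varies by release):

0: jdbc:hive2://localhost:10000> list jars;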

Alternatively, place the jar in the lib folder under the Hive installation directory.

Create the custom function:

create temporary function convert as 'com.ghq.hive.Lower';
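
Because the function is temporary, it exists only for the current session. To verify that it was registered and see the backing class:

0: jdbc:hive2://localhost:10000> describe function convert;
0: jdbc:hive2://localhost:10000> describe function extended convert;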

Then run the query in Hive:

hive> select country,convert(country,ip),convert(country) from buck_ip_test;

The query returns three columns: the original English country, the country_ip concatenation from the two-argument overload, and the Chinese name from the one-argument overload.
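
The UDF can also appear in aggregations. A minimal sketch that counts rows per translated country (same table as above):

select convert(country) as country_cn, count(*) as cnt
from buck_ip_test
group by convert(country);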

Using a UDF in Hive to Process JSON

The data file movie.txt contains:

{"movie":"2797","rate":"4","timeStamp":"978302039","uid":"1"}
{"movie":"2321","rate":"3","timeStamp":"978302205","uid":"1"}
{"movie":"720","rate":"3","timeStamp":"978300760","uid":"1"}
{"movie":"1270","rate":"5","timeStamp":"978300055","uid":"1"}
{"movie":"527","rate":"5","timeStamp":"978824195","uid":"1"}
{"movie":"2340","rate":"3","timeStamp":"978300103","uid":"1"}
{"movie":"48","rate":"5","timeStamp":"978824351","uid":"1"}
{"movie":"1097","rate":"4","timeStamp":"978301953","uid":"1"}
{"movie":"1721","rate":"4","timeStamp":"978300055","uid":"1"}
{"movie":"1545","rate":"4","timeStamp":"978824139","uid":"1"}


Load the data into the rating table in Hive:

create table rating(rate string);
load data local inpath '/home/hadoop/movie.txt' overwrite into table rating;
select * from rating;


Use Jackson's ObjectMapper to parse the JSON data. First create MovierateBean.java:

package com.ghq.hive;

import java.sql.Timestamp;

// Bean mirroring the JSON fields: movie, rate, timeStamp, uid
public class MovierateBean {
    private String movie;
    private String rate;
    // Note: the source timeStamp is an epoch-seconds string; whether Jackson
    // maps it cleanly onto java.sql.Timestamp depends on the Jackson version,
    // so keeping it as a String would be the safer choice.
    private Timestamp timeStamp;
    private String uid;
    public String getMovie() {
        return movie;
    }
    public void setMovie(String movie) {
        this.movie = movie;
    }
    public String getRate() {
        return rate;
    }
    public void setRate(String rate) {
        this.rate = rate;
    }
    public Timestamp getTimeStamp() {
        return timeStamp;
    }
    public void setTimeStamp(Timestamp timeStamp) {
        this.timeStamp = timeStamp;
    }
    public String getUid() {
        return uid;
    }
    public void setUid(String uid) {
        this.uid = uid;
    }
    @Override
    public String toString() {
        return "MovierateBean [movie=" + movie + ", rate=" + rate + ", timeStamp=" + timeStamp + ", uid=" + uid + "]";
    }
}

Create MovieJson.java:

package com.ghq.hive;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.codehaus.jackson.map.ObjectMapper;

public class MovieJson extends UDF {

    public String evaluate(String jsonline) {
        ObjectMapper om = new ObjectMapper();
        try {
            MovierateBean bean = om.readValue(jsonline, MovierateBean.class);
            return bean.toString();
        } catch (Exception e) {
            // On a parse failure, fall back to returning the raw line
            return jsonline;
        }
    }
}

The steps are the same as in the previous example:

add jar /home/hadoop/movie.jar;
create temporary function movie_convert as 'com.ghq.hive.MovieJson';
select movie_convert(rate) from rating;
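
For plain field extraction, the same result can be obtained without a custom UDF: Hive's built-in get_json_object function pulls individual fields out of a JSON string, for example:

select get_json_object(rate, '$.movie') as movie,
       get_json_object(rate, '$.rate') as rate,
       get_json_object(rate, '$.timeStamp') as ts,
       get_json_object(rate, '$.uid') as uid
from rating;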


A Brief Introduction to Hive Transform

Hive UDFs and UDAFs have to be written in Java. Hive also offers another, simpler way to get the same effect: TRANSFORM. TRANSFORM streams rows through an external script, so UDF-like functionality can be implemented in many languages.

The /opt/movie_trans.py script on the server contains:

import sys
import datetime
import json

# Each input row arrives on stdin as one raw JSON line
for line in sys.stdin:
    line = line.strip()
    hjson = json.loads(line)
    movie = hjson['movie']
    rate = hjson['rate']
    timeStamp = hjson['timeStamp']
    uid = hjson['uid']
    # Convert the epoch-seconds string into a readable datetime
    timeStamp = datetime.datetime.fromtimestamp(float(timeStamp))
    # Emit tab-separated columns back to Hive
    print('\t'.join([movie, rate, str(timeStamp), uid]))
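
The script is shipped to the cluster with add file and invoked through TRANSFORM. A minimal sketch of the query, assuming the script path above and the rating table from earlier:

add file /opt/movie_trans.py;

select transform(rate)
using 'python movie_trans.py'
as (movie, rate, ts, uid)
from rating;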

Each JSON line is emitted as four tab-separated columns: movie, rate, the converted timestamp, and uid.


Reposted from blog.csdn.net/guo20082200/article/details/82533600