Writing a custom Hive UDF

The previous post covered some of Hive's built-in functions, but built-ins alone can't satisfy every day-to-day development need, so this post walks through writing a custom Hive function (UDF).
Enough talk; let's start with a simple example.
First, the project dependencies:

<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>${hive.version}</version>
    </dependency>

    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.10</version>
    </dependency>
</dependencies>

The Hadoop and Hive versions I'm using are:

<properties>
    <java.version>1.8</java.version>
    <!-- note: was misspelled "projcet.build.sourceEncoding", which Maven silently ignores -->
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.7.5</hadoop.version>
    <hive.version>2.2.0</hive.version>
</properties>

Here is a simple example: a UDF that converts a string to uppercase.

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public class UpperUDF extends UDF {
    // Hive locates this method by name; a SQL NULL arrives here as a Java null,
    // so the guard below is required to avoid a NullPointerException.
    public Text evaluate(final Text s) {
        if (s == null) {
            return null;
        }
        return new Text(s.toString().toUpperCase());
    }
}
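Stripped of the Hadoop `Text` wrapper, the core logic is plain Java and easy to sanity-check on its own. A minimal self-contained sketch (`UpperLogic` and `upperOrNull` are hypothetical names for illustration, not part of the Hive API):

```java
public class UpperLogic {
    // Null-safe uppercase, mirroring UpperUDF.evaluate without the Text wrapper.
    public static String upperOrNull(String s) {
        return (s == null) ? null : s.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(upperOrNull("spark"));  // prints SPARK
        System.out.println(upperOrNull(null));     // prints null
    }
}
```

One caveat worth knowing: no-argument `toUpperCase()` uses the JVM's default locale, which can surprise you (e.g. under a Turkish locale); for locale-independent behavior pass `Locale.ROOT` explicitly.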

Next, register it as a temporary function:

add jar /data/hive/jar/hive-udf-diy.jar;
create temporary function my_upper as 'com.wen.hive.UpperUDF';
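Under the old `UDF` base class, Hive resolves the `evaluate` method by name and argument types via reflection and then invokes it once per row. A rough self-contained illustration of that dispatch (class names here are made up for the sketch; the real class needs hive-exec on the classpath):

```java
import java.lang.reflect.Method;

public class ReflectiveDispatchDemo {

    // Stand-in for a real UDF class: with the old UDF API, all Hive needs
    // is a public method named "evaluate" with matching argument types.
    public static class MyUpper {
        public String evaluate(String s) {
            return (s == null) ? null : s.toUpperCase();
        }
    }

    public static void main(String[] args) throws Exception {
        // Roughly what Hive does at query time: resolve "evaluate" by name
        // and parameter types, then invoke it for each row value.
        Method m = MyUpper.class.getMethod("evaluate", String.class);
        Object result = m.invoke(new MyUpper(), "spark");
        System.out.println(result);  // prints SPARK
    }
}
```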

A quick test of the function:

hive (default)> add jar /data/hive/jar/hive-udf-diy.jar;
Added [/data/hive/jar/hive-udf-diy.jar] to class path
Added resources: [/data/hive/jar/hive-udf-diy.jar]
hive (default)> create temporary function my_upper as 'com.wen.hive.UpperUDF';
OK
Time taken: 1.494 seconds
hive (default)> select my_upper(name),* from default.student_info;
OK
_c0     student_info.name       student_info.age
SPARK   spark   17
HADOOP  hadoop  18
JAVA    java    19
Time taken: 5.987 seconds, Fetched: 3 row(s)

But a temporary function vanishes once the session ends, which makes it handy for testing; how do you create a permanent one instead?
Upload the jar to HDFS first, then register the function with the command below. (Note that a permanent function belongs to the database it is created in.)

create function my_upper as 'com.wen.hive.UpperUDF' USING JAR 'hdfs:///data/hive/jar/hive-udf-diy.jar';

Then run the select again:

select my_upper(name),* from default.student_info;

Even in a brand-new session, Hive fetches the jar from HDFS automatically and the query still works:

[hive@cloud bin]$ ./hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/apps/soft/apache-hive-2.2.0-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/apps/soft/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/apps/soft/apache-hive-2.2.0-bin/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive (default)> select my_upper(name),* from default.student_info;
Added [/tmp/5480349e-47da-4ebd-ab39-d44fe72e2789_resources/hive-udf-diy.jar] to class path
Added resources: [hdfs:///data/hive/jar/hive-udf-diy.jar]
OK
_c0     student_info.name       student_info.age
SPARK   spark   17
HADOOP  hadoop  18
JAVA    java    19
Time taken: 7.092 seconds, Fetched: 3 row(s)


Reposted from blog.csdn.net/u012957549/article/details/85839263