Hadoop Study Notes: Hive Installation and Configuration

Environment

Ubuntu 16.04
Hadoop 2.7.3
HBase 1.2.4
Hive 2.1.1

Installation and Configuration

1. Download the installation package, extract it, and configure the environment variables
2. Install MySQL

sudo apt-get install mysql-server
sudo apt-get install mysql-client

Create the database 'hive_meta'

mysql -u root -p
mysql> create database hive_meta;

Enable remote access

mysql> GRANT ALL PRIVILEGES ON *.* TO 'yourusername'@'%' IDENTIFIED BY 'yourpassword' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

Test remote access

mysql -h yourip -u yourusername -p

If the connection fails, open /etc/mysql/mysql.conf.d/mysqld.cnf
and comment out the line bind-address = 127.0.0.1
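After editing the file, restart MySQL so the change takes effect (the service name assumes the default Ubuntu 16.04 package):

sudo service mysql restart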

Download the MySQL JDBC driver and place it in the lib folder under the Hive directory.
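For example, assuming a Connector/J 5.1.x jar in the current directory (the exact jar name and the Hive install path are assumptions; adjust to your setup):

cp mysql-connector-java-5.1.40-bin.jar /home/user/hadoop/apache-hive-2.1.1-bin/lib/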

Configuration

1. Save hive-env.sh.template as hive-env.sh and set HADOOP_HOME in it (a minimal sketch follows right after this list)
2. Create a new hive-site.xml and add the configuration shown below the sketch
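A minimal hive-env.sh sketch for step 1 (the Hadoop path is an assumption; point it at your own installation):

export HADOOP_HOME=/home/user/hadoop/hadoop-2.7.3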

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/hive/warehouse</value>
    </property>
    <property>
        <name>hive.exec.scratchdir</name>
        <value>/tmp/hive</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://master:3306/hive_meta?useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
</configuration>
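Note that hive.metastore.warehouse.dir and hive.exec.scratchdir are HDFS paths. If initialization or later queries fail with missing-directory or permission errors, you can create them up front (paths match the configuration above):

hdfs dfs -mkdir -p /hive/warehouse
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod g+w /hive/warehouse
hdfs dfs -chmod 777 /tmp/hive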

4. Initialize the metastore schema

schematool -dbType mysql -initSchema

5. Start the Hive CLI

hive
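A quick sanity check once the CLI is up (the table name is just an example):

show databases;
create table hive_test (id int, name string);
show tables;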

Integrating Hive 2.1.1 with HBase 1.2.4

After completing the steps above, you can skip ahead to the end of this document and test whether Hive already works with HBase; it appears to, in which case the integration steps below are not needed.

1. Download the Hive 2.1.1 source package and extract the hbase-handler folder from it
2. Create a plain Java project in Eclipse
3. Copy the org folder under hbase-handler into the project's src directory
4. From the lib directories of Hadoop, HBase, Hive, and ZooKeeper, collect the jars shown below
(the required jars were listed in a screenshot that is not reproduced here)

5. Export the project as a jar file and rename it to hive-hbase-handler-2.1.1.jar

6. Configure hive-site.xml

    <property>
        <name>hive.aux.jars.path</name>
        <value>file:///home/user/hadoop/apache-hive-2.1.1-bin/lib/hive-hbase-handler-2.1.1.jar,file:///home/user/hadoop/apache-hive-2.1.1-bin/lib/guava-14.0.1.jar,file:///home/user/hadoop/hbase-1.2.4/lib/hbase-common-1.2.4.jar,file:///home/user/hadoop/zookeeper-3.4.9/zookeeper-3.4.9.jar</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,node,another</value>
    </property>

With the integration in place, you can create a Hive table and the corresponding HBase table will be created as well.

Open Hive and run the following statement:

CREATE TABLE hbase_hive_1(key int, value string)   
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'   
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")   
TBLPROPERTIES ("hbase.table.name" = "xyz"); 

HBase shell command:

list

Check whether the table xyz exists; if it does, the integration was successful.
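To verify that data flows through as well, you can insert a row from Hive and read it back from the HBase shell (if INSERT ... VALUES is rejected for the HBase-backed table in your build, insert via a SELECT from an existing Hive table instead). In the Hive CLI:

INSERT INTO TABLE hbase_hive_1 VALUES (1, 'hello');

Then in the HBase shell:

scan 'xyz'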

Reposted from blog.csdn.net/flushest/article/details/62040825