1. Download hbase-0.98.6-cdh5.3.6.tar.gz and unpack it.
Link: https://pan.baidu.com/s/1vsz2Cqh2cp0n99sHS_xBzg  Extraction code: 4abh
2. In the conf directory, edit hbase-env.sh: set JAVA_HOME and choose whether to use HBase's bundled ZooKeeper (false means an external ZooKeeper ensemble is used):
export JAVA_HOME=/home/cmcc/server/jdk1.8.0_181
export HBASE_MANAGES_ZK=false
3. hbase-site.xml
1》HDFS NameNode address plus the HBase root directory (single-node example below; the value should point at a directory under HDFS, typically /hbase, not just the NameNode address):
<property>
<name>hbase.rootdir</name>
<value>hdfs://hadoop1:9000/hbase</value>
</property>
2》Whether HBase should run in distributed mode
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
3》Configure the HMaster port
(1) First form: write only the port number, since with HMaster high availability any node may become the active master
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
(2) A second form sometimes shown is pinning a fixed machine with host:port:
<property>
<name>hbase.master.port</name>
<value>hadoop1:60000</value>
</property>
Note, however, that hbase.master.port only accepts a bare port number, so a host:port value here will fail to parse; prefer the first form.
4》Configure ZooKeeper. The ensemble must have an odd number of nodes; with more than one node, use: <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
<property>
<name>hbase.zookeeper.quorum</name>
<value>hadoop1:2181</value>
</property>
5》Configure the ZooKeeper data directory
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/cmcc/server/zookeeper/data</value>
</property>
6》Configure the ZooKeeper client port
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
7》Set this to false when HBase runs on a local filesystem; keep it true when running on HDFS
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>true</value>
</property>
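Putting the settings above together, a complete conf/hbase-site.xml for the single-node case might look like the sketch below (hostnames and paths are the ones used throughout these notes; hbase.rootdir is assumed to point at an HDFS directory named /hbase):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1:2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/cmcc/server/zookeeper/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>true</value>
  </property>
</configuration>
```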
4. Edit the regionservers file, which is the equivalent of Hadoop's slaves file.
Single machine: add hadoop1
Multiple machines: add one hostname per line:
hadoop1
hadoop2
hadoop3
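The multi-node regionservers file above can be generated with a one-liner; this is a sketch in which a temporary file stands in for conf/regionservers:

```shell
# Write one regionserver hostname per line, as step 4 describes.
# REGIONSERVERS_FILE stands in for conf/regionservers in this sketch.
REGIONSERVERS_FILE=$(mktemp)
printf '%s\n' hadoop1 hadoop2 hadoop3 > "$REGIONSERVERS_FILE"
cat "$REGIONSERVERS_FILE"
```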
5. Delete every jar starting with hadoop- from HBase's lib directory, then copy the jars listed below from the Hadoop installation into lib; copy the zookeeper jar from the ZooKeeper installation.
First cd into the Hadoop directory, search out each jar, and copy it to a staging directory, e.g.:
find . -name 'hadoop-annotations*' -exec cp {} /home/cmcc/server/t1/ \;
Finally copy all the jars into HBase's lib directory. (On a cluster, remember to do this on every machine.)
hadoop-annotations-2.5.0.jar
hadoop-auth-2.5.0-cdh5.3.6.jar
hadoop-client-2.5.0-cdh5.3.6.jar
hadoop-common-2.5.0-cdh5.3.6.jar
hadoop-hdfs-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-app-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-common-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-core-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-hs-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6-tests.jar
hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.6.jar
hadoop-yarn-api-2.5.0-cdh5.3.6.jar
hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.6.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.6.jar
hadoop-yarn-client-2.5.0-cdh5.3.6.jar
hadoop-yarn-common-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-common-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-tests-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.6.jar
zookeeper-3.4.5-cdh5.3.6.jar
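The delete-then-copy procedure in step 5 can be sketched as a script; here throwaway temp directories stand in for the real hbase/lib and Hadoop install trees, and empty files stand in for jars:

```shell
# Simulated layout: HADOOP_DIR stands in for the Hadoop install,
# HBASE_LIB for hbase/lib (both are assumptions for this sketch).
HADOOP_DIR=$(mktemp -d)
HBASE_LIB=$(mktemp -d)
touch "$HADOOP_DIR/hadoop-annotations-2.5.0.jar"   # stand-in for a Hadoop jar
touch "$HBASE_LIB/hadoop-bundled-0.1.jar"          # stale jar shipped with HBase

# 1) remove every hadoop-* jar that HBase shipped with
rm -f "$HBASE_LIB"/hadoop-*.jar
# 2) locate the needed jars under the Hadoop tree and copy them into lib
find "$HADOOP_DIR" -name 'hadoop-*.jar' -exec cp {} "$HBASE_LIB"/ \;
ls "$HBASE_LIB"
```

On a real cluster the same two commands would run against hbase/lib on every node, matching the "remember to do this on every machine" note.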
6. Copy hbase+hadoop_repository.tar.gz and CDH_HadoopJar.tar.gz into the lib directory; download them from the network drive link in step 1. (On a cluster, remember to do this on every machine.)
7. Copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory. (On a cluster, remember to do this on every machine.)
8. Start the service:
bin/start-hbase.sh