Starting a Hadoop High-Availability Platform (HDFS + ZooKeeper + YARN), plus HBase / MySQL / Hive

I. Hadoop HA Platform Startup Order

1 Start the ZooKeeper Cluster

Run on hadoop2, hadoop3, and hadoop4:

zkServer.sh start
[root@hadoop2 ~]# zkServer.sh start
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop2 ~]# jps
1425 Jps
1407 QuorumPeerMain
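
Once all three nodes report QuorumPeerMain, it can be worth confirming the quorum roles. This check is not part of the original steps, but zkServer.sh ships a status sub-command:

zkServer.sh status
# one node should report "Mode: leader", the other two "Mode: follower"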

2 Start the Hadoop Cluster

Run on hadoop1:

Option 1 (produces error log files that take up space):

start-all.sh

Option 2 (recommended):

start-dfs.sh
start-yarn.sh    # this step is deferred to step 3 below
[root@hadoop1 ~]# start-dfs.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Starting namenodes on [hadoop1 hadoop2]
hadoop1: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-hadoop1.out
hadoop2: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-hadoop2.out
hadoop1: SLF4J: Class path contains multiple SLF4J bindings.
hadoop1: SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hadoop1: SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
hadoop1: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
hadoop1: SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hadoop2: datanode running as process 1508. Stop it first.
hadoop4: datanode running as process 1437. Stop it first.
hadoop3: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-hadoop3.out
Starting journal nodes [hadoop1 hadoop2 hadoop3]
hadoop2: starting journalnode, logging to /opt/hadoop/logs/hadoop-root-journalnode-hadoop2.out
hadoop3: starting journalnode, logging to /opt/hadoop/logs/hadoop-root-journalnode-hadoop3.out
hadoop1: starting journalnode, logging to /opt/hadoop/logs/hadoop-root-journalnode-hadoop1.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
hadoop1: zkfc running as process 2006. Stop it first.
hadoop2: starting zkfc, logging to /opt/hadoop/logs/hadoop-root-zkfc-hadoop2.out
[root@hadoop1 ~]# jps
1712 NameNode
2089 Jps
1917 JournalNode
[root@hadoop2 ~]# jps
1760 Jps
1571 JournalNode
1480 NameNode
1690 DFSZKFailoverController
1407 QuorumPeerMain
[root@hadoop3 ~]# jps
1488 DataNode
1664 Jps
1425 QuorumPeerMain
1581 JournalNode
[root@hadoop4 ~]# jps
1495 Jps
1421 QuorumPeerMain
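
With both NameNodes running, you can check which one is active and which is standby. A minimal sketch, assuming the NameNode IDs are nn1 and nn2 (the actual IDs come from dfs.ha.namenodes.<nameservice> in hdfs-site.xml, so adjust them to your configuration):

hdfs haadmin -getServiceState nn1   # typically hadoop1; prints "active" or "standby"
hdfs haadmin -getServiceState nn2   # typically hadoop2; should print the other state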

3 Start YARN

Run on hadoop1:

start-yarn.sh
[root@hadoop1 ~]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-hadoop1.out
hadoop3: nodemanager running as process 1546. Stop it first.
hadoop2: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-hadoop2.out
hadoop4: starting nodemanager, logging to /opt/hadoop/logs/yarn-root-nodemanager-hadoop4.out
[root@hadoop1 ~]# jps
1712 NameNode
2268 Jps
1917 JournalNode
[root@hadoop2 ~]# jps
1936 Jps
1571 JournalNode
1480 NameNode
1816 NodeManager
1690 DFSZKFailoverController
1407 QuorumPeerMain
[root@hadoop3 ~]# jps
1488 DataNode
1425 QuorumPeerMain
1699 Jps
1581 JournalNode
[root@hadoop4 ~]# jps
1538 NodeManager
1658 Jps
1421 QuorumPeerMain

4 Start the YARN ResourceManagers

Run on hadoop3 and hadoop4:

yarn-daemon.sh start resourcemanager
[root@hadoop3 ~]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-hadoop3.out
[root@hadoop3 ~]# jps
1488 DataNode
1792 Jps
1425 QuorumPeerMain
1737 ResourceManager
1581 JournalNode
[root@hadoop4 ~]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/hadoop/logs/yarn-root-resourcemanager-hadoop4.out
[root@hadoop4 ~]# jps
1744 Jps
1538 NodeManager
1699 ResourceManager
1421 QuorumPeerMain
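
With ResourceManagers now running on hadoop3 and hadoop4, their HA state can be verified the same way. A sketch, assuming the RM IDs are rm1 and rm2 (set by yarn.resourcemanager.ha.rm-ids in yarn-site.xml, so adjust as needed):

yarn rmadmin -getServiceState rm1   # one RM should report "active"
yarn rmadmin -getServiceState rm2   # the other should report "standby"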

5 Open the Web UIs

HDFS web UI:

http://hadoop1:50070/dfshealth.html#tab-overview

HBase web UI (available once HBase is started in Part II below):

http://hadoop1:16010/master-status
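
If a browser is not handy, reachability can also be checked from the shell. A quick sketch (curl must be installed; the HBase check only succeeds after HBase has been started in Part II):

curl -sf http://hadoop1:50070/dfshealth.html > /dev/null && echo "HDFS UI reachable"
curl -sf http://hadoop1:16010/master-status  > /dev/null && echo "HBase UI reachable"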

II. Starting the Data Warehouse and Related Tools

1 HBase

On hadoop1

Synchronize the system clocks first, or the HBase HA setup may fail to start (a fuller ntpdate example follows after the jps output below):

date -s '2020-04-11 12:07:01'
or
yum install ntpdate
# Start the HBase service:
start-hbase.sh
# Enter the HBase interactive shell:
hbase shell
[root@hadoop1 ~]# jps
1712 NameNode
2487 HMaster
2679 Jps
1917 JournalNode
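
Regarding the clock-sync note above: instead of setting the date by hand, ntpdate can sync every node against a time server. A sketch, assuming the nodes have internet access; ntp.aliyun.com is only an example server, not part of the original setup:

# run on every node so the clocks agree (HBase is sensitive to clock skew)
yum install -y ntpdate
ntpdate ntp.aliyun.com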

2 MySQL

Installed on hadoop1.

Start the service:

service mysqld start

Log in:

mysql -uroot -proot
or
mysql -u root -p
root
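
To confirm that MySQL is up and the credentials work without entering the interactive prompt, a one-liner using the same root/root password shown above:

mysql -uroot -proot -e "show databases;"
# once Hive has been initialized, its metastore database should appear in this list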

3 Hive

3.1 Standalone Mode

On hadoop2

hive

Running hive drops you straight into the interactive shell.

Standalone mode: the metastore and hive both run on hadoop2.
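
A non-interactive smoke test of the standalone setup (optional; hive -e runs a single statement and exits):

hive -e "show databases;"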

3.2 Remote Mode (HiveServer2)

Before running hive, start a metastore service so that hive connects to that metastore.

a. Start the Hive metastore service

hive --service metastore &

b. Then start hive, so that hive and the metastore run separately

hadoop1: runs MySQL, which stores Hive's metadata
// hadoop2: the standalone setup (hive + metastore together); ignore it here
hadoop3: runs the metastore server, which connects to MySQL on hadoop1
hadoop4: runs the hive client, which connects to the metastore on hadoop3
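
For this layout to work, the hive client on hadoop4 has to point at the metastore on hadoop3. A sketch of how to confirm it, assuming Hive is installed under /opt/hive as in the logs above (hive.metastore.uris is the standard property and 9083 is the default metastore port):

# on hadoop4, the value should be thrift://hadoop3:9083
grep -A 2 "hive.metastore.uris" /opt/hive/conf/hive-site.xml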

3.3 Remote Login in Practice

1. Start the metastore service on hadoop3

hive --service metastore &  
[root@hadoop3 ~]# Starting Hive Metastore Server
20/04/11 13:15:36 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
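
Before switching to the client, it can help to confirm that the metastore is actually listening. A sketch, assuming the default Thrift port 9083 and that net-tools is installed:

netstat -tnlp | grep 9083
# a LISTEN entry owned by a java (RunJar) process means the metastore is up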

Enter the hive interactive shell on hadoop4:

hive
[root@hadoop4 ~]# hive
20/04/11 13:20:07 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist
Logging initialized using configuration in jar:file:/opt/hive/lib/hive-common-1.2.2.jar!/hive-log4j.properties
hive> show databases;
OK
default
emp
video
Time taken: 0.568 seconds, Fetched: 3 row(s)
hive> 

2. Start the HiveServer2 service on hadoop3

hiveserver2 &
[root@hadoop3 ~]# hiveserver2 &
[1] 3794
[root@hadoop3 ~]# 20/04/11 13:22:51 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Enter the interactive shell on hadoop4:

# Connect with beeline
beeline -u jdbc:hive2://hadoop3:10000  -n root -p root
# Quit beeline
!exit
[root@hadoop4 ~]# beeline -u jdbc:hive2://hadoop3:10000  -n root -p root
Connecting to jdbc:hive2://hadoop3:10000
Connected to: Apache Hive (version 1.2.2)
Driver: Hive JDBC (version 1.2.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.2.2 by Apache Hive
0: jdbc:hive2://hadoop3:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| emp            |
| video          |
+----------------+--+
3 rows selected (0.93 seconds)
0: jdbc:hive2://hadoop3:10000> 
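
beeline can also run a statement without entering the prompt, which is handy for scripting; a sketch using the same connection string as above:

beeline -u jdbc:hive2://hadoop3:10000 -n root -p root -e "show databases;"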

3.4 Pitfall: if you cannot connect, kill Hive's RunJar process and restart it

# If you cannot connect, kill the hiveserver2 process and restart it
[root@hadoop3 ~]# kill -9 3379
[root@hadoop3 ~]# jps
1536 JournalNode
1634 NodeManager
1443 DataNode
3589 Jps
1782 ResourceManager
2232 HRegionServer
1384 QuorumPeerMain
[1]+  Killed                  hiveserver2
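
The PID 3379 above is specific to that session. A general way to locate the HiveServer2 RunJar process before killing it (a sketch; since the hiveserver2 script launches HiveServer2 through RunJar, its class name shows up in the arguments that jps -ml prints):

jps -ml | grep -i hiveserver2    # note the PID in the first column
kill -9 <pid>                    # replace <pid> with the PID found above
hiveserver2 &                    # restart HiveServer2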

Reposted from blog.csdn.net/weixin_45568892/article/details/105451800