Setting Up a Hadoop 2.7.3 Cluster (HA HDFS)

In the previous post, 《Hadoop-2.7.3集群(HDFS)搭建》, I described how to set up a distributed HDFS, but that setup has a single point of failure: if the NameNode goes down, the whole cluster becomes unavailable. This post records how to build an HA HDFS on top of that setup.

1. Added one more machine and updated /etc/hosts on every node:

127.0.0.1	localhost
127.0.1.1	ubuntu
192.168.42.132 chan-takchi-03
192.168.42.131 chan-takchi-02
192.168.42.130 chan-takchi-01
192.168.42.129 chan-takchi

2. Machine roles

192.168.42.129 namenode
192.168.42.130 datanode journalnode
192.168.42.131 datanode journalnode
192.168.42.132 namenode(standby)  journalnode


3. Edit hdfs-site.xml

<configuration>
	<property>
		<name>dfs.name.dir</name>
		<value>/usr/local/hdfs/name</value>
	</property>
	<property>
		<name>dfs.data.dir</name>
		<value>/usr/local/hdfs/data</value>
	</property>
	<property>
		<name>dfs.nameservices</name>
		<value>takchi-cluster</value>
	</property>
	<property>
		<name>dfs.ha.namenodes.takchi-cluster</name>
		<value>nn1,nn2</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.takchi-cluster.nn1</name>
		<value>chan-takchi:9000</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.takchi-cluster.nn2</name>
		<value>chan-takchi-03:9000</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.takchi-cluster.nn1</name>
		<value>chan-takchi:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.takchi-cluster.nn2</name>
		<value>chan-takchi-03:50070</value>
	</property>
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://chan-takchi-03:8485;chan-takchi-01:8485;chan-takchi-02:8485/takchi-cluster</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/usr/local/hdfs/journal</value>
	</property>
	<property>
		<name>dfs.client.failover.proxy.provider.takchi-cluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/takchi/.ssh/id_rsa</value>
	</property>
</configuration>
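
Note that sshfence only works if each NameNode host can SSH to the other one without a password, as the user running HDFS and with the private key configured above. A quick manual check (user and hostnames taken from this setup):

# run on chan-takchi; should log in to the other NameNode without a password prompt
ssh -i /home/takchi/.ssh/id_rsa takchi@chan-takchi-03
# and the reverse direction, run on chan-takchi-03
ssh -i /home/takchi/.ssh/id_rsa takchi@chan-takchi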


4. Edit core-site.xml

<configuration>
	<property>
		<name>fs.default.name</name>
		<value>hdfs://takchi-cluster</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>1</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/tmp/hadoop</value>
	</property>
</configuration>
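
A side note: fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS (the old key still works but logs a deprecation warning), and dfs.replication is more commonly kept in hdfs-site.xml. The non-deprecated form of the first property would be:

<property>
	<name>fs.defaultFS</name>
	<value>hdfs://takchi-cluster</value>
</property>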


5. Start the JournalNodes first, because formatting the NameNode needs to connect to them. hadoop-daemon.sh only starts the daemon on the local machine, so run this on each JournalNode host (chan-takchi-01, chan-takchi-02 and chan-takchi-03):

sbin/hadoop-daemon.sh start journalnode
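
On each of the three hosts, jps should now list a JournalNode process:

jps
# the output should include a line ending in "JournalNode"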


6. On the NameNode that is meant to become active (chan-takchi, i.e. nn1), format it and start it.

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode


7. Bootstrap and start the standby NameNode. Instead of formatting from scratch, -bootstrapStandby copies the namespace metadata created by the first NameNode (it contacts that NameNode over its HTTP address), so both NameNodes end up with the same namespace.

bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode


8. Start the DataNodes (hadoop-daemons.sh starts the daemon on every host listed in the slaves file).

sbin/hadoop-daemons.sh start datanode  


9. At this point the web UI on port 50070 of each NameNode shows that the HDFS cluster is running, but both NameNodes are in the standby state, so the cluster cannot serve reads or writes yet.


10. Make one of the NameNodes active. The first command is normally enough; if it refuses (for example because it cannot confirm the state of the other NameNode), the --forceactive variant skips that safety check.

bin/hdfs haadmin -transitionToActive nn1
bin/hdfs haadmin -transitionToActive --forceactive nn1
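
To confirm the transition, haadmin can query each NameNode's state:

bin/hdfs haadmin -getServiceState nn1   # should print "active"
bin/hdfs haadmin -getServiceState nn2   # should print "standby"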


With that, the HA HDFS cluster is in a usable state. When a failure occurs, steps 7 and 10 can be followed to bring a node back to normal quickly. The downside is that this has to be done by hand; by adding a ZooKeeper cluster, failover can be made automatic.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Building on the setup above, the following steps add automatic failover.

11. Stop HDFS first, then set up a ZooKeeper ensemble and start server.2, server.3 and server.4 (zoo.cfg below; a sketch of creating the myid files and starting the servers follows it).

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/usr/apps/data/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=192.168.42.129:2888:3888
server.2=192.168.42.130:2888:3888
server.3=192.168.42.131:2888:3888
server.4=192.168.42.132:2888:3888
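
A minimal sketch of bringing one ZooKeeper server up, assuming ZooKeeper is unpacked under /usr/apps/zookeeper (that path is an assumption; the dataDir comes from zoo.cfg above). Each server needs a myid file in dataDir whose content matches its server.N line:

# stop HDFS before reconfiguring (run on the node that started it)
sbin/stop-dfs.sh

# on 192.168.42.130 (server.2); write 3 and 4 on the other two hosts
echo 2 > /usr/apps/data/zookeeper/myid

# start the ZooKeeper server and check whether it is leader or follower
/usr/apps/zookeeper/bin/zkServer.sh start
/usr/apps/zookeeper/bin/zkServer.sh status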

12. Add the following to hdfs-site.xml (ha.zookeeper.quorum can also go in core-site.xml, which is where the official documentation places it).

<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>
<property>
	<name>ha.zookeeper.quorum</name>
	<value>chan-takchi-01:2181,chan-takchi-02:2181,chan-takchi-03:2181</value>
</property>


13. Format the HA state in ZooKeeper; this creates the znode (under /hadoop-ha) that the failover controllers will use.

bin/hdfs zkfc -formatZK
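
If you want to verify, the ZooKeeper CLI should now show a child named after the nameservice under /hadoop-ha (the zkCli.sh path assumes the same ZooKeeper install location as above):

/usr/apps/zookeeper/bin/zkCli.sh -server chan-takchi-01:2181
ls /hadoop-ha
# expected output: [takchi-cluster]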


14. Restart HDFS.

sbin/start-dfs.sh


Normally start-dfs.sh starts all of the required daemons itself. After it finishes, run jps on each node to check that the processes are actually running, and start anything that is missing by hand (see the commands after the log below). Sample output:

Starting namenodes on [chan-takchi chan-takchi-03]
chan-takchi-03: starting namenode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-namenode-ubuntu.out
chan-takchi: starting namenode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-namenode-ubuntu.out
chan-takchi-01: starting datanode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-datanode-ubuntu.out
chan-takchi-02: starting datanode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-datanode-ubuntu.out
Starting journal nodes [chan-takchi-03 chan-takchi-01 chan-takchi-02]
chan-takchi-03: starting journalnode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-journalnode-ubuntu.out
chan-takchi-02: starting journalnode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-journalnode-ubuntu.out
chan-takchi-01: starting journalnode, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-journalnode-ubuntu.out
Starting ZK Failover Controllers on NN hosts [chan-takchi chan-takchi-03]
chan-takchi: starting zkfc, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-zkfc-ubuntu.out
chan-takchi-03: starting zkfc, logging to /home/takchi/Bigdata/hadoop-2.7.3/logs/hadoop-takchi-zkfc-ubuntu.out
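
If any daemon failed to come up, it can be started by hand with the same scripts used earlier; run the command on the host whose process is missing:

sbin/hadoop-daemon.sh start namenode      # on a NameNode host
sbin/hadoop-daemon.sh start journalnode   # on a JournalNode host
sbin/hadoop-daemon.sh start zkfc          # on a NameNode host (DFSZKFailoverController)
sbin/hadoop-daemons.sh start datanode     # starts DataNodes on all hosts in the slaves file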


15. That completes the HA HDFS cluster with automatic failover. Stop the NameNode that is currently active and see whether the standby takes over:

sbin/hadoop-daemon.sh  stop namenode
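
Within a few seconds the remaining NameNode should report active, either on its 50070 web page or via haadmin (nn2 here assumes the stopped node was nn1):

bin/hdfs haadmin -getServiceState nn2   # should now print "active"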


 


Reposted from blog.csdn.net/i792439187/article/details/54632062