Setting up Hadoop YARN high availability (HA)

1. Modify the configuration files.

The specific changes are as follows:

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Modify yarn-site.xml:

<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>node13</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>node14</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node12:2181,node13:2181,node14:2181</value>
  </property>

</configuration>
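
This is a minimal HA configuration. Many setups also let the ResourceManager persist its state in ZooKeeper so that running applications survive a failover; a sketch of the two extra properties (not part of the original walkthrough, so verify them against your Hadoop version's yarn-default.xml):

  <!-- Optional (assumed addition): persist RM state in ZooKeeper so it can be recovered after failover -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>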

Distribute these two configuration files to the other nodes:

[root@node12 hadoop]# scp mapred-site.xml yarn-site.xml node14:`pwd`
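
The command above only copies the files to node14; node13, the other ResourceManager host, needs them as well. A minimal sketch that distributes to both nodes, assuming every node uses the same configuration directory path:

[root@node12 hadoop]# for h in node13 node14; do scp mapred-site.xml yarn-site.xml ${h}:`pwd`; done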

Once that is done, no formatting step is needed (unlike an HDFS NameNode HA setup); just start YARN directly:

[root@node11 ~]# start-yarn.sh

Then check the processes with jps: the ResourceManagers are not running, because start-yarn.sh only starts a ResourceManager on the local node. Start them individually on the two RM hosts:

[root@node14 ~]# yarn-daemon.sh start resourcemanager

[root@node13 ~]# yarn-daemon.sh start resourcemanager
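
With both daemons running, you can confirm which ResourceManager is active and which is standby, using the rm1/rm2 IDs defined in yarn-site.xml; each command prints either "active" or "standby":

[root@node13 ~]# yarn rmadmin -getServiceState rm1

[root@node13 ~]# yarn rmadmin -getServiceState rm2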


To test failover, use jps on node14 to find the ResourceManager's process ID and kill it with kill -9 <pid>, as sketched below.
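
A sketch of that test; the PID shown (4023) is hypothetical and will differ on your machine:

[root@node14 ~]# jps | grep ResourceManager
4023 ResourceManager
[root@node14 ~]# kill -9 4023    # 4023 is the hypothetical PID found above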

Check again with jps and yarn rmadmin -getServiceState: the surviving ResourceManager should now be active.

Then restart the killed daemon on node14: yarn-daemon.sh start resourcemanager
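
The restarted daemon should rejoin as standby rather than reclaiming the active role; you can confirm with the same state query (rm2 is node14 in the configuration above):

[root@node14 ~]# yarn rmadmin -getServiceState rm2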


Finally, a few HDFS commands for sanity-checking the cluster (fetching and listing the word-count output, and uploading a test file):

hdfs dfs -get /data/wc/output/* ./

hdfs dfs -ls /data/wc/output

hdfs dfs -put ./test.txt /user/root
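
For reference, the /data/wc/output directory used above would typically have been produced by the word-count example bundled with Hadoop; a sketch of submitting it, assuming a hypothetical /data/wc/input directory already holds the input text and the examples jar sits in the standard location:

[root@node11 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /data/wc/input /data/wc/output

Submitting a job like this also exercises the new HA ResourceManagers end to end.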


Reposted from blog.csdn.net/wyqwilliam/article/details/85214901