Hadoop HA Setup and Verification

1. Back up the original fully-distributed cluster configuration

[hadoop@master etc]$ cp -r hadoop hadoop-full
[hadoop@master etc]$ ls
hadoop  hadoop-full

2. Set up bidirectional passwordless SSH between the first and second machines (already done during the base Hadoop setup; a minimal sketch is shown below)
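
A minimal sketch of that setup, assuming the hadoop user exists on both hosts (repeat from worker1 towards master for the reverse direction):

[hadoop@master ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[hadoop@master ~]$ ssh-copy-id hadoop@worker1
[hadoop@master ~]$ ssh worker1 hostname

The last command should print worker1 without a password prompt.
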
3. Edit hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Logical nameservice ID for HDFS; must match the value referenced in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- mycluster has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master:8020</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>worker1:8020</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>master:50070</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>worker1:50070</value>
  </property>

  <!-- Where the NameNodes read and write the shared edit log on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;worker1:8485;worker2:8485/mycluster</value>
  </property>
  <!-- Local directory where each JournalNode stores its data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/data/hadoopha/dfsdata/journalnode</value>
  </property>
  <!-- Proxy provider that HDFS clients use to find the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method(s); to list several, put one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- sshfence needs passwordless SSH; point it at the private key -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- Enable automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Timeout for the sshfence SSH connection -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoopha/dfsdata/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoopha/dfsdata/data</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
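
To sanity-check that the nameservice wiring is picked up, hdfs getconf (part of the standard HDFS CLI) should echo back mycluster and the two NameNode hosts:

[hadoop@master hadoop]$ hdfs getconf -confKey dfs.nameservices
[hadoop@master hadoop]$ hdfs getconf -namenodes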

4. Edit core-site.xml

<configuration>
  <!-- Point the default filesystem at the mycluster nameservice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoopha/dfsdata/tmp</value>
  </property>
  <!-- ZooKeeper quorum used for automatic failover -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,worker1:2181,worker2:2181</value>
  </property>
  <!-- Proxy-user settings so Hive beeline connections work -->
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
</configuration>
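
The same kind of check works here; these should print hdfs://mycluster and the three-node ZooKeeper quorum:

[hadoop@master hadoop]$ hdfs getconf -confKey fs.defaultFS
[hadoop@master hadoop]$ hdfs getconf -confKey ha.zookeeper.quorum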

5. Distribute the two files to the other machines

scp core-site.xml hdfs-site.xml worker1:`pwd`
scp core-site.xml hdfs-site.xml worker2:`pwd`
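
Equivalently, as a small loop over the worker hostnames:

for h in worker1 worker2; do scp core-site.xml hdfs-site.xml "$h:$(pwd)"; done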

6. Install and configure ZooKeeper (already done; covered elsewhere on this blog)
7. Start ZooKeeper (all three machines)

[hadoop@master hadoop]$ zkServer.sh start
[hadoop@worker1 hadoop]$ zkServer.sh start
[hadoop@worker2 hadoop]$ zkServer.sh start
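
Before moving on, confirm the ensemble actually formed: zkServer.sh status on each node should report Mode: leader on exactly one machine and Mode: follower on the other two.

[hadoop@master hadoop]$ zkServer.sh status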

8. Start the JournalNodes

[hadoop@master hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@worker1 hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@worker2 hadoop]$ hadoop-daemon.sh start journalnode
[hadoop@master hadoop]$ jps
5393 JournalNode
5093 QuorumPeerMain
5512 Jps
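
Optionally confirm each JournalNode is listening on its default ports (8485 for RPC, 8480 for HTTP), for example:

[hadoop@master hadoop]$ ss -tln | grep -E '8480|8485'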

9. Format the NameNode, then start it
On the first machine:

[hadoop@master hadoop]$ hdfs namenode -format
18/07/16 11:21:56 INFO common.Storage: Storage directory /data/hadoopha/dfsdata/name has been successfully formatted.
18/07/16 11:21:57 INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoopha/dfsdata/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/07/16 11:21:57 INFO namenode.FSImageFormatProtobuf: Image file /data/hadoopha/dfsdata/name/current/fsimage.ckpt_0000000000000000000 of size 352 bytes saved in 0 seconds.
18/07/16 11:21:57 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/07/16 11:21:57 INFO util.ExitUtil: Exiting with status 0
18/07/16 11:21:58 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.163.145
************************************************************/

Start the NameNode:

[hadoop@master hadoop]$ hadoop-daemon.sh start namenode
starting namenode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
[hadoop@master hadoop]$ jps
5393 JournalNode
5587 Jps
5093 QuorumPeerMain
5535 NameNode

10. Bootstrap the other NameNode as standby:

[hadoop@worker1 hadoop]$ hdfs namenode -bootstrapStandby
18/07/16 11:38:23 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/07/16 11:38:23 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
18/07/16 11:38:23 WARN common.Util: Path /data/hadoopha/dfsdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/07/16 11:38:23 WARN common.Util: Path /data/hadoopha/dfsdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: mycluster
        Other Namenode ID: nn1
  Other NN's HTTP address: http://master:50070
  Other NN's IPC  address: master/192.168.163.145:8020
             Namespace ID: 229541486
            Block pool ID: BP-671668853-192.168.163.145-1531711316682
               Cluster ID: CID-801461e5-d7bd-457d-afcf-81da5cca168a
           Layout version: -63
       isUpgradeFinalized: true
=====================================================
18/07/16 11:38:25 INFO common.Storage: Storage directory /data/hadoopha/dfsdata/name has been successfully formatted.
18/07/16 11:38:25 WARN common.Util: Path /data/hadoopha/dfsdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/07/16 11:38:25 WARN common.Util: Path /data/hadoopha/dfsdata/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/07/16 11:38:27 INFO namenode.TransferFsImage: Opening connection to http://master:50070/imagetransfer?getimage=1&txid=0&storageInfo=-63:229541486:0:CID-801461e5-d7bd-457d-afcf-81da5cca168a
18/07/16 11:38:27 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
18/07/16 11:38:27 INFO namenode.TransferFsImage: Transfer took 0.00s at 0.00 KB/s
18/07/16 11:38:27 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 352 bytes.
18/07/16 11:38:27 INFO util.ExitUtil: Exiting with status 0
18/07/16 11:38:27 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at worker1/192.168.163.146
************************************************************/
11. Initialize the HA state in ZooKeeper from the standby NameNode

[hadoop@worker1 ~]$ hdfs zkfc -formatZK
18/07/16 13:44:37 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at worker1/192.168.163.146:8020
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:host.name=worker1
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.version=1.8.0_141
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.home=/app/java/jdk1.8.0_141/jre
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/app/hadoop/hadoop-2.7.3/etc/hadoop:... (full classpath trimmed)
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/app/hadoop/hadoop-2.7.3/lib/native
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.el6.x86_64
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=master:2181,worker1:2181,worker2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@223d2c72
18/07/16 13:44:37 INFO zookeeper.ClientCnxn: Opening socket connection to server worker2/192.168.163.147:2181. Will not attempt to authenticate using SASL (unknown error)
18/07/16 13:44:37 INFO zookeeper.ClientCnxn: Socket connection established to worker2/192.168.163.147:2181, initiating session
18/07/16 13:44:37 INFO zookeeper.ClientCnxn: Session establishment complete on server worker2/192.168.163.147:2181, sessionid = 0x364a15596210001, negotiated timeout = 5000
18/07/16 13:44:37 INFO ha.ActiveStandbyElector: Session connected.
18/07/16 13:44:37 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
18/07/16 13:44:37 INFO zookeeper.ZooKeeper: Session: 0x364a15596210001 closed
18/07/16 13:44:37 INFO zookeeper.ClientCnxn: EventThread shut down

12. Check in the ZooKeeper client

[hadoop@worker2 ~]$ zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /
[controller_epoch, brokers, zookeeper, hadoop-ha, admin, isr_change_notification, consumers, config, hbase]
[zk: localhost:2181(CONNECTED) 2] ls /hadoop-ha
[mycluster]
[zk: localhost:2181(CONNECTED) 3] ls /hadoop-ha/mycluster
[]

The /hadoop-ha znode was created by the formatZK step above; it is still empty because no NameNode has registered yet.
13. Run start-dfs.sh and check jps

[hadoop@master hadoop]$ start-dfs.sh 
Starting namenodes on [master worker1]
master: namenode running as process 5535. Stop it first.
worker1: starting namenode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-worker1.out
worker2: starting datanode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-worker2.out
worker1: starting datanode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-worker1.out
Starting journal nodes [master worker1 worker2]
master: journalnode running as process 5393. Stop it first.
worker2: journalnode running as process 4978. Stop it first.
worker1: journalnode running as process 5024. Stop it first.
Starting ZK Failover Controllers on NN hosts [master worker1]
master: starting zkfc, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-zkfc-master.out
worker1: starting zkfc, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-zkfc-worker1.out
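
The HA state can also be queried from the command line; once automatic failover settles, one NameNode should report active and the other standby:

[hadoop@master hadoop]$ hdfs haadmin -getServiceState nn1
active
[hadoop@master hadoop]$ hdfs haadmin -getServiceState nn2
standby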

Processes on the three machines:

[hadoop@master hadoop]$ jps
5393 JournalNode
5093 QuorumPeerMain
7288 DFSZKFailoverController
7354 Jps
5535 NameNode
[hadoop@worker1 ~]$ jps
5024 JournalNode
5552 DFSZKFailoverController
4824 QuorumPeerMain
5659 Jps
5436 DataNode
5373 NameNode
[hadoop@worker2 ~]$ jps
4978 JournalNode
5331 DataNode
5445 Jps
4783 QuorumPeerMain

Check again from zkCli on worker2:

[hadoop@worker2 ~]$ zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha/mycluster
[ActiveBreadCrumb, ActiveStandbyElectorLock]
[zk: localhost:2181(CONNECTED) 1] get /hadoop-ha/mycluster/ActiveBreadCrumb

    myclusternn1master �>(�>
cZxid = 0x3a0000000a
ctime = Mon Jul 16 13:55:25 CST 2018
mZxid = 0x3a0000000a
mtime = Mon Jul 16 13:55:25 CST 2018
pZxid = 0x3a0000000a
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 30
numChildren = 0

14. Open the HDFS web UI in a browser. The page below is the active NameNode's overview; the standby NameNode's UI is at http://worker1:50070.

http://master:50070/dfshealth.html#tab-overview

15. Test basic cluster functionality
Create a home directory:

[hadoop@master hadoop-2.7.3]$ hdfs dfs -mkdir -p /user/hadoop

Generate a putTest.txt file containing the numbers 1 to 100; it will then be uploaded with a 1 MB block size.

[hadoop@master hadoop-2.7.3]$ for i in {1..100};do echo "number :"$i >> putTest.txt;done

Upload it, overriding the block size to 1 MB:

[hadoop@master hadoop-2.7.3]$ hdfs dfs -D dfs.blocksize=1048576 -put ./putTest.txt 

List it:

[hadoop@master hadoop-2.7.3]$ hdfs dfs -ls -R /
drwxr-xr-x   - hadoop supergroup          0 2018-07-16 14:06 /user
drwxr-xr-x   - hadoop supergroup          0 2018-07-16 14:16 /user/hadoop
-rw-r--r--   2 hadoop supergroup       2585 2018-07-16 14:16 /user/hadoop/putTest.txt
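
Note that putTest.txt is only 2585 bytes, far below the 1 MB block size, so it still occupies a single block. The per-file block size and replication can be confirmed with hdfs dfs -stat (%o prints the block size, %r the replication factor):

[hadoop@master hadoop-2.7.3]$ hdfs dfs -stat "%o %r %n" /user/hadoop/putTest.txt
1048576 2 putTest.txt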

Check the file size in the web UI as well.
16. Verify HA failover
Kill the NameNode process on the first machine, then check the second machine's web UI: its NameNode becomes active. (The first machine's UI goes down until its NameNode is restarted, after which it comes back as standby.)

[hadoop@master hadoop-2.7.3]$ jps
5393 JournalNode
5093 QuorumPeerMain
7288 DFSZKFailoverController
7854 Jps
5535 NameNode
[hadoop@master hadoop-2.7.3]$ kill -9 5535
[hadoop@master hadoop-2.7.3]$ jps
5393 JournalNode
5093 QuorumPeerMain
7288 DFSZKFailoverController
7870 Jps

Immediately after the kill, the surviving NameNode's UI may still show standby; refresh once more and it reports active.

Restart the NameNode on the first machine. It comes back as standby rather than reclaiming the active role, since an extra failover would be needlessly costly.

[hadoop@master hadoop-2.7.3]$ hadoop-daemon.sh start namenode
starting namenode, logging to /app/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
[hadoop@master hadoop-2.7.3]$ jps
7920 NameNode
5393 JournalNode
5093 QuorumPeerMain
7288 DFSZKFailoverController
7999 Jps
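
nn1 is now back as standby. If you did want it active again, the HDFS HA documentation allows a coordinated manual failover even while automatic failover is enabled (not used here; the next step forces the switch by killing a ZKFC instead):

[hadoop@master hadoop-2.7.3]$ hdfs haadmin -failover nn2 nn1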

Kill the ZKFC on the second machine and check the web UIs: the first machine's NameNode becomes active and the second drops to standby.

[hadoop@worker1 ~]$ jps
5024 JournalNode
6064 Jps
5552 DFSZKFailoverController
4824 QuorumPeerMain
5436 DataNode
5373 NameNode
[hadoop@worker1 ~]$ kill -9 5552
[hadoop@worker1 ~]$ jps
5024 JournalNode
4824 QuorumPeerMain
6074 Jps
5436 DataNode
5373 NameNode

The second machine's web UI now shows standby, and the first machine's shows active.
Finally, for completeness, bring the second machine's ZKFC back up (hadoop-daemon.sh start zkfc):

[hadoop@worker1 ~]$ jps
5024 JournalNode
4824 QuorumPeerMain
6201 Jps
6153 DFSZKFailoverController
5436 DataNode
5373 NameNode

That completes the Hadoop 2.x HA setup and verification.


Reposted from blog.csdn.net/yangang1223/article/details/81065154