Installing a Hadoop 3.0.3 + JDK 1.8 High-Availability Cluster on CentOS 7

Plan three servers, configured as follows:
master 192.168.145.200
slave1 192.168.145.201
slave2 192.168.145.202

Create the user hadoop3 and add it to the hadoop group.
Installation directory:
/home/hadoop3
Create the directory /home/hadoop3/app.
Install the following software there:
/home/hadoop3/app/jdk
/home/hadoop3/app/hadoop
/home/hadoop3/data
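The user, group and directories above can be created as follows (a minimal sketch, run as root on each of the three servers; adjust the password step to your own policy):

groupadd hadoop                       # create the hadoop group if it does not exist yet
useradd -g hadoop hadoop3             # create user hadoop3 with hadoop as its primary group
passwd hadoop3                        # set a password for hadoop3
mkdir -p /home/hadoop3/app /home/hadoop3/data
chown -R hadoop3:hadoop /home/hadoop3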

Configure passwordless SSH
First create a .ssh directory under /home/hadoop3/ on all three servers.
On the master host:
su hadoop3
cd /home/hadoop3
mkdir .ssh
ssh-keygen -t rsa
cd .ssh
ls
cat id_rsa.pub >>authorized_keys
cd ..
chmod 700 .ssh
chmod 600 .ssh/*
ssh master //the first time you need to type yes
ssh master //the second time it logs in without a prompt

On slave1 and slave2:
su hadoop3
cd /home/hadoop3
mkdir .ssh
ssh-keygen -t rsa
cd .ssh
ls
cd ..
chmod 700 .ssh
chmod 600 .ssh/*

Copy the public key (id_rsa.pub) of each of the other two nodes into the authorized_keys file on the master node.
For example, rename /home/hadoop3/.ssh/id_rsa.pub on slave1 and slave2 to id_rsa201.pub and id_rsa202.pub respectively and copy them into /home/hadoop3/.ssh/ on the master host, as in the sketch below.
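A possible way to do this (a sketch; scp will still prompt for a password because passwordless login is not yet set up):

# on slave1
scp /home/hadoop3/.ssh/id_rsa.pub hadoop3@master:/home/hadoop3/.ssh/id_rsa201.pub
# on slave2
scp /home/hadoop3/.ssh/id_rsa.pub hadoop3@master:/home/hadoop3/.ssh/id_rsa202.pub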

[hadoop3@master .ssh]$
cat id_rsa201.pub >> authorized_keys
cat id_rsa202.pub >> authorized_keys
After merging, authorized_keys contains:
ssh-rsa ……………………hadoop3@master
ssh-rsa ……………………hadoop3@slave1
ssh-rsa ……………………hadoop3@slave2

Then distribute the authorized_keys file on master to the other two nodes:

scp -r authorized_keys hadoop3@slave1:~/.ssh/
scp -r authorized_keys hadoop3@slave2:~/.ssh/
Test passwordless login with the following commands:
ssh master
ssh slave1
ssh slave2

The /etc/hosts configuration on all three hosts:
127.0.0.1 core localhost.localdomain localhost
192.168.145.200 master
192.168.145.201 slave1
192.168.145.202 slave2

Using the helper scripts
On the master node, as user hadoop3, create the /home/hadoop3/tools directory.

[hadoop3@master ~]$ mkdir /home/hadoop3/tools
cd /home/hadoop3/tools
    Upload the local script files to the /home/hadoop3/tools directory. You can write these scripts yourself, or simply use the ones shown below.
[hadoop3@master tools]$ rz deploy.conf
[hadoop3@master tools]$ rz deploy.sh
[hadoop3@master tools]$ rz runRemoteCmd.sh
[hadoop3@master tools]$ ls
deploy.conf  deploy.sh  runRemoteCmd.sh

Contents of the deploy.conf configuration file (each line is host,tag1,tag2,…; the scripts use the tags to select target machines):

[hadoop3@master tools]$ cat deploy.conf
master,all,namenode,zookeeper,resourcemanager,
slave1,all,slave,namenode,zookeeper,resourcemanager,
slave2,all,slave,datanode,zookeeper,

Contents of deploy.sh, the remote file-copy script:

[hadoop3@master tools]$ cat deploy.sh
#!/bin/bash
#set -x
if [ $# -lt 3 ]
then 
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag"
  echo "Usage: ./deply.sh srcFile(or Dir) descFile(or Dir) MachineTag confFile"
  exit 
fi
src=$1
dest=$2
tag=$3
if [ 'a'$4'a' == 'aa' ]
then
  confFile=/home/hadoop3/tools/deploy.conf
else 
  confFile=$4
fi
if [ -f $confFile ]
then
  if [ -f $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp $src $server":"${dest}
    done 
  elif [ -d $src ]
  then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       scp -r $src $server":"${dest}
    done 
  else
      echo "Error: No source file exist"
  fi
else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi

Contents of runRemoteCmd.sh, the remote command-execution script:
[hadoop3@master tools]$ cat runRemoteCmd.sh

#!/bin/bash
#set -x
if [ $# -lt 2 ]
then 
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit 
fi
cmd=$1
tag=$2
if [ 'a'$3'a' == 'aa' ]
then

  confFile=/home/hadoop3/tools/deploy.conf
else 
  confFile=$3
fi
if [ -f $confFile ]
then
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'` 
    do
       echo "*******************$server***************************"
       ssh $server "source /etc/profile; $cmd"
    done 
else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
   These three scripts make it easier to set up the Hadoop 3 distributed cluster; their usage is shown in the steps that follow.
   To use the scripts directly, they also need execute permission.
[hadoop3@master tools]$ chmod u+x deploy.sh
[hadoop3@master tools]$ chmod u+x runRemoteCmd.sh
   We also need to add the /home/hadoop3/tools directory to PATH.
[hadoop3@master tools]$ vi ~/.bashrc
PATH=/home/hadoop3/tools:$PATH
export PATH
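After reloading the environment (source ~/.bashrc), a quick sanity check of the scripts can look like this (a simple sketch; it just runs hostname on every machine tagged all in deploy.conf):

source ~/.bashrc
runRemoteCmd.sh "hostname" all    # should print master, slave1 and slave2 in turn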

If running runRemoteCmd.sh fails with a "bad interpreter" error (caused by Windows line endings), convert the file to Unix format:

vi runRemoteCmd.sh
:set ff=unix
:wq

Time synchronization between Hadoop nodes on CentOS 7

We use NTP (Network Time Protocol): pick one machine as the cluster's time server, then configure that server and the other machines as clients. Here the master machine's clock is the reference and the other machines synchronize to it.

(1) NTP server

Install the ntp service (on the master server):

sudo su -

yum install ntp -y

Configure /etc/ntp.conf, using the local machine as the time source.

Comment out the server list:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

Add the following:

server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 8
logfile /var/log/ntp.log

Start the ntpd service:

systemctl start ntpd

Check the ntpd service status:

systemctl status ntpd

Enable it at boot:

systemctl enable ntpd
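To verify that the server is serving time, an optional check (the local clock source 127.127.1.0 shows up in the peer list once ntpd has been running for a short while):

ntpq -p      # list the peers ntpd is using; expect the 127.127.1.0 local clock entry
ntpstat      # short synchronisation summary (if the ntpstat utility is installed)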

(2) NTP clients (on slave1 and slave2)

Install ntpdate:

yum install ntpdate

Configure a crontab job to sync periodically:

ssh slave1
sudo su -
yum install ntpdate
vi /etc/crontab

# crontab -e
*/10 * * * * /usr/sbin/ntpdate 192.168.145.200;hwclock -w

ssh slave2
sudo su -
yum install ntpdate
vi /etc/crontab

# crontab -e
*/10 * * * * /usr/sbin/ntpdate 192.168.145.200;hwclock -w
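Before relying on the cron job, an optional one-off sync on each client confirms that the master's NTP server is reachable (assuming ntpd is already running on master):

/usr/sbin/ntpdate 192.168.145.200    # should print the offset and adjust the clock
hwclock -w                           # write the system time to the hardware clock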

Then reboot the Hadoop cluster servers (reboot).

Installing ZooKeeper

Install into the /home/hadoop3/app/zookeeper directory:
zookeeper-3.4.12.tar.gz
tar -zxvf zookeeper-3.4.12.tar.gz
mv zookeeper-3.4.12 zookeeper
Configure /home/hadoop3/app/zookeeper/conf/zoo.cfg:
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
dataDir=/home/hadoop3/data/zookeeper/zkdata
dataLogDir=/home/hadoop3/data/zookeeper/zkdatalog

Configuration on the master host:

server.1=0.0.0.0:2888:3888
server.2=192.168.145.201:2888:3888
server.3=192.168.145.202:2888:3888

Configuration on the slave1 host:

server.1=192.168.145.200:2888:3888
server.2=0.0.0.0:2888:3888
server.3=192.168.145.202:2888:3888

Configuration on the slave2 host:

server.1=192.168.145.200:2888:3888
server.2=192.168.145.201:2888:3888
server.3=0.0.0.0:2888:3888

1, 2 and 3 are the server IDs; 2888 is the ZooKeeper peer-communication port; 3888 is the leader-election port.
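Putting the pieces together, zoo.cfg on the master host might look like the sketch below; tickTime, initLimit, syncLimit and clientPort are the default values carried over from zoo_sample.cfg (clientPort 2181 matches the ha.zookeeper.quorum setting used later), and only the server.1/2/3 lines differ per host as shown above:

# /home/hadoop3/app/zookeeper/conf/zoo.cfg (master)
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/home/hadoop3/data/zookeeper/zkdata
dataLogDir=/home/hadoop3/data/zookeeper/zkdatalog
server.1=0.0.0.0:2888:3888
server.2=192.168.145.201:2888:3888
server.3=192.168.145.202:2888:3888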

Copy ZooKeeper to the other nodes
Use the deploy.sh script to copy the ZooKeeper installation directory to the other nodes:

[hadoop3@master app]$ deploy.sh zookeeper /home/hadoop3/app/ slave

Create the data and log directories on all nodes:

[hadoop3@master app]$ runRemoteCmd.sh "mkdir -p /home/hadoop3/data/zookeeper/zkdata" all  
[hadoop3@master app]$ runRemoteCmd.sh "mkdir -p /home/hadoop3/data/zookeeper/zkdatalog" all 

Create the myid file

vi /home/hadoop3/data/zookeeper/zkdata/myid
On each node, create a file named myid in the directory specified by dataDir; its content is the number after "server." for that node (1 on master, 2 on slave1, 3 on slave2).
[hadoop3@master zkdata]$ vi myid

[hadoop3@slave1 zkdata]$ vi myid

[hadoop3@slave2 zkdata]$ vi myid
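Equivalently, the myid files can be written non-interactively; run the matching command on the corresponding node:

echo 1 > /home/hadoop3/data/zookeeper/zkdata/myid    # on master
echo 2 > /home/hadoop3/data/zookeeper/zkdata/myid    # on slave1
echo 3 > /home/hadoop3/data/zookeeper/zkdata/myid    # on slave2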

Test run

runRemoteCmd.sh "/home/hadoop3/app/zookeeper/bin/zkServer.sh start" zookeeper
runRemoteCmd.sh "jps" all
runRemoteCmd.sh "/home/hadoop3/app/zookeeper/bin/zkServer.sh status" all
If one node is the leader and the other two are followers, ZooKeeper has been installed successfully.

The ~/.bashrc file on all three hosts (edit with vi ~/.bashrc):

# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
# The lines below are newly added
# add to path for /home/hadoop3/app/jdk   and   /home/hadoop3/tools
JAVA_HOME=/home/hadoop3/app/jdk
HADOOP_HOME=/home/hadoop3/app/hadoop
ZOOKEEPER_HOME=/home/hadoop3/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:/home/hadoop3/tools:$PATH
export JAVA_HOME CLASSPATH  HADOOP_HOME ZOOKEEPER_HOME PATH

Edit the workers file (the list of worker nodes)

vi /home/hadoop3/app/hadoop/etc/hadoop/workers
master
slave1
slave2

Configure hadoop-env.sh:

export JAVA_HOME=/home/hadoop3/app/jdk
export HADOOP_HOME=/home/hadoop3/app/hadoop

Configure mapred-env.sh and yarn-env.sh:

export JAVA_HOME=/home/hadoop3/app/jdk
export HADOOP_HOME=/home/hadoop3/app/hadoop

Edit core-site.xml

vi /home/hadoop3/app/hadoop/etc/hadoop/core-site.xml

<configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hadoop3/data/tmp</value>
    </property>
    <property>
       <name>ha.zookeeper.quorum</name>
       <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
</configuration>

Edit hdfs-site.xml

vi /home/hadoop3/app/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.permissions.enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2,nn3</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>master:9820</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>slave1:9820</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn3</name>
      <value>slave2:9820</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>master:9870</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>slave1:9870</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn3</name>
      <value>slave2:9870</value>
    </property>
    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://master:8485;slave1:8485;slave2:8485/mycluster</value>
    </property>
    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/home/hadoop3/data/journaldata/jn</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>
           sshfence
           shell(/bin/true)
      </value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hadoop3/.ssh/id_rsa</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.connect-timeout</name>
      <value>10000</value>
    </property>
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>100</value>
    </property>
    <property>
       <name>dfs.ha.automatic-failover.enabled</name>
       <value>true</value>
     </property>
</configuration>

Edit mapred-site.xml

vi /home/hadoop3/app/hadoop/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /home/hadoop3/app/hadoop/etc/hadoop,
            /home/hadoop3/app/hadoop/share/hadoop/common/*,
            /home/hadoop3/app/hadoop/share/hadoop/common/lib/*,
            /home/hadoop3/app/hadoop/share/hadoop/hdfs/*,
            /home/hadoop3/app/hadoop/share/hadoop/hdfs/lib/*,
            /home/hadoop3/app/hadoop/share/hadoop/mapreduce/*,
            /home/hadoop3/app/hadoop/share/hadoop/mapreduce/lib/*,
            /home/hadoop3/app/hadoop/share/hadoop/yarn/*,
            /home/hadoop3/app/hadoop/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>

Edit yarn-site.xml

vi /home/hadoop3/app/hadoop/etc/hadoop/yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-rm-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>slave1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
       <description>The class to use as the persistent store.</description>
       <name>yarn.resourcemanager.store.class</name>
       <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
     </property>
    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>master:2181,slave1:2181,slave2:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>master:8034</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>slave1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>slave1:8034</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>slave1:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
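If the configured Hadoop directory has not yet been copied to the other nodes, it can be distributed with the same deploy.sh script used for ZooKeeper (a sketch; run from /home/hadoop3/app on master before formatting HDFS):

cd /home/hadoop3/app
deploy.sh hadoop /home/hadoop3/app/ slave    # copy the whole hadoop directory to slave1 and slave2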

Format HDFS

First start ZooKeeper:

runRemoteCmd.sh "/home/hadoop3/app/zookeeper/bin/zkServer.sh start" all

Then start the JournalNodes:

runRemoteCmd.sh "/home/hadoop3/app/hadoop/sbin/hadoop-daemon.sh start journalnode" all
(to stop them later: runRemoteCmd.sh "/home/hadoop3/app/hadoop/sbin/hadoop-daemon.sh stop journalnode" all)

Run the format on the master node:

bin/hdfs namenode -format    //format the NameNode
bin/hdfs zkfc -formatZK      //format the HA state in ZooKeeper
sbin/start-dfs.sh

Sync the metadata from the master node to the standby NameNodes by running the following on slave1 and slave2 respectively:

bin/hdfs namenode -bootstrapStandby    //on slave1
bin/hdfs namenode -bootstrapStandby    //on slave2

Test HDFS

Check the HDFS web UI; master, slave1 and slave2 are all configured as NameNodes:
http://192.168.145.200:9870
http://192.168.145.201:9870
http://192.168.145.202:9870
Stop the active NameNode and check that another node automatically takes over, e.g. as sketched below.
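One way to check the failover (a sketch; nn1, nn2 and nn3 are the NameNode IDs defined in hdfs-site.xml, and the commands are run from /home/hadoop3/app/hadoop):

bin/hdfs haadmin -getServiceState nn1    # prints active or standby
bin/hdfs haadmin -getServiceState nn2
bin/hdfs haadmin -getServiceState nn3
# on the host whose NameNode is currently active:
bin/hdfs --daemon stop namenode
# re-run the getServiceState commands; one of the remaining NameNodes should now report active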

Test uploading a file to HDFS:

vi djt.txt
hadoop
hadoop
hadoop
dajiangtai
dajiangtai
dajiangtai
hsg
qq.com

bin/hdfs dfs -mkdir /dajiangtai
bin/hdfs dfs -put djt.txt /dajiangtai
bin/hdfs dfs -cat /dajiangtai/djt.txt

Distribute the modified YARN configuration files with the deploy script:

deploy.sh mapred-site.xml /home/hadoop3/app/hadoop/etc/hadoop/ slave
deploy.sh yarn-site.xml /home/hadoop3/app/hadoop/etc/hadoop/ slave

Start YARN

Start the ResourceManager on the master node:

bin/yarn --daemon start resourcemanager
Start the ResourceManager on the slave1 node:
bin/yarn --daemon start resourcemanager
Start the NodeManager on all three nodes:
runRemoteCmd.sh "yarn --daemon start nodemanager" all

View YARN in the browser:

http://192.168.145.200:8088

Check the ResourceManager state (only rm1 and rm2 are defined in yarn-site.xml):

bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2

Stop the active ResourceManager and check whether the other node becomes active, e.g. as sketched below.
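A possible check (a sketch; run the stop command on whichever node currently reports active):

bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2
# on the node whose ResourceManager is active:
bin/yarn --daemon stop resourcemanager
# within a short time the other ResourceManager should report active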

Run the WordCount example

cd /home/hadoop3/app/hadoop
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar wordcount /dajiangtai/djt.txt /dajiangtai/output

Check the job in the YARN web UI.

View the results:

bin/hdfs dfs -cat /dajiangtai/output/*
dajiangtai 3
hadoop 3
hsg 1
qq.com 1
[hadoop3@master ~]$

   If all of the above steps succeed, the Hadoop 3.0 distributed high-availability cluster has been set up successfully.

Reference 1: http://www.dajiangtai.com/community/18389.do
Reference 2: https://blog.csdn.net/hliq5399/article/details/78193113

Reposted from blog.csdn.net/hsg77/article/details/80945493