Hadoop cluster configuration

  1. Create directories
  • Under the installation directory, create the folders used for data storage: /home/hadoop/hadoop-2.7.1/tmp, dfs, dfs/data and dfs/name (these must match the hadoop.tmp.dir, dfs.namenode.name.dir and dfs.datanode.data.dir paths configured below), as sketched just below.
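    A minimal sketch of this step (directory names chosen to match the dfs.* paths used in hdfs-site.xml below):
    mkdir -p /home/hadoop/hadoop-2.7.1/tmp
    mkdir -p /home/hadoop/hadoop-2.7.1/dfs/name
    mkdir -p /home/hadoop/hadoop-2.7.1/dfs/data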
  • Edit the configuration files under /home/hadoop/hadoop-2.7.1/etc/hadoop (in each file, the <property> entries shown below go inside the <configuration> element)
    • core-site.xml
      <property>
          <name>fs.defaultFS</name>
          <value>hdfs://centos01:9000</value>
      </property>
      <property>
          <name>hadoop.tmp.dir</name>
          <value>file:/home/hadoop/hadoop-2.7.1/tmp</value>
      </property>
      <property>
          <name>io.file.buffer.size</name>
          <value>131702</value>
      </property>
       
    • hdfs-site.xml
      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/home/hadoop/hadoop-2.7.1/dfs/name</value>
      </property>
      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/home/hadoop/hadoop-2.7.1/dfs/data</value>
      </property>
      <property>
          <name>dfs.replication</name>
          <value>2</value>
      </property>
      <property>
          <name>dfs.namenode.secondary.http-address</name>
          <value>centos01:9001</value>
      </property>
      <property>
          <name>dfs.webhdfs.enabled</name>
          <value>true</value>
      </property>
       
    • mapred-site.xml (if this file does not exist, copy it from mapred-site.xml.template in the same directory)
      <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.address</name>
          <value>centos01:10020</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>centos01:19888</value>
      </property>
       
    • yarn-site.xml
      <property>
          <name>yarn.nodemanager.aux-services</name>
          <value>mapreduce_shuffle</value>
      </property>
      <property>
          <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
          <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <property>
          <name>yarn.resourcemanager.address</name>
          <value>centos01:8032</value>
      </property>
      <property>
          <name>yarn.resourcemanager.scheduler.address</name>
          <value>centos01:8030</value>
      </property>
      <property>
          <name>yarn.resourcemanager.resource-tracker.address</name>
          <value>centos01:8031</value>
      </property>
      <property>
          <name>yarn.resourcemanager.admin.address</name>
          <value>centos01:8033</value>
      </property>
      <property>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>centos01:8088</value>
      </property>
      <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>768</value>
      </property>
       
    • hadoop-env.sh and yarn-env.sh: add the JAVA_HOME environment variable
      export JAVA_HOME=/usr/java/jdk1.7.0_80
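      One way to append it to both files (a sketch, assuming the JDK path above; adjust it to your own install):
      echo 'export JAVA_HOME=/usr/java/jdk1.7.0_80' >> /home/hadoop/hadoop-2.7.1/etc/hadoop/hadoop-env.sh
      echo 'export JAVA_HOME=/usr/java/jdk1.7.0_80' >> /home/hadoop/hadoop-2.7.1/etc/hadoop/yarn-env.sh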
       
    • slaves (list the slave hosts, one per line)
      #localhost
      centos02
       
  • Copy the configured Hadoop directory to the slave node (if the target directory does not exist, create it first; if the JDK is not installed, install it beforehand, keeping the same path as on the master)
    scp -r /home/hadoop/hadoop-2.7.1 root@centos02:/home/hadoop/
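    If the slave has not been prepared yet, a possible sketch (assuming root SSH access to centos02, as in the scp command above):
    ssh root@centos02 "mkdir -p /home/hadoop"
    ssh root@centos02 "java -version"   # confirm a JDK is installed on the slave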
     
  • Initialize Hadoop (format the NameNode; run from /home/hadoop/hadoop-2.7.1 on the master)
    bin/hdfs namenode -format
     
  • Start Hadoop
    sbin/start-all.sh 
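    To check that the daemons came up, jps can be run on each node (expected processes, given the configuration above):
    jps
    # centos01 (master): NameNode, SecondaryNameNode, ResourceManager
    # centos02 (slave):  DataNode, NodeManager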
     
  • Visit the web UI to check the result
    http://centos01:8088/
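    The HDFS NameNode web UI (port 50070 by default in Hadoop 2.x) can be checked the same way:
    http://centos01:50070/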
     
  • Success page (screenshot omitted)
     
  • Common problems
    • If the VMs are installed under Windows and scp cannot find the hosts, add the hostname entries to C:\Windows\System32\drivers\etc\hosts:
      192.168.222.130     centos01
      192.168.222.131     centos02
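      The Linux nodes also need to resolve these hostnames; one way (if they are not already configured) is to append the same entries to /etc/hosts on each node as root:
      echo "192.168.222.130 centos01" >> /etc/hosts
      echo "192.168.222.131 centos02" >> /etc/hosts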
       
    • If the web UI cannot be reached, stop the firewall and iptables services:
      systemctl stop firewalld.service
      systemctl stop iptables.service
       
    • Disable the firewall from starting automatically on boot:
      systemctl disable firewalld.service
       

Reposted from a66573334.iteye.com/blog/2264214