Setting Up a Hadoop Cluster with the 1.0-Style Architecture

1. Environment preparation

JDK
Linux
At least 3 machines
Time synchronization across nodes
Passwordless SSH login
Firewall disabled
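The preparation steps above can be sketched as a small shell routine. The hostnames node01–node03 come from this walkthrough; the NTP server and the function name `prep_node` are assumptions, so adapt them to your environment:

```shell
#!/bin/sh
# Per-node preparation, matching the checklist above.
NODES="node01 node02 node03"

prep_node() {
    # 1. Time sync (pick your own NTP server; pool.ntp.org is an assumption here)
    ntpdate pool.ntp.org

    # 2. Passwordless SSH: generate a key once, then push it to every node
    [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    for n in $NODES; do
        ssh-copy-id "root@$n"
    done

    # 3. Disable the firewall (CentOS 7 syntax; on CentOS 6 use `service iptables stop`)
    systemctl stop firewalld
    systemctl disable firewalld
}
```

Run `prep_node` on each of the three machines (or at least run the key distribution from the node you will start the cluster from).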

2. On node01, download and unpack Hadoop (hadoop-2.6.5 in this walkthrough)

3. Configure etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/latest
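A quick sanity check that the JAVA_HOME written into hadoop-env.sh actually points at a JDK (the path is the tutorial's; `check_java` is a hypothetical helper name):

```shell
#!/bin/sh
# Verify that the JAVA_HOME used in hadoop-env.sh resolves to a real JDK.
check_java() {
    JAVA_HOME=/usr/java/latest   # same value as in hadoop-env.sh
    if [ -x "$JAVA_HOME/bin/java" ]; then
        "$JAVA_HOME/bin/java" -version
    else
        echo "JAVA_HOME does not point at a JDK: $JAVA_HOME" >&2
        return 1
    fi
}
```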

4. Configure core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.6.5</value>
    </property>
</configuration>

5. Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node02:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>node02:50091</value>
    </property>
</configuration>
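Once the two XML files are in place (and hadoop-2.6.5's bin/ is on the PATH, as set up in a later step), the effective values can be read back with `hdfs getconf`. The helper name `verify_conf` is an assumption:

```shell
#!/bin/sh
# Read back the configured values to catch typos in the XML.
verify_conf() {
    hdfs getconf -confKey fs.defaultFS                         # expect hdfs://node01:9000
    hdfs getconf -confKey dfs.replication                      # expect 3
    hdfs getconf -confKey dfs.namenode.secondary.http-address  # expect node02:50090
}
```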

6. Create a masters file (lowercase) under /hadoop-2.6.5/etc/hadoop/ containing node02

7. In /hadoop-2.6.5/etc/hadoop/slaves, list the DataNode hosts:

node01
node02
node03

8. Copy the configured Hadoop directory to the other nodes via scp
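Step 8 can be done with a small loop. The install path /usr/soft/hadoop-2.6.5 comes from the environment-variable step below and is assumed to be identical on every node; `distribute_hadoop` is a hypothetical helper name:

```shell
#!/bin/sh
# Push the fully configured Hadoop tree from node01 to the other nodes.
distribute_hadoop() {
    HADOOP_DIR=/usr/soft/hadoop-2.6.5
    for n in node02 node03; do
        scp -r "$HADOOP_DIR" "root@$n:/usr/soft/"
    done
}
```

This relies on the passwordless SSH set up during environment preparation.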

9. Configure environment variables (vim ~/.bash_profile):

export HADOOP_HOME=/usr/soft/hadoop-2.6.5 
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

10. Send the updated ~/.bash_profile to /root/ on the other nodes, then reload it on every node:

source ~/.bash_profile
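Step 10 as a loop (the helper name `push_profile` is an assumption). Note that you cannot `source` a file on a remote machine from here in a way that persists; copying is enough, because every subsequent login shell on those nodes reads ~/.bash_profile automatically:

```shell
#!/bin/sh
# Copy the profile with the Hadoop variables to the other nodes.
push_profile() {
    for n in node02 node03; do
        scp ~/.bash_profile "root@$n:/root/"
    done
    # On the local node, load it into the current shell:
    # source ~/.bash_profile
}
```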

11. Return to the home directory and format the NameNode (NN) — run this once, on node01 only:

hdfs namenode -format
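Formatting wipes existing NameNode metadata, so it must not be re-run on a cluster that already holds data. A hedged guard (the `format_namenode` name is an assumption; the name directory defaults to ${hadoop.tmp.dir}/dfs/name, with hadoop.tmp.dir taken from core-site.xml above):

```shell
#!/bin/sh
# Format the NameNode only if it has never been formatted before.
format_namenode() {
    HADOOP_TMP=$(hdfs getconf -confKey hadoop.tmp.dir)
    if [ -d "$HADOOP_TMP/dfs/name/current" ]; then
        echo "NameNode already formatted, skipping"
        return 0
    fi
    hdfs namenode -format
}
```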

12. Start HDFS: start-dfs.sh
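After start-dfs.sh, each node should be running the daemons implied by the configuration above: node01 the NameNode (plus a DataNode, since it is also in slaves), node02 the SecondaryNameNode and a DataNode, node03 a DataNode. A quick check over SSH (`check_daemons` is a hypothetical helper; `jps` ships with the JDK):

```shell
#!/bin/sh
# List the Java daemons running on every node.
check_daemons() {
    for n in node01 node02 node03; do
        echo "== $n =="
        ssh "root@$n" jps
    done
}
```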

13. In a browser, open the NameNode web UI at node01:50070 (the SecondaryNameNode UI is at node02:50090, per hdfs-site.xml)
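The same check can be scripted with curl (the `check_webui` name is an assumption; 50070 is the default Hadoop 2.x NameNode HTTP port, and 50090 is the SecondaryNameNode port configured above):

```shell
#!/bin/sh
# Probe the HDFS web UIs instead of opening a browser.
check_webui() {
    curl -sf -o /dev/null http://node01:50070 && echo "NameNode UI up"
    curl -sf -o /dev/null http://node02:50090 && echo "SecondaryNameNode UI up"
}
```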

Reposted from blog.csdn.net/weixin_39206633/article/details/83542545