Hadoop Installation & Configuration

With the rapid growth of the Internet over the past few years, the volume of data being generated has become staggering, which has driven the development of big-data processing frameworks. The following are the steps for installing and configuring Hadoop on Linux.

Download from: http://www.apache.org/dyn/closer.cgi/hadoop/common/

Extract the archive:

tar -zxvf hadoop-2.5.2.tar.gz

Configure the environment variables: vi /etc/profile

export HADOOP_HOME=/path/to/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
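After saving /etc/profile, the variables must be loaded into the current shell before the `hadoop` command is available. A quick sanity check, using the example install path that appears in the config files later in this guide (adjust to your own):

```shell
# Same values as in /etc/profile above; on a real login shell,
# `source /etc/profile` achieves the same thing
export HADOOP_HOME=/opt/soft-228238/hadoop-2.5.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Verify that the expansion worked
echo "$HADOOP_CONF_DIR"   # /opt/soft-228238/hadoop-2.5.2/etc/hadoop
```

If `hadoop version` then prints the release banner, the PATH entries are correct.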

Change into the configuration directory:

cd hadoop-2.5.2/etc/hadoop

In both hadoop-env.sh and yarn-env.sh, set JAVA_HOME to your JDK install path:

export JAVA_HOME=/usr/java/jdk1.7.0_67

Edit: vi core-site.xml (note: fs.default.name below is the older, deprecated name for fs.defaultFS; it still works in Hadoop 2.x)

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/soft-228238/hadoop-2.5.2/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.68.84:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>192.168.68.84</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>

 

Edit: vi hdfs-site.xml (note: you must create the name and data directories yourself with mkdir; their locations are up to you. The value of dfs.replication should match the number of DataNode hosts actually in the cluster.)

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/soft-228238/hadoop-2.5.2/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/soft-228238/hadoop-2.5.2/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
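As the note above says, the name and data directories must exist before the NameNode is formatted. A minimal sketch, using this guide's example install path:

```shell
# Create the local directories referenced by dfs.namenode.name.dir and
# dfs.datanode.data.dir; -p also creates any missing parent directories
HADOOP_HOME=/opt/soft-228238/hadoop-2.5.2
mkdir -p "$HADOOP_HOME/hdfs/name" "$HADOOP_HOME/hdfs/data"

ls "$HADOOP_HOME/hdfs"
```

On a multi-node cluster, the data directory must exist on every DataNode host as well.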

Edit: vi mapred-site.xml (note: the 2.5.2 distribution ships this file only as mapred-site.xml.template; copy it to mapred-site.xml before editing)

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.68.84:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.68.84:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/usr/dpap/hadoop/tmp</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/usr/dpap/hadoop/done</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>192.168.68.84:9001</value>
  </property>
</configuration>

 

Edit: vi yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.68.84:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.68.84:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.68.84:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.68.84:18041</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.68.84:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/opt/soft-228238/hadoop-2.5.2/mynode/my</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/opt/soft-228238/hadoop-2.5.2/mynode/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.log.retain-seconds</name>
    <value>10800</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>-1</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>
</configuration>

Note: 192.168.68.84 is the IP address of the cluster's master host.

Start and test

Format the NameNode (run once, from the install root):

bin/hadoop namenode -format

Success is indicated by a log line such as "Storage directory ... has been successfully formatted."

Start HDFS:

cd /path/to/hadoop-2.5.2
sbin/start-dfs.sh

Start YARN:

sbin/start-yarn.sh

Check the status with jps; a successful startup lists NameNode, SecondaryNameNode, DataNode, ResourceManager and NodeManager.
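The startup steps above, end to end (a sketch that assumes HADOOP_HOME points at the install root as configured in /etc/profile, and that there is a running cluster to talk to):

```shell
cd "$HADOOP_HOME"

# One-time formatting; re-running this erases existing HDFS metadata
bin/hadoop namenode -format

# Bring up HDFS (NameNode, SecondaryNameNode, DataNode), then YARN
sbin/start-dfs.sh
sbin/start-yarn.sh

# A healthy node now shows NameNode, SecondaryNameNode, DataNode,
# ResourceManager and NodeManager in the jps listing
jps
```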


 

Cluster configuration:

Edit $HADOOP_HOME/etc/hadoop/slaves with the following content:

Supervisor-85   

Supervisor-41

 

(Note: Supervisor-85 and Supervisor-41 are the hostnames of the cluster machines; they can be mapped to IP addresses in the system's /etc/hosts file.)
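For example, entries like the following map the worker names to their addresses (the IPs here match the scp targets used below; run as root on every node):

```shell
# Append hostname-to-IP mappings for the workers to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.68.85   Supervisor-85
192.168.68.41   Supervisor-41
EOF

grep Supervisor /etc/hosts
```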

Copy the configuration files under etc/hadoop to the corresponding directory on each of the other machines:

scp -r /opt/soft-228238/hadoop-2.5.2/etc/hadoop [email protected]:/opt/soft-228238/hadoop-2.5.2/etc/

scp -r /opt/soft-228238/hadoop-2.5.2/etc/hadoop [email protected]:/opt/soft-228238/hadoop-2.5.2/etc/
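The per-host copies above can be generalized by reading the worker list straight from the slaves file; a sketch that assumes the same install path on every host and passwordless SSH for root:

```shell
HADOOP_HOME=/opt/soft-228238/hadoop-2.5.2   # example path from this guide

# Push the edited configuration directory to every worker in the slaves file
while read -r host; do
    [ -n "$host" ] && scp -r "$HADOOP_HOME/etc/hadoop" "root@$host:$HADOOP_HOME/etc/"
done < "$HADOOP_HOME/etc/hadoop/slaves"
```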

View in a browser:

Open http://192.168.68.84:50070/ to see the HDFS management page.

Open http://192.168.68.84:8088/ to see the YARN application management page.

 

Create an input directory on HDFS:

[root@supervisor-84 bin]# hadoop fs -mkdir -p input

 

Copy README.txt from the Hadoop install directory into the newly created input directory on HDFS:

[root@supervisor-84 hadoop-2.5.2]# hadoop fs -copyFromLocal README.txt input
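With README.txt in HDFS, the examples jar bundled with the distribution gives a quick end-to-end test of the whole stack (the jar path below is as shipped in the 2.5.2 release; this requires the running cluster set up above):

```shell
cd "$HADOOP_HOME"

# Run the bundled word-count example over the uploaded file,
# then print the result from HDFS
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar \
    wordcount input output
hadoop fs -cat 'output/part-r-*'
```

The output directory must not already exist; remove it with `hadoop fs -rm -r output` before re-running.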

 


Reposted from wangmengbk.iteye.com/blog/2215146