Hadoop installation and configuration

Hadoop can run in three modes:
Standalone mode
Pseudo-distributed mode
Cluster mode


==========================================================================
Standalone mode (development/testing mode; Hadoop runs as a single Java process)
==========================================================================
1. Download Hadoop from the official site:
http://hadoop.apache.org/common/
http://labs.mop.com/apache-mirror/hadoop/common/hadoop-1.0.4/hadoop-1.0.4.tar.gz

2. Install the JDK and configure the JDK environment variables:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_17
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile
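
A quick sanity check that the JDK is picked up (a minimal example, using the paths above):
echo $JAVA_HOME    # should print /usr/lib/jvm/jdk1.7.0_17
java -version      # should report a 1.7 JVM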

3. Extract Hadoop:
---------------------------------------
Hadoop 0.23.6

sudo tar -xzf hadoop-0.23.6.tar.gz
cd /opt/apps/
ln -s /opt/apps_install/hadoop-0.23.6 hadoop
---------------------------------------
Hadoop 1.0.4

tar -zxvf hadoop-1.0.4.tar.gz
cd /opt/apps/
ln -s /opt/apps_install/hadoop-1.0.4 hadoop
---------------------------------------
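
To confirm the symlink resolves to the unpacked release (a minimal check, assuming the paths above):
ls -l /opt/apps/hadoop                 # should point at the extracted Hadoop directory
/opt/apps/hadoop/bin/hadoop version    # prints the Hadoop version even before PATH is set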

4. Create a hadoop user and group
(1) Create a group named hadoop:
    sudo addgroup hadoop
    or, on Red Hat: groupadd hadoop
(2) Create a user named hadoop and add it to the hadoop group:
    sudo adduser --ingroup hadoop hadoop
    or, on Red Hat: useradd -g hadoop hadoop
(3) Open /etc/sudoers with gedit:
    sudo gedit /etc/sudoers
    Below the line "root   ALL=(ALL)  ALL", add:
    hadoop  ALL=(ALL)  ALL

    Switch to the hadoop user:
    su hadoop


5. Create an SSH key
(1) Install the SSH server:
    sudo apt-get install openssh-server
(2) Generate an RSA key pair with an empty passphrase:
    ssh-keygen -t rsa -P ""
(3) Add the public key to the authorized keys and reload the SSH service:
     cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys
     sudo /etc/init.d/ssh reload
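
To verify passwordless login now works (the first connection will only ask to confirm the host key):
ssh localhost    # should log in without asking for a password
exit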

    
6. Configure system environment variables
---------------------------------------
Hadoop 0.23.6

export HADOOP_HOME=/opt/apps/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
Apply the changes: source /etc/profile
---------------------------------------
Hadoop 1.0.4

export HADOOP_INSTALL=/opt/apps/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
Apply the changes: source /etc/profile
---------------------------------------

(Standalone mode can now be run; Hadoop runs as a single Java process.)
Test:
hadoop jar hadoop-mapreduce-examples-0.23.6.jar wordcount firstTest result
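
In standalone mode the input and output arguments are plain local directories. A minimal way to prepare input for the command above (the file and directory names here are just examples):
mkdir firstTest
echo "hello hadoop hello world" > firstTest/sample.txt
# run the wordcount command above, then inspect the output
cat result/part-r-00000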





==========================================================================
Pseudo-distributed mode (a cluster that uses only a single machine)
==========================================================================

7. Configure Hadoop
---------------------------------------
Hadoop 0.23.6

(1) Edit hadoop/etc/hadoop/yarn-env.sh
Add at the top:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_17
export HADOOP_PREFIX=/opt/apps/hadoop
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop

(2) Edit libexec/hadoop-config.sh
Add: export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_17
Then, in etc/hadoop, link hadoop-env.sh to yarn-env.sh and create the temp directory:
ln -s yarn-env.sh hadoop-env.sh
mkdir -p /opt/apps/hadoop_tmp/hadoop-root

(3) Edit hadoop/etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:54310/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/apps/hadoop/hadoop-root</value>
  </property>
  <property>
    <name>fs.arionfs.impl</name>
    <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
    <description>The FileSystem for arionfs.</description>
  </property>
</configuration>

(4) Edit hadoop/etc/hadoop/hdfs-site.xml:
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/apps/hadoop_space/dfs/name</value>
    <final>true</final>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/apps/hadoop_space/dfs/data</value>
    <final>true</final>
</property>
<property>  
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

(5) Edit hadoop/etc/hadoop/mapred-site.xml:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://localhost:9001</value>
<final>true</final>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
</property>
<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
</property>
<property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
</property>    
<property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
</property>
<property>
    <name>mapreduce.system.dir</name>
    <value>file:/opt/apps/hadoop_space/mapred/system</value>
</property>
<property>
    <name>mapreduce.local.dir</name>
    <value>file:/opt/apps/hadoop_space/mapred/local</value>
    <final>true</final>
</property>

(6) Edit hadoop/etc/hadoop/yarn-site.xml:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>user.name</name>
    <value>hadoop</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:54311</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:54312</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:54313</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:54314</value>
</property>
<property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:54315</value>
</property>
<property>
    <name>mapred.job.tracker</name>
    <value>localhost</value>
</property>
---------------------------------------
Hadoop 1.0.4

mkdir -p /opt/apps/hadoop_tmp/hadoop-root

(1) Edit hadoop/conf/hadoop-env.sh
Uncomment the JAVA_HOME line and set it to:
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_17

(2) Edit hadoop/conf/core-site.xml:
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
    <description>HDFS URI of the master; this determines the NameNode address</description>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/apps/hadoop_tmp/hadoop-root</value>
    <description>The base Hadoop temporary directory; other directory settings default to paths under it</description>
</property>

(3) Edit hadoop/conf/hdfs-site.xml:
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
 <property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>Disable permission checking</description>
</property>

(4) Edit hadoop/conf/mapred-site.xml:
<property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
</property>
---------------------------------------

Create the directory:
mkdir -p /opt/apps/hadoop_tmp/hadoop-root/dfs/name

8. Format the NameNode (required on the first run)
From the Hadoop directory, format the NameNode:
hadoop namenode -format


9. Start Hadoop
---------------------------------------
Hadoop 0.23.6

In /opt/apps/hadoop/sbin:
./start-dfs.sh
./start-yarn.sh
---------------------------------------
Hadoop 1.0.4

In /opt/apps/hadoop/bin:
./start-all.sh

------------------------------
PS:
If startup fails, the steps may have been run out of order. Clean up and try again:
rm -rf /opt/apps/hadoop/hadoop-root
rm -rf /opt/apps/hadoop_space/*
kill all remaining Hadoop processes, then start again
------------------------------
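
Whichever version was started, jps (shipped with the JDK) is a quick way to confirm the daemons are up; roughly, expect to see:
jps
# Hadoop 1.0.4: NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
# Hadoop 0.23.6 (YARN): NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager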
Web UIs:
http://localhost:50030 (MapReduce JobTracker web UI, Hadoop 1.0.4; for 0.23.6 the ResourceManager web UI is at the yarn.resourcemanager.webapp.address configured above, http://localhost:54313)
http://localhost:50070 (HDFS NameNode web UI)

Test:
Show the HDFS command-line usage:
hdfs dfs -help
List files in HDFS:
hdfs dfs -ls
Create a directory under the HDFS root:
hdfs dfs -mkdir /firstTest
Copy a local file into that HDFS directory:
hdfs dfs -copyFromLocal test.txt /firstTest
Run a small demo job:
hadoop jar hadoop-mapreduce-examples-0.23.6.jar wordcount /firstTest result
View the result: hdfs dfs -cat result/part-r-00000
(On Hadoop 1.0.4, which has no separate hdfs command, use hadoop fs instead of hdfs dfs.)





==========================================================================
Cluster mode (production environment)
==========================================================================
First complete the basic environment setup described in the standalone-mode section.

1. Configure the hosts file (on all nodes):
vim /etc/hosts
10.11.6.72 hadoop_master
10.11.6.56 hadoop_slave1
10.11.6.57 hadoop_slave2
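
For the master to start the slave daemons over SSH, the master's public key also has to be authorized on each slave. A minimal sketch, assuming the hadoop user exists on every node and ssh-copy-id is available (otherwise append id_rsa.pub to each slave's ~/.ssh/authorized_keys by hand):
ssh-copy-id hadoop@hadoop_slave1
ssh-copy-id hadoop@hadoop_slave2
ssh hadoop_slave1 date    # should run without a password prompt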

2. In the Hadoop directory, edit the masters file under conf:
cd /opt/apps/hadoop
vim conf/masters
(Clear the file's contents and add "master", or the master's IP such as "192.168.1.10".
This is the mapping configured in /etc/hosts, so the hostname and the IP work equally well.)

3. Edit the slaves file under conf:
sudo gedit conf/slaves
(Clear the file's contents and add "slave", or the slave's IP such as "192.168.1.11", for the same reason as above.)
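
With the hosts file from step 1, the two files could simply contain the host names (a sketch; IPs work just as well):
conf/masters:
hadoop_master

conf/slaves:
hadoop_slave1
hadoop_slave2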

http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html



==========================================================================
Building the Eclipse plugin
==========================================================================
1. Edit the build-contrib.xml file under ${HADOOP_HOME}/src/contrib/
Add the version and eclipse.home properties:
<!-- version & eclipse.home are defined here -->
<property name="version" value="1.0.4"/>  
<property name="eclipse.home" value="/home/chqz/systool/eclipse/eclipse"/>
<property name="name" value="${ant.project.name}"/>  
<property name="root" value="${basedir}"/>  
<property name="hadoop.root" location="${root}/../../../"/>


2. Edit the build.xml file under ${HADOOP_HOME}/src/contrib/eclipse-plugin/
(1) Add a hadoop-jars path and include it in the classpath:
<!-- hadoop-jars path added here -->
<path id="hadoop-jars">
<fileset dir="${hadoop.root}/">
<include name="hadoop-*.jar"/>
</fileset>
</path>

<!-- Override classpath to include Eclipse SDK jars -->
<path id="classpath">
<pathelement location="${build.classes}"/>
<pathelement location="${hadoop.root}/build/classes"/>
<path refid="eclipse-sdk-jars"/>

<!-- add hadoop-jars to the classpath here -->
<path refid="hadoop-jars"/>
</path>

(2) Set includeantruntime="on" to avoid a warning during compilation:
<target name="compile" depends="init, ivy-retrieve-common" unless="skip.contrib">
<echo message="contrib: ${name}"/>
<javac
encoding="${build.encoding}"
srcdir="${src.dir}"
includes="**/*.java"
destdir="${build.classes}"
debug="${javac.debug}"
deprecation="${javac.deprecation}"
includeantruntime="on">  <!-- avoids the compile-time warning -->
<classpath refid="classpath"/>
</javac>
</target>

(3) Add the list of third-party jars to be bundled into the plugin:
<!-- Override jar target to specify manifest -->
<target name="jar" depends="compile" unless="skip.contrib">
<mkdir dir="${build.dir}/lib"/>
  <!-- the main changes are the file values; make sure the paths are correct -->
<copy file="${hadoop.root}/hadoop-core-${version}.jar" tofile="${build.dir}/lib/hadoop-core.jar" verbose="true"/>
<copy file="${hadoop.root}/lib/commons-cli-1.2.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.root}/lib/commons-lang-2.4.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.root}/lib/commons-configuration-1.6.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.root}/lib/jackson-mapper-asl-1.8.8.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.root}/lib/jackson-core-asl-1.8.8.jar" todir="${build.dir}/lib" verbose="true"/>
<copy file="${hadoop.root}/lib/commons-httpclient-3.0.1.jar" todir="${build.dir}/lib" verbose="true"/>

<jar
jarfile="${build.dir}/hadoop-${name}-${version}.jar" manifest="${root}/META-INF/MANIFEST.MF">
<fileset dir="${build.dir}" includes="classes/ lib/"/>
<fileset dir="${root}" includes="resources/ plugin.xml"/>
</jar>
</target>


3. Run ant to produce the hadoop-eclipse-plugin-${version}.jar package:
Change into ${HADOOP_HOME}/src/contrib/eclipse-plugin/ and run the ant command.
On success, hadoop-eclipse-plugin-${version}.jar is generated under ${HADOOP_HOME}/build/contrib/eclipse-plugin,
e.g. /opt/apps/hadoop/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-1.0.4.jar
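
A typical invocation, passing the properties on the command line as well in case the build-contrib.xml defaults need overriding (the eclipse.home value below is just the example path from step 1):
cd /opt/apps/hadoop/src/contrib/eclipse-plugin
ant jar -Dversion=1.0.4 -Declipse.home=/home/chqz/systool/eclipse/eclipse
ls /opt/apps/hadoop/build/contrib/eclipse-plugin/hadoop-eclipse-plugin-1.0.4.jar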




Reposted from blog.csdn.net/jsjwk/article/details/8923999