Hadoop HDFS RPM Installation

Source: http://netkiller.github.io/storage/hdfs.html

5.2. Hadoop HDFS RPM Installation

Do you find installing Hadoop too complicated? Below is a painless, low-barrier installation approach, well suited to system administrators who are not familiar with Java.


HDFS:
      NameNode          : management (metadata) node
      DataNode          : data node
      SecondaryNameNode : backs up and consolidates the NameNode metadata

MapReduce:
       JobTracker  : job management node
       TaskTracker : task execution node

5.2.1. Preparation

Prepare several servers running a minimal installation of CentOS 6.4:


NameNode    192.168.2.10  hostname: namenode
DataNode    192.168.2.11  hostname: datanode1
DataNode    192.168.2.12  hostname: datanode2

JobTracker  192.168.2.10  (can be a separate machine or share the NameNode host; only HDFS is used here, so this role is optional and the machines above are sufficient)
TaskTracker (shares the DataNode hosts)

Configure the network so the machines can reach each other, then disable the firewall and SELinux.

# yum update -y
# lokkit --disabled --selinux=disabled
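
lokkit writes the firewall and SELinux configuration files, but fully disabling SELinux only takes effect after a reboot. A minimal sketch of applying an equivalent state immediately on CentOS 6, without rebooting:

# setenforce 0              # switch SELinux to permissive for the running system
# service iptables stop     # stop the firewall now
# chkconfig iptables off    # keep it disabled across reboots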
			

Important Hadoop ports (a quick listening-port check is shown after this list):


1. JobTracker web UI: 50030
2. HDFS web UI: 50070
3. HDFS RPC (communication) port: 9000
4. MapReduce RPC (communication) port: 9001
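
Once the daemons are running (later in this procedure), a quick sketch of checking that these ports are listening:

# netstat -lnpt | egrep '50030|50070|9000|9001'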

Procedure 6.3. Hadoop - Preparation

  1. Install the Java runtime environment on all servers

    Using CentOS 6.4 as an example:

    # yum install java-1.7.0-openjdk
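
    A quick check that the runtime is available (it should report OpenJDK 1.7.0):

    # java -version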
    					
  2. Install Hadoop on all servers

    There are two installation methods below, RPM and YUM; choose one of them.

    # rpm -ivh http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    Retrieving http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    Preparing...                ########################################### [100%]
       1:hadoop                 ########################################### [100%]
    					
    yum localinstall http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    					

    If your network connection is slow, you can download the package with wget or axel first and then install it locally:

    wget http://ftp.cuhk.edu.hk/pub/packages/apache.org/hadoop/common/hadoop-1.1.2/hadoop-1.1.2-1.x86_64.rpm
    yum localinstall hadoop-1.1.2-1.x86_64.rpm
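
    To confirm the package installed correctly (both are standard commands):

    # rpm -q hadoop       # shows the installed package version
    # hadoop version      # prints the Hadoop version and build information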
    					

    Hadoop users created by the RPM package:

    # cat /etc/passwd | grep Hadoop
    mapred:x:202:123:Hadoop MapReduce:/tmp:/bin/bash
    hdfs:x:201:123:Hadoop HDFS:/tmp:/bin/bash
    					
  3. Configure the /etc/hosts file

    					
    cat >> /etc/hosts <<EOD
    
    ###############################
    # Hadoop Host
    ###############################
    #NameNode
    192.168.2.10 	namenode.example.com
    
    #DataNode
    192.168.2.11 	datanode1.example.com
    192.168.2.12 	datanode2.example.com
    
    EOD
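
    A quick check that the names resolve and the DataNodes are reachable (assuming ICMP is not blocked):

    ping -c 1 datanode1.example.com
    ping -c 1 datanode2.example.com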
    					
    					
  4. Generate an SSH key pair (on the NameNode)

    					
    # ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    cc:6f:30:76:82:28:96:13:c8:e6:bc:d7:5b:2d:11:d7 root@images-upload
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |..        .      |
    |.o.    . . E     |
    |+  o . +o        |
    | o= . ..S .      |
    | ..o.  .o*       |
    | . . . o .o      |
    |  .   o ..       |
    |     .           |
    +-----------------+
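
    If you prefer to skip the prompts, the same kind of key can be generated non-interactively (a sketch; empty passphrase, as above):

    # ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa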
    					
    					
  5. Distribute the public key

    Copy the public key to all of the DataNode servers:

    					
    ssh-copy-id root@datanode1.example.com
    ssh-copy-id root@datanode2.example.com
    					
    					

    Type yes and then enter the password to complete the key distribution. The process looks like the following:

    # ssh-copy-id root@datanode1.example.com
    The authenticity of host 'datanode1.example.com (192.168.2.11)' can't be established.
    RSA key fingerprint is f1:0b:b1:63:1a:f6:ac:a3:da:4f:14:b5:f0:cc:df:67.
    Are you sure you want to continue connecting (yes/no)? yes        (type yes)
    Warning: Permanently added 'datanode1.example.com' (RSA) to the list of known hosts.
    root@datanode1.example.com's password:        (enter the password)
    Now try logging into the machine, with "ssh 'root@datanode1.example.com'", and check in:
    
      .ssh/authorized_keys
    
    to make sure we haven't added extra keys that you weren't expecting.
    
    # ssh-copy-id root@datanode2.example.com
    The authenticity of host 'datanode2.example.com (192.168.2.12)' can't be established.
    RSA key fingerprint is f1:0b:b1:63:1a:f6:ac:a3:da:4f:14:b5:f0:cc:df:67.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'datanode2.example.com,192.168.2.12' (RSA) to the list of known hosts.
    root@datanode2.example.com's password:
    Now try logging into the machine, with "ssh 'root@datanode2.example.com'", and check in:
    
      .ssh/authorized_keys
    
    to make sure we haven't added extra keys that you weren't expecting.
    
    					

    When finished, test the login; if you get a shell without being prompted for a password, everything is working.

    # ssh root@datanode1.example.com
    # exit
    					

5.2.2. NameNode - Configure the Name Node

Configuration files (in /etc/hadoop/):

core-site.xml    common properties
hdfs-site.xml    HDFS properties
mapred-site.xml  MapReduce properties
hadoop-env.sh    Hadoop environment variables
			

Procedure 6.4. Hadoop - NameNode

  1. Configuration file hadoop-env.sh

    Change /usr/java/default to /usr:

    # cp hadoop-env.sh hadoop-env.sh.original
    # sed -i "s:/usr/java/default:/usr:" hadoop-env.sh
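
    After the substitution, hadoop-env.sh should set JAVA_HOME to /usr, which works because the OpenJDK package provides /usr/bin/java. You can verify the change with:

    # grep JAVA_HOME hadoop-env.sh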
    					
  2. Configuration file core-site.xml

    					
    # cp core-site.xml core-site.xml.original
    
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
             <name>fs.default.name</name>
             <value>hdfs://namenode.example.com:9000</value>
        </property>
        <property>
             <name>hadoop.tmp.dir</name>
             <value>/var/tmp/hadoop</value>
        </property>
    </configuration>
    					
    					

    fs.default.name: the URI of the NameNode, in the form hdfs://hostname:port/.

    hadoop.tmp.dir: Hadoop's default temporary directory.

  3. Configuration file mapred-site.xml

    					
    # cp mapred-site.xml mapred-site.xml.original
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>namenode.example.com:9001</value>
        </property>
        <property>
            <name>mapred.local.dir</name>
            <value>/var/tmp/hadoop</value>
        </property>
    </configuration>
    					
    					

    mapred.job.tracker: the host and port of the JobTracker.

  4. Configuration file hdfs-site.xml

    					
    # cp hdfs-site.xml hdfs-site.xml.original
    
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/var/hadoop/name1</value>
            <description>  </description>
        </property>
        <property>
            <name>dfs.data.dir</name>
            <value>/var/hadoop/hdfs/data1</value>
            <description> </description>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>2</value>
        </property>
    </configuration>
    					
    					
    dfs.name.dir: the local filesystem path where the NameNode persistently stores the namespace and transaction logs. If this is a comma-separated list of directories, the name table is replicated to every directory for redundancy.
    dfs.data.dir: the local filesystem path where the DataNode stores block data, given as a comma-separated list. When multiple directories are listed, data is spread across all of them, usually on different devices.
    dfs.replication: the number of replicas kept for each block. The default is 3; if this value is larger than the number of DataNodes in the cluster, errors will occur.
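
    The format output in step 7 below shows both /var/hadoop/name1 and /var/hadoop/name2 being formatted, which corresponds to a comma-separated dfs.name.dir. A sketch of that variant (adjust the paths to your own layout):

        <property>
            <name>dfs.name.dir</name>
            <value>/var/hadoop/name1,/var/hadoop/name2</value>
        </property>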
    					
  5. Configure the masters and slaves node lists

    Back up the masters and slaves configuration files:

     cp masters masters.original
     cp slaves slaves.original
    					
    					
    cat > /etc/hadoop/masters <<EOD
    namenode.example.com
    EOD
    					
    					
    					
    cat > /etc/hadoop/slaves <<EOD
    datanode1.example.com
    datanode2.example.com
    EOD
    					
    					
  6. Copy the configuration files

    # cd /etc/hadoop/
    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode1.example.com:/etc/hadoop/
    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode2.example.com:/etc/hadoop/
    					

    Console output similar to the following indicates the copy succeeded.

    # scp hadoop-env.sh core-site.xml mapred-site.xml hdfs-site.xml masters slaves root@datanode1.example.com:/etc/hadoop/
    hadoop-env.sh                                                                          100% 2116     2.1KB/s   00:00
    core-site.xml                                                                          100%  412     0.4KB/s   00:00
    mapred-site.xml                                                                        100%  406     0.4KB/s   00:00
    hdfs-site.xml                                                                          100%  595     0.6KB/s   00:00
    masters                                                                                100%   21     0.0KB/s   00:00
    slaves
    					

    This copies the configuration files from the NameNode to the DataNodes.

  7. Start Hadoop

    Create the working directories:

    # mkdir /var/hadoop/
    # mkdir /var/hadoop/name{1,2}
    # su - hdfs -c  "mkdir -p  /var/hadoop/hdfs/data{1,2}"
    					
    # hadoop namenode -format
    13/04/23 14:35:33 INFO namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = namenode.example.com/192.168.2.10
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 1.1.2
    STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:06:43 UTC 2013
    ************************************************************/
    Re-format filesystem in /var/hadoop/name1 ? (Y or N) Y
    13/04/23 14:35:37 INFO util.GSet: VM type       = 64-bit
    13/04/23 14:35:37 INFO util.GSet: 2% max memory = 2.475 MB
    13/04/23 14:35:37 INFO util.GSet: capacity      = 2^18 = 262144 entries
    13/04/23 14:35:37 INFO util.GSet: recommended=262144, actual=262144
    13/04/23 14:35:37 INFO namenode.FSNamesystem: fsOwner=root
    13/04/23 14:35:37 INFO namenode.FSNamesystem: supergroup=supergroup
    13/04/23 14:35:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
    13/04/23 14:35:37 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    13/04/23 14:35:37 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    13/04/23 14:35:38 INFO namenode.NameNode: Caching file names occuring more than 10 times
    13/04/23 14:35:38 INFO common.Storage: Image file of size 110 saved in 0 seconds.
    13/04/23 14:35:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/name1/current/edits
    13/04/23 14:35:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/name1/current/edits
    13/04/23 14:35:38 INFO common.Storage: Storage directory /var/hadoop/name1 has been successfully formatted.
    13/04/23 14:35:38 INFO common.Storage: Image file of size 110 saved in 0 seconds.
    13/04/23 14:35:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog= /var/hadoop/name2/current/edits
    13/04/23 14:35:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog= /var/hadoop/name2/current/edits
    13/04/23 14:35:38 INFO common.Storage: Storage directory  /var/hadoop/name2 has been successfully formatted.
    13/04/23 14:35:38 INFO namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at namenode.example.com/192.168.2.10
    ************************************************************/
    					
    # chown hdfs:hadoop -R /var/hadoop
    					
    # /etc/init.d/hadoop-namenode start
    # /etc/init.d/hadoop-datanode start
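
    A quick sketch of confirming the NameNode daemon came up:

    # ps -ef | grep -i "[n]amenode"         # the NameNode java process should be running
    # netstat -lnpt | egrep '9000|50070'    # RPC and web UI ports should be listening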
    					

http://192.168.2.10:50070/

5.2.3. DataNode - Configure the Data Nodes

Procedure 6.5. Hadoop - DataNode

  1. Create the Hadoop data storage directory

    					
    mkdir /var/hadoop/
    chown hdfs:hadoop -R /var/hadoop
    su - hdfs -c  "mkdir -p  /var/hadoop/hdfs/data1"
    					
    					
  2. Start Hadoop

    					
    # /etc/init.d/hadoop-datanode start
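
    After the DataNodes are started, the following standard command, run on the NameNode, reports the cluster status and should list both DataNodes as live nodes:

    # hadoop dfsadmin -report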
    					
    					

5.2.4. Hadoop UI (web interface)

Commonly used pages:


1. HDFS web UI
        http://hostname:50070
2. MapReduce web UI
        http://hostname:50030
        

5.2.5. Test Hadoop

Copy the install.log file into the distributed filesystem:

hadoop fs -mkdir test
hadoop fs -put install.log test
			

Display the file contents:

# hadoop dfs -cat test/install.log
			

List the directory structure:

# hadoop dfs -ls
Found 1 items
drwxr-xr-x   - root supergroup          0 2013-04-23 15:20 /user/root/test
[root@namenode ~]# hadoop dfs -ls test
Found 1 items
-rw-r--r--   2 root supergroup      10278 2013-04-23 15:20 /user/root/test/install.log
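
The "2" in the listing above is the replication factor, matching dfs.replication=2 configured earlier. A sketch of reading the file back and inspecting its blocks (the local path /root/install.log is an assumption; use wherever your copy lives):

hadoop fs -get test/install.log /tmp/install.log.copy
diff /root/install.log /tmp/install.log.copy && echo "files match"
hadoop fsck /user/root/test/install.log -files -blocks -locations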
			
