Hadoop Cluster Environment Setup

Part 1: Preparation Before Deploying Hadoop
1 Know that Hadoop depends on Java and SSH
Java 1.6 or later must be installed (this guide installs JDK 7).
ssh must be installed, and sshd must be kept running, so that the Hadoop scripts can manage the remote Hadoop daemons.

2 Create a common Hadoop account
All nodes should have the same username; add it with:
useradd hadoop
passwd hadoop

vi /etc/sudoers

and add the line:
hadoop ALL=(ALL) ALL

3 Configure hostnames in /etc/hosts
tail -n 3 /etc/hosts
192.168.1.114  namenode
192.168.1.115  datanode1
192.168.1.116  datanode2
192.168.1.117  datanode3
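The mappings above can also be appended with a short loop that skips entries already present. This is a dry-run sketch writing to a scratch file; on the real nodes set HOSTS=/etc/hosts and run it as root.

```shell
# Append each cluster mapping to the hosts file only if it is missing.
# HOSTS points at a local scratch file here for a safe dry run.
HOSTS="${HOSTS:-./hosts.test}"
touch "$HOSTS"
while read -r ip name; do
  grep -qw "$name" "$HOSTS" || printf '%s  %s\n' "$ip" "$name" >> "$HOSTS"
done <<'EOF'
192.168.1.114 namenode
192.168.1.115 datanode1
192.168.1.116 datanode2
192.168.1.117 datanode3
EOF
```

Because the `grep` guard makes the loop idempotent, it is safe to re-run on a node that already has some of the entries.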

4 All of the above must be configured identically on every node (namenode and datanodes).

Part 2: SSH Configuration

1 Generate the private key id_rsa and public key id_rsa.pub
[hadoop@hadoop1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
d6:63:76:43:e2:5b:8e:85:ab:67:a2:7c:a6:8f:23:f9 [email protected]

2 Append the public key id_rsa.pub to authorized_keys
[hadoop@hadoop ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop ~]$ ls .ssh/
authorized_keys  id_rsa  id_rsa.pub 
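A common pitfall at this point: sshd silently ignores authorized_keys when its permissions are too open, so ssh keeps prompting for a password even though the key is in place. A minimal sketch of tightening the permissions on every node (SSH_DIR is overridable so this can be tried against a scratch directory first):

```shell
# Restrict ~/.ssh and authorized_keys to the owner; sshd rejects
# world- or group-writable key files by default.
SSH_DIR="${SSH_DIR:-$HOME/.ssh}"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
```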

3 Copy the public key to the datanode servers
[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1
hadoop@datanode1's password:
Now try logging into the machine, with "ssh 'hadoop@datanode1'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode2
hadoop@datanode2's password:
Now try logging into the machine, with "ssh 'hadoop@datanode2'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode3
hadoop@datanode3's password:
Now try logging into the machine, with "ssh 'hadoop@datanode3'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost
hadoop@localhost's password:
Now try logging into the machine, with "ssh 'hadoop@localhost'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.


4 Verify
[hadoop@localhost hadoop-1.1.2]$ ssh datanode1
Last login: Sun Jun  9 00:17:09 2013 from namenode
[hadoop@datanode1 ~]$ exit
logout
Connection to datanode1 closed.
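Logging in to each node by hand works, but a loop with BatchMode=yes checks all of them non-interactively, since BatchMode makes ssh fail instead of prompting for a password. This sketch defaults to a dry run (SSH="echo ssh" only prints the commands); set SSH=ssh to really test the cluster.

```shell
# Verify passwordless login to every node; collect per-node results.
SSH="${SSH:-echo ssh}"
RESULT=""
for node in namenode datanode1 datanode2 datanode3; do
  if $SSH -o BatchMode=yes "hadoop@$node" exit; then
    RESULT="$RESULT $node:ok"
  else
    RESULT="$RESULT $node:failed"
  fi
done
echo "$RESULT"
```

Any node reported as failed still needs its key copied (step 3) or its permissions fixed.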



Part 3: Java Environment Configuration

Download the Linux JDK tarball and extract it to the target directory:

# tar  -zxvf   jdk-7u7-linux-i586.tar.gz

Configure the Java environment variables:

# vi /etc/profile

Append at the end of the file (the jdk-7u7 tarball extracts to jdk1.7.0_07):
export JAVA_HOME=/usr/java/jdk1.7.0_07
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH

Reload the modified profile:

# source  /etc/profile
# java -version

If java -version reports the JDK 7 version, the Java environment is set up.


Copy the profile, the JDK, and the Hadoop packages to the datanodes:

# scp /etc/profile root@datanode1:/etc/
# scp /etc/profile root@datanode2:/etc/
# scp /etc/profile root@datanode3:/etc/

[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode1:/home/hadoop/
[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode2:/home/hadoop/
[root@hadoop ~]# scp -r /home/hadoop/ hadoop@datanode3:/home/hadoop/ 


On each datanode, extract the JDK tarball to /usr/java/jdk1.7.0_07 in the same way, then reload the profile and verify:

# source /etc/profile
# java -version

Part 4: Hadoop Configuration

1 The configuration directory
[hadoop@hadoop ~]$ pwd
/home/hadoop
2 Configure hadoop-env.sh to point at the Java installation
vi hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_07

3 Configure core-site.xml // locates the filesystem's namenode

[hadoop@hadoop1 ~]$ cat hadoop/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>fs.default.name</name>
<value>hdfs://namenode:9000</value>
</property>

</configuration>

4 Configure mapred-site.xml // locates the master node running the jobtracker

[hadoop@hadoop1 ~]$ cat hadoop/conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>mapred.job.tracker</name>
<value>namenode:9001</value>
</property>

</configuration>

5 Configure hdfs-site.xml // sets the HDFS replication factor
 
[hadoop@hadoop1 ~]$ vi hadoop/conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>dfs.replication</name>
<value>3</value>
</property>

</configuration>
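Since all three site files follow the same one-property shape, they can be generated with a small helper; write_site below is a hypothetical convenience function, not part of Hadoop.

```shell
# write_site FILE NAME VALUE: emit a minimal one-property Hadoop site file.
write_site() {
  cat > "$1" <<EOF
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>$2</name>
    <value>$3</value>
  </property>
</configuration>
EOF
}

write_site core-site.xml   fs.default.name    hdfs://namenode:9000
write_site mapred-site.xml mapred.job.tracker namenode:9001
write_site hdfs-site.xml   dfs.replication    3
```

Run it from hadoop/conf (or adjust the paths) so the generated files land where Hadoop reads them.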

6 Configure the masters and slaves files
[hadoop@hadoop ~]$ vi hadoop/conf/masters
namenode
[hadoop@hadoop ~]$ vi hadoop/conf/slaves
datanode1
datanode2
datanode3

7 Copy the hadoop directory to all datanodes
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode1:
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode2:
[hadoop@hadoop hadoop]$ scp -r /home/hadoop/hadoop-1.1.2 hadoop@datanode3:
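The three per-node scp commands above can be collapsed into one loop. This sketch defaults to a dry run (COPY="echo scp" only prints each command); set COPY=scp on the real namenode to perform the copies.

```shell
# Build and show the scp command for each datanode.
COPY="${COPY:-echo scp}"
CMDS=$(for node in datanode1 datanode2 datanode3; do
  $COPY -r /home/hadoop/hadoop-1.1.2 "hadoop@$node:"
done)
echo "$CMDS"
```

Driving the loop from conf/slaves instead of a hard-coded list keeps the copy step in sync when nodes are added.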

8 Format HDFS
$ bin/hadoop namenode -format
9 Start the Hadoop daemons
[hadoop@hadoop hadoop]$ bin/start-all.sh
10 Verify by computing an estimate of Pi
[hadoop@hadoop hadoop]$ bin/hadoop jar hadoop-examples-1.1.2.jar pi 4 2

If you see the error

"could only be replicated to 0 nodes, instead of 1", it can have several causes; here are four common fixes for reference:
1. Make sure the firewalls on the master (namenode) and slaves (datanodes) are disabled (this was my case; stop it with: service iptables stop).
2. Check that DFS has free space available.
3. Hadoop's default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and on some Linux systems the filesystem type of /tmp is one Hadoop does not support.
4. Start the namenode first, then the datanode:
[hadoop@hadoop hadoop]$ bin/hadoop-daemon.sh start namenode
[hadoop@hadoop hadoop]$ bin/hadoop-daemon.sh start datanode

Delete the leftover temporary file:
[hadoop@hadoop hadoop]$ bin/hadoop fs -rmr hdfs://localhost:8020/user/Vito/PiEstimator_TMP_3_141592654

Leave safe mode:
[hadoop@hadoop hadoop]$ bin/hadoop dfsadmin -safemode leave

Output like the following means the environment is set up successfully:
[hadoop@hadoop hadoop]$ bin/hadoop fs -rmr hdfs://namenode:9000/user/hadoop/PiEstimator_TMP_3_141592654
Deleted hdfs://namenode:9000/user/hadoop/PiEstimator_TMP_3_141592654
[hadoop@hadoop hadoop]$ bin/hadoop jar hadoop-examples-1.1.2.jar pi 4 2
Number of Maps  = 4
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
13/06/09 07:02:57 INFO mapred.FileInputFormat: Total input paths to process : 4
13/06/09 07:02:57 INFO mapred.JobClient: Running job: job_201306090651_0002
13/06/09 07:02:58 INFO mapred.JobClient:  map 0% reduce 0%
13/06/09 07:03:04 INFO mapred.JobClient:  map 50% reduce 0%
13/06/09 07:03:12 INFO mapred.JobClient:  map 50% reduce 16%
13/06/09 07:03:52 INFO mapred.JobClient:  map 75% reduce 16%
13/06/09 07:03:57 INFO mapred.JobClient:  map 100% reduce 16%
13/06/09 07:03:59 INFO mapred.JobClient:  map 100% reduce 100%
13/06/09 07:04:00 INFO mapred.JobClient: Job complete: job_201306090651_0002
13/06/09 07:04:00 INFO mapred.JobClient: Counters: 31
13/06/09 07:04:00 INFO mapred.JobClient:   Job Counters
13/06/09 07:04:00 INFO mapred.JobClient:     Launched reduce tasks=1
13/06/09 07:04:00 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=115754
13/06/09 07:04:00 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/06/09 07:04:00 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/06/09 07:04:00 INFO mapred.JobClient:     Rack-local map tasks=2
13/06/09 07:04:00 INFO mapred.JobClient:     Launched map tasks=4
13/06/09 07:04:00 INFO mapred.JobClient:     Data-local map tasks=2
13/06/09 07:04:00 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=55300
13/06/09 07:04:00 INFO mapred.JobClient:   File Input Format Counters
13/06/09 07:04:00 INFO mapred.JobClient:     Bytes Read=472
13/06/09 07:04:00 INFO mapred.JobClient:   File Output Format Counters
13/06/09 07:04:00 INFO mapred.JobClient:     Bytes Written=97
13/06/09 07:04:00 INFO mapred.JobClient:   FileSystemCounters
13/06/09 07:04:00 INFO mapred.JobClient:     FILE_BYTES_READ=94
13/06/09 07:04:00 INFO mapred.JobClient:     HDFS_BYTES_READ=960
13/06/09 07:04:00 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=259773
13/06/09 07:04:00 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=215
13/06/09 07:04:00 INFO mapred.JobClient:   Map-Reduce Framework
13/06/09 07:04:00 INFO mapred.JobClient:     Map output materialized bytes=112
13/06/09 07:04:00 INFO mapred.JobClient:     Map input records=4
13/06/09 07:04:00 INFO mapred.JobClient:     Reduce shuffle bytes=112
13/06/09 07:04:00 INFO mapred.JobClient:     Spilled Records=16
13/06/09 07:04:00 INFO mapred.JobClient:     Map output bytes=72
13/06/09 07:04:00 INFO mapred.JobClient:     Total committed heap usage (bytes)=820191232
13/06/09 07:04:00 INFO mapred.JobClient:     CPU time spent (ms)=3470
13/06/09 07:04:00 INFO mapred.JobClient:     Map input bytes=96
13/06/09 07:04:00 INFO mapred.JobClient:     SPLIT_RAW_BYTES=488
13/06/09 07:04:00 INFO mapred.JobClient:     Combine input records=0
13/06/09 07:04:00 INFO mapred.JobClient:     Reduce input records=8
13/06/09 07:04:00 INFO mapred.JobClient:     Reduce input groups=8
13/06/09 07:04:00 INFO mapred.JobClient:     Combine output records=0
13/06/09 07:04:00 INFO mapred.JobClient:     Physical memory (bytes) snapshot=592089088
13/06/09 07:04:00 INFO mapred.JobClient:     Reduce output records=0
13/06/09 07:04:00 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=1736314880
13/06/09 07:04:00 INFO mapred.JobClient:     Map output records=8
Job Finished in 63.642 seconds
Estimated value of Pi is 3.50000000000000000000

Reposted from chen106106.iteye.com/blog/1884864