Installing a Hadoop Cluster on Ubuntu 14.04

Hadoop is a distributed-systems infrastructure developed by the Apache Software Foundation. It is an independent implementation based on Google's published papers on MapReduce and the Google File System. The Hadoop framework transparently provides applications with reliability and data movement. It implements a programming paradigm called MapReduce: an application is partitioned into many small pieces of work, each of which can run, or be re-run, on any node in the cluster.

Hadoop also implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant and is designed to run on low-cost hardware; it provides high-throughput access to application data and is well suited to applications with very large data sets. HDFS relaxes some POSIX requirements to allow streaming access to the data in the file system.

Users can develop distributed programs without understanding the low-level details of the distributed system, and take full advantage of the cluster for high-speed computation and storage. The two core parts of the Hadoop framework are HDFS and MapReduce: HDFS provides storage for massive data sets, while MapReduce provides the computation over them.

Setup


1. Modify the hosts file

With the three machines able to reach each other over the network, change each machine's hostname and edit the hosts file:

# hostnamectl set-hostname master   // run on the master node
# hostnamectl set-hostname slave-1  // run on the slave-1 node
# hostnamectl set-hostname slave-2  // run on the slave-2 node
Then modify the hosts file on all three machines:
# vim /etc/hosts
192.168.1.2  master
192.168.1.3  slave-1
192.168.1.4  slave-2
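
To confirm that hostname resolution and connectivity work, you can ping the other nodes by name; for example, from the master node:

# ping -c 2 slave-1  // should resolve to 192.168.1.3 and receive replies
# ping -c 2 slave-2  // should resolve to 192.168.1.4 and receive replies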

2. Install Java on the master and slave nodes:
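
The later steps assume JAVA_HOME is /usr/lib/jvm/java-8-oracle, so a matching way to install Oracle Java 8 on Ubuntu 14.04 (a sketch using the webupd8team PPA that was commonly used at the time; any JDK 8 install works if you adjust JAVA_HOME accordingly) is:

# apt-get install software-properties-common  // provides add-apt-repository
# add-apt-repository ppa:webupd8team/java
# apt-get update
# apt-get install oracle-java8-installer      // installs to /usr/lib/jvm/java-8-oracle
# apt-get install oracle-java8-set-default    // sets JAVA_HOME system-wide
# java -version                               // verify the installation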

3. Disable IPv6

Hadoop currently does not handle IPv6 well, and on some Linux distributions it can cause obscure bugs. The Hadoop wiki describes ways to disable it; here I edit the sysctl.conf file and add the following lines:

# vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
# sysctl -p  // apply the changes immediately
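
To confirm that the setting took effect, you can read the corresponding kernel flag; a value of 1 means IPv6 is disabled:

# cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1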

4. Create a Hadoop user

Run the following on both the master and slave nodes:

# addgroup hdgroup  // create the hadoop group
# adduser --ingroup hdgroup hduser  // create the Hadoop user and add it to the hadoop group
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hdgroup' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:            // type a password, then just press Enter through the remaining prompts
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n]

Hadoop requires password-less SSH login, so a key pair must be generated. Note that this must be done as the ordinary hduser user created above. Run the following on the master and on each slave:

# su - hduser
$ ssh-keygen -N ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
5b:ae:c6:5a:ce:66:51:d3:6c:6c:14:9b:b2:8a:da:e9 hduser@master
The key's randomart image is:
+--[ RSA 2048]----+
|            ..  |
|            .o  |
|          .=o    |
|          oo*    |
|        S.o+    |
|      ..=      |
|      ..+..      |
|    o ==.      |
|    ..E=+        |
+-----------------+
$ ssh-copy-id hduser@master
$ ssh-copy-id hduser@slave-1
$ ssh-copy-id hduser@slave-2
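
If the keys were copied successfully, you should now be able to log in from the master node to each node without being asked for a password, for example:

$ ssh hduser@slave-1 hostname  // should print "slave-1" without prompting for a password
$ ssh hduser@slave-2 hostname  // should print "slave-2" without prompting for a password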

5. Download and install Hadoop

Go to the official Hadoop download page, choose the version you need, and copy the download link; here I use the latest version, 2.7.3:

After opening the page, right-click the release and copy the link address:

Run the following on both the master and slave nodes (you can also download it once on one machine and copy it to the other two):

$ cd /home/hduser
$ wget -c 
$ tar -zxvf hadoop-2.7.3.tar.gz
$ mv hadoop-2.7.3 hadoop
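
As a quick sanity check that the archive was extracted in the expected place:

$ ls /home/hduser/hadoop  // should list bin, etc, sbin, share, and the other Hadoop directories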

6. Update the environment variables

First, confirm the Java home directory from the earlier installation. You can look it up as follows (on any of the machines):

hduser@master:~$ env | grep -i java
JAVA_HOME=/usr/lib/jvm/java-8-oracle

On both the master and slave nodes, edit the ".bashrc" file and add the following lines:

$ vim .bashrc  // edit the file and add the following lines
export HADOOP_HOME=/home/hduser/hadoop
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
$ source .bashrc  // source it so the changes take effect immediately
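
With the new variables in effect, the hadoop command should resolve from the updated PATH; for example:

$ which hadoop    // should print /home/hduser/hadoop/bin/hadoop
$ hadoop version  // should report Hadoop 2.7.3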

On both the master and slave nodes, update JAVA_HOME in hadoop-env.sh:

$ vim /home/hduser/hadoop/etc/hadoop/hadoop-env.sh
#export JAVA_HOME=${JAVA_HOME}  // change this line, or comment it out and add the following line
export JAVA_HOME=/usr/lib/jvm/java-8-oracle


7. Configure Hadoop

The configuration mainly involves four files: etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml, etc/hadoop/yarn-site.xml, and etc/hadoop/mapred-site.xml.

Here is an excerpt from the web; read it before continuing with the steps below, as it makes them easier to understand:

Hadoop Distributed File System: A distributed file system that provides high-throughput access to application data. An HDFS cluster primarily consists of a NameNode that manages the file system metadata and DataNodes that store the actual data. If you compare HDFS to traditional storage structures (e.g. FAT, NTFS), then the NameNode is analogous to a directory node structure, and a DataNode is analogous to the actual file storage blocks.

Hadoop YARN: A framework for job scheduling and cluster resource management.

Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

① On both the master and slave nodes, edit the "core-site.xml" file. The master and slave nodes must use the same "fs.defaultFS" value, and it must point to the master node. Add the following between the <configuration> tags:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hduser/tmp</value>
  <description>Temporary Directory.</description>
</property>
 
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:54310</value>
  <description>Use HDFS as file storage engine</description>
</property>

The final core-site.xml configuration file is shown in the figure below:

If the tmp directory does not exist, create it manually:

$ mkdir /home/hduser/tmp
$ chown -R hduser:hdgroup /home/hduser/tmp  // if it was created by a user other than hduser, grant ownership to hduser

② Edit the "mapred-site.xml" file on the master node only. Since the file does not exist by default, create it by copying the template file:

$ cd /home/hduser/hadoop/
$ cp -av etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

Edit the XML configuration file and add the following between the <configuration> tags:

 <property>
 <name>mapreduce.jobtracker.address</name>
 <value>master:54311</value>
 <description>The host and port that the MapReduce job tracker runs
  at. If “local”, then jobs are run in-process as a single map
  and reduce task.
</description>
</property>
<property>
 <name>mapreduce.framework.name</name>
 <value>yarn</value>
 <description>The framework for running mapreduce jobs</description>
</property>

③ On both the master and slave nodes, edit the "hdfs-site.xml" file and add the following between the <configuration> tags:

 <property>
 <name>dfs.replication</name>
 <value>2</value>
 <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
 </description>
</property>
<property>
 <name>dfs.namenode.name.dir</name>
 <value>/data/hduser/hdfs/namenode</value>
 <description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
 </description>
</property>
<property>
 <name>dfs.datanode.data.dir</name>
 <value>/data/hduser/hdfs/datanode</value>
 <description>Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
 </description>
</property>

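The dfs.namenode.name.dir and dfs.datanode.data.dir paths above must exist and be owned by hduser before HDFS is formatted and started. If they are missing, something along these lines creates them (run as root; for simplicity both directories are created on every node, although the namenode directory is only used on the master and the datanode directory on the slaves):

# mkdir -p /data/hduser/hdfs/namenode /data/hduser/hdfs/datanode
# chown -R hduser:hdgroup /data/hduser/hdfs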

Reprinted from www.linuxidc.com/Linux/2017-02/140783.htm