Hadoop Installation: Standalone (Single-Node) Mode

Hadoop 1.x and 2.x can be installed in three deployment architectures. This article covers the first one: standalone (single-node) mode.

Part 1: Standalone (single node)

A standalone installation runs all services on one machine, as follows:

Service             Server IP
NameNode            192.168.254.100
SecondaryNameNode   192.168.254.100
DataNode            192.168.254.100
ResourceManager     192.168.254.100
NodeManager         192.168.254.100

1) Download and install:

Install the JDK yourself beforehand; Hadoop requires it.
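A quick way to confirm the JDK is in place before continuing (the JAVA_HOME path below matches the one used later in this article; adjust if yours differs):

java -version                      # any Java 7+ release works for Hadoop 2.7
echo $JAVA_HOME                    # e.g. /export/servers/jdk1.8.0_141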

Download: http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz

Extract:

mkdir -p /export/softwares
mkdir -p /export/servers
cd /export/softwares
tar -zxvf hadoop-2.7.5.tar.gz -C ../servers/
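After extraction, the install directory should show the usual Hadoop layout:

ls /export/servers/hadoop-2.7.5
# bin  etc  include  lib  libexec  sbin  share  LICENSE.txt  NOTICE.txt  README.txt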

2) Edit the configuration files:

2.1: Edit core-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim core-site.xml

<configuration>
    <!-- Address of the machine hosting the NameNode. fs.default.name is the
         deprecated Hadoop 1.x key; fs.defaultFS is preferred in 2.x, but both work. -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.254.100:8020</value>
    </property>
    <!-- Base directory for Hadoop's temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.7.5/hadoopDatas/tempDatas</value>
    </property>
    <!-- I/O buffer size; tune according to server capacity -->
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- Enable the HDFS trash so deleted data can be recovered;
         value is in minutes (10080 minutes = 7 days) -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>
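Once the file is saved, individual keys can be read back with hdfs getconf, which makes for a quick sanity check (run it from the install directory, since Hadoop is not on the PATH yet):

cd /export/servers/hadoop-2.7.5
bin/hdfs getconf -confKey fs.defaultFS   # should print hdfs://192.168.254.100:8020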

2.2: Edit hdfs-site.xml

Note: the hostname node01 used below is assumed to resolve to 192.168.254.100 (for example via an /etc/hosts entry).

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim hdfs-site.xml

<configuration>
    <!-- Dynamic commissioning/decommissioning of cluster nodes
    <property>
        <name>dfs.hosts</name>
        <value>/export/servers/hadoop-2.7.5/etc/hadoop/accept_host</value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value>/export/servers/hadoop-2.7.5/etc/hadoop/deny_host</value>
    </property>
    -->
    <!-- HTTP address of the SecondaryNameNode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <!-- HTTP port for browsing HDFS from a web browser -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <!-- Paths where the NameNode stores metadata. In practice, decide the disk
         mount points first, then separate multiple directories with commas. -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2</value>
    </property>
    <!-- Directories where the DataNode stores block data; same convention:
         multiple directories separated by commas -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2</value>
    </property>
    <!-- NameNode edit log directory -->
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/nn/edits</value>
    </property>
    <!-- SecondaryNameNode checkpoint directories -->
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits</value>
    </property>
    <!-- Replication factor. With a single DataNode only one copy can
         physically be stored, regardless of this setting. -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Disable HDFS permission checking (the key is dfs.permissions.enabled
         in newer releases) -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <!-- Block size: 134217728 bytes = 128 MB -->
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
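The same getconf check works for hdfs-site.xml keys:

bin/hdfs getconf -confKey dfs.blocksize   # should print 134217728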

2.3: Edit hadoop-env.sh

cd /export/servers/hadoop-2.7.5/etc/hadoop
vim hadoop-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

2.4: Edit mapred-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop
cp mapred-site.xml.template mapred-site.xml   # a fresh 2.7.5 tarball ships only the template
vim mapred-site.xml

<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Let small jobs run in "uber" mode inside the ApplicationMaster's JVM -->
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <!-- JobHistory server RPC address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.254.100:10020</value>
    </property>
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.254.100:19888</value>
    </property>
</configuration>

2.5: Edit yarn-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop
vim yarn-site.xml

<configuration>
    <!-- Hostname of the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <!-- Auxiliary service required for the MapReduce shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Aggregate container logs into HDFS -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Keep aggregated logs for 604800 seconds = 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

2.6: Edit mapred-env.sh

cd /export/servers/hadoop-2.7.5/etc/hadoop
vim mapred-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

2.7: Edit slaves

The slaves file lists the hosts that run DataNode and NodeManager; for a standalone install it contains only localhost (see the SSH note below).

cd /export/servers/hadoop-2.7.5/etc/hadoop
vim slaves

localhost
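The start-dfs.sh and start-yarn.sh scripts SSH into every host listed in slaves, including localhost, so passwordless SSH should be configured first. A typical one-time setup, assuming OpenSSH (skip if already done):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # generate a key pair with an empty passphrase
ssh-copy-id localhost                      # authorize the key for localhost
ssh localhost                              # should now log in without a password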

3) Start the cluster

Starting the Hadoop cluster means starting the HDFS and YARN modules. Note that HDFS must be formatted before its first start; formatting is essentially cleanup and preparation work, since at that point HDFS does not yet physically exist.

Format command: hdfs namenode -format

Before starting, create the data directories:

cd /export/servers/hadoop-2.7.5
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/nn/edits
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/snn/name
mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits
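If your shell is bash, the eight mkdir calls can be collapsed into a single brace expansion (an equivalent sketch):

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/{tempDatas,namenodeDatas,namenodeDatas2,datanodeDatas,datanodeDatas2,nn/edits,snn/name,dfs/snn/edits}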

Start commands:

cd /export/servers/hadoop-2.7.5/
bin/hdfs namenode -format   # skip this if HDFS has already been formatted
sbin/start-dfs.sh
sbin/start-yarn.sh
sbin/mr-jobhistory-daemon.sh start historyserver
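If everything came up cleanly, jps (shipped with the JDK) should show one process per service, plus Jps itself (PIDs will differ):

jps
# 2130 NameNode
# 2262 DataNode
# 2455 SecondaryNameNode
# 2620 ResourceManager
# 2741 NodeManager
# 2983 JobHistoryServer
# 3050 Jps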

4) Web UIs:

http://192.168.254.100:50070/explorer.html#/  - browse HDFS

http://192.168.254.100:8088/cluster           - YARN cluster

http://192.168.254.100:19888/jobhistory       - completed job history
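As a final smoke test, you can write a file into HDFS and list it back (README.txt ships in the tarball root, so it makes a convenient test file):

cd /export/servers/hadoop-2.7.5
bin/hdfs dfs -mkdir -p /test
bin/hdfs dfs -put README.txt /test/
bin/hdfs dfs -ls /test        # should list /test/README.txt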


Reprinted from blog.csdn.net/qq_15076569/article/details/84103615