1. Environment
Installation media: VirtualBox, CentOS 6.8
Network mode: NAT + host-only (dual NICs; a simple bridged connection cannot be used on the company intranet because the automatically assigned IP would conflict with addresses already in use)
Three virtual machines:
Host | IP address | Roles
h1 | 192.168.56.11 | namenode, resourcemanager, secondarynamenode
h2 | 192.168.56.12 | datanode, nodemanager
h3 | 192.168.56.13 | datanode, nodemanager
Hadoop version: 2.9.1
JDK version: 1.7
2. Disable the firewall & configure the network
# Stop the firewall
service iptables stop
# Keep the firewall from starting at boot
chkconfig iptables off
# Install the sz/rz file-transfer tools
yum install lrzsz.x86_64
# Set the hostname: edit the HOSTNAME line in /etc/sysconfig/network
vim /etc/sysconfig/network
# Reboot CentOS
reboot
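The configuration files below refer to the nodes by hostname, so each node must be able to resolve h1, h2, and h3. A minimal sketch of the /etc/hosts entries, using the IP addresses from the table in section 1 (append on every node):

192.168.56.11 h1
192.168.56.12 h2
192.168.56.13 h3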
3. Install OpenSSH
yum search openssh
yum install openssh-server
# Set up passwordless SSH login (run in ~/.ssh after generating the key pair)
ssh-keygen
cp id_rsa.pub authorized_keys
# Distribute the public key to each node
ssh-copy-id -i id_rsa.pub root@h1
ssh-copy-id -i id_rsa.pub root@h2
ssh-copy-id -i id_rsa.pub root@h3
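A quick sketch to confirm passwordless login works before moving on; each iteration should print the remote hostname without prompting for a password:

for host in h1 h2 h3; do ssh root@$host hostname; done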
4. Download & install the JDK and the Hadoop package
# Upload the JDK package with rz, then download the Hadoop package
rz
wget -c http://mirrors.shu.edu.cn/apache/hadoop/common/hadoop-2.9.1/hadoop-2.9.1.tar.gz
tar -xzvf jdk-7u45-linux-x64.tar.gz
tar -xzvf hadoop-2.9.1.tar.gz
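Step 7 below copies /etc/profile to the other nodes and reloads it, so this is where the environment variables belong. A minimal sketch for /etc/profile, assuming both archives were unpacked under /usr/local/cloud (the directory the HDFS paths below use) and that the JDK tarball extracted to jdk1.7.0_45:

# Hadoop/JDK environment (paths are assumptions based on the config below)
export JAVA_HOME=/usr/local/cloud/jdk1.7.0_45
export HADOOP_HOME=/usr/local/cloud/hadoop-2.9.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin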
5. Add the configuration
1. Point Hadoop at the JDK installation directory
In etc/hadoop/hadoop-env.sh, set JAVA_HOME to the JDK installation directory. The shipped default, export JAVA_HOME=${JAVA_HOME}, is not reliable because daemons started over SSH do not inherit the variable, so set an explicit path (e.g., assuming the JDK from step 4 was unpacked under /usr/local/cloud):
export JAVA_HOME=/usr/local/cloud/jdk1.7.0_45
2. Modify Hadoop's core defaults (etc/hadoop/core-site.xml)
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h1:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
3. NameNode and DataNode settings for HDFS (etc/hadoop/hdfs-site.xml)
1. NameNode settings:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/cloud/hadoop-2.9.1/namenode</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
</configuration>
2. DataNode settings: add the following property to the same etc/hadoop/hdfs-site.xml:
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/cloud/hadoop-2.9.1/datanode</value>
</property>
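Pre-creating the directories above is optional (formatting creates the NameNode directory and the DataNodes create theirs on first start), but doing it up front surfaces permission problems early; an optional sketch:

mkdir -p /usr/local/cloud/hadoop-2.9.1/namenode   # on h1
mkdir -p /usr/local/cloud/hadoop-2.9.1/datanode   # on h2 and h3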
4. Resource manager and node manager settings for YARN (etc/hadoop/yarn-site.xml)
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>h1:8032</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
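Note that setting only yarn.resourcemanager.address leaves the scheduler and resource-tracker addresses at their 0.0.0.0 defaults, which the NodeManagers on h2 and h3 cannot reach. A commonly used companion setting, not shown above, is yarn.resourcemanager.hostname, from which all of the ResourceManager addresses are derived:

<property>
  <!-- all RM addresses (8030/8031/8032/8033/8088) default to this host -->
  <name>yarn.resourcemanager.hostname</name>
  <value>h1</value>
</property>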
5. MapReduce settings (etc/hadoop/mapred-site.xml). In Hadoop 2.9.1 this file does not exist out of the box; create it from the bundled template first: cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>h1:19888</value>
  </property>
</configuration>
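The property above sets only the history server's web UI; clients fetch job history over RPC on a separate port (10020 by default, bound to 0.0.0.0 unless configured). If jobs cannot reach the history server, adding the RPC address is a common companion setting, not in the configuration above:

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>h1:10020</value>
</property>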
6. List the worker nodes (etc/hadoop/slaves), one hostname per line. Per the role table in section 1, the DataNode/NodeManager hosts are:
h2
h3
7. Copy the files from h1 to h2 and h3
scp -r /usr/local/cloud root@h2:/usr/local/
scp -r /usr/local/cloud root@h3:/usr/local/
scp /etc/profile root@h2:/etc/profile
scp /etc/profile root@h3:/etc/profile
# Reload the environment variables (run on each node)
source /etc/profile
6. Start the Hadoop cluster
1. Format HDFS (run from $HADOOP_HOME/bin):
./hdfs namenode -format
2. Start the NameNode (this and the following scripts are in $HADOOP_HOME/sbin):
./hadoop-daemon.sh start namenode
3. Start the SecondaryNameNode:
./hadoop-daemon.sh start secondarynamenode
4. Start the DataNodes (on h2 and h3):
./hadoop-daemon.sh start datanode
5. Start YARN:
1. Start the ResourceManager (on the master node):
./yarn-daemon.sh start resourcemanager
2. Start the NodeManagers (on the DataNode hosts):
./yarn-daemon.sh start nodemanager
6. Start the JobHistory server:
./mr-jobhistory-daemon.sh start historyserver
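A quick sanity check is jps on each node; with the role assignment from section 1 the expected daemons are roughly:

# on h1
jps   # expect NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer
# on h2 and h3
jps   # expect DataNode, NodeManager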
7. If passwordless SSH login is set up (and the cluster members are listed in etc/hadoop/slaves), everything above can be started with two scripts:
./start-dfs.sh
./start-yarn.sh
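The matching shutdown scripts live in the same $HADOOP_HOME/sbin directory:

./stop-yarn.sh
./stop-dfs.sh
./mr-jobhistory-daemon.sh stop historyserver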
7. Verify that the Hadoop cluster is working
HDFS cluster health: http://192.168.56.11:50070/
YARN cluster resource management: http://192.168.56.11:8088/
MapReduce job status: http://192.168.56.11:19888/
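For an end-to-end smoke test, one option is to run a bundled example job and watch it appear in the YARN and JobHistory UIs above; a sketch, assuming the install directory used throughout:

cd /usr/local/cloud/hadoop-2.9.1
# estimate pi with 2 map tasks, 10 samples each
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi 2 10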
8. Additional notes
Hadoop has many more configuration options; if you need to change any of them, consult the links below.
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml - Hadoop core default configuration
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml - HDFS (distributed file system) configuration
http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml - MapReduce configuration
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml - YARN cluster configuration
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/DeprecatedProperties.html - deprecated configuration properties