Setting Up a Hadoop Environment

1. Disable the firewall

service iptables status      # check whether iptables is currently running

service iptables stop        # stop iptables for the current session

chkconfig --list             # list which services start at boot

chkconfig iptables off       # keep iptables disabled after a reboot

2. Set up passwordless SSH login
Generate a key pair in the root user's home directory:
ssh-keygen -t rsa
Put the public key into authorized_keys:
cp id_rsa.pub authorized_keys

Verify: ssh localhost
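A minimal end-to-end sketch of this step (run as the user that will start Hadoop, e.g. root; cat >> is used here instead of cp so an existing authorized_keys file is not overwritten):

ssh-keygen -t rsa                      # accept the defaults; press Enter for an empty passphrase
cd ~/.ssh
cat id_rsa.pub >> authorized_keys      # authorize the new public key on this machine
chmod 600 authorized_keys              # sshd ignores the file if its permissions are too open
ssh localhost                          # should now log in without prompting for a password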

3. Install the JDK
Configure the JDK environment variables in /etc/profile
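For example, lines like the following can be appended to /etc/profile (a sketch; the path /usr/local/jdk is a placeholder for wherever the JDK was actually unpacked):

export JAVA_HOME=/usr/local/jdk        # placeholder: your real JDK install directory
export PATH=$JAVA_HOME/bin:$PATH

Run source /etc/profile and then java -version to confirm the settings took effect.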

4. Install Hadoop
1) Unpack the tarball and configure the Hadoop environment variables (see the sketch below):
HADOOP_HOME
PATH: add $HADOOP_HOME/bin
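A sketch of the corresponding /etc/profile lines, assuming Hadoop was unpacked to /usr/local/hadoop (adjust to your actual directory):

export HADOOP_HOME=/usr/local/hadoop   # placeholder: your real Hadoop install directory
export PATH=$HADOOP_HOME/bin:$PATH

Run source /etc/profile afterwards so the current shell picks up HADOOP_HOME.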
2) Modify four configuration files
hadoop-env.sh
  set JAVA_HOME to the JDK install path
core-site.xml
  fs.default.name: hdfs://hadoop0:9000
  hadoop.tmp.dir: a local data directory, e.g. /usr/XXX
That is:
<configuration>
<property>
   <name>fs.default.name</name>
   <value>hdfs://hadoop0:9000</value>
</property>
<property>
   <name>hadoop.tmp.dir</name>
   <value>/opt/data/hadoop272</value>
</property>
</configuration>

hdfs-site.xml
  dfs.replication: 1 (single node, so one replica)
  dfs.permissions: false (disable HDFS permission checks)
That is:
<configuration>
<property>
   <name>dfs.replication</name>
   <value>1</value>
</property>
<property>
   <name>dfs.permissions</name>
   <value>false</value>
</property>
</configuration>



mapred-site.xml
  mapred.job.tracker: hadoop0:9001
That is:
<configuration>
<property>
   <name>mapred.job.tracker</name>
   <value>hadoop0:9001</value>
</property>
</configuration>

3) Format the NameNode (similar to formatting a new disk)
Run: hadoop namenode -format
Start the daemons: start-all.sh
Check the running Java processes: jps

http://hadoop0:50070 (NameNode web UI, served by the embedded Jetty server)
http://hadoop0:50030 (JobTracker web UI)
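A quick smoke test after start-up (a sketch; /test and test.txt are arbitrary example names, and the exact daemon list reported by jps depends on the Hadoop version):

jps                                # the HDFS and MapReduce daemons should be listed
hadoop fs -mkdir /test             # create a directory in HDFS
hadoop fs -put test.txt /test      # upload a local file
hadoop fs -ls /test                # list it back to confirm HDFS is writable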

Reposted from gaojingsong.iteye.com/blog/2169355