Installing Hadoop on a single node (pseudo-distributed mode):
1. Install the JDK
Configure the environment variables:
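A typical set of entries in /etc/profile might look like the following (the JDK install path is an assumption; adjust it to wherever your JDK actually lives):

```shell
# hypothetical JDK install path -- change to match your system
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin
```

After editing, run `source /etc/profile` and verify with `java -version`.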
2. Set up passwordless SSH:
[root@node11 ~]# ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
3c:67:75:70:50:5b:c5:16:83:f3:1b:7d:0c:1e:bc:17 root@node11
The key's randomart image is:
+--[ DSA 1024]----+
| o+++=|
| =+E+|
| ..=*o|
| . . .oo=|
| S o .+|
| + . |
| |
| |
| |
+-----------------+
[root@node11 ~]#
To log in to another host without a password, you must first give that host your public key. Here the node authorizes itself:
[root@node11 .ssh]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
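It is worth tightening permissions and testing the login before moving on; sshd silently ignores authorized_keys files with loose permissions. (Note also that recent OpenSSH releases disable DSA keys by default, so on newer systems you may need an RSA or Ed25519 key instead.)

```shell
# sshd ignores authorized_keys if it is group/world writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# this should now log in and run the command without a password prompt
ssh node11 hostname
```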
3. Upload the Hadoop tarball and extract it
Configure the environment variables; both the bin and sbin directories must be added to PATH
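The corresponding /etc/profile entries might look like this (the Hadoop version and extract location below are assumptions):

```shell
# hypothetical extract location -- change to match where you unpacked the tarball
export HADOOP_HOME=/opt/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```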
4. Edit the Hadoop configuration files
Add the following to core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node11:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/sxt/hadoop/local</value>
  </property>
</configuration>
Edit mapred-env.sh:
Edit yarn-env.sh:
Edit hadoop-env.sh:
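In mapred-env.sh, yarn-env.sh, and hadoop-env.sh, the key change is typically hard-coding JAVA_HOME, because the value inherited from the login shell is not always visible to the Hadoop daemons. The path below is an assumption; use your actual JDK location:

```shell
# same line in hadoop-env.sh, mapred-env.sh, and yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_181   # hypothetical path
```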
Edit hdfs-site.xml:
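For a single node, hdfs-site.xml typically sets the block replication factor to 1 (there is only one DataNode) and pins the SecondaryNameNode address. The values below are a sketch for this setup, not the author's exact configuration:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node11:50090</value>
  </property>
</configuration>
```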
Create the slaves file:
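The slaves file (under etc/hadoop/ in the Hadoop install directory) lists the hosts that run DataNodes; on a single node it contains only the local hostname:

```
node11
```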
5. Format the NameNode:
If hdfs-site.xml or core-site.xml is misconfigured, formatting will not pass and will fail with an error.
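Formatting is done with the command below. It initializes the storage directory derived from hadoop.tmp.dir, and on success the log should report that the storage directory was successfully formatted. Do not re-run it on a cluster that already holds data, as it wipes the NameNode metadata:

```shell
hdfs namenode -format
```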
6. Start HDFS:
start-dfs.sh
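After start-dfs.sh returns, a quick sanity check is `jps` (it ships with the JDK), which on a healthy single node should list the NameNode, DataNode, and SecondaryNameNode processes alongside Jps itself:

```shell
jps
```

You can also browse the NameNode web UI, which on Hadoop 2.x defaults to port 50070 (e.g. http://node11:50070); newer 3.x releases moved it to 9870.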