## Passwordless login configuration log
Configure passwordless SSH login from node01 -> amdha01
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@amdha01
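The key generation can also be run non-interactively; a minimal sketch (the throwaway key path here is demo-only — on the real node01 you would use the default ~/.ssh/id_rsa and then ssh-copy-id to amdha01):

```shell
# Demo: create an RSA keypair non-interactively (-N '' = empty passphrase, -q = quiet).
# On the real node, omit -f (defaults to ~/.ssh/id_rsa) and then run:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub root@amdha01
KEY=$(mktemp -u)                      # throwaway path, demo only
ssh-keygen -t rsa -N '' -q -f "$KEY"
[ -f "$KEY" ] && [ -f "$KEY.pub" ] && echo "keypair created"
```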
1, Upload files
ftp
Install: yum install lrzsz -y
Use the rz command
Upload the Hadoop and JDK packages
2, Unpack
Create a dedicated directory on Linux for the installation packages,
to make them easy to find later:
mkdir /opt/***
tar -zxvf jdk
3, Configure the environment variables
vi /etc/profile
Add:
export JAVA_HOME=/opt/software/jdk1.8.0_121
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile ----- makes the environment variables take effect
Note:
user variables: ~/.bashrc
system variables: /etc/profile
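A quick sanity check that the profile edit above took effect (JAVA_HOME path taken from this log; adjust to your actual install directory):

```shell
# The two lines added to /etc/profile (JAVA_HOME path from this log):
export JAVA_HOME=/opt/software/jdk1.8.0_121
export PATH=$PATH:$JAVA_HOME/bin

# Sanity check: PATH now ends with $JAVA_HOME/bin.
echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "PATH updated"
```

After `source /etc/profile`, `java -version` should then work from any directory (assuming the JDK really is unpacked at that path).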
3, Upload the Hadoop package
tar -zxvf hadoop-2.6.5.tar.gz
4, Configuration layout
bin - operating system commands (create/delete/modify/search files)
sbin - system management commands (start/stop the cluster)
etc/hadoop - configuration files
1、slaves - DataNode (DN) hosts
2、hdfs-site.xml
<property>
<name>dfs.replication</name> <!-- number of replicas -->
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name> <!-- SecondaryNameNode address -->
<value>node01:50090</value>
</property>
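Note that the property blocks go inside the file's single root element; a sketch of the whole hdfs-site.xml, assuming only the two properties noted above are set:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node01:50090</value>
  </property>
</configuration>
```

The same wrapper applies to core-site.xml below.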
3、core-site.xml
<property>
<name>fs.defaultFS</name> <!-- NameNode address -->
<value>hdfs://node01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name> <!-- where the NameNode stores the data it generates after startup -->
<value>/var/abc/hadoop/local</value>
</property>
4、Change every Java path inside the *-env.sh files to an absolute path.
By default these scripts read JAVA_HOME back from the environment, which depends on the JDK setup;
the Java path may not be found that way, so set it directly to an absolute path.
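For hadoop-env.sh this means replacing the indirect `export JAVA_HOME=${JAVA_HOME}` line with the absolute path; a sketch using sed on a throwaway copy (on the real node the file lives under the Hadoop install's etc/hadoop/ directory):

```shell
# Demo on a throwaway copy; on the real node edit
# /opt/software/hadoop-2.6.5/etc/hadoop/hadoop-env.sh in place.
env_file=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$env_file"   # the stock indirect line

# Rewrite it to the absolute JDK path used in this log (GNU sed -i).
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/software/jdk1.8.0_121|' "$env_file"
cat "$env_file"
```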
5, Format
cd /opt/software/hadoop-2.6.5/bin/
./hdfs namenode -format ----- makes all configuration take effect
6, Start command
cd /opt/software/hadoop-2.6.5/sbin (adjust to the installation directory you created)
./start-dfs.sh
jps ----- view the running processes
7, Configure the Hadoop environment variables
vi /etc/profile
Add:
export HADOOP_HOME=/opt/software/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
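Same pattern as the JDK step; a quick check of these two lines (HADOOP_HOME path from this log):

```shell
export HADOOP_HOME=/opt/software/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Both bin and sbin should now be on PATH.
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && \
echo "$PATH" | grep -q "$HADOOP_HOME/sbin" && echo "hadoop on PATH"
```

After `source /etc/profile`, commands such as `hdfs` and `start-dfs.sh` can then be run from any directory.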