----- Steps to build a fully distributed HDFS cluster (hands-on tutorial)


1, Network mapping of the cluster nodes
192.168.79.123 amdha01
192.168.79.124 amdha02
192.168.79.125 node03
192.168.79.126 node04
Note: substitute your own IP addresses.
To send a file from one node to another:
scp /etc/hosts root@192.168.79.124:/etc
Note: repeat the command for each node, changing the IP every time; otherwise the other hosts cannot be resolved by name (a loop form is sketched below).
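A minimal sketch of that distribution as a loop, assuming the four hosts listed above and root SSH access (each scp will still prompt for a password, since passwordless login is only configured in step 2):

# Run on the node where /etc/hosts was edited (sketch; IPs and names assumed from above)
for ip in 192.168.79.124 192.168.79.125 192.168.79.126; do
    scp /etc/hosts root@$ip:/etc/hosts
done
# quick check that every hostname now resolves
for h in amdha01 amdha02 node03 node04; do
    ping -c 1 $h
done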
2, Configuration between the nodes
(1) Time synchronization
① Install ntp on every node: yum install ntp
② Pick an Internet time server, e.g. ntp1.aliyun.com
③ Synchronize the time: ntpdate ntp1.aliyun.com
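A minimal per-node sketch, assuming a yum-based distribution with outbound network access; the cron entry is an optional addition, not part of the original steps:

# Run on every node (sketch)
yum install -y ntp
ntpdate ntp1.aliyun.com
# optional: resync every hour (assumption, not from the article)
echo "0 * * * * /usr/sbin/ntpdate ntp1.aliyun.com" >> /var/spool/cron/root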
(2) Configure passwordless SSH login
node01 -> node01, node01 -> amdha02, node01 -> node03, node01 -> node04
① On all nodes, generate a key pair: ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa (the private key)
② On node01, add node01's public key to the authorized list (public key) of every node, so that connections can be established via the public key:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@amdha02
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node03
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node04
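A quick check from node01; if the commands above succeeded, each login should complete without a password prompt:

# Run on node01 (sketch)
for h in node01 amdha02 node03 node04; do
    ssh root@$h hostname    # prints the remote hostname; no password should be requested
done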
(3) Modify the hdfs-site.xml configuration file
(4) Modify the core-site.xml configuration file
(5) Modify the slaves configuration file
If you are not sure how to do (3), (4) and (5), refer to the pseudo-distributed setup, which has detailed screenshots:
https://blog.csdn.net/power_k/article/details/91572131
Also change the Java path inside the *-env.sh files to an absolute path. A sketch of these configuration files is given below.
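A minimal sketch of what these files might contain, written as shell here-documents. The concrete values (NameNode address and port, data directory, replication factor, SecondaryNameNode host, DataNode list) are assumptions based on the host names above and the /var/abc note in step (7); take the real values from the referenced article and your own cluster.

# Sketch only: run in the Hadoop config directory (path assumed from step (7))
cd /opt/software/hadoop-2.6.5/etc/hadoop
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>   <!-- assumed NameNode address -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/abc</value>             <!-- assumed data directory, matching the rm -rf /var/abc note -->
  </property>
</configuration>
EOF
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>                    <!-- assumed replication factor -->
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>amdha02:50090</value>        <!-- assumed SecondaryNameNode host -->
  </property>
</configuration>
EOF
cat > slaves <<'EOF'
amdha02
node03
node04
EOF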
(6) Distribute to node02, node03 and node04
Distribute the configured installation packages to the other nodes:
scp -r jdk1.8.0_121 root@node03:/opt/software
scp -r hadoop-2.6.5 root@node03:/opt/software
If the packages are in the same directory, you can send them in one go:
scp -r /opt/software root@node03:/opt
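The same step as a loop; a sketch assuming the worker hosts are amdha02, node03 and node04 (as in the /etc/hosts mapping above) and that the JDK and Hadoop both live under /opt/software:

# Run on node01 (sketch; adjust the host list to your cluster)
for h in amdha02 node03 node04; do
    scp -r /opt/software root@$h:/opt
done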
(7) Format the NameNode (this creates its directories and files) on node01:
cd /opt/software/hadoop-2.6.5/bin
./hdfs namenode -format
Note: if this is not the first format, be sure to rm -rf /var/abc first; files left over from a previous format would still take effect and are likely to cause conflicts.
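When re-formatting, the old data directory has to be removed on every node, not just on node01. A hedged sketch, assuming /var/abc is the data directory on all nodes (as the note implies) and the host names used above:

# Run on node01 before re-formatting (sketch; destructive, wipes all existing HDFS data)
for h in node01 amdha02 node03 node04; do
    ssh root@$h rm -rf /var/abc
done
cd /opt/software/hadoop-2.6.5/bin
./hdfs namenode -format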

(9) Start HDFS: start-dfs.sh
The start script is in the sbin directory.
After starting, open the web page at ip:50070.
If the page appears, the startup was successful.
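A sketch of starting and checking the cluster, assuming the paths and host names used above:

# Run on node01 (sketch)
cd /opt/software/hadoop-2.6.5/sbin
./start-dfs.sh
# each node should now run its daemon: NameNode on node01, DataNodes on the workers
for h in node01 amdha02 node03 node04; do
    ssh root@$h jps        # jps must be on the remote PATH
done
# the NameNode web UI should answer on port 50070
curl http://node01:50070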
(10) Operate on the HDFS file system
① Create directories: hdfs dfs -mkdir -p /user/root
② Upload files: hdfs dfs -D dfs.blocksize=1048576 -put
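A short worked example; the local file test.txt and the target directory are hypothetical:

# Sketch; test.txt is a hypothetical local file
hdfs dfs -mkdir -p /user/root
hdfs dfs -D dfs.blocksize=1048576 -put test.txt /user/root
hdfs dfs -ls /user/root
# dfs.blocksize=1048576 stores the file in 1 MB blocks, which can be seen in the 50070 web UI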
