HDFS Shell Operations (Notes)

Basic operations (single cluster):
1. Create a directory:
[hadoop@master ~]$ hadoop fs -mkdir -p /20191021
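The -p flag also creates any missing parent directories, so a nested path can be made in one step (the subdirectory names below are just an example):
[hadoop@master ~]$ hadoop fs -mkdir -p /20191021/input/raw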
2. Upload a file:
[hadoop@master ~]$ hadoop fs -put test.txt /20191021
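-put also accepts several local files at once when the destination is a directory (file names here are hypothetical):
[hadoop@master ~]$ hadoop fs -put a.txt b.txt /20191021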
3. View a file:
[hadoop@master ~]$ hadoop fs -cat /20191021/test.txt
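For large files it is safer to pipe -cat through head than to dump the whole file to the terminal:
[hadoop@master ~]$ hadoop fs -cat /20191021/test.txt | head -n 10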
4. Copy a file to the local filesystem:
[hadoop@master ~]$ hadoop fs -get /20191021/test.txt
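-get can also take an explicit local destination path, for example:
[hadoop@master ~]$ hadoop fs -get /20191021/test.txt /tmp/test.txt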
5. List the contents of a directory:
[hadoop@master ~]$ hadoop fs -ls /20191021
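Add -R to list subdirectories recursively (on newer Hadoop releases; older 1.x versions used -lsr instead):
[hadoop@master ~]$ hadoop fs -ls -R /20191021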
6. Delete a file:
[hadoop@master ~]$ hadoop fs -rm /20191021/test.txt
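If the HDFS trash feature is enabled, -rm only moves the file to trash; -skipTrash deletes it immediately:
[hadoop@master ~]$ hadoop fs -rm -skipTrash /20191021/test.txt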
7. Delete a directory:
[hadoop@master ~]$ hadoop fs -rmr /20191021
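Note that -rmr is deprecated on newer Hadoop releases; the equivalent modern form is:
[hadoop@master ~]$ hadoop fs -rm -r /20191021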
Common administrator commands:
1. List running jobs:
[hadoop@master ~]$ hadoop job -list
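On Hadoop 2.x and later the hadoop job command is deprecated; the equivalent form is:
[hadoop@master ~]$ mapred job -list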
2. Kill a running job:
[hadoop@master ~]$ hadoop job -kill <job ID>
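The argument is the job ID as shown by -list; for example (this ID is made up):
[hadoop@master ~]$ hadoop job -kill job_201910211010_0001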
3. Check HDFS block status for corruption:
[hadoop@master ~]$ hadoop fsck /
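fsck can also report per-file block detail for a specific path using its standard options:
[hadoop@master ~]$ hadoop fsck /20191021 -files -blocks -locations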
4. Check HDFS block status and delete corrupt blocks:
[hadoop@master ~]$ hadoop fsck / -delete
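A less destructive alternative is -move, which moves corrupt files to /lost+found instead of deleting them:
[hadoop@master ~]$ hadoop fsck / -move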
5. Check HDFS block status, including DataNode information:
[hadoop@master ~]$ hadoop dfsadmin -report
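On newer releases the hdfs command is preferred over hadoop for admin operations:
[hadoop@master ~]$ hdfs dfsadmin -report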
6. Put HDFS into safe mode:
[hadoop@master ~]$ hadoop dfsadmin -safemode enter
7. Take HDFS out of safe mode:
[hadoop@master ~]$ hadoop dfsadmin -safemode leave
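The same subcommand can also query the current safe-mode state, or block until it is off:
[hadoop@master ~]$ hadoop dfsadmin -safemode get
[hadoop@master ~]$ hadoop dfsadmin -safemode wait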
8. Rebalance data across the cluster:
[hadoop@master hadoop]$ sbin/start-balancer.sh
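The balancer accepts a threshold, the allowed percentage deviation in disk usage between DataNodes (10 is the default; a smaller value balances more aggressively):
[hadoop@master hadoop]$ sbin/start-balancer.sh -threshold 5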
