Hadoop pseudo-distributed single-node cluster

Related software

 

Software   Version        Description / address
--------   -------        ---------------------
Hadoop     3.1.2          Download: https://hadoop.apache.org/releases.html
JDK        1.8            Compatibility: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions
CentOS     7.6 (64-bit)

Installing the JDK

Upload and install the JDK:
    rpm -ivh jdk-8u211-linux-x64.rpm
Configure the environment variables: add or modify the following in /etc/profile:
    export JAVA_HOME=/usr/java/jdk1.8.0_211-amd64
    export PATH=$PATH:$JAVA_HOME/bin
Make the environment variables take effect:
    source /etc/profile
Verify Java:
    java -version
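The profile change above can be sanity-checked without logging out and back in: the sketch below writes the same two export lines to a scratch file and sources it in a subshell, then confirms PATH was extended. (The JDK path is the one used above; adjust it to your actual install directory.)

```shell
# Write the two profile lines to a scratch file (the path is the one the
# rpm install above produces; change it if your JDK lives elsewhere).
cat > /tmp/profile.test <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_211-amd64
export PATH=$PATH:$JAVA_HOME/bin
EOF
# Source it in a subshell so the current shell is untouched, then check
# that $JAVA_HOME/bin landed on PATH.
( . /tmp/profile.test; echo "$PATH" | grep -q "$JAVA_HOME/bin" && echo "PATH ok" )
```

If this prints "PATH ok", the same lines in /etc/profile will work after `source /etc/profile`.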

Installing Hadoop

1) Upload hadoop-3.1.2.tar.gz to the /opt directory and unpack it:
    $ tar -zxvf hadoop-3.1.2.tar.gz
2) Modify the configuration file hadoop-env.sh (at line 54):
    $ vim /opt/hadoop-3.1.2/etc/hadoop/hadoop-env.sh +54
       export JAVA_HOME=/usr/java/jdk1.8.0_211-amd64
3) Configure the environment variables: modify /etc/profile and append the following:
    export HADOOP_HOME=/opt/hadoop-3.1.2              # new
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin # appended
Run source /etc/profile to make the environment variables take effect.

Passwordless SSH login (required even on a single node)

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
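The three steps above can be exercised against a throwaway directory first, to see exactly which files they produce without touching your real ~/.ssh. A sketch, assuming the OpenSSH client tools are installed:

```shell
# Generate an empty-passphrase RSA key pair in a scratch directory.
dir=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$dir/id_rsa" -q
# Authorize our own public key, exactly as the real steps do for ~/.ssh.
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
# sshd refuses authorized_keys files with group/world access, hence 0600.
chmod 0600 "$dir/authorized_keys"
ls "$dir"
```

Afterwards, `ssh localhost` should log in without prompting for a password; the start-dfs.sh and start-yarn.sh scripts rely on this to launch daemons over ssh.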

Modify the configuration file core-site.xml

vim /opt/hadoop-3.1.2/etc/hadoop/core-site.xml

Add the following properties inside <configuration> (the storage path is customizable):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/doufy/tmp/hadoop</value>
    </property>
</configuration>

Modify the configuration file hdfs-site.xml:

vim /opt/hadoop-3.1.2/etc/hadoop/hdfs-site.xml

Add the following property inside <configuration>:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Change the configuration file mapred-site.xml:

vim /opt/hadoop-3.1.2/etc/hadoop/mapred-site.xml

Add the following properties inside <configuration>:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
</configuration>

Modify the configuration file yarn-site.xml:

vim /opt/hadoop-3.1.2/etc/hadoop/yarn-site.xml

Add the following properties inside <configuration>:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

Format the file system

Formatting generates the directory tree under the path configured earlier (hadoop.tmp.dir in core-site.xml):

# hdfs namenode -format

Common Commands


Starting the NameNode and DataNode daemons

# /opt/hadoop-3.1.2/sbin/start-dfs.sh
# /opt/hadoop-3.1.2/sbin/start-yarn.sh

Access addresses:
NameNode web interface: http://IP:9870/
ResourceManager web interface: http://IP:8088/
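Once the daemons are up, a couple of everyday `hdfs dfs` subcommands confirm the filesystem is usable. The snippet below is a sketch: it guards on `hdfs` being on PATH, so it is safe to paste even before the install is finished.

```shell
# Degrades gracefully if Hadoop is not yet on PATH.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -mkdir -p "/user/$(whoami)"   # create your HDFS home directory
    hdfs dfs -ls /                         # list the HDFS root
else
    echo "hdfs not on PATH - source /etc/profile after installing Hadoop"
fi
```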

Running these scripts as the root user produces an error; the fixes are as follows.

Fix 1:
vim sbin/start-dfs.sh
vim sbin/stop-dfs.sh
Add the following to both files:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Fix 2:
vim sbin/start-yarn.sh
vim sbin/stop-yarn.sh
Add the following to both files:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root


Origin www.cnblogs.com/doufy/p/10978818.html