Installing Hadoop on a Single Machine

During day-to-day coding, debugging, and verification it is common to set up a single-node Hadoop, which makes testing convenient.

I. Install Java

Before installing Hadoop, make sure Java is installed on your system. Use the java -version command to check the installed Java version.

Find the Java installation path and note it down; you will need it later when configuring Hadoop.
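One quick way to locate the JDK directory on CentOS (a sketch assuming OpenJDK was installed via yum; adjust for your distribution):

readlink -f $(which java)
# prints something like /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64/jre/bin/java
# JAVA_HOME is the directory above jre/bin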


II. Install Hadoop

1. Create a system account named hadoop for the Hadoop installation

[root@localhost ~]# useradd hadoop

[root@localhost ~]# passwd hadoop

2. Grant the new hadoop account sudo (root) privileges

vim /etc/sudoers   # using visudo is safer here, since it validates the syntax before saving

Add the line: hadoop  ALL=(ALL)       ALL
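The entry is typically placed next to the existing root line, so the relevant section of /etc/sudoers ends up looking like this:

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL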


3. Configure SSH keys for the hadoop account to enable passwordless SSH login

1) Switch to the hadoop user

[root@localhost ~]# su - hadoop

2) Generate a key pair (with an empty passphrase, so no password is needed at login)

[hadoop@localhost ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

3) Append the public key to the authorized keys file

[hadoop@localhost ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

[hadoop@localhost ~]$ chmod 0600 ~/.ssh/authorized_keys

4) Test SSH to localhost (it should not prompt for a password)

[hadoop@localhost ~]$ ssh localhost
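On the very first connection, ssh asks to confirm the host key; answer yes. If a password is still requested after that, re-check the permissions, since ssh refuses keys in directories that are too open:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys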

4. Extract the Hadoop tarball and configure the Hadoop environment variables

1) Extract the Hadoop tarball

tar -zxvf hadoop-3.1.0.tar.gz
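If the tarball is not on the machine yet, it can be downloaded from the Apache archive (one possible source; a nearby mirror works too):

wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz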

2) Edit ~/.bashrc (vim ~/.bashrc) and append the following lines at the end of the file

# Note: HADOOP_HOME is the directory where hadoop-3.1.0.tar.gz was extracted

export HADOOP_HOME=/home/hadoop/hadoop-3.1.0
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

source ~/.bashrc
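At this point the hadoop command should be on the PATH; a quick sanity check is to print the version, which should report Hadoop 3.1.0 along with build information:

hadoop version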

3) Edit /home/hadoop/hadoop-3.1.0/etc/hadoop/hadoop-env.sh and set the JAVA_HOME environment variable:

cd /home/hadoop/hadoop-3.1.0/etc/hadoop
vim hadoop-env.sh
# replace the commented default "export JAVA_HOME=${JAVA_HOME}" with the JDK path noted earlier:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-5.b12.el7_4.x86_64

5. Edit the Hadoop configuration files

All four files below live in /home/hadoop/hadoop-3.1.0/etc/hadoop, and each <property> block goes inside the file's existing <configuration> element. Replace 10.175.118.105 with your own host's IP address (or use localhost on a purely local setup).

①[hadoop@localhost hadoop]$ vim core-site.xml

     <property>
         <name>fs.defaultFS</name>
         <value>hdfs://10.175.118.105:9000</value>
     </property>

②[hadoop@localhost hadoop]$ vim hdfs-site.xml

     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:///home/hadoop/data/hdfs/namenode</value>
     </property>
     <property>
         <name>dfs.datanode.data.dir</name>
         <value>file:///home/hadoop/data/hdfs/datanode</value>
     </property>
     <property>
        <name>dfs.namenode.rpc-address</name>
        <value>10.175.118.105:9000</value>
     </property>
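The NameNode and DataNode directories referenced above do not have to exist before formatting, but creating them up front avoids permission surprises later (paths as configured in hdfs-site.xml):

mkdir -p /home/hadoop/data/hdfs/namenode
mkdir -p /home/hadoop/data/hdfs/datanode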


③[hadoop@localhost hadoop]$ vim mapred-site.xml

    <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
     </property>
    <property>
      <name>mapreduce.application.classpath</name>
      <value>
       /home/hadoop/hadoop-3.1.0/etc/hadoop,
       /home/hadoop/hadoop-3.1.0/share/hadoop/common/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/common/lib/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/hdfs/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/hdfs/lib/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/mapreduce/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/mapreduce/lib/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/yarn/*,
       /home/hadoop/hadoop-3.1.0/share/hadoop/yarn/lib/*
     </value>
   </property>

④[hadoop@localhost hadoop]$ vim yarn-site.xml

     <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
     </property>

6. Format the NameNode and start the Hadoop services

① Format the NameNode

[hadoop@localhost hadoop]$ hdfs namenode -format
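If formatting succeeds, the output should end with a log line of the form (directory as configured in hdfs-site.xml above):

INFO common.Storage: Storage directory /home/hadoop/data/hdfs/namenode has been successfully formatted.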

② Start all Hadoop services

cd /home/hadoop/hadoop-3.1.0/sbin

Define the daemon user variables in the start/stop scripts first, otherwise startup fails with an error. (These variables are required when the scripts are run as root; if you start them as the hadoop user instead, use hadoop as the value rather than root.)

Edit start-dfs.sh and stop-dfs.sh (vim start-dfs.sh / vim stop-dfs.sh), adding the following 4 lines to each:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Edit start-yarn.sh and stop-yarn.sh, adding the following 3 lines to each:

YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn


./start-all.sh

jps
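On a healthy single-node setup, jps lists the five Hadoop daemons plus itself, along the lines of (PIDs will differ):

12001 NameNode
12135 DataNode
12350 SecondaryNameNode
12602 ResourceManager
12734 NodeManager
13011 Jps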


If all five daemons appear, the startup succeeded.

7. Run the pi-estimation example to confirm Hadoop works

hadoop jar /home/hadoop/hadoop-3.1.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.0.jar pi 5 10  
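Here 5 is the number of map tasks and 10 the number of samples per map; on success the job ends by printing a line of the form "Estimated value of Pi is ..." (the estimate is rough with so few samples; increase both numbers for better accuracy).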


8. Default Hadoop web UI address

http://10.175.118.105:8088/cluster/apps
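Port 8088 is the YARN ResourceManager UI. Note that in Hadoop 3.x the NameNode web UI moved from port 50070 to 9870, so the HDFS overview page for this setup is at:

http://10.175.118.105:9870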


Reposted from blog.csdn.net/u013385018/article/details/80688532