Pitfalls when installing Hadoop 3.0


1. The HDFS web UI now listens on port 9870 by default (it was 50070 in Hadoop 2.x); the YARN web UI is still on port 8088.
2. The slaves configuration file is gone; it has been renamed to workers, and that is where you list the DataNode hosts.
3. A few "Fail" lines may show up while formatting the NameNode. Don't let them make you doubt yourself: as long as the message common.Storage: Storage directory /usr/local/hadoop-3.0.2/hdfs/name has been successfully formatted. appears, the format worked.
4. Starting the cluster with start-dfs.sh and start-yarn.sh fails with:

Starting namenodes on [namenode]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [datanode1]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.

Solution:

Note: put these lines in the blank area at the top of each script.

In start-dfs.sh:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root 

In start-yarn.sh:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

That resolves the startup errors.
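An alternative that leaves the distribution scripts untouched is to export the same variables once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh (a sketch assuming the same run-as-root, non-secure setup as above):

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Note that the HADOOP_SECURE_DN_USER lines above only matter for secure (privileged-port) DataNodes; later 3.x releases warn that the variable has been replaced by HDFS_DATANODE_SECURE_USER.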

A few basics are appended below.

Command for passwordless SSH login:

ssh-keygen -t rsa -P ""

This generates a key pair under ~/.ssh in your home directory. Append the public key id_rsa.pub to authorized_keys (run: cat id_rsa.pub >> authorized_keys).
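On a multi-node cluster the public key also has to reach every worker. A minimal sketch, assuming the three hostnames used in this post and that password logins for root are still allowed:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# copy the key to every node, this one included
for host in namenode datanode1 datanode2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$host
done
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
ssh datanode1 hostname   # should print datanode1 without a password prompt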

core-site.xml

<configuration>
    <!-- Default filesystem URI: the NameNode RPC address (9000 is the RPC port, distinct from the 9870 web UI port) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode:9000</value>
    </property>
    <!-- Size of read/write buffer used in SequenceFiles. -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <!-- Hadoop temporary directory; create it yourself -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-3.0.2/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>datanode1:50090</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/usr/local/hadoop-3.0.2/hdfs/name</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/usr/local/hadoop-3.0.2/hdfs/data</value>
    </property>
</configuration>
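Two notes on this file. First, Hadoop 3 moved the SecondaryNameNode HTTP port from 50090 to 9868 by default; the explicit 50090 above still works, it is just no longer the default. Second, the local directories referenced here and in core-site.xml are not created automatically; assuming the paths above, create them on each node before formatting:

mkdir -p /usr/local/hadoop-3.0.2/tmp
mkdir -p /usr/local/hadoop-3.0.2/hdfs/name   # NameNode metadata (dfs.namenode.name.dir)
mkdir -p /usr/local/hadoop-3.0.2/hdfs/data   # DataNode blocks (dfs.datanode.data.dir)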

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
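With only mapreduce.framework.name set, MapReduce jobs on Hadoop 3 often abort with an error like "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster". If you hit that, also point the framework at the installation directory (value assumed to match this post's install path):

<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/local/hadoop-3.0.2</value>
</property>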

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode:8088</value>
    </property>
</configuration>
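These five resourcemanager addresses are just the YARN defaults with the host set to namenode. If the default ports are acceptable, an equivalent shorter form is a single property:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>namenode</value>
</property>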

workers (note: since namenode is listed here too, a DataNode will also run on the NameNode host)

namenode
datanode1
datanode2

hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_171
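When the daemons are launched through the start scripts, the login environment is not carried over ssh, so JAVA_HOME has to be set explicitly here or startup fails with "ERROR: JAVA_HOME is not set and could not be found.". If the JDK path is unknown, one way to look it up (assuming java is on the PATH of a typical Linux install):

readlink -f "$(which java)" | sed 's|/bin/java||'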

Format the NameNode

hdfs namenode -format
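Once the format succeeds, start the daemons and run a quick sanity check (hostnames as above):

start-dfs.sh
start-yarn.sh
jps                     # each node should list its expected daemons (NameNode, DataNode, ResourceManager, NodeManager, ...)
hdfs dfsadmin -report   # all three DataNodes should show up as live
# web UIs: http://namenode:9870 (HDFS) and http://namenode:8088 (YARN)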
