CentOS 7 environment: JDK 1.8 + Hadoop 2.9.0 + Spark 2.2.1

1. Install JDK 1.8, either from an RPM package or by extracting a tarball, as sketched below.
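
    A minimal sketch of both options; the package file names are assumptions and must match the JAVA_HOME path configured in the next step:

        # Option A: install from RPM (creates /usr/java/jdk1.8.0_121)
        rpm -ivh jdk-8u121-linux-x64.rpm

        # Option B: extract a tarball to the same location
        mkdir -p /usr/java
        tar -xzf jdk-8u121-linux-x64.tar.gz -C /usr/java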

2. Configure the JDK environment: vi /etc/profile and append at the end:

    export JAVA_HOME=/usr/java/jdk1.8.0_121

    export JRE_HOME=${JAVA_HOME}/jre

    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

    export PATH=${JAVA_HOME}/bin:$PATH

    Save and exit, then run source /etc/profile to apply the changes.
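
    A quick verification that the new JDK is now on the PATH:

        java -version    # should report java version "1.8.0_121"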

3. Install Hadoop: download and extract the distribution.
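
    A minimal download-and-extract sketch, using the Apache release archive as the mirror:

        wget https://archive.apache.org/dist/hadoop/common/hadoop-2.9.0/hadoop-2.9.0.tar.gz
        tar -xzf hadoop-2.9.0.tar.gz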

    1) Create the directory structure, i.e. the local data directories referenced by the XML configuration below.
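
    For example, matching the paths used in core-site.xml and hdfs-site.xml below:

        mkdir -p /data/hadoopfile/tmp
        mkdir -p /data/hadoopfile/dfs/name
        mkdir -p /data/hadoopfile/dfs/data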

    2) Check the version:  ./hadoop-2.9.0/bin/hadoop version

    3) Edit hadoop-2.9.0/etc/hadoop/core-site.xml and add the following inside the <configuration> element:
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://10.10.110.143:9000</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>131072</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/data/hadoopfile/tmp/</value>
        </property>

    4) Edit hadoop-2.9.0/etc/hadoop/hdfs-site.xml and add the following inside the <configuration> element:
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/data/hadoopfile/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/data/hadoopfile/dfs/data</value>
        </property>

    5) Edit hadoop-2.9.0/etc/hadoop/mapred-site.xml (copy it from mapred-site.xml.template if the file does not exist) and add the following inside the <configuration> element; note that the last two JobTracker properties are legacy MRv1 settings and are ignored when the framework is yarn:
        <property> 
                <name>mapreduce.framework.name</name> 
                <value>yarn</value>
        </property>
        <property>
                 <name>mapreduce.jobhistory.address</name>
                 <value>10.10.110.143:10020</value>
        </property>
        <property>
                 <name>mapreduce.jobhistory.webapp.address</name>
                 <value>10.10.110.143:19888</value>
        </property>
        <property>
                 <name>mapreduce.jobtracker.http.address</name>
                 <value>10.10.110.143:50030</value>
        </property>
        <property>
                 <name>mapred.job.tracker</name>
                 <value>10.10.110.143:9001</value>
        </property>

    6) Edit hadoop-2.9.0/etc/hadoop/yarn-site.xml and add the following inside the <configuration> element:
        <property>
                 <name>yarn.nodemanager.aux-services</name>
                 <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>10.10.110.143</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>10.10.110.143:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>10.10.110.143:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>10.10.110.143:8031</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>10.10.110.143:8033</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>10.10.110.143:8088</value>
        </property>

    7) Add the slave node addresses to hadoop-2.9.0/etc/hadoop/slaves, one host per line, as in the sketch below.
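
    A hypothetical example with two placeholder worker addresses (not from the original setup):

        # hadoop-2.9.0/etc/hadoop/slaves -- one worker host or IP per line
        10.10.110.144
        10.10.110.145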

    8) Edit hadoop-2.9.0/etc/hadoop/hadoop-env.sh and set JAVA_HOME to the absolute JDK path (the ${JAVA_HOME} inherited from the login shell is not visible to daemons started over SSH), as below.
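
    For example, matching the JDK path configured earlier:

        # in hadoop-2.9.0/etc/hadoop/hadoop-env.sh
        export JAVA_HOME=/usr/java/jdk1.8.0_121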

    9) Start Hadoop:  ./hadoop-2.9.0/sbin/start-all.sh  (if the slaves fail to start, set up passwordless SSH login; see the sketch below).
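
    On a fresh cluster the NameNode normally has to be formatted once before the first start, and the master needs passwordless SSH to every node listed in slaves. A minimal sketch (the slave IP is a placeholder):

        # one-time NameNode format -- wipes HDFS metadata, first start only
        ./hadoop-2.9.0/bin/hdfs namenode -format

        # passwordless SSH from the master to each slave
        ssh-keygen -t rsa                # accept defaults, empty passphrase
        ssh-copy-id root@10.10.110.144   # repeat for every slave

        # start everything, then check which daemons came up
        ./hadoop-2.9.0/sbin/start-all.sh
        jps    # expect NameNode/SecondaryNameNode/ResourceManager on the master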

    10) Check the result:  http://10.10.110.143:50070 (HDFS web UI)  and  http://10.10.110.143:8088 (YARN web UI)

4. Install Spark 2.2.1

    1) Download and extract Spark, as sketched below.
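
    A minimal sketch, assuming the prebuilt-for-Hadoop-2.7 package from the Apache release archive:

        wget https://archive.apache.org/dist/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz
        tar -xzf spark-2.2.1-bin-hadoop2.7.tgz
        mv spark-2.2.1-bin-hadoop2.7 spark    # match the spark/ path used below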

    2) Edit spark/conf/spark-env.sh (copy it from spark-env.sh.template if it does not exist yet) and append at the end:

            JAVA_HOME=/usr/java/jdk1.8.0_121
            SPARK_MASTER_HOST=10.10.110.143      # Master IP address; the default master port is 7077

    3) Edit spark/conf/slaves (copy it from slaves.template if needed) and append the slave server addresses, as in the sketch below.
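
    A hypothetical example, reusing the placeholder worker addresses from the Hadoop slaves file:

        # spark/conf/slaves -- one worker host or IP per line
        10.10.110.144
        10.10.110.145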

    4) Start Spark:  ./spark/sbin/start-all.sh  (if the slaves fail to start, see the passwordless SSH setup above)

    5) Check the result:  http://10.10.110.143:8080 (Spark Master web UI), then run the smoke test below.
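
    To confirm that jobs actually run on the cluster, a smoke test with the bundled SparkPi example (the jar name assumes the Scala 2.11 build shipped with Spark 2.2.1):

        ./spark/bin/spark-submit \
            --master spark://10.10.110.143:7077 \
            --class org.apache.spark.examples.SparkPi \
            ./spark/examples/jars/spark-examples_2.11-2.2.1.jar 100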

Reposted from blog.csdn.net/bighacker/article/details/79877373