spark 1.0 - Cluster Setup

Copyright notice: this is an original post by the author and may not be reproduced without permission. https://blog.csdn.net/shouhuxianjian/article/details/55102112

Background

Machine environment: the department has 10 servers, each with an Intel E5-2690 v3 (48 cores) and 775 GB of RAM, running HDFS, Hive, and Spark, with YARN as Spark's resource scheduler. Because the resources allocated to us were limited, I manually built a Spark cluster on the 6 servers our group owns, each with an Intel E5-2670 v3 (48 cores), 128 GB of RAM, and 18 TB of disk (behind a single drive controller port).
Task: sort and lightly clean 20 TB of compressed text data (compression ratio around 3, so over 60 TB uncompressed, stored on 5 of the group's 6 machines) as input for later processing.
Data format: two record types, category A (5 fields, more than 100 million groups) and category B (6 fields, not counted, but more groups than category A). Both categories need to be sorted by field 1, and within each field-1 value by field 2 (see the sketch just below).
Progress: I first tried Python multiprocessing, offloading the expensive parts to shell commands and coordinating machines with paramiko, sshpass, and the like. In the best case it took an hour to produce 7 GB of results, and another hour to delete those 7 GB (lots of directory and file operations). My hand-rolled Python pseudo-distributed scheduling code was essentially a failure, so: on to Spark!
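To make the ordering requirement concrete, here is a minimal Spark-shell sketch (a sketch only, not the actual job: it assumes tab-separated lines, plain string comparison on the first two fields, and placeholder HDFS paths):

// Sort by field 1, then by field 2 within each field-1 group.
val lines = sc.textFile("hdfs://masterHost:9000/data/categoryA")
val sorted = lines
  .map { line =>
    val f = line.split("\t")
    ((f(0), f(1)), line)        // composite key: (field1, field2)
  }
  .sortByKey()                   // range-partitioned total order over the composite key
  .values
sorted.saveAsTextFile("hdfs://masterHost:9000/data/categoryA_sorted")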

Setting up the Spark cluster

Building a Spark cluster requires HDFS (on a single machine you can simply read local files). The key pieces are: HDFS, YARN, Spark, and standalone mode.
HDFS: a distributed file system. It layers a file-management system on top of the Linux file systems of multiple machines, so storing and accessing files across machines is highly abstracted; its built-in fault tolerance and recovery mechanisms keep files complete and correct. For example:

hadoop fs -cat /hdfs/text

The file this command reads may in fact be spread across several machines, but that is transparent to the user. And the command

hadoop fs -put /local/text  /hdfs/remote 

simply converts the file into HDFS's storage format and automatically distributes it across the machines (preferring the local machine).
YARN: a resource-management framework that handles, for example, allocating and releasing memory and deciding how many CPU cores may be used.
standalone: a simple resource manager that ships with Spark; if your production requirements are modest, it can stand in for YARN.
Spark: the compute framework, analogous to MapReduce in the Hadoop ecosystem.

1 - Passwordless SSH between cluster machines

First, set up passwordless SSH between all cluster machines (including each machine to itself).
The paramiko package makes this quick:

import paramiko
from subprocess import Popen, PIPE

# Generate an RSA key pair non-interactively (empty passphrase).
args = ['ssh-keygen', '-t', 'rsa', '-N', '', '-f', '/loginName/.ssh/id_rsa']
Popen(args, stdout=PIPE).communicate()
content = open("/loginName/.ssh/id_rsa.pub").read()

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ips = []  # all cluster IPs, including this machine
for ip in ips:
    # Log in with the password once, then append our public key to authorized_keys.
    ssh.connect(ip, 22, loginname, password)
    stdin, stdout, stderr = ssh.exec_command(
        'echo "{}" >> ~/.ssh/authorized_keys'.format(content))

This gives the local machine passwordless SSH access to every machine in the cluster; repeat the steps on each node (or reuse the same key pair) if every machine needs to reach every other.

2 - Install Java 1.8

Spark is written in Scala, Scala is a JVM language, and Hadoop itself is implemented in Java, so installing Java goes without saying; I recommend the latest 1.8 release. Add the following environment variables to bash's init file "~/.bashrc":

export JAVA_HOME=/path/to/java/jdk1.8.0_112
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH

Installing and configuring HDFS and YARN

Spark supports four launch modes:

1) --master local[5]
2) --master spark://masterHost:7077
3) --master yarn
4) --master mesos://masterHost:port

Mode 1) is local mode, here running 5 threads; the rest are cluster modes.
In fact you could just download the Spark package, configure it, and deploy, since it ships with the standalone resource manager, i.e. launch mode 2). But if you only install Spark and load local files, it will complain that the other cluster machines do not have the file at that path: say I want to use "file:///root/test" on the master, it will report that "/root/test" is missing on the other machines. So the fix, if you run in cluster mode, is to read files from HDFS instead (Spark supports two file URI schemes, hdfs:// and file://, and defaults to the former, i.e. whatever fs.defaultFS points to).
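For example, in the Spark shell (a sketch; the paths are placeholders and assume the fs.defaultFS configured later, hdfs://masterHost:9000):

// Explicit HDFS URI: readable from every node in the cluster.
val remote = sc.textFile("hdfs://masterHost:9000/hdfs/text")

// No scheme: resolved against fs.defaultFS, so this also reads from HDFS.
val viaDefault = sc.textFile("/hdfs/text")

// file:// only works if the same path exists on the driver and on every executor node.
val local = sc.textFile("file:///root/test")

remote.count()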

1 - Download the latest Hadoop from the official site (I used 2.7.3)
Unpack it on the master node in a directory of your choice, then edit the following configuration files:

1) etc/hadoop/core-site.xml;
2) etc/hadoop/hdfs-site.xml;
3) etc/hadoop/yarn-site.xml;
4) etc/hadoop/mapred-site.xml
5) etc/hadoop/slaves

Official configuration guide
My configuration:
After the initial setup YARN mode worked fine, but later, for reasons I have not pinned down, jobs would no longer start through YARN, so I currently run everything in standalone mode; the yarn-site.xml below may therefore be flawed. That is, if you use standalone mode like me, you only need to configure 1), 2), and 5) above. [I will keep digging into what is wrong with YARN on this cluster.]

If you refer to machines by hostname, edit the /etc/hosts file accordingly; otherwise replace every hostname in the configs, such as masterHost, with the machine's IP address.
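For example, an /etc/hosts entry on every machine mapping the master's hostname to its address (the IP here is just a placeholder):

192.168.1.1   masterHost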

<!--core-site.xml-->
<configuration>
 <property>
          <name>fs.defaultFS</name>
          <value>hdfs://masterHost:9000</value>
 </property>                                                                    
 <property>
           <name>hadoop.tmp.dir</name>
           <value>/tmp/hadoop_2.7.3</value>
 </property>
 <property>
           <name>fs.trash.interval</name>
           <value>1440</value>
 </property>
 <property>
          <name>io.file.buffer.size</name>
          <value>131072</value>
 </property>
</configuration>
<!--hdfs-site.xml -->
<configuration>
   <property>
           <name>dfs.namenode.name.dir</name>
           <value>file:///a/directory/you/choose/for/namenode/metadata</value>
   </property>
   <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:///a/directory/on/one/disk/for/datanode/data,file:///a/directory/on/another/disk/or/partition/for/datanode/data</value>
   </property>
    <property>
            <name>dfs.nameservices</name>
            <value>masterHost</value>
            <description>Comma-separated list of nameservices.</description>
   </property>
    <property>
         <name>dfs.namenode.handler.count</name>
         <value>100</value>
         <description>The number of server threads for the namenode.</description>
    </property>
    <property>
           <name>dfs.datanode.handler.count</name>
           <value>40</value>
           <description>The number of server threads for the datanode.</description>
    </property>
    <property>
         <name>dfs.datanode.failed.volumes.tolerated</name>
         <value>1</value>
    </property>
    <property>
            <name>dfs.datanode.max.transfer.threads</name>
            <value>5120</value>
            <description>
                  Specifies the maximum number of threads to use for transferring data in and out of the DN.
            </description>
     </property>
    <property>
            <name>dfs.blocksize</name>
            <value>268435456</value>
            <description>
                The default block size for new files, in bytes. You can use the following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in bytes (such as 134217728 for 128 MB).
            </description>
    </property>
   <property>
            <name>dfs.datanode.data.dir.perm</name>
            <value>755</value>
            <description>Permissions for the directories on on the local filesystem where the DFS data node store its blocks. The permissions can either be octal or symbolic.
           </description>
   </property>
    <property>
            <name>dfs.block.local-path-access.user</name>
            <value>root</value>
            <description>
                      Comma separated list of the users allowd to open block files on legacy short-circuit local read.
            </description>
    </property>
    <property>
           <name>dfs.replication</name>
           <value>2</value>
    </property>
</configuration>
<!--yarn-site.xml -->
<configuration>
    <property>
           <name>yarn.resourcemanager.address</name>
           <value>masterHost:8032</value>
    </property>
    <property>
           <name>yarn.resourcemanager.scheduler.address</name>
           <value>masterHost:8030</value>
    </property>
    <property>
           <name>yarn.resourcemanager.resource-tracker.address</name>
           <value>masterHost:8031</value>
    </property>
    <property>
           <name>yarn.resourcemanager.admin.address</name>
           <value>masterHost:8033</value>
    </property>
    <property>
         <name>yarn.web-proxy.address</name>
         <value>masterHost:8034</value>
         <description>The address for the web proxy as HOST:PORT, if this is not given then the proxy will run as part of the RM
         </description>
    </property>
    <property>
           <name>yarn.resourcemanager.webapp.address</name>
           <value>masterHost:8099</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark_shuffle</value>
   </property>
   <property>
         <name>yarn.nodemanager.local-dirs</name>
         <value>file:///a/directory/path/you/choose</value>
   </property>
   <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
   <property>
             <name>yarn.scheduler.fair.allocation.file</name>
             <value>/path/to/hadoop/hadoop-2.7.3/etc/hadoop/fair-scheduler.xml</value>
   </property>
   <property>
          <name>yarn.scheduler.fair.preemption</name>
           <value>true</value>
           <description>Whether to use preemption.</description>
   </property>
   <property>
           <name>yarn.scheduler.increment-allocation-mb</name>
           <value>512</value>
   </property>
   <property>
           <name>yarn.scheduler.minimum-allocation-mb</name>
           <value>512</value>
           <description>The minimum allocation for every container request at the RM, 
                   in MBs. Memory requests lower than this won't take effect, and 
                   the specified value will get allocated at minimum.
           </description>
   </property>   
   <property>
           <name>yarn.scheduler.maximum-allocation-mb</name>
           <value>32768</value>
   </property>
    <property>
          <name>yarn.nodemanager.resource.memory-mb</name>
          <value>102400</value>
          <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
     </property>
     <property>
         <name>yarn.nodemanager.resource.cpu-vcores</name>
         <value>45</value>
         <description>Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating  resources for containers. This is not used to limit the number of  physical cores used by YARN containers.</description>
     </property>
     <property>
              <name>yarn.nodemanager.vmem-pmem-ratio</name>
              <value>1.8</value>
              <description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are  expressed in terms of physical memory, and virtual memory usage  is allowed to exceed this allocation by this ratio.
              </description>
     </property>
     <property>
              <name>yarn.nodemanager.pmem-check-enabled</name>
              <value>false</value>
     </property>
     <property>
              <name>yarn.log-aggregation.retain-check-interval-seconds</name>
              <value>86400</value>
              <description>How long to wait between aggregated log retention checks. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful set this too small and you will spam the name node.
              </description>
     </property>
     <property>
              <name>yarn.log-aggregation.retain-seconds</name>
              <value>2592000</value>
              <description> How long to keep aggregation logs before deleting them. -1 disables. Be careful set this too small and you will spam
              </description>
     </property>      
      <property>
              <name>yarn.log-aggregation-enable</name>
              <value>true</value>
       </property>
       <property>
             <name>yarn.nodemanager.vmem-check-enabled</name>
             <value>false</value>
             <description>Whether virtual memory limits will be enforced for containers.</description>
       </property>
   <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
   </property>
   <property>
      <name>yarn.log.server.url</name>
      <value>http://masterHost:19888/jobhistory/logs</value>
   </property>
</configuration>
<!--mapred-site.xml -->
<configuration>
    <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
    </property>    
    <property>
             <name>mapreduce.map.java.opts</name>
             <value>-Xmx1024m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:CMSFullGCsBeforeCompaction=1 -XX:+CMSParallelRemarkEnabled</value>
    </property>
    <property>
             <name>mapreduce.reduce.java.opts</name>
             <value>-Xmx2048m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:CMSFullGCsBeforeCompaction=1 -XX:+CMSParallelRemarkEnabled -XX:ParallelGCThreads=16</value>
   </property>
   <property>
             <name>mapreduce.map.memory.mb</name>
             <value>1500</value>
             <description>The amount of memory to request from the scheduler for each map task.</description>
   </property>
   <property>
            <name>mapreduce.reduce.memory.mb</name>
            <value>2500</value>
            <description>The amount of memory to request from the scheduler for each reduce task.</description>
   </property>
   <property>
      <name>mapreduce.task.io.sort.factor</name>
      <value>100</value>
      <description>The number of streams to merge at once while sorting files. This determines the number of open file handles.
      </description>
   </property>
   <property>
         <name>mapreduce.task.io.sort.mb</name>
         <value>512</value>
        <description>The total amount of buffer memory to use while sorting
                 files, in megabytes.  By default, gives each merge stream 1MB, which
                 should minimize seeks.</description>
   </property>
  <property>
           <name>mapreduce.reduce.shuffle.parallelcopies</name>
           <value>50</value>
           <description>The default number of parallel transfers run by reduce during the copy(shuffle) phase.
           </description>
  </property>
   <property>
            <name>mapreduce.jobhistory.address</name>
            <value>masterHost:10020</value>
            <description>MapReduce JobHistory Server IPC host:port</description>
   </property>
   <property>
           <name>mapreduce.jobhistory.webapp.address</name>
           <value>masterHost:19888</value>
           <description>MapReduce JobHistory Server Web UI host:port</description>
   </property>
   <property>
           <name>mapreduce.jobhistory.intermediate-done-dir</name>
           <value>/opt/hadoop-2.7.3/mr-history/tmp</value>
            <description>Directory where MapReduce jobs write history files while they run.</description>
   </property>
   <property>
           <name>mapreduce.jobhistory.done-dir</name>
           <value>/opt/hadoop-2.7.3/mr-history/done</value>
            <description>Directory where history files are managed by the MapReduce JobHistory Server.</description>
   </property>
</configuration>
#The slaves file just needs one data-node IP (or hostname) per line.
192.168.1.2
192.168.1.3

2 - Distribute hadoop-2.7.3 from the master node to the same location on every data node

scp -r hadoop-2.7.3 loginName@ip:`pwd`

Then, on the master node, format the cluster's HDFS namespace and start HDFS:

hadoop-2.7.3/bin/hdfs namenode -format
hadoop-2.7.3/sbin/start-dfs.sh

Now, on any node, you can run:

hadoop-2.7.3/bin/hadoop fs -ls /

With that, your HDFS setup is complete.

PS: if you need to start YARN:

hadoop-2.7.3/sbin/start-yarn.sh

3 - Download the latest Spark from the official site (I used 2.0.2)

Unpack the spark-2.0.2-bin-hadoop2.7 archive to a directory of your choice. Spark's configuration is comparatively simple; I only modified three files:

1) conf/spark-env.sh
2) conf/spark-defaults.conf
3) conf/slaves

spark-env.sh

export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
export YARN_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
export LD_LIBRARY_PATH=/opt/hadoop-2.7.3/lib/native:$LD_LIBRARY_PATH
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.retainedApplications=20 -Dspark.history.fs.logDirectory=hdfs://ngpcluster/tmp/spark-events"
SPARK_MASTER_IP=172.16.26.4
SPARK_LOCAL_DIRS=/opt/spark-2.0.2-bin-hadoop2.7
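Note that SPARK_HISTORY_OPTS above only configures the history server. For it to show completed applications, event logging normally also has to be switched on and the server started with sbin/start-history-server.sh; I assume something like the following in spark-defaults.conf, pointing at the same HDFS directory (these two lines are not part of my original config listing):

spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://ngpcluster/tmp/spark-events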

spark-defaults.conf (based on the official 2.0.2 parameter documentation)

# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

#application properties
spark.driver.cores    4
spark.driver.maxResultSize  2g
spark.driver.memory 40g
spark.executor.cores    4
#spark.executor.memory   20g

#runtime environment

#spark.executor.extraJavaOptions -XX:MaxPermSize=128M -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80



#shuffle properties
spark.reducer.maxSizeInFlight   100m
#spark.reducer.maxReqsInFlight  Int.MaxValue
spark.shuffle.manager   sort
spark.shuffle.spill true
spark.shuffle.blockTransferService  netty
spark.shuffle.consolidateFiles  true
spark.shuffle.compress  true
spark.shuffle.file.buffer       1024
spark.shuffle.io.maxRetries 4
spark.shuffle.io.numConnectionsPerPeer 3
spark.shuffle.io.preferDirectBufs   true
spark.shuffle.io.retryWait  20s
spark.shuffle.service.enabled   true
spark.shuffle.service.port  7337
spark.shuffle.sort.bypassMergeThreshold 200
spark.shuffle.spill.compress    true


#spark ui property


#compress and serialization
spark.broadcast.compress    true
spark.io.compression.codec      lz4
spark.io.compression.lz4.blockSize  32k
spark.io.compression.snappy.blockSize   32k
#spark.kryo.classesToRegister   none
spark.kryo.referenceTracking    true
spark.kryo.registrationRequired    false
#spark.kryo.registrator    none
spark.kryoserializer.buffer.max 64m
spark.kryoserializer.buffer 64k
spark.rdd.compress  true
spark.serializer    org.apache.spark.serializer.KryoSerializer
spark.serializer.objectStreamReset  100


#memory properties
spark.memory.fraction   0.6
spark.memory.storageFraction    0.5
spark.memory.offHeap.enabled    false
spark.memory.offHeap.size   0
spark.memory.useLegacyMode  false
#spark.shuffle.memoryFraction   0.2
#spark.storage.memoryFraction   0.6
#spark.storage.unrollFraction   0.2


#executor properties
spark.broadcast.blockSize   4m
spark.executor.heartbeatInterval 50s
spark.files.fetchTimeout    60s
spark.files.useFetchCache   true
spark.files.overwrite   false
spark.storage.memoryMapThreshold    2m


# Network
spark.rpc.message.maxSize   128
spark.rpc.retry.wait    5s
spark.network.timeout   1000s
spark.rpc.askTimeout    2000s
spark.rpc.lookupTimeout 1000s


# Scheduling
spark.locality.wait 0
spark.scheduler.maxRegisteredResourcesWaitingTime   30s
spark.scheduler.minRegisteredResourcesRatio 0.8
spark.scheduler.mode    FAIR
spark.task.cpus     2
spark.task.maxFailures  4
spark.scheduler.revive.interval 10s


#dynamic allocation
spark.dynamicAllocation.enabled false
spark.dynamicAllocation.executorIdleTimeout 600s
spark.dynamicAllocation.initialExecutors    0
spark.dynamicAllocation.minExecutors        0
spark.broadcast.factory     org.apache.spark.broadcast.TorrentBroadcastFactory

# spark others
spark.scheduler.executorTaskBlacklistTime 30000


#yarn properties
spark.yarn.submit.file.replication  5
spark.yarn.scheduler.heartbeat.interval-ms  5000
spark.yarn.queue    bigdata_platform
spark.yarn.executor.memoryOverhead  1g
spark.yarn.max.executor.failures 15
spark.yarn.jars hdfs://masterHost:9000/home/hadoop/spark_jars/*

The value of the spark.yarn.jars parameter above assumes you first run:

hadoop fs -put spark-2.0.2-bin-hadoop2.7/jars/* /home/hadoop/spark_jars
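A side note on the serializer settings above: with spark.serializer set to KryoSerializer, classes can optionally be registered (via spark.kryo.classesToRegister, or programmatically) so Kryo writes compact class identifiers; since spark.kryo.registrationRequired is left false here, this is optional. A minimal sketch with a made-up record class (RecordA is purely illustrative):

import org.apache.spark.SparkConf

// Hypothetical 5-field record type, e.g. for category A data.
case class RecordA(f1: String, f2: String, f3: String, f4: String, f5: String)

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registration lets Kryo serialize the class name as a small integer id.
  .registerKryoClasses(Array(classOf[RecordA]))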

slaves
As with HDFS earlier, put the worker nodes' hostnames or IPs in this file, one per line.

4 - Start the Spark cluster

spark-2.0.2-bin-hadoop2.7/sbin/start-all.sh

Then, via a browser:

http://masterHost:50070  hdfs
http://masterHost:8080   spark
http://masterHost:8088   yarn

Entering Spark in standalone mode

spark-2.0.2-bin-hadoop2.7/bin/spark-shell --master spark://masterHost:7077 --num-executors 40 --driver-memory 40g --executor-memory 15g --executor-cores 4 --conf spark.core.connection.ack.wait.timeout=100 --conf spark.task.maxFailures=5

If you don't want to package a jar and run it via spark-submit, you can use spark-shell as above: just put the code you need into a Scala file, e.g. helloworld.scala

val fh = sc.textFile("/src/file1")
fh.count

then run:

spark-2.0.2-bin-hadoop2.7/bin/spark-shell --master spark://masterHost:7077 --num-executors 40 --driver-memory 40g --executor-memory 15g --executor-cores 4 --conf spark.core.connection.ack.wait.timeout=100 --conf spark.task.maxFailures=5 < helloworld.scala

and you're done.

2017/02/17: first revision
