Spark Project Development and How Spark Works

Copyright notice: this is an original post by the author; please do not reproduce it without permission. https://blog.csdn.net/roczheng1990/article/details/79539451

Foreword

When you first get into Spark you inevitably have to pick up some Scala, and when I first got into Scala I found it remarkably handy. WordCount is the standard entry-level Spark demo: it shows off Spark's parallel processing by counting how many times each word appears in a file, and the whole thing takes about four lines of Scala. Impressive; let's keep going!
First, a word about environment setup. There are plenty of posts about it online, and plenty of pitfalls too. Here are a few tips from my own experience setting up a Spark environment:

Spark Environment Setup

  • CentOS7
  • hadoop-2.7.5.tar.gz
  • spark-2.2.0-bin-hadoop2.7.tgz

Hadoop installation/configuration and Spark environment setup:
see my notes: http://note.youdao.com/noteshare?id=c5ade4f303edbcf73c870abb3baf6c35&sub=207F5266857D47A18403CC261A1A792C

Getting Started with Spark Development: WordCounter

Environment

  • IntelliJ IDEA Maven project
  • Spark 2.2.0
  • local mode

pom.xml configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>spark-wc-maven</groupId>
    <artifactId>wordCounter</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>2.11.8</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.1</version>
        </dependency>
    </dependencies>

</project>

Create wordCounter.scala

import org.apache.spark.{SparkConf, SparkContext}

object wordCount {
  def main(args: Array[String]): Unit = {
    // Run locally in a single JVM; "wordCount" is the application name shown in the Spark UI
    val conf = new SparkConf().setMaster("local").setAppName("wordCount")
    val sc = new SparkContext(conf)

    // Read the input file as an RDD of lines
    val data = sc.textFile("d://wc.txt")

    // Split lines into words, map each word to (word, 1), sum the counts per word,
    // then collect the results back to the driver and print them
    data.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect().foreach(println)

    sc.stop()
  }
}
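If you prefer, the same pipeline can also be written with named intermediate RDDs and the result sorted by count before printing. This is only an optional, equivalent variant of the code above (same local master and same assumed d://wc.txt path):

import org.apache.spark.{SparkConf, SparkContext}

object wordCountVerbose {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("wordCountVerbose"))

    val lines  = sc.textFile("d://wc.txt")                // RDD[String], one element per line
    val words  = lines.flatMap(_.split(" "))               // RDD[String], one element per word
    val pairs  = words.map(word => (word, 1))              // RDD[(String, Int)]
    val counts = pairs.reduceByKey(_ + _)                  // shuffle: sum the 1s for each word
    val sorted = counts.sortBy(_._2, ascending = false)    // most frequent words first

    sorted.collect().foreach(println)
    sc.stop()
  }
}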

Run output

18/03/13 13:34:45 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
18/03/13 13:34:45 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, ANY, 4621 bytes)
18/03/13 13:34:45 INFO Executor: Running task 0.0 in stage 1.0 (TID 1)
18/03/13 13:34:45 INFO ShuffleBlockFetcherIterator: Getting 1 non-empty blocks out of 1 blocks
18/03/13 13:34:45 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 7 ms
18/03/13 13:34:45 INFO Executor: Finished task 0.0 in stage 1.0 (TID 1). 4532 bytes result sent to driver
18/03/13 13:34:45 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 84 ms on localhost (executor driver) (1/1)
18/03/13 13:34:45 INFO DAGScheduler: ResultStage 1 (collect at wordCounter.scala:8) finished in 0.086 s
18/03/13 13:34:45 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
(touched,2)
(someone,1)
(this,2)
(too.,1)
(it,1)
(cry,those,1)
(past,1)
(There,1)
(it,to,1)
(have,7)
(crying.,1)
(opportunity,1)
(dream;go,1)
(yourself,1)
(chance,1)
(only,2)
(brighter,1)
(begins,1)
(happen,1)
(who,9)
(heartaches.,1)
(you,you,1)
(side,1)
(send,1)
(crying,1)
(smile,1)
(for,2)
(just,3)
(along,1)
(message,1)
(go,2)
(make,6)
(what,2)
(out,1)
(brighten,1)
(much,1)
(your,4)
(hurts,2)
(everyone,2)
(the,8)
(are,2)
(be,because,1)
(put,1)
(people,3)
(everything;they,1)
(enough,1)
(hope,1)
(can,1)
(if,1)
(was,1)
(most,1)
(real!,1)
(when,4)
(be,1)
(do,1)
(bad,1)
(way.Happiness,1)
(really,2)
(tried,for,1)
(all,1)
(ends,1)
(necessarily,1)
(mean,1)
(something,1)
(sweet,enough,1)
(message.,1)
(die,you're,1)
(smiling.Live,1)
(don't,2)
(lives.Love,1)
(things,2)
(them,3)
(another,to,1)
(want,6)
(smile,grows,1)
(happiness,1)
(worry,nothing,1)
(Dream,1)
(failures,1)
(human,enough,1)
(is,2)
(searched,and,1)
(pick,1)
(����The,1)
(let,2)
(down,to,1)
(on,3)
(����May,1)
(one,4)
(around,2)
(with,4)
(person,,1)
(day,1)
(best,1)
(life,4)
(happiest,1)
(in,4)
(importance,1)
(kiss,1)
(strong,enough,1)
(appreciate,2)
(they,1)
(go;be,1)
(those,8)
(you,it,1)
(don't,,1)
(comes,1)
(based,1)
(����When,1)
(Always,1)
(do.,1)
(from,1)
(other,1)
(well,1)
(way,1)
(����Please,1)
(past,,1)
(hug,1)
(moments,1)
(born,you,1)
(keep,1)
(can't,1)
(others'shoes.If,1)
(probably,1)
(lifeuntil,1)
(friendship.And,1)
(hurt,,1)
(you,26)
(need,1)
(a,4)
(smiling,1)
(sorrow,1)
(that,6)
(tear.The,1)
(you,to,1)
(forgotten,1)
(will,3)
(brightest,1)
(future,1)
(to,15)
(know,1)
(,5)
(lies,1)
(or,1)
(see,1)
(����who,1)
(of,6)
(their,3)
(someone's,1)
(where,1)
(everything,1)
(were,2)
(always,1)
(so,2)
(dreams,1)
(and,6)
(trials,1)
(feel,1)
(happy?,1)
(miss,2)
18/03/13 13:34:45 INFO DAGScheduler: Job 0 finished: collect at wordCounter.scala:8, took 1.380183 s
18/03/13 13:34:45 INFO SparkContext: Invoking stop() from shutdown hook
18/03/13 13:34:45 INFO SparkUI: Stopped Spark web UI at http://192.168.1.175:4041
18/03/13 13:34:45 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/03/13 13:34:45 INFO MemoryStore: MemoryStore cleared
18/03/13 13:34:45 INFO BlockManager: BlockManager stopped
18/03/13 13:34:45 INFO BlockManagerMaster: BlockManagerMaster stopped
18/03/13 13:34:45 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/03/13 13:34:45 INFO SparkContext: Successfully stopped SparkContext
18/03/13 13:34:45 INFO ShutdownHookManager: Shutdown hook called
18/03/13 13:34:45 INFO ShutdownHookManager: Deleting directory C:\Users\Administrator\AppData\Local\Temp\spark-e257c1c5-e0a5-4975-945c-234167cfb09f

Using the WordCounter Project Above to Explain How Spark Works

Spark is a fast, general-purpose computing engine designed for large-scale data processing.

Key features

  • Fast: up to 100x faster than Hadoop MapReduce for in-memory computation, and around 10x faster on disk
  • Easy to use: APIs for Java, Scala, Python, and R
  • General-purpose: integrates Spark SQL, Spark Streaming, MLlib, and GraphX
  • Portable: runs under Mesos, YARN, Kubernetes, or its own Standalone deployment mode

Spark Ecosystem

[Figure: Spark ecosystem diagram]

  • Spark SQL: SQL-like queries whose results come back as Spark DataFrames (see the sketch after this list)
  • Spark Streaming: stream processing, mainly for real-time time-series data
  • MLlib: machine-learning models and tuning utilities
  • GraphX: graph algorithms such as PageRank
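As a quick taste of the Spark SQL component, here is a minimal sketch (my own illustration, not from the original post) that counts the same words through the DataFrame/SQL API instead of the RDD API. It assumes the same d://wc.txt file and would additionally require the spark-sql_2.11 dependency in the pom above:

import org.apache.spark.sql.SparkSession

object wordCountSql {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local").appName("wordCountSql").getOrCreate()
    import spark.implicits._

    // Read the file as a Dataset[String], split lines into words, and register a temporary view
    spark.read.textFile("d://wc.txt")
      .flatMap(_.split(" "))
      .toDF("word")
      .createOrReplaceTempView("words")

    // The SQL-like query; the result is returned as a DataFrame
    spark.sql("SELECT word, COUNT(*) AS cnt FROM words GROUP BY word ORDER BY cnt DESC").show()

    spark.stop()
  }
}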

RDDs

RDD (Resilient Distributed Dataset) is an abstraction of distributed memory. RDDs provide a highly restricted shared-memory model: an RDD is a read-only, partitioned collection of records that can only be created by applying deterministic transformations (such as map, join, and groupBy) to other RDDs. It is exactly these restrictions that make fault tolerance cheap to implement.
For more detail, see the official RDD introduction in the Spark documentation.
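A small sketch of those properties (my own illustrative example, not from the original post): an RDD is partitioned, and it is never modified in place; every transformation derives a new RDD from its parent:

import org.apache.spark.{SparkConf, SparkContext}

object rddBasics {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("rddBasics"))

    // A partitioned collection of records: here 4 partitions, processed by local threads
    val numbers = sc.parallelize(1 to 100, 4)
    println(numbers.getNumPartitions)          // 4

    // Read-only: transformations never change `numbers`, they derive new RDDs from it
    val evens   = numbers.filter(_ % 2 == 0)   // deterministic transformation
    val squares = evens.map(n => n * n)        // another derived RDD

    println(squares.count())                   // 50; `numbers` itself is untouched
    sc.stop()
  }
}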

Terminology

  • Cluster Manager: an external service that acquires resources on the cluster (e.g. Local, Standalone, Mesos, or YARN);
  • Worker: any node in the cluster that can run Application code. In Standalone mode these are the worker nodes listed in the slaves file; in Spark on YARN mode they are the NodeManager nodes.
  • Executor: a process started for an Application on a worker node. It runs Tasks and keeps data in memory or on disk, and each Application has its own independent set of Executors. In Spark on YARN mode the process is called CoarseGrainedExecutorBackend. Each CoarseGrainedExecutorBackend holds exactly one Executor object, which wraps each Task in a TaskRunner and takes an idle thread from its thread pool to run it; the number of Tasks a CoarseGrainedExecutorBackend can run in parallel therefore depends on the number of CPU cores allocated to it.
  • Stage: each Job is split into groups of Tasks; each group is a TaskSet, called a Stage. Stage splitting and scheduling are handled by the DAGScheduler. Stages come in two kinds, intermediate Stages (ShuffleMapStage) and the final Stage (ResultStage), and Stage boundaries are wherever a shuffle occurs.
  • Task: a unit of work sent to an Executor, analogous to the MapTask and ReduceTask in Hadoop MapReduce; it is the basic unit in which an Application runs. Multiple Tasks make up a Stage, and Task scheduling and management are handled by the TaskScheduler.
  • Operation: the operations applied to an RDD fall into Transformations and Actions (a short sketch follows this list).
  • Application: a user program built on Spark, consisting of one Driver Program and multiple Executors in the cluster;
  • Driver: runs the Application's main() function and creates the SparkContext.
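To make the Transformation/Action distinction concrete, here is a minimal sketch (my own example, assuming a SparkContext sc as in the WordCount code): transformations are lazy and only record lineage, while an action actually submits a Job.

// Transformations: lazy, nothing is computed yet, Spark only records the lineage
val rdd   = sc.parallelize(Seq("spark", "spark", "scala"))
val pairs = rdd.map((_, 1)).reduceByKey(_ + _)

// Action: triggers a Job; results are computed and returned to the driver
pairs.collect().foreach(println)   // prints (scala,1) and (spark,2)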

How Spark Works, Using the Demo from the Beginning of the Post

[Figure: RDD transformations and the DAG for the WordCount job]
The figure above shows how the RDDs in this Spark job are transformed step by step, and the flow of the resulting DAG (directed acyclic graph).

[Figure: from RDD objects to the DAG and its Stages]
  • Create the RDD objects.
  • The DAGScheduler steps in, computes the dependencies between the RDDs, and those dependencies form the DAG.
  • Each Job is divided into multiple Stages. A key criterion for splitting Stages is whether the input of the current computation step is already determined: if it is, the step is kept in the same Stage, avoiding message-passing overhead between Stages. You can inspect this lineage yourself, as shown in the sketch below.
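For this WordCount job you can print the lineage directly with toDebugString; the indentation change in its output marks the shuffle introduced by reduceByKey, which is exactly the boundary between the two Stages (a sketch under the same assumptions as the code above; the exact output format varies by Spark version):

val counts = sc.textFile("d://wc.txt")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

// Prints the chain of parent RDDs; the shuffle (reduceByKey) is where
// the DAGScheduler cuts the Job into a ShuffleMapStage and a ResultStage
println(counts.toDebugString)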

Spark Deployment Modes
