Apache Spark

Spark

Overview

Spark is a computing framework (a lightning-fast unified analytics engine) for processing large-scale datasets. Spark's batch-processing performance is roughly 10~100x that of Hadoop MapReduce, because Spark uses advanced DAG-based task scheduling: a job is split into several stages, and those stages are then handed to the cluster's compute nodes batch by batch.


MapReduce vs Spark

As the first generation of big-data processing frameworks, MapReduce was designed only to meet the urgent need for computation over massive datasets. The whole MapReduce computation is built on disk I/O: since the model always writes intermediate results to disk, every iteration must load data from disk back into memory, adding more and more latency to subsequent iterations.

At the compute layer Spark is clearly superior to Hadoop MapReduce: it can compute on data held in memory, and intermediate results can also be cached in memory, saving time for subsequent iterative computation and greatly improving efficiency on massive datasets.

Spark also reports that for a linear regression computation (an algorithm that needs n iterations), Spark runs roughly 10~100x faster than MapReduce.

Beyond that, Spark's design philosophy proposes a "One stack to rule them all" strategy: on top of the Spark batch engine it provides service branches such as interactive queries on Spark, near-real-time stream processing, machine learning, and GraphX graph processing.

Spark sits at the compute layer and plays a bridging role: it does not abandon the existing Hadoop-centric big-data solution, because Spark can read data from HDFS, HBase, Cassandra, and Amazon S3. In other words, when Spark is used as the compute layer, the user's existing storage-layer architecture needs no change.

Computation Flow

MapReduce computation flow:

Drawbacks of MapReduce:

1) Although MapReduce is based on a vector-style programming idea, its computation states are too simple: a job is merely split into a Map stage and a Reduce stage, with no consideration for iterative computation.

2) The intermediate results of Map tasks are stored on local disk, causing excessive I/O calls and poor read/write efficiency.

3) MapReduce submits the job first and then requests resources during the computation. Its computation model is also heavyweight: each unit of parallelism is realized by a separate JVM process.

Given the problems listed above, Spark borrowed from MapReduce's design experience at the compute layer and introduced the DAGScheduler (directed-acyclic-graph scheduler) and TaskScheduler (task scheduler). Spark's key design idea is splitting a job into task stages: early in the computation the DAGScheduler derives the job's stages, wraps each stage into a TaskSet, and the TaskScheduler then submits the TaskSets to the cluster for execution.


Advantages of Spark:

1) Smart DAG-based task splitting: a complex computation is broken into several stages, which suits iterative computation.

2) Spark provides caching and fault-tolerance strategies: computation results can be stored in memory or on disk, accelerating each stage and improving overall efficiency.

3) Spark requests its compute resources at the very start of the computation. Task parallelism is realized by starting threads inside Executor processes, which is much lighter than MapReduce's per-task JVMs.

Spark's Cluster Manager can currently be provided by YARN, Standalone, Mesos, Kubernetes, and others. In enterprises the common choices are YARN and Standalone.

Environment Setup

Spark On Yarn

Prerequisite: a working Hadoop environment.

  • Set the process and open-file limits (optional)
[root@hbase ~]# vim /etc/security/limits.conf

* soft nofile 204800
* hard nofile 204800
* soft nproc 204800
* hard nproc 204800

Tip: this tunes Linux limits; changing these maximums takes effect after a reboot.

Spark environment

Download: https://archive.apache.org/dist/spark/spark-2.4.5/spark-2.4.5-bin-without-hadoop.tgz

  • Unpack and install Spark
[root@hbase ~]# tar -zxf spark-2.4.5-bin-without-hadoop.tgz -C /usr/ 
[root@hbase ~]# mv /usr/spark-2.4.5-bin-without-hadoop/ /usr/spark-2.4.5 
[root@hbase ~]# tree -L 1 /usr/spark-2.4.5/
/usr/spark-2.4.5/
├── bin
├── conf
├── data
├── examples
├── jars
├── kubernetes
├── LICENSE
├── licenses
├── logs
├── NOTICE
├── python
├── R
├── README.md
├── RELEASE
├── sbin
├── work
└── yarn

13 directories, 4 files
  • Configure the Spark service
[root@hbase ~]# cd /usr/spark-2.4.5/
[root@hbase spark-2.4.5]#  mv conf/spark-env.sh.template conf/spark-env.sh
[root@hbase spark-2.4.5]# vim conf/spark-env.sh
# Options read in YARN client/cluster mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
HADOOP_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop # Hadoop conf directory
YARN_CONF_DIR=/usr/hadoop-2.9.2/etc/hadoop   # YARN conf directory
SPARK_EXECUTOR_CORES=2
SPARK_EXECUTOR_MEMORY=1G
SPARK_DRIVER_MEMORY=1G
LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export HADOOP_CONF_DIR
export YARN_CONF_DIR
export SPARK_EXECUTOR_CORES
export SPARK_DRIVER_MEMORY
export SPARK_EXECUTOR_MEMORY
export LD_LIBRARY_PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath):$SPARK_DIST_CLASSPATH
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs

The spark-logs directory must be created on HDFS first; it is where the Spark history server stores historical job data.

[root@CentOS ~]# hdfs dfs -mkdir /spark-logs
  • Start the Spark history server
[root@CentOS spark-2.4.5]# ./sbin/start-history-server.sh
[root@CentOS spark-2.4.5]# jps
124528 HistoryServer
122690 NodeManager
122374 SecondaryNameNode
122201 DataNode
122539 ResourceManager
122058 NameNode
124574 Jps
  • Visit http://<host-ip>:18080

Test the environment

[root@CentOS spark-2.4.5]# ./bin/spark-submit \
    --master yarn \
    --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    --num-executors 2 \
    --executor-cores 3 \
    /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Result

19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at
SparkPi.scala:38, took 30.317103 s
`Pi is roughly 3.141915709578548`
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,
[http/1.1]}{0.0.0.0:4040}
Parameter          Description
--master           name of the resource manager to connect to (yarn here)
--deploy-mode      deployment mode, client or cluster; determines whether the Driver runs remotely
--class            fully qualified name of the main class to run
--num-executors    number of executor processes used for the computation
--executor-cores   maximum number of CPU cores used by each executor

spark shell

[root@CentOS bin]# ./spark-shell --master yarn --deploy-mode client --executor-cores 4 --num-executors 3
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
19/04/17 01:46:04 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is
set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://CentOS:4040
Spark context available as 'sc' (master = yarn, app id =
application_1555383933869_0004).
Spark session available as 'spark'.
Welcome to
 ____ __
 / __/__ ___ _____/ /__
 _\ \/ _ \/ _ `/ __/ '_/
 /___/ .__/\_,_/_/ /_/\_\ version 2.4.1
 /_/
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.
scala>

Spark Standalone

Prerequisite: a working Hadoop environment.

Spark environment

Note: the steps are the same as above; only the configuration file differs.

  • Configure the Spark service
[root@CentOS ~]# cd /usr/spark-2.4.5/
[root@CentOS spark-2.4.5]# mv conf/spark-env.sh.template conf/spark-env.sh
[root@CentOS spark-2.4.5]# vi conf/spark-env.sh
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
SPARK_MASTER_HOST=CentOS
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=4
SPARK_WORKER_INSTANCES=2
SPARK_WORKER_MEMORY=2g
export SPARK_MASTER_HOST
export SPARK_MASTER_PORT
export SPARK_WORKER_CORES
export SPARK_WORKER_MEMORY
export SPARK_WORKER_INSTANCES
export LD_LIBRARY_PATH=/usr/hadoop-2.9.2/lib/native
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
export SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=hdfs:///spark-logs"
[root@CentOS spark-2.4.5]# mv conf/spark-defaults.conf.template conf/spark-defaults.conf
[root@CentOS spark-2.4.5]# vi conf/spark-defaults.conf
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///spark-logs
  • Start the history server
  • Start Spark's own compute service
[root@CentOS spark-2.4.5]# ./sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/spark-
2.4.5/logs/spark-root-org.apache.spark.deploy.master.Master-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-
2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-CentOS.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /usr/spark-
2.4.5/logs/spark-root-org.apache.spark.deploy.worker.Worker-2-CentOS.out
[root@CentOS spark-2.4.5]# jps
7908 Worker
7525 HistoryServer
8165 Jps
122374 SecondaryNameNode
7751 Master
122201 DataNode
122058 NameNode
7854 Worker

You can visit http://<hostname>:8080

Test the environment

[root@CentOS spark-2.4.5]# ./bin/spark-submit \
    --master spark://<hostname>:7077 \
    --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    --total-executor-cores 6 \
    /usr/spark-2.4.5/examples/jars/spark-examples_2.11-2.4.5.jar

Result

SparkPi.scala:38) finished in 29.116 s
19/04/21 03:30:40 INFO scheduler.DAGScheduler: Job 0 finished: reduce at
SparkPi.scala:38, took 30.317103 s
`Pi is roughly 3.141915709578548`
19/04/21 03:30:40 INFO server.AbstractConnector: Stopped Spark@41035930{HTTP/1.1,
[http/1.1]}{0.0.0.0:4040}
Parameter                Description
--total-executor-cores   total number of executor cores (threads) used for the computation

Spark FAQ

  • How does Spark relate to Apache Hadoop?

    Spark is a fast, general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's Standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and newer workloads such as stream computing, SparkSQL interactive queries, and machine learning.

  • Does my data need to fit in memory to use Spark?

    No. Spark's operators spill data to disk when it does not fit in memory, allowing it to run well on data of any size. Likewise, cached datasets are either spilled to disk or recomputed on the fly when memory is insufficient, as determined by the RDD's (resilient distributed dataset) storage level.

http://spark.apache.org/faq.html

Spark RDD in Detail

Overview

At a high level, every Spark application consists of a driver program that runs the user's main function and executes various parallel operations on a cluster. The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.

Every Spark application contains a Driver program that runs the user's main method and performs various parallel operations on the cluster. Spark's main abstraction is the resilient distributed dataset (RDD): a collection of elements partitioned across the cluster that can be operated on in parallel.

RDDs can be created from files in HDFS or any other Hadoop-supported file system, or from an existing Scala collection in the driver program, and then transformed. Calling RDD operators transforms one RDD into another. Spark can persist an RDD in memory so that it can be reused efficiently across parallel operations, and RDDs automatically recover from node failures.

Development environment

  • Import the Maven dependencies
<!--Spark RDD dependency-->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.5</version>
</dependency>
<!--HDFS integration-->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.9.2</version>
</dependency>
  • Scala compiler plugin
<!--Scala compiler plugin-->
<plugin>
    <groupId>net.alchim31.maven</groupId>
    <artifactId>scala-maven-plugin</artifactId>
    <version>4.0.1</version>
    <executions>
        <execution>
            <id>scala-compile-first</id>
            <phase>process-resources</phase>
            <goals>
                <goal>add-source</goal>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
  • Fat-jar packaging plugin
<!--fat-jar packaging plugin-->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>
  • JDK compiler version plugin (optional)
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.2</version>
    <configuration>
        <source>1.8</source>
        <target>1.8</target>
        <encoding>UTF-8</encoding>
    </configuration>
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
  • Writing the Driver
object SparkWordCountApplication {
  def main(args: Array[String]): Unit = {

    //1. create the SparkContext
    val conf = new SparkConf().setMaster("spark://hbase:7077").setAppName("SparkWordCountApplication")
    val sc = new SparkContext(conf)

    //2. create the RDD (detailed later)
    val linesRDD: RDD[String] = sc.textFile("hdfs:///demo/words")

    //3. RDD -> RDD transformations: lazy and parallel (detailed later)
    var resultRDD: RDD[(String, Int)] = linesRDD.flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey((v1, v2) => v1 + v2)

    //4. RDD -> Unit or a local collection (Array|List): action operator, triggers job execution
    val resultArray: Array[(String, Int)] = resultRDD.collect()

    //operations on the local Scala collection are detached from Spark
    resultArray.foreach(t => println(t._1 + "->" + t._2))

    //5. stop the SparkContext
    sc.stop()
  }
}
  • Build with maven package and upload the fat jar to the CentOS host
  • Submit the job with spark-submit
[root@hbase spark-2.4.5]# ./bin/spark-submit --master spark://hbase:7077 --deploy-mode client --class SparkWordCountApplication --name wordcount --total-executor-cores 6 /root/spark-1.0-SNAPSHOT.jar

Spark also supports running locally for testing:

object SparkWordCountApplication1 {
  def main(args: Array[String]): Unit = {

    //1. create the SparkContext
    val conf = new SparkConf()
      .setMaster("local[6]")
      .setAppName("SparkWordCountApplication")
    val sc = new SparkContext(conf)

    //turn off verbose logging
    sc.setLogLevel("ERROR")

    //2. create the RDD (detailed later)
    val linesRDD: RDD[String] = sc.textFile("hdfs://hbase:9000/demo/words")

    //3. RDD -> RDD transformations: lazy and parallel (detailed later)
    var resultRDD: RDD[(String, Int)] = linesRDD.flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey((v1, v2) => v1 + v2)

    //4. RDD -> Unit or a local collection (Array|List): action operator, triggers job execution
    val resultArray: Array[(String, Int)] = resultRDD.collect()

    //operations on the local Scala collection are detached from Spark
    resultArray.foreach(t => println(t._1 + "->" + t._2))

    //5. stop the SparkContext
    sc.stop()
  }
}

You also need to add log4j.properties to the resources directory:

log4j.rootLogger = FATAL,stdout

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = %p %d{yyyy-MM-dd HH:mm:ss} %c %m%n

Output

this->1
is->1
day->2
hello->1
up->1
spark->1
demo->1
good->2
study->1

Creating RDDs

Spark revolves around the concept of a resilient distributed dataset (RDD), which is a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create RDDs: parallelizing an existing collection in your driver program, or referencing a dataset in an external storage system, such as a shared filesystem, HDFS, HBase, or any data source offering a Hadoop InputFormat.

Spark revolves around the concept of the resilient distributed dataset (RDD): a fault-tolerant collection of elements that can be operated on in parallel. There are two ways to create an RDD: 1. parallelize an existing Scala collection in the Driver; 2. reference a dataset in an external storage system (for example a shared file system, HDFS, HBase, or any data source offering a Hadoop InputFormat).

Parallelized Collections (background)

Parallelized collections are created by calling SparkContext's parallelize or makeRDD method on an existing collection (a Scala Seq) in the Driver program. The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. For example, here is how to create a parallelized collection holding the numbers 1 to 5:

scala> val data = Array(1,2,3,4,5)
data: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val distData = sc.parallelize(data)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:26

A parallelized collection can take a partition-count argument that sets the parallelism of the computation; the Spark cluster runs one task per partition. When the user does not specify the partition count, the SparkContext chooses it automatically based on the resources allocated to the application, for example:

./bin/spark-shell --master spark://hbase:7077 --total-executor-cores 6
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hbase:4040
Spark context available as 'sc' (master = spark://hbase:7077, app id = app-20200218194856-0004).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.5
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

Here the system automatically sets the partition count to 6 when parallelizing the collection; you can also specify the partition count manually:

scala> val distData=sc.parallelize(data,10)
distData: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:26

scala> distData.getNumPartitions
res0: Int = 10

External Datasets (key point)

Spark can create distributed datasets from any storage source supported by Hadoop, including your local file system, HDFS, HBase, Amazon S3, relational databases, and so on.

External data sources:

  • Local file system
scala> sc.textFile("file:///root/word").collect
res0: Array[String] = Array(this is demo, hello spark, good good study, day day up)
  • Reading from HDFS

textFile

textFile converts a file into an RDD[String]; each line of the file becomes one element of the RDD.

scala> sc.textFile("hdfs:///demo/words/word").collect
res0: Array[String] = Array(this is demo, hello spark, good good study, day day up)

The read can also take a partition-count argument, but the partition count should be >= the number of file-system blocks, so if you are unsure you can simply omit it.
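A minimal sketch of passing the partition count explicitly (the value 4 here is only an illustrative choice):

// ask textFile for at least 4 partitions when reading the file
val wordsRDD = sc.textFile("hdfs:///demo/words/word", 4)
println(wordsRDD.getNumPartitions)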

wholeTextFiles

wholeTextFiles converts the files into an RDD[(String, String)]; each tuple element of the RDD represents one file, where _1 is the file name and _2 is the file content.

scala> sc.wholeTextFiles("hdfs:///demo/words",1).collect
res0: Array[(String, String)] =
Array((hdfs://hbase:9000/demo/words/word,"this is demo
hello spark
good good study
day day up
"))
scala> sc.wholeTextFiles("hdfs:///demo/words",1).map(t=>t._2).flatMap(context=>context.split("\n")).collect
res2: Array[String] = Array(this is demo, hello spark, good good study, day day up)

newAPIHadoopRDD

MySQL

<!--MySQL dependency-->
<dependency>
 <groupId>mysql</groupId>
 <artifactId>mysql-connector-java</artifactId>
 <version>5.1.38</version>
</dependency>
object SparkNewHadoopAPIMySQL {
  //Driver
  def main(args: Array[String]): Unit = {
    //1. create the SparkContext
    val conf = new SparkConf()
      .setMaster("local[*]")
      .setAppName("SparkNewHadoopAPIMySQL")
    val sc = new SparkContext(conf)

    val hadoopConfig = new Configuration()

    DBConfiguration.configureDB(hadoopConfig, //configure the database connection parameters
      "com.mysql.jdbc.Driver",
      "jdbc:mysql://10.15.0.5:3306/test",
      "root",
      "root"
    )

    //set the query-related properties
    hadoopConfig.set(DBConfiguration.INPUT_QUERY,"select id,name,age,birthday from t_user")
    hadoopConfig.set(DBConfiguration.INPUT_COUNT_QUERY,"select count(id) from t_user")
    hadoopConfig.set(DBConfiguration.INPUT_CLASS_PROPERTY,"com.baizhi.createrdd.UserDBWritable")

    //read the external data source through a Hadoop InputFormat
    val jdbcRDD: RDD[(LongWritable,UserDBWritable)]= sc.newAPIHadoopRDD(
      hadoopConfig, //Hadoop configuration
      classOf[DBInputFormat[UserDBWritable]], //input format class
      classOf[LongWritable], //key type read by the mapper
      classOf[UserDBWritable] //value type read by the mapper
    )

    jdbcRDD.map(t=>(t._2.id,t._2.name,t._2.age,t._2.birthday))
      .collect() //action operator: brings the remote data to the Driver, typically used for small test datasets
      .foreach(t=>println(t))

    //stop the SparkContext
    sc.stop()
  }
}
class UserDBWritable extends DBWritable {
  var id:Int=_
  var name:String=_
  var age:Int=_
  var birthday:Date=_
  //mainly used by DBOutputFormat; since we only read here, this method can be ignored
  override def write(preparedStatement: PreparedStatement): Unit = ???

  //when using DBInputFormat, copy the fields of the result set into the member properties
  override def readFields(resultSet: ResultSet): Unit = {
    id=resultSet.getInt("id")
    name=resultSet.getString("name")
    age=resultSet.getInt("age")
    birthday=resultSet.getDate("birthday")
  }
}

Test result

(1,zhangs,23,2020-02-18)

Process finished with exit code 0

HBase

<!--HBase dependencies; note the order-->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-auth</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>1.2.4</version>
</dependency>
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-server</artifactId>
    <version>1.2.4</version>
</dependency>
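The original stops at the dependencies. As a hedged sketch (the table name baizhi:t_user, column family cf1, qualifier name and the ZooKeeper host CentOS are illustrative assumptions, not taken from the original), reading an HBase table through the same newAPIHadoopRDD API could look like this:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.{SparkConf, SparkContext}

object SparkNewHadoopAPIHBase {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("SparkNewHadoopAPIHBase"))

    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", "CentOS")            //ZooKeeper address (assumption)
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "baizhi:t_user") //table to scan (assumption)

    //read the HBase table through Hadoop's TableInputFormat
    val hbaseRDD = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable], //row key
      classOf[Result]                  //row result
    )

    hbaseRDD.map { case (rowKey, result) =>
      val name = Bytes.toString(result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("name")))
      (Bytes.toString(rowKey.get()), name)
    }.collect().foreach(println)

    sc.stop()
  }
}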

RDD Operations (key point)

RDDs support two types of operations: transformations, which turn an existing RDD into a new RDD, and actions, which usually return a result to the Driver when execution finishes. All transformations in Spark are lazy: transformation operators do not execute immediately, they merely record the transformation logic applied to the current RDD. Only when an action operator requires a result to be returned to the Driver program do the transformations actually run. This design lets Spark run more efficiently.

By default, each transformed RDD may be recomputed every time you run an action on it. However, you can keep an RDD in memory using the persist (or cache) method, in which case Spark keeps the elements on the cluster so that the next query can access them much faster.

scala> var rdd1=sc.textFile("hdfs:///demo/words/word",1).map(line=>line.split(" ").length)
rdd1: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[2] at map at <console>:24

scala> rdd1.cache
res0: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[2] at map at <console>:24

scala> rdd1.reduce(_+_)
res1: Int = 11

scala> rdd1.reduce(_+_)
res2: Int = 11

Spark also supports persisting RDDs on disk or replicating them across multiple nodes; for example, calling persist(StorageLevel.DISK_ONLY_2) stores the RDD on disk with 2 replicas.
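A minimal sketch of specifying an explicit storage level (reusing the rdd1 from the example above):

import org.apache.spark.storage.StorageLevel

rdd1.persist(StorageLevel.DISK_ONLY_2) //keep two on-disk replicas of the computed partitions
rdd1.reduce(_ + _)                     //the first action materializes and persists the RDD
rdd1.unpersist()                       //release the storage when it is no longer needed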

Transformations

map(func) (important)

Return a new distributed dataset formed by passing each element of the source through a function func.

Converts an RDD[U] into an RDD[T]; the user supplies an anonymous function func: U => T for the conversion.

scala> var rdd:RDD[String] = sc.makeRDD(List("a","b","c","a"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[3] at makeRDD at <console>:24

scala> val mapRDD:RDD[(String,Int)] = rdd.map(w=>(w,1))
mapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[4] at map at <console>:25

filter(func) (important)

Return a new dataset formed by selecting those elements of the source on which func returns true.

Filters the elements of an RDD[U] and produces a new RDD[U]. The user supplies func: U => Boolean, and only the elements for which it returns true are kept.

Note:

# error
scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5))
<console>:24: error: not found: type RDD
       var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5))
               ^
# cause: the import is missing
import org.apache.spark.rdd.RDD
scala>var rdd:RDD[Int]= sc.makeRDD(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at makeRDD at <console>:24

scala> val mapRDD:RDD[Int] = rdd.filter(num=> num %2==0)
mapRDD: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[6] at filter at <console>:25

scala> mapRDD.collect
res3: Array[Int] = Array(2, 4)

flatMap(func) (important)

Similar to map, but each input item can be mapped to 0 or more output items (so func should return a Seq rather than a single item).

Like map, it converts an RDD[U] into an RDD[T], but the user supplies a function func: U => Seq[T].

scala> var rdd:RDD[String]=sc.makeRDD(List("this is","good good"))
rdd: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at makeRDD at <console>:25

scala> var flatMapRDD:RDD[(String,Int)]=rdd.flatMap(line=> for(i<-line.split("\\s+"))yield(i,1))
flatMapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[2] at flatMap at <console>:26

scala> var flatMapRDD:RDD[(String,Int)]=rdd.flatMap( line=>line.split("\\s+").map((_,1)))
flatMapRDD: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[3] at flatMap at <console>:26

scala> flatMapRDD.collect
res0: Array[(String, Int)] = Array((this,1), (is,1), (good,1), (good,1))

mapPartitions(func) (key point)

Similar to map, but runs separately on each partition (block) of the RDD, so func must be of type Iterator<T> => Iterator<U> when running on an RDD of type T.

Similar to map, but the input to the function is all the data of one partition, so the user supplies a per-partition conversion function func: Iterator<T> => Iterator<U>.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25

scala> var mapPartitionsRDD=rdd.mapPartitions(values=>values.map(n=>(n,n%2==0)))
mapPartitionsRDD: org.apache.spark.rdd.RDD[(Int, Boolean)] = MapPartitionsRDD[1] at mapPartitions at <console>:26

scala> mapPartitionsRDD.collect
res0: Array[(Int, Boolean)] = Array((1,false), (2,true), (3,false), (4,true), (5,false))

mapPartitionsWithIndex(func) (important)

Similar to mapPartitions, but also provides func with an integer value representing the index of the partition, so func must be of type (Int, Iterator<T>) => Iterator<U> when running on an RDD of type T.

Like mapPartitions, but the function also receives the partition index of the elements, so func: (Int, Iterator<T>) => Iterator<U>.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6),2)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25

scala> var mapPartitionsWithIndexRDD=rdd.mapPartitionsWithIndex((p,values)=>values.map(n=>(n,p)))
mapPartitionsWithIndexRDD: org.apache.spark.rdd.RDD[(Int, Int)] = MapPartitionsRDD[1] at mapPartitionsWithIndex at <console>:26

scala> mapPartitionsWithIndexRDD.collect
res0: Array[(Int, Int)] = Array((1,0), (2,0), (3,0), (4,1), (5,1), (6,1))

sample(withReplacement, fraction, seed)

Sample a fraction of the data, with or without replacement, using a given random number generator seed.

Draws a sample from the RDD. withReplacement controls whether sampling with replacement is allowed, fraction controls the approximate sampling ratio, and seed controls the random number generator used during sampling.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at makeRDD at <console>:25

scala> var simpleRDD:RDD[Int]=rdd.sample(false,0.5d,1L)
simpleRDD: org.apache.spark.rdd.RDD[Int] = PartitionwiseSampledRDD[3] at sample at <console>:26

scala> simpleRDD.collect
res1: Array[Int] = Array(1, 6)

A different seed can change the final sampling result.
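A small sketch (continuing with the rdd defined above) showing that changing only the seed can change which elements are drawn:

val sampleA = rdd.sample(false, 0.5d, 1L).collect() //seed 1
val sampleB = rdd.sample(false, 0.5d, 2L).collect() //seed 2: possibly a different selection
println(sampleA.mkString(",") + " vs " + sampleB.mkString(","))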

union

Return a new dataset that contains the union of the elements in the source dataset and the argument.

Merges the elements of two RDDs of the same element type.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[4] at makeRDD at <console>:25

scala> var rdd2:RDD[Int]=sc.makeRDD(List(7,8,6))
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at makeRDD at <console>:25

scala> rdd.union(rdd2).collect
res2: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 6)

intersection

Return a new RDD that contains the intersection of elements in the source dataset and the argument.

Computes the intersection of the elements of two RDDs of the same element type.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25

scala> var rdd2:RDD[Int]=sc.makeRDD(List(6,3,8))
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[1] at makeRDD at <console>:25

scala> rdd.intersection(rdd2).collect
res0: Array[Int] = Array(6, 3)

distinct([numPartitions]) (deduplication)

Return a new dataset that contains the distinct elements of the source dataset.

Removes duplicate elements from the RDD. The optional numPartitions argument controls whether to change the RDD's partition count; typically, if deduplication dramatically reduces the data volume, you can pass numPartitions to reduce the number of partitions.

scala> var rdd:RDD[Int]=sc.makeRDD(List(1,2,3,4,5,6,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25

scala> rdd.distinct(3).collect
res0: Array[Int] = Array(6, 3, 4, 1, 5, 2)

join (important)

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported through leftOuterJoin, rightOuterJoin, and fullOuterJoin.

When called on an RDD[(K, V)] and an RDD[(K, W)], it returns a new RDD[(K, (V, W))] (an inner join by default); leftOuterJoin, rightOuterJoin and fullOuterJoin are also supported.

scala> var userRDD:RDD[(Int,String)]=sc.makeRDD(List((1,"zhangsan"),(2,"lisi")))
userRDD: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[4] at makeRDD at <console>:25

scala> case class OrderItem(name:String,price:Double,count:Int)
defined class OrderItem

scala> var orderItemRDD:RDD[(Int,OrderItem)]=sc.makeRDD(List((1,OrderItem("apple",4.5,2))))
orderItemRDD: org.apache.spark.rdd.RDD[(Int, OrderItem)] = ParallelCollectionRDD[5] at makeRDD at <console>:27

scala> userRDD.join(orderItemRDD).collect
res1: Array[(Int, (String, OrderItem))] = Array((1,(zhangsan,OrderItem(apple,4.5,2))))

scala> userRDD.leftOuterJoin(orderItemRDD).collect
res2: Array[(Int, (String, Option[OrderItem]))] = Array((1,(zhangsan,Some(OrderItem(apple,4.5,2)))), (2,(lisi,None)))

cogroup (background)

When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (Iterable<V>, Iterable<W>)) tuples. This operation is also called groupWith.

scala> var userRDD:RDD[(Int,String)]=sc.makeRDD(List((1,"zhangsan"),(2,"lisi")))
userRDD: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[12] at makeRDD at <console>:25

scala> var orderItemRDD:RDD[(Int,OrderItem)]=sc.makeRDD(List((1,OrderItem("apple",4.5,2)),(1,OrderItem("pear",1.5,2))))
orderItemRDD: org.apache.spark.rdd.RDD[(Int, OrderItem)] = ParallelCollectionRDD[13] at makeRDD at <console>:27

scala> userRDD.cogroup(orderItemRDD).collect
res3: Array[(Int, (Iterable[String], Iterable[OrderItem]))] = Array((1,(CompactBuffer(zhangsan),CompactBuffer(OrderItem(apple,4.5,2), OrderItem(pear,1.5,2)))), (2,(CompactBuffer(lisi),CompactBuffer())))

scala> userRDD.groupWith(orderItemRDD).collect
res4: Array[(Int, (Iterable[String], Iterable[OrderItem]))] = Array((1,(CompactBuffer(zhangsan),CompactBuffer(OrderItem(apple,4.5,2), OrderItem(pear,1.5,2)))), (2,(CompactBuffer(lisi),CompactBuffer())))

cartesian (background)

When called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements).

Computes the Cartesian product of the two datasets.

scala> var rdd1:RDD[Int]=sc.makeRDD(List(1,2,4))
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at makeRDD at <console>:25

scala> var rdd2:RDD[String]=sc.makeRDD(List("a","b","c"))
rdd2: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at makeRDD at <console>:25

scala> rdd1.cartesian(rdd2).collect
res0: Array[(Int, String)] = Array((1,a), (1,b), (1,c), (2,a), (2,b), (2,c), (4,a), (4,b), (4,c))

coalesce (shrinking the partition count)

It only decreases the partition count, never increases it.

Decrease the number of partitions in the RDD to numPartitions. Useful for running operations more efficiently after filtering down a large dataset.

After heavy filtering, coalesce can be used to shrink the RDD's partitions (it can only reduce the partition count, not increase it).

scala> var rdd1:RDD[Int]=sc.makeRDD(0 to 100)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[252] at makeRDD at <console>:25

scala> rdd1.getNumPartitions
res129: Int = 6

scala> rdd1.filter(n=> n%2 == 0).coalesce(3).getNumPartitions
res127: Int = 3

scala> rdd1.filter(n=> n%2 == 0).coalesce(12).getNumPartitions
res128: Int = 6

repartition (redistribute)

Reshuffle the data in the RDD randomly to create either more or fewer partitions and balance it across them. This always shuffles all data over the network.

Similar to coalesce, but this operator can either increase or decrease the RDD's partition count.

scala> var rdd1:RDD[Int]=sc.makeRDD(0 to 100)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[7] at makeRDD at <console>:25

scala> rdd1.getNumPartitions
res6: Int = 1

scala> rdd1.filter(n=> n%2 == 0).repartition(12).getNumPartitions
res7: Int = 12

scala> rdd1.filter(n=> n%2 == 0).repartition(3).getNumPartitions
res8: Int = 3

repartitionAndSortWithinPartitions (background)

Repartition the RDD according to the given partitioner and, within each resulting partition, sort records by their keys. This is more efficient than calling repartition and then sorting within each partition because it can push the sorting down into the shuffle machinery.

This operator repartitions the RDD using a user-supplied partitioner and then sorts the records inside each resulting partition by their keys.

scala> case class User(name:String,deptNo:Int)
defined class User

var empRDD:RDD[User]= sc.parallelize(List(User("张三",1),User("lisi",2),User("wangwu",1)))
empRDD.map(t => (t.deptNo, t.name)).repartitionAndSortWithinPartitions(new Partitioner {
  override def numPartitions: Int = 4
  override def getPartition(key: Any): Int = {
    (key.hashCode() & Integer.MAX_VALUE) % numPartitions
  }
}).mapPartitionsWithIndex((p, values) => {
  println(p + "\t" + values.mkString("|"))
  values
}).collect()

xxxByKey (key point)

Spark provides dedicated xxxByKey operators for datasets of type RDD[(K, V)].

  • groupByKey

When called on a dataset of (K, V) pairs, returns a dataset of (K, Iterable<V>) pairs.

Similar to the MapReduce model: converts an RDD[(K, V)] into an RDD[(K, Iterable<V>)].

var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[1] at parallelize at <console>:25

scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupByKey.collect
res4: Array[(String, Iterable[Int])] = Array((this,CompactBuffer(1)), (is,CompactBuffer(1)), (good,CompactBuffer(1, 1)))
  • groupBy
scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupBy(t=>t._1)
res5: org.apache.spark.rdd.RDD[(String, Iterable[(String, Int)])] = ShuffledRDD[18] at groupBy at <console>:26

scala> lines.flatMap(_.split("\\s+")).map((_,1)).groupBy(t=>t._1).map(t=>(t._1,t._2.size)).collect
res6: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
  • reduceByKey

When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function func , which must be of type (V,V) => V. Like in groupByKey , the number of reduce tasks is configurable through an optional second argument.

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).reduceByKey(_+_).collect
res0: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
  • aggregateByKey

When called on a dataset of (K, V) pairs, returns a dataset of (K, U) pairs where the values for each key are aggregated using the given combine functions and a neutral "zero" value. Allows an aggregated value type that is different than the input value type, while avoiding unnecessary allocations. Like in groupByKey, the number of reduce tasks is configurable through an optional second argument.

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[4] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).collect
res1: Array[(String, Int)] = Array((this,1), (is,1), (good,2))
  • sortByKey

When called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument.

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[4] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).collect
res1: Array[(String, Int)] = Array((this,1), (is,1), (good,2))

scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[8] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortByKey(true).collect
res2: Array[(String, Int)] = Array((good,2), (is,1), (this,1))

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortByKey(false).collect
res3: Array[(String, Int)] = Array((this,1), (is,1), (good,2))

Note: true sorts in ascending order, false in descending order.

  • sortBy(T=>U,ascending,[numPartitions])
scala> var lines=sc.parallelize(List("this is good good"))
lines: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:24

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortBy(_._2,false).collect
res18: Array[(String, Int)] = Array((good,2), (this,1), (is,1))

scala> lines.flatMap(_.split("\\s+")).map((_,1)).aggregateByKey(0)(_+_,_+_).sortBy(t=>t,false).collect
res19: Array[(String, Int)] = Array((this,1), (is,1), (good,2))

Interview question

Two large files (1 TB and 2 TB) need to be joined. What optimizations can be applied?
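As a hedged sketch of one common approach (not necessarily the intended answer; the file paths and key-extraction logic are illustrative assumptions): when neither side fits in memory, prune unneeded fields as early as possible and pre-partition both keyed RDDs with the same partitioner, so the join can reuse the existing partitioning instead of shuffling both inputs again.

import org.apache.spark.HashPartitioner

//hypothetical keyed RDDs built from the two large files
val big1 = sc.textFile("hdfs:///data/file1").map(line => (line.split(",")(0), line))
val big2 = sc.textFile("hdfs:///data/file2").map(line => (line.split(",")(0), line))

//co-partition both sides with the same partitioner; persist the side that is reused
val partitioner = new HashPartitioner(200)
val p1 = big1.partitionBy(partitioner).persist()
val p2 = big2.partitionBy(partitioner)

p1.join(p2) //the join can now proceed without re-shuffling both inputs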


Actions

Every Spark job is triggered by exactly one action operator; the action either writes the RDD's data out to an external system or returns it to the Driver program.

reduce(func)

Aggregate the elements of the dataset using a function func (which takes two arguments and returns one). The function should be commutative and associative so that it can be computed correctly in parallel.

Aggregates the elements of the dataset with the function func, which takes two arguments and returns one. The function must be commutative and associative so the result can be computed correctly in parallel.

This operator runs the computation on the remote data and returns the result to the Driver; here it counts the characters in a file.

scala> sc.textFile("file:///root/words").map(_.length).reduce(_+_)
res0: Int = 39
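A small sketch of why the function must be commutative and associative (the numbers and partition counts below are only illustrative): with an associative and commutative function the result is independent of partitioning, while a function such as subtraction can give different results depending on how partition results are combined.

sc.parallelize(1 to 10, 2).reduce(_ + _) //always 55, regardless of partitioning
sc.parallelize(1 to 10, 2).reduce(_ - _) //subtraction is neither commutative nor associative,
sc.parallelize(1 to 10, 5).reduce(_ - _) //so the result may vary with the partitioning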

collect()

Return all the elements of the dataset as an array at the driver program. This is usually useful after a filter or other operation that returns a sufficiently small subset of the data.

Returns all elements of the dataset to the driver program as an array; this is usually only useful after a filter or another operation that returns a sufficiently small subset of the data.

collect transfers the data of a remote RDD to the Driver; use it only in testing or when the RDD's data is very small, otherwise the Driver may run out of memory.

scala> sc.textFile("file:///root/words").collect
res1: Array[String] = Array(this is demo good good study day day up)

foreach(func)

Run a function func on each element of the dataset. This is usually done for side effects such as updating an Accumulator or interacting with external storage systems.

Runs the function func on each element of the dataset. This is usually done for its side effects, such as updating an accumulator or interacting with an external storage system.

scala> sc.textFile("file:///root/words").foreach(line=>println(line))
this is demo good good study day day up

count()

Return the number of elements in the dataset.

Returns the number of elements in the dataset.

 sc.textFile("file:///root/word").flatMap(line=>line.split("\\s+")).count()
res10: Long = 11

first()|take(n)

Return the first element of the dataset (similar to take(1)). take(n) Return an array with the first n elements of the dataset.

Returns the first element of the dataset (similar to take(1)); take(n) returns an array with the first n elements of the dataset.

scala> sc.textFile("file:///root/word").flatMap(line=>line.split("\\s+")).first
res11: String = this

scala> sc.textFile("file:///root/word").flatMap(line=>line.split("\\s+")).take(1)
res12: Array[String] = Array(this)

scala> sc.textFile("file:///root/word").flatMap(line=>line.split("\\s+")).take(2)
res13: Array[String] = Array(this, is)

takeSample(withReplacement, num, [seed])

Return an array with a random sample of num elements of the dataset, with or without replacement, optionally pre-specifying a random number generator seed.

Randomly samples num elements from the RDD and returns them to the Driver program, which is the main difference from the sample transformation.

scala> sc.textFile("file:///root/word").takeSample(false,2)
res15: Array[String] = Array(day day up, good good study)

scala> sc.textFile("file:///root/word").takeSample(false,2)
res16: Array[String] = Array(good good study, day day up)

takeOrdered(n, [ordering])

Return the first n elements of the RDD using either their natural order or a custom comparator

Returns the first n elements of the RDD using either their natural order or a custom comparator; the custom comparator must be provided as an implicit value.

scala> case class User(name:String,deptNo:Int,salary:Double)
defined class User

scala> var userRDD=sc.parallelize(List(User("zs",1,1000.0),User("ls",2,1500.0),User("ww",2,1600.0)))
userRDD: org.apache.spark.rdd.RDD[User] = ParallelCollectionRDD[51] at parallelize at <console>:26

scala> userRDD.takeOrdered(3)
<console>:26: error: No implicit Ordering defined for User.
       userRDD.takeOrdered(3)
                          ^

scala> implicit var userOrder=new Ordering[User]{
     |   override def compare(x: User, y: User): Int = {
     |   if(x.deptNo!=y.deptNo){
     |   x.deptNo.compareTo(y.deptNo)
     |   }else{
     |   x.salary.compareTo(y.salary) * -1
     |   }
     |   }
     |   }
userOrder: Ordering[User] = $anon$1@55c5690d

scala> userRDD.takeOrdered(3)
res20: Array[User] = Array(User(zs,1,1000.0), User(ww,2,1600.0), User(ls,2,1500.0))

saveAsTextFile(path)

Write the elements of the dataset as a text file (or set of text files) in a given directory in the local filesystem, HDFS or any other Hadoop-supported file system. Spark will call toString on each element to convert it to a line of text in the file

Writes the elements of the dataset as a text file (or a set of text files) in a given directory on the local file system, HDFS, or any other Hadoop-supported file system. Spark calls toString on each element to convert it into a line of text in the file.

In other words, Spark calls the toString method of each RDD element and writes the elements to the file as lines of text.

scala> sc.textFile("file:///root/word").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).map(t=>t._1+"\t"+t._2).saveAsTextFile("hdfs:///demo/results01")

Test result: (screenshot omitted)

saveAsSequenceFile(path)

Write the elements of the dataset as a Hadoop SequenceFile in a given path in the local filesystem, HDFS or any other Hadoop-supported file system. This is available on RDDs of key-value pairs that implement Hadoop's Writable interface. In Scala, it is also available on types that are implicitly convertible to Writable (Spark includes conversions for basic types like Int, Double, String, etc).

Writes the elements of the dataset as a Hadoop SequenceFile at a given path on the local file system, HDFS, or any other Hadoop-supported file system. It is available for RDDs of key-value pairs that implement Hadoop's Writable interface; in Scala it also works for types that are implicitly convertible to Writable (Spark includes conversions for basic types such as Int, Double and String).

This method only applies to RDD[(K, V)] where both K and V implement the Writable interface; since we program in Scala, Spark's implicit conversions automatically turn Int, Double, String, etc. into Writable.

scala> sc.textFile("file:///root/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).saveAsSequenceFile("hdfs:///demo/result02")


scala>  sc.sequenceFile[String,Int]("hdfs:///demo/result02").collect
res29: Array[(String, Int)] = Array((day,2), (demo,1), (good,2), (is,1), (study,1), (this,1), (up,1))

Shared Variables

When a transformation operator needs a variable defined in the Driver, the compute node downloads that variable over the network before running the operator. If the compute node modifies its downloaded copy, the modification is not visible to the variable defined on the Driver side.

scala> var i:Int=0
i: Int = 0

scala> sc.textFile("file:///root/words").foreach(line=> i=i+1)

scala> print(i)
0

Broadcast variables (widely used)

Problem:

When a very large dataset needs to be joined with a small dataset, can the join operator be used directly? If not, why not?

//1. create the SparkContext
var conf = new SparkConf()
   //name of the resource manager to connect to
   .setMaster("local[*]")
   //application name
   .setAppName("broadCast")
val sc=new SparkContext(conf)
//100GB
var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","003 orange 1 7.0")
//10MB
var users=List("001 zhangs","002 lisi","003 wangw")
var rdd1:RDD[(String,String)]=sc.makeRDD(orderItems)
      .map(line=>(line.split(" ")(0),line))
var rdd2:RDD[(String,String)]=sc.makeRDD(users)
      .map(line=>(line.split(" ")(0),line))
rdd1.join(rdd2).collect().foreach(println)
//stop the SparkContext
sc.stop()

The join produces a shuffle, so roughly 100 GB of data would be transferred among the compute nodes to complete it; both the network and the memory cost are therefore very high. Instead, the small dataset can be defined as a member variable on the Driver and the join can be performed inside the map operation.

scala> var users=List("001 zhangsan","002 lisi","003 王五").map(line=>line.split(" ")).map(ts=>ts(0)->ts(1)).toMap
users: scala.collection.immutable.Map[String,String] = Map(001 -> zhangsan, 002 -> lisi, 003 -> 王五)

scala> var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","001 瓜子 1 7.0")
orderItems: List[String] = List(001 apple 2 4.5, 002 pear 1 2.0, 001 瓜子 1 7.0)

scala> var rdd1:RDD[(String,String)] =sc.makeRDD(orderItems).map(line=>(line.split(" ")(0),line))
rdd1: org.apache.spark.rdd.RDD[(String, String)] = MapPartitionsRDD[89] at map at <console>:32

scala> rdd1.map(t=> t._2+"\t"+users.get(t._1).getOrElse("未知")).collect()
res33: Array[String] = Array(001 apple 2 4.5 zhangsan, 002 pear 1 2.0 lisi, 001 瓜子 1 7.0 zhangsan)

However, this approach has a problem: every task that runs the map operator downloads its own copy of the users variable from the Driver. Although the value is small, the compute nodes download it repeatedly; to avoid this unnecessary copying of the same variable, Spark introduces broadcast variables.

Before the program runs, Spark announces the variables to be broadcast to all compute nodes. Each compute node downloads the broadcast variable before computing and caches it, so other tasks on the same node that use the variable do not need to download it again.

//100GB
var orderItems=List("001 apple 2 4.5","002 pear 1 2.0","001 瓜子 1 7.0")
//10MB: declare a Map variable
var users:Map[String,String]=List("001 zhangsan","002 lisi","003 王五").map(line=>line.split(" ")).map(ts=>ts(0)->ts(1)).toMap
//declare a broadcast variable; read the broadcast value through its value property
val ub = sc.broadcast(users)
var rdd1:RDD[(String,String)] =sc.makeRDD(orderItems).map(line=>(line.split(" ")(0),line))
rdd1.map(t=> t._2+"\t"+ub.value.get(t._1).getOrElse("未知")).collect().foreach(println)

Accumulators

Spark's Accumulator is mainly used when multiple nodes need to operate on a shared variable. An Accumulator only supports accumulation, but it lets multiple tasks operate on one variable in parallel; tasks can only add to the Accumulator and cannot read its value. Only the Driver program can read the Accumulator's value.

scala> val accum=sc.longAccumulator("mycount")
accum: org.apache.spark.util.LongAccumulator = LongAccumulator(id: 650, name: Some(mycount), value: 0)

scala> sc.parallelize(Array(1,2,3,4),6).foreach(x=>accum.add(x))

scala> accum.value
res34: Long = 10

Writing Data Out of Spark

Writing data out to HDFS

scala> sc.textFile("file:///root/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._1,true,1).saveAsSequenceFile("hdfs:///demo/result03")

Because the saveAsXxx operators all write results to HDFS or the local file system, writing results to a third-party data store requires the foreach operator that Spark provides.

Writing out with foreach

Scenario 1: opening and closing a connection for every record; write efficiency is very low (it does run successfully).

sc.textFile("file:///root/t_word") .flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_) .sortBy(_._1,true,3) .foreach(tuple=>{ //数据库
 //1,创建链接
 //2.开始插⼊
 //3.关闭链接
})

Scenario 2: a wrong approach, because a connection (pool) cannot be serialized (it fails at runtime).

//1. define the Connection
var conn=... //defined on the Driver
sc.textFile("file:///root/t_word")
  .flatMap(_.split(" "))
  .map((_,1))
  .reduceByKey(_+_)
  .sortBy(_._1,true,3)
  .foreach(tuple=>{
    //2. insert the record
  })
//3. close the connection

Scenario 3: one connection per partition? (Better, but still not optimal.) One JVM may run multiple partitions, which means one JVM creates multiple connections and wastes resources. What about a singleton object?

sc.textFile("file:///root/t_word") .flatMap(_.split(" "))
.map((_,1))
.reduceByKey(_+_) .sortBy(_._1,true,3) .foreachPartition(values=>{
 //创建链接
 //写⼊分区数据
 //关闭链接
})

Create the connection inside a singleton object: even if one compute node receives multiple partitions, JVM singleton semantics guarantee the connection is created only once per JVM.

val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("SparkWordCountApplication")
val sc = new SparkContext(conf)
sc.textFile("hdfs://CentOS:9000/demo/words/")
  .flatMap(_.split(" "))
  .map((_,1))
  .reduceByKey(_+_)
  .sortBy(_._1,true,3)
  .foreachPartition(values=>{
    HbaseSink.writeToHbase("baizhi:t_word",values.toList)
  })
sc.stop()
package com.baizhi.sink

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.{HConstants, TableName}
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory, Put}
import scala.collection.JavaConverters._

object HbaseSink {
  //connection parameters
  private var conn:Connection=createConnection()

  def createConnection(): Connection = {
    val hadoopConf = new Configuration()
    hadoopConf.set(HConstants.ZOOKEEPER_QUORUM,"CentOS")
    return ConnectionFactory.createConnection(hadoopConf)
  }

  def writeToHbase(tableName:String,values:List[(String,Int)]): Unit ={
    var tName:TableName=TableName.valueOf(tableName)
    val mutator = conn.getBufferedMutator(tName)
    var scalaList=values.map(t=>{
      val put = new Put(t._1.getBytes())
      put.addColumn("cf1".getBytes(),"count".getBytes(),(t._2+" ").getBytes())
      put
    })
    //batch write
    mutator.mutate(scalaList.asJava)
    mutator.flush()
    mutator.close()
  }

  //monitor JVM exit; the hook is called back when the JVM shuts down
  Runtime.getRuntime.addShutdownHook(new Thread(new Runnable {
    override def run(): Unit = {
      println("-----close----")
      conn.close()
    }
  }))
}

Advanced RDD Topics

Analyzing WordCount

sc.textFile("hdfs:///demo/word/words")	//RDD0
	.flatMap(_.split(" "))				//RDD1
	.map((_,1))							//RDD2
	.reduceByKey(_+_)					//RDD3 finalRDD
	.collect							//Array 任务提交


What characteristics does an RDD have?

* Internally, each RDD is characterized by five main properties:
* - A list of partitions
* - A function for computing each split
* - A list of dependencies on other RDDs
* - Optionally, a Partitioner for key-value RDDs (e.g. to say that the RDD is hash-partitioned)
* - Optionally, a list of preferred locations to compute each split on (e.g. block locations for an HDFS file)

  • An RDD has partitions; the number of partitions equals the RDD's degree of parallelism
  • Each partition is computed independently, with data-local computation wherever possible
  • RDDs are read-only datasets, and RDDs depend on one another
  • For key-value RDDs, a partitioning strategy can be specified [optional]
  • Based on where the data lives, the best location is chosen so computation stays local to the data [optional] (a small inspection sketch follows this list)
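As a hedged sketch (the pair RDD below is only an illustrative example), several of these five properties can be inspected directly from the RDD API:

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)), 4).reduceByKey(_ + _)

println(pairs.partitions.length)                       //1) the list of partitions
println(pairs.dependencies)                            //3) dependencies on parent RDDs
println(pairs.partitioner)                             //4) optional Partitioner (a HashPartitioner after reduceByKey)
println(pairs.preferredLocations(pairs.partitions(0))) //5) preferred locations for one partition
//2) the compute function itself is internal to each RDD implementation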

RDD Fault Tolerance

Understanding how the DAGScheduler divides a job into stages requires knowing the term lineage, which usually refers to an RDD's ancestry.

Spark's computation is essentially a series of transformations over RDDs. Because an RDD is an immutable, read-only collection, each transformation takes the previous RDD as its input, so an RDD's lineage describes the dependency relationships between RDDs. To keep RDD data robust, an RDD remembers through this lineage how it was derived from other RDDs. Spark classifies the relationships between RDDs into wide dependencies and narrow dependencies and uses the dependency information recorded in the lineage to recover from failures. Spark's current fault-tolerance strategies are: recomputation based on RDD dependencies (no intervention required), RDD cache (temporary caching), and RDD checkpoint (persistence).
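As a small sketch (reusing the word-count pipeline from earlier in this article), an RDD's lineage can be printed with toDebugString, which shows the chain of parent RDDs and the shuffle boundaries:

val wc = sc.textFile("hdfs:///demo/word/words")
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

//prints the dependency chain: ShuffledRDD <- MapPartitionsRDD <- ... <- HadoopRDD
println(wc.toDebugString)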

RDD Cache

Caching is one means of fault tolerance for RDD computation: if an RDD's data is lost, the program can quickly recompute the current RDD from the cache instead of re-deriving all upstream RDDs. So when an RDD is used multiple times, consider caching it to improve execution efficiency.

scala> var finalRDD=sc.textFile("hdfs:///demo/word/words").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)
finalRDD: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:24

scala> finalRDD.cache
res0: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:24

scala> finalRDD.collect
res1: Array[(String, Int)] = Array((up,1), (this,1), (is,1), (day,2), (demo,1), (good,2), (study,1))

scala> finalRDD.collect
res2: Array[(String, Int)] = Array((up,1), (this,1), (is,1), (day,2), (demo,1), (good,2), (study,1))


You can call the unpersist method to clear the cache.

scala> finalRDD.unpersist()
res4: org.apache.spark.rdd.RDD[(String, Int)] @scala.reflect.internal.annotations.uncheckedBounds = ShuffledRDD[4] at reduceByKey at <console>:24

Besides cache, Spark offers finer-grained caching options: you can choose a storage level appropriate to your cluster's memory and specify it with the persist method.

RDD#persist(StorageLevel.MEMORY_ONLY)

The storage levels currently supported by Spark are:

object StorageLevel {
  val NONE = new StorageLevel(false, false, false, false)
  val DISK_ONLY = new StorageLevel(true, false, false, false)          //disk only
  val DISK_ONLY_2 = new StorageLevel(true, false, false, false, 2)     //disk only, 2 replicas
  val MEMORY_ONLY = new StorageLevel(false, true, false, true)
  val MEMORY_ONLY_2 = new StorageLevel(false, true, false, true, 2)
  val MEMORY_ONLY_SER = new StorageLevel(false, true, false, false)    //serialize first, then keep in memory; costs CPU, saves memory
  val MEMORY_ONLY_SER_2 = new StorageLevel(false, true, false, false, 2)
  val MEMORY_AND_DISK = new StorageLevel(true, true, false, true)
  val MEMORY_AND_DISK_2 = new StorageLevel(true, true, false, true, 2)
  val MEMORY_AND_DISK_SER = new StorageLevel(true, true, false, false)
  val MEMORY_AND_DISK_SER_2 = new StorageLevel(true, true, false, false, 2)
  val OFF_HEAP = new StorageLevel(true, true, true, false, 1)
...

So how should you choose?

By default MEMORY_ONLY has the best performance, but only if your memory is large enough to hold the entire RDD. Because no serialization or deserialization is performed, that overhead is avoided; subsequent operators on the RDD work on pure in-memory data with no disk reads, and no replica has to be copied and sent to other nodes. Note, however, that in real production environments the scenarios where this level can be used directly are limited: if the RDD holds a lot of data (say billions of records), using it directly can cause a JVM OOM error.

If MEMORY_ONLY overflows memory, try MEMORY_ONLY_SER. This level serializes the RDD data before keeping it in memory, so each partition is just a byte array, greatly reducing the number of objects and the memory footprint. The extra cost compared with MEMORY_ONLY is the serialization/deserialization overhead, but subsequent operators still work on pure memory, so overall performance remains fairly high. The same caveat applies: too much data can still cause an OOM error.

Do not spill to disk unless the in-memory computation is very expensive, or it can filter out a large amount of data and keep only the relatively important part in memory; otherwise reading from disk makes the computation slow and performance drops sharply.

The levels with the _2 suffix replicate every piece of data and send the copy to another node. The replication and network transfer add considerable overhead, so they are not recommended unless high availability of the job is required.

The Checkpoint Mechanism

For RDDs with a long lineage whose computation is expensive, you can use the checkpoint mechanism to store the RDD's results. The biggest difference from caching is that a checkpointed RDD is persisted directly to a file system (HDFS is usually recommended), and the checkpoint is not cleaned up automatically. Note that checkpointing first marks the RDD during the computation; after the job finishes, the marked RDD is actually checkpointed, which means the marked RDD's dependencies and result are computed again.

sc.setCheckpointDir("hdfs://CentOS:9000/checkpoints")
val rdd1 = sc.textFile("hdfs://CentOS:9000/demo/words/").map(line => {
  println(line)
})
//mark the current RDD for checkpointing
rdd1.checkpoint()
rdd1.collect()

For this reason checkpoint is usually combined with cache, which guarantees the data is computed only once.

sc.setCheckpointDir("hdfs://CentOS:9000/checkpoints")
val rdd1 = sc.textFile("hdfs://CentOS:9000/demo/words/").map(line => {
  println(line)
})
rdd1.persist(StorageLevel.MEMORY_AND_DISK) //cache first
//mark the current RDD for checkpointing
rdd1.checkpoint()
rdd1.collect()
rdd1.unpersist() //remove the cache

Task Computation Source Code Analysis

Theory

From the code above we can see that before executing a job, Spark builds a DAG of task execution according to the RDD transformation relationships and divides the job into several stages. Spark divides stages based on the dependencies between RDDs: RDD-to-RDD transformations are classified as ShuffleDependency (wide dependency) or NarrowDependency (narrow dependency). When Spark finds a narrow dependency between RDDs, it folds the operators of those RDDs into the current stage; when it meets a wide dependency, it opens a new stage.

Determining Wide vs. Narrow Dependencies

Wide dependency: one partition of the parent RDD corresponds to multiple partitions of the child RDD (the data forks): ShuffleDependency.

Narrow dependency: one partition of the parent RDD (possibly of several parent RDDs) corresponds to exactly one partition of the child RDD: OneToOneDependency | RangeDependency | PruneDependency.

Before submitting a job, Spark starts from the final RDD and works backwards through all dependent RDDs and the dependencies between them: narrow dependencies are merged into the current stage, while a wide dependency opens a new stage, as the sketch below illustrates.
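A hedged sketch (again using the word-count RDDs) of checking which kind of dependency a transformation created:

import org.apache.spark.ShuffleDependency

val mapped  = sc.textFile("hdfs:///demo/word/words").flatMap(_.split(" ")).map((_, 1))
val reduced = mapped.reduceByKey(_ + _)

//flatMap/map create narrow (OneToOne) dependencies; reduceByKey introduces a ShuffleDependency
println(mapped.dependencies.head.getClass.getSimpleName)
println(reduced.dependencies.head.isInstanceOf[ShuffleDependency[_, _, _]]) //true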


getMissingParentStages

private def getMissingParentStages(stage: Stage): List[Stage] = {
  val missing = new HashSet[Stage]
  val visited = new HashSet[RDD[_]]
  // We are manually maintaining a stack here to prevent StackOverflowError
  // caused by recursively visiting
  val waitingForVisit = new ArrayStack[RDD[_]]
  def visit(rdd: RDD[_]) {
    if (!visited(rdd)) {
      visited += rdd
      val rddHasUncachedPartitions = getCacheLocs(rdd).contains(Nil)
      if (rddHasUncachedPartitions) {
        for (dep <- rdd.dependencies) {
          dep match {
            case shufDep: ShuffleDependency[_, _, _] =>
              val mapStage = getOrCreateShuffleMapStage(shufDep, stage.firstJobId)
              if (!mapStage.isAvailable) {
                missing += mapStage
              }
            case narrowDep: NarrowDependency[_] =>
              waitingForVisit.push(narrowDep.rdd)
          }
        }
      }
    }
  }
  waitingForVisit.push(stage.rdd)
  while (waitingForVisit.nonEmpty) {
    visit(waitingForVisit.pop())
  }
  missing.toList
}

When a wide dependency is encountered, the system automatically creates a ShuffleMapStage.

submitMissingTasks

private def submitMissingTasks(stage: Stage, jobId: Int) {

  //compute the partitions
  val partitionsToCompute: Seq[Int] = stage.findMissingPartitions()
  ...
  //compute the preferred locations
  val taskIdToLocations: Map[Int, Seq[TaskLocation]] = try {
    stage match {
      case s: ShuffleMapStage =>
        partitionsToCompute.map { id => (id, getPreferredLocs(stage.rdd, id))}.toMap
      case s: ResultStage =>
        partitionsToCompute.map { id =>
          val p = s.partitions(id)
          (id, getPreferredLocs(stage.rdd, p))
        }.toMap
    }
  } catch {
    case NonFatal(e) =>
      stage.makeNewStageAttempt(partitionsToCompute.size)
      listenerBus.post(SparkListenerStageSubmitted(stage.latestInfo, properties))
      abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
      runningStages -= stage
      return
  }
  //map the partitions to a TaskSet
  val tasks: Seq[Task[_]] = try {
    val serializedTaskMetrics = closureSerializer.serialize(stage.latestInfo.taskMetrics).array()
    stage match {
      case stage: ShuffleMapStage =>
        stage.pendingPartitions.clear()
        partitionsToCompute.map { id =>
          val locs = taskIdToLocations(id)
          val part = partitions(id)
          stage.pendingPartitions += id
          new ShuffleMapTask(stage.id, stage.latestInfo.attemptNumber,
            taskBinary, part, locs, properties, serializedTaskMetrics, Option(jobId),
            Option(sc.applicationId), sc.applicationAttemptId, stage.rdd.isBarrier())
        }
      case stage: ResultStage =>
        partitionsToCompute.map { id =>
          val p: Int = stage.partitions(id)
          val part = partitions(p)
          val locs = taskIdToLocations(id)
          new ResultTask(stage.id, stage.latestInfo.attemptNumber,
            taskBinary, part, locs, id, properties, serializedTaskMetrics,
            Option(jobId), Option(sc.applicationId), sc.applicationAttemptId,
            stage.rdd.isBarrier())
        }
    }
  } catch {
    case NonFatal(e) =>
      abortStage(stage, s"Task creation failed: $e\n${Utils.exceptionString(e)}", Some(e))
      runningStages -= stage
      return
  }
  //call taskScheduler#submitTasks with the TaskSet
  if (tasks.size > 0) {
    logInfo(s"Submitting ${tasks.size} missing tasks from $stage (${stage.rdd}) (first 15 " +
      s"tasks are for partitions ${tasks.take(15).map(_.partitionId)})")
    taskScheduler.submitTasks(new TaskSet(
      tasks.toArray, stage.id, stage.latestInfo.attemptNumber, jobId, properties))
  }
  ...
}


Reposted from blog.csdn.net/weixin_45106430/article/details/104525301