Spark Series (Part 8): Spark SQL's DataFrame and Dataset

1. Introduction to Spark SQL

Spark SQL is a Spark submodule used mainly for processing structured data. It has the following features:

  • SQL queries blend seamlessly with Spark programs, allowing you to query structured data with either SQL or the DataFrame API (see the sketch after this list);
  • Supports multiple programming languages;
  • Supports hundreds of external data sources, including Hive, Avro, Parquet, ORC, JSON, and JDBC;
  • Supports HiveQL syntax as well as Hive SerDes and UDFs, allowing you to access existing Hive warehouses;
  • Supports standard JDBC and ODBC connections;
  • Supports features such as an optimizer, columnar storage, and code generation;
  • Supports scaling out while ensuring fault tolerance.
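As a minimal sketch of the first point, assuming a local SparkSession and a hypothetical `people.json` input file (the file and its `name`/`age` columns are illustrative assumptions, not from the original article):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkSqlIntro")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// The same structured data can be queried with SQL or with the DataFrame API.
val df = spark.read.json("people.json")   // hypothetical input file
df.createOrReplaceTempView("people")

val viaSql = spark.sql("SELECT name FROM people WHERE age > 18")  // SQL
val viaApi = df.filter($"age" > 18).select("name")                // DataFrame API
viaSql.show()
viaApi.show()   // both print the same result
```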


2. DataFrame & DataSet

2.1 DataFrame

To support the processing of structured data, Spark SQL provides a new data structure, the DataFrame. A DataFrame is a dataset organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python. Because Spark SQL supports multiple languages, each language defines its own DataFrame abstraction, the main ones being:

| Language | Main abstraction |
|----------|------------------|
| Scala | Dataset[T] & DataFrame (an alias for Dataset[Row]) |
| Java | Dataset[T] |
| Python | DataFrame |
| R | DataFrame |

2.2 DataFrame vs. RDDs

The main difference between DataFrames and RDDs is that DataFrames are for structured data while RDDs are for unstructured data; their internal data structures differ as shown below:

(Figure: internal structure of an RDD vs. a DataFrame)

A DataFrame has an explicit schema, i.e., its column names and column types are known. The benefit is that Spark can reduce the amount of data read and better optimize the execution plan, ensuring query efficiency.
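The schema can be inspected directly; a minimal sketch, reusing the `df` from the first example (the printed schema assumes the hypothetical `name`/`age` columns):

```scala
df.printSchema()
// root
//  |-- age: long (nullable = true)
//  |-- name: string (nullable = true)
```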

How should you choose between DataFrames and RDDs?

  • If you want to use functional programming rather than the DataFrame API, use RDDs;
  • If your data is unstructured (such as streaming media or character streams), use RDDs;
  • If your data is structured (such as data in an RDBMS) or semi-structured (such as logs), prefer DataFrames for performance reasons (see the sketch after this list).
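A minimal sketch of the contrast, assuming the SparkSession from above and hypothetical `people.txt` / `people.json` input files:

```scala
// RDD: rows are opaque objects; the filter logic below is a black box
// that Spark cannot inspect or optimize.
val rdd = spark.sparkContext
  .textFile("people.txt")                            // e.g. lines like "Tom,29"
  .map(_.split(","))
  .map(fields => (fields(0), fields(1).trim.toInt))
val adultsRdd = rdd.filter { case (_, age) => age > 18 }

// DataFrame: named, typed columns are known to Spark, so the same query
// goes through the Catalyst optimizer.
val adultsDf = spark.read.json("people.json")
  .filter($"age" > 18)
  .select("name")
```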

2.3 DataSet

A Dataset is a distributed collection of data, introduced in Spark 1.6. It combines the advantages of RDDs and DataFrames: it is strongly typed and supports lambda functions, but it is only available in Scala and Java. Since Spark 2.0, to make life easier for developers, Spark has merged the DataFrame and Dataset APIs into a single set of structured APIs (Structured APIs), so users can operate on both through the same standard API.

Note here that the DataFrame API is marked as an Untyped API, while the DataSet API is marked as a Typed API; both terms are explained below.


2.4 Static Typing and Runtime Type Safety

The differences in static typing (static-typing) and runtime type safety (runtime type-safety) mainly show up as follows:

In practice, if you use Spark SQL query strings, you will not discover syntax errors until runtime, whereas if you use DataFrame and Dataset, errors can be caught at compile time (which saves development time and overall cost). The main difference between DataFrame and Dataset is:

With a DataFrame, the compiler reports an error when you call a function that is not part of the API, but it still cannot detect the use of a column name that does not exist. The Dataset API, by contrast, is expressed entirely with lambda functions and JVM typed objects, so any mismatched type parameters are discovered at compile time.

All of the above can be viewed as a spectrum of type safety, covering syntax errors and analysis errors. On this spectrum, Dataset is the strictest, yet also the most productive for developers.

<div align="center"> <img width="600px" src="https://raw.githubusercontent.com/heibaiying/BigData-Notes/master/pictures/spark-运行安全.png"/> </div>
The description above may not be very intuitive, so here is an example of code compilation in IDEA:

(Figure: compile-time error checking in IDEA)

A possible source of confusion here: a DataFrame clearly has a definite schema (i.e., its column names and column types are known), so why can it still not infer column names and catch such errors? This is because a DataFrame is Untyped.

2.5 Untyped & Typed

As noted above, the DataFrame API is marked as an Untyped API, while the DataSet API is marked as a Typed API. The DataFrame's "Untyped" is relative to the language or API level: it does have a definite schema, i.e., its column names and column types are all known, but this information is maintained entirely by Spark, which only checks at runtime whether these types match the specified types. This is also why, since Spark 2.0, the official recommendation is to regard a DataFrame as a Dataset[Row], where Row is a trait defined in Spark whose subclasses encapsulate the column field information.
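A minimal sketch of what Dataset[Row] means in practice, reusing the `df` from earlier (the `name` column is an assumption): a Row carries the values, but field access is resolved and type-checked only at runtime.

```scala
import org.apache.spark.sql.Row

val firstRow: Row = df.first()
val name = firstRow.getAs[String]("name")  // column name and cast are checked only at runtime
```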

By contrast, a DataSet is Typed, i.e., strongly typed. As in the code below, a Dataset's type is explicitly specified by a case class (Scala) or a Java Bean (Java); here each row of data represents a Person. This information is guaranteed by the JVM, so wrong field names and wrong types are discovered by the IDE at compile time.

```scala
import org.apache.spark.sql.Dataset
import spark.implicits._  // provides the Encoder required by .as[Person]
case class Person(name: String, age: Long)
val dataSet: Dataset[Person] = spark.read.json("people.json").as[Person]
```
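To make the Typed/Untyped difference concrete, here is a sketch of the same misspelled field name ("aeg" instead of "age", a deliberate hypothetical mistake) in both APIs:

```scala
// DataFrame (Untyped): this compiles, but fails only at runtime with an
// AnalysisException, because the column "aeg" does not exist.
spark.read.json("people.json").select("aeg")

// Dataset (Typed): the same mistake is rejected by the compiler,
// because Person has no field named "aeg".
val ds = spark.read.json("people.json").as[Person]
// ds.map(p => p.aeg)               // does not compile: value aeg is not a member of Person
val ages = ds.map(p => p.age + 1)   // field access is checked at compile time
```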

3. DataFrame & DataSet & RDDs Summary

Here is a brief summary of the three:

  • RDDs are suited to processing unstructured data, while DataFrame & DataSet are better suited to structured and semi-structured data;
  • DataFrame & DataSet can be accessed through the unified Structured APIs, while RDDs are better suited to functional-programming scenarios;
  • Compared with DataFrame, DataSet is strongly typed (Typed), with stricter static type checking;
  • DataSets, DataFrames, and SQL all rely on the RDD API under the hood, while exposing structured access interfaces.

<div align="center"> <img width="600px" src="https://raw.githubusercontent.com/heibaiying/BigData-Notes/master/pictures/spark-structure-api.png"/> </div>

4. How Spark SQL Works

DataFrame, DataSet, and Spark SQL all share the same underlying execution flow (a sketch of how to inspect it follows this list):

  1. Write DataFrame/Dataset/SQL code;
  2. If the code is valid, i.e., it has no compilation errors, Spark converts it into a logical plan;
  3. Spark transforms this logical plan into a physical plan, optimizing the code along the way;
  4. Spark then executes this physical plan (as RDD operations) on the cluster.
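A minimal sketch for inspecting these stages, reusing the `df` from earlier (file and column names are assumptions): `explain(true)` prints the parsed and analyzed logical plans, the optimized logical plan, and the chosen physical plan.

```scala
val query = df.filter($"age" > 18).select("name")
query.explain(true)   // prints logical plans (parsed/analyzed/optimized) and the physical plan
```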

4.1 Logical Plan

The first stage of execution converts user code into a logical plan. The code is first turned into an unresolved logical plan; the plan is "unresolved" because, although your code is syntactically correct, the tables or columns it references may not exist. Spark resolves it using the analyzer, based on the catalog (which stores information about all tables and DataFrames). If resolution fails, execution is rejected; if it succeeds, the result is passed to the Catalyst Optimizer, a collection of rules that optimize the logical plan through techniques such as predicate pushdown, finally outputting the optimized logical plan.
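A sketch of the analyzer at work (the bad column name is a deliberate hypothetical mistake):

```scala
// Analysis happens as soon as the Dataset is built, so this line itself throws
// org.apache.spark.sql.AnalysisException: the code parses into an unresolved
// logical plan, but the analyzer cannot resolve "no_such_column" in the catalog.
val bad = df.select("no_such_column")
```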


4.2 Physical Plan

Once the optimized logical plan is obtained, Spark begins the physical planning process. It generates different physical execution strategies and compares them through a cost model, choosing the best physical plan to execute on the cluster. The output of physical planning is a series of RDDs and transformations.
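A sketch using a Dataset's `queryExecution` handle (reusing `query` from above), which exposes the plans on either side of this step; treat the exact output as version-dependent:

```scala
println(query.queryExecution.optimizedPlan)  // optimized logical plan (input to physical planning)
println(query.queryExecution.executedPlan)   // selected physical plan: RDDs + transformations
```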


4.3 Execution

After a physical plan is selected, Spark runs the corresponding RDD code, performs further optimizations at runtime, generates native Java bytecode, and finally returns the result to the user.
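A sketch for peeking at the generated code (the debug implicits are part of Spark's developer tooling; whether whole-stage code generation kicks in depends on the query and Spark version):

```scala
import org.apache.spark.sql.execution.debug._

query.debugCodegen()   // prints the Java source produced by whole-stage code generation
```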


More articles in this big data series can be found in the GitHub open-source project: Big Data Getting Started.
