Spark Core RDD Transformation Operators (Value Type)

1、 map(func)

Role: Returns a new RDD formed by applying the conversion function func to each element of the original RDD.

Create an RDD of 1 to 10, then multiply each element by 2 to form a new RDD:

scala> val rdd1 = sc.parallelize(1 to 10)
// a new RDD is returned, but its elements are not computed immediately (transformations are lazy)
scala> val rdd2 = rdd1.map(_ * 2)
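Collecting the result triggers the computation; a sketch of the expected output (the res variable number will differ per spark-shell session):

scala> rdd2.collect
res0: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)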

2、mapPartitions(func)

Role: Similar to map(func), but runs independently on each partition; the function type is Iterator<T> => Iterator<U>.

If there are N elements and M partitions, the map function is called N times, while the mapPartitions function is called only M times, once per partition.

scala> val rdd1 = sc.parallelize(1 to 10)
// a new RDD is returned, but its elements are not computed immediately (transformations are lazy)
scala> val rdd2 = rdd1.mapPartitions(_.map(_ * 2))
scala> rdd2.collect
res9: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
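Because the function receives a whole partition as an Iterator, it can also produce per-partition aggregates. A small sketch assuming 2 partitions of 1 to 10 (the split into 1-5 and 6-10 follows from the default slicing shown in the next section; the res number is illustrative):

scala> val rdd3 = sc.parallelize(1 to 10, 2)
scala> rdd3.mapPartitions(it => Iterator(it.sum)).collect
res10: Array[Int] = Array(15, 40)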

3、mapPartitionsWithIndex(func)

Role: Similar to mapPartitions(func), but func is given an extra Int value representing the partition index, so the function type is:

(Int, Iterator<T>) => Iterator<U>

scala> val rdd1 = sc.parallelize(Array(10,20,30,40,50,60))
scala> val rdd2 = rdd1.mapPartitionsWithIndex((index, items) => items.map((index, _)))
scala> rdd2.collect
res9: Array[(Int, Int)] = Array((0,10), (0,20), (0,30), (1,40), (1,50), (1,60))
(1) How the default number of partitions is determined (from Spark's source):
override def defaultParallelism(): Int =
   scheduler.conf.getInt("spark.default.parallelism", totalCores)
(2) How elements are assigned to partitions (from ParallelCollectionRDD.slice):
// length: number of elements in the RDD; numSlices: number of partitions
def positions(length: Long, numSlices: Int): Iterator[(Int, Int)] = {
 (0 until numSlices).iterator.map { i =>
   val start = ((i * length) / numSlices).toInt
   val end = (((i + 1) * length) / numSlices).toInt
   (start, end)
 }
}
seq match {
 case r: Range =>
   // Ranges are sliced into sub-ranges without materializing the elements (body elided here)
 case nr: NumericRange[_] =>
   // NumericRange (e.g. Long or BigInt ranges) gets similar special handling (body elided here)
 case _ =>
   val array = seq.toArray // To prevent O(n^2) operations for List etc
   positions(array.length, numSlices).map { case (start, end) =>
       array.slice(start, end).toSeq
   }.toSeq
}
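To see why the 6-element RDD above is split into (10, 20, 30) and (40, 50, 60) across 2 partitions, the slicing logic can be re-run standalone (a minimal sketch, not Spark's code verbatim):

// minimal re-implementation of the slicing logic above, for illustration only
def positions(length: Long, numSlices: Int): Iterator[(Int, Int)] =
  (0 until numSlices).iterator.map { i =>
    (((i * length) / numSlices).toInt, (((i + 1) * length) / numSlices).toInt)
  }

val data = Array(10, 20, 30, 40, 50, 60)
positions(data.length, 2).foreach { case (start, end) =>
  println(data.slice(start, end).mkString(", "))  // prints "10, 20, 30" then "40, 50, 60"
}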

5、The difference between map and mapPartitions

(1) map(): processes one element at a time.

(2) mapPartitions(): processes one whole partition at a time; the memory holding a partition's data is only released after the entire partition has been processed, so very large partitions may lead to OOM (see the sketch below).
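The difference in how often the function runs can be observed directly; a sketch assuming spark-shell in local mode, where println output from the executors appears in the console:

scala> val rdd = sc.parallelize(1 to 10, 2)
scala> rdd.map { x => println(s"map: $x"); x * 2 }.collect
// prints 10 lines, one per element
scala> rdd.mapPartitions { it => println("mapPartitions"); it.map(_ * 2) }.collect
// prints 2 lines, one per partition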

6、flatMap(func)

Role: Similar to map, but each input element may be mapped to 0 or more output elements (func should return a sequence rather than a single element: T => TraversableOnce[U]).

Create an RDD of the elements 1 to 5, then use flatMap to build a new RDD consisting of the square and cube of each element: 1, 1, 4, 8, 9, 27, ...

scala> val rdd1 = sc.parallelize(Array(1,2,3,4,5))
scala> val rdd2 = rdd1.flatMap(x => Array(x * x, x * x * x))
scala> rdd2.collect
res14: Array[Int] = Array(1, 1, 4, 8, 9, 27, 16, 64, 25, 125)
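Another typical use of flatMap is splitting lines into words, where one input element produces several output elements (a sketch; the res number is illustrative):

scala> sc.parallelize(Array("hello spark", "hello scala")).flatMap(_.split(" ")).collect
res15: Array[String] = Array(hello, spark, hello, scala)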

7、glom()

Role: Combines the elements of each partition into an array, forming a new RDD of type RDD[Array[T]].

scala> var rdd1 = sc.parallelize(Array(10,20,30,40,50,60), 4)
scala> rdd1.glom.collect
res2: Array[Array[Int]] = Array(Array(10), Array(20, 30), Array(40), Array(50, 60))
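Besides inspecting the partition layout, glom is handy for per-partition statistics such as the maximum of each partition (a sketch using the rdd1 above; the res number is illustrative):

scala> rdd1.glom().map(_.max).collect
res3: Array[Int] = Array(10, 30, 40, 60)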

8、groupBy(func)

Role: Groups elements according to the return value of func.

The return value of func becomes the key, and the matching elements go into the corresponding iterator. The result type is RDD[(K, Iterable[T])].

Create an RDD and group its elements by parity (odd/even):

scala> val rdd1 = sc.makeRDD(Array(1, 3, 4, 20, 4, 5, 8))
scala> val rdd2 = rdd1.groupBy(x => if(x % 2 == 1) "odd" else "even")
scala> rdd2.collect
res5: Array[(String, Iterable[Int])] = Array((even,CompactBuffer(4, 20, 4, 8)), (odd,CompactBuffer(1, 3, 5)))

9、filter(func)

Role: Filters the RDD; the new RDD consists of the elements for which func returns true.

Create an RDD of strings and filter it into a new RDD containing only the strings that contain the substring "xiao":

scala> val rdd1 = sc.parallelize(Array("xiaoli", "laoli", "laowang", "xiaocang", "xiaojing", "xiaokong"))
scala> val rdd2 = rdd1.filter(_.contains("xiao"))
scala> rdd2.collect
res4: Array[String] = Array(xiaoli, xiaocang, xiaojing, xiaokong) 

10、sample(withReplacement,fraction,seed)

Role:

(1) Randomly samples the data at the given ratio fraction using the specified random seed (the expected number of sampled elements is size * fraction). Note that the result is not guaranteed to match the ratio exactly.

(2) withReplacement indicates whether sampling is done with replacement:

true: sampling with replacement; the same element may be drawn more than once, and fraction must be >= 0.

false: sampling without replacement; each element can be drawn at most once, and fraction must be in the range [0, 1].

(3) seed specifies the seed of the random number generator. Usually the default is used, or the current timestamp is passed in.

(4) sampling without replacement

scala> val rdd1 = sc.parallelize(1 to 10)

scala> rdd1.sample(false, 0.5).collect
res15: Array[Int] = Array(1, 3, 4, 7)

(5) sampling with replacement

scala> val rdd1 = sc.parallelize(1 to 10)
scala> rdd1.sample(true, 2).collect
res25: Array[Int] = Array(1, 1, 2, 3, 3, 4, 4, 5, 5, 5, 5, 5, 6, 6, 7, 7, 8, 8, 9)
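Passing an explicit seed makes the sample reproducible across runs; a sketch (which elements are drawn depends on the seed, so no output is shown):

scala> rdd1.sample(withReplacement = false, fraction = 0.3, seed = 10L).collect
// running this again with the same seed returns the same elements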

11、distinct([numTasks])

Role: Removes duplicate elements from the RDD. The optional parameter specifies the number of tasks used for de-duplication; by default it equals the number of partitions.

scala> val rdd1 = sc.parallelize(Array(10,10,2,5,3,5,3,6,9,1))

scala> rdd1.distinct().collect
res29: Array[Int] = Array(6, 10, 2, 1, 3, 9, 5)

Under the hood, distinct is implemented with reduceByKey:

  /**
   * Return a new RDD containing the distinct elements in this RDD.
   */
  def distinct(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
    map(x => (x, null)).reduceByKey((x, y) => x, numPartitions).map(_._1)
  }
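The same steps can be reproduced at the user level, which makes the implementation above easier to follow (a sketch using rdd1 from above; the ordering of the result may differ):

scala> rdd1.map(x => (x, null)).reduceByKey((x, _) => x).map(_._1).collect
// same elements as rdd1.distinct().collect, e.g. Array(6, 10, 2, 1, 3, 9, 5)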

12、coalesce(numPartitions)

Role: Shrinks the number of partitions to the specified value. Useful for improving efficiency on the small dataset that remains after filtering a large one.

scala> val rdd1 = sc.parallelize(0 to 100, 5)

scala> rdd1.partitions.length
res3: Int = 5

// reduce the number of partitions to 2
scala> val rdd2 = rdd1.coalesce(2)

scala> rdd2.partitions.length
res4: Int = 2
Note:

The second parameter indicates whether to shuffle. If it is omitted or passed as false, no shuffle is performed; in that case the call can effectively reduce the number of partitions, but trying to increase them has no effect.
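This can be checked directly (a sketch continuing with the 5-partition rdd1 above; res numbers are illustrative):

scala> rdd1.coalesce(10).partitions.length                   // no shuffle: cannot grow beyond the original 5
res5: Int = 5
scala> rdd1.coalesce(10, shuffle = true).partitions.length   // with shuffle, increasing the partition count works
res6: Int = 10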

13、repartition(numPartitions)

Role: Reshuffles all the data according to the new number of partitions; this operation always moves data across the network (a full shuffle).

The new number of partitions can be larger or smaller than before. repartition is generally used to increase the number of partitions, and it makes that intent clear.

scala> val rdd1 = sc.parallelize(0 to 100, 5)

scala> val rdd2 = rdd1.repartition(3)

scala> rdd2.partitions.length
res4: Int = 3

scala> val rdd3 = rdd1.repartition(10)

scala> rdd3.partitions.length
res5: Int = 10
The difference between coalesce and repartition

(1) coalesce repartitions the data and lets you choose whether to shuffle, controlled by the parameter shuffle: Boolean = false / true.

(2) repartition actually calls coalesce with shuffle = true:

def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
  coalesce(numPartitions, shuffle = true)
}

(3) When only reducing the number of partitions, avoid the shuffle where possible, i.e. prefer coalesce (see the sketch below).
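For example, shrinking the 5-partition rdd1 above down to 2 works either way, but coalesce avoids the shuffle (a sketch; res numbers are illustrative):

scala> rdd1.coalesce(2).partitions.length        // narrow dependency, no shuffle
res7: Int = 2
scala> rdd1.repartition(2).partitions.length     // same partition count, but triggers a full shuffle
res8: Int = 2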

14、sortBy(func,[ascending],[numTasks])

Role: Uses func to process the data and sorts by the result; the default order is ascending.

scala> val rdd1 = sc.parallelize(Array(1,3,4,10,4,6,9,20,30,16))

scala> rdd1.sortBy(x => x).collect
res17: Array[Int] = Array(1, 3, 4, 4, 6, 9, 10, 16, 20, 30)

scala> rdd1.sortBy(x => x, true).collect
res18: Array[Int] = Array(1, 3, 4, 4, 6, 9, 10, 16, 20, 30)

// descending order
scala> rdd1.sortBy(x => x, false).collect
res19: Array[Int] = Array(30, 20, 16, 10, 9, 6, 4, 4, 3, 1)
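sortBy can also sort by a key derived from each element, for example by the last digit (a sketch using rdd1 above; elements with equal keys may come back in any relative order, so no exact output is shown):

scala> rdd1.sortBy(x => x % 10).collect
// key 0 first (10, 20, 30), then 1, 3, the two 4s, then 6 and 16, and finally 9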

15、pipe(command,[envVars])

Role: Pipes the RDD through an external command, once per partition: each element of the partition is passed to the shell script or command as a line of standard input, and the command's output lines form the new RDD.

The command is executed once per partition; if there is only one partition, it runs exactly once.

Note:

The script must be placed at a location that the worker nodes can access, and it should be executable (e.g. chmod +x pipe.sh).

(1) Create a script file pipe.sh:

echo "hello"
while read line;do
    echo ">>>"$line
done

(2) Create an RDD with only one partition:

scala> val rdd1 = sc.parallelize(Array(10,20,30,40), 1)

scala> rdd1.pipe("./pipe.sh").collect
res1: Array[String] = Array(hello, >>>10, >>>20, >>>30, >>>40)

(3) Create an RDD with 2 partitions:

scala> val rdd1 = sc.parallelize(Array(10,20,30,40), 2)

scala> rdd1.pipe("./pipe.sh").collect
res2: Array[String] = Array(hello, >>>10, >>>20, hello, >>>30, >>>40)

The script is executed once per partition, and each element of the partition is passed to it as one line of standard input.
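pipe also works with ordinary shell commands; for example, wc -l reports how many lines (elements) each partition fed to the command (a sketch; exact output formatting depends on the platform's wc):

scala> sc.parallelize(Array(10, 20, 30, 40), 2).pipe("wc -l").collect
// one count string per partition, e.g. Array(2, 2)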

Origin www.cnblogs.com/hyunbar/p/12045543.html