1. Given the following file testdata.txt:
At a high level
every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster
The main abstraction Spark provides is a resilient distributed dataset (RDD)
which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel
RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system)
or an existing Scala collection in the driver program
and transforming it
Users may also ask Spark to persist an RDD in memory
allowing it to be reused efficiently across parallel operations. Finally
RDDs automatically recover from node failures
Complete the following tasks:
(1) Filter out the lines containing "Spark" and count how many there are.
(2) Output the number of words in the line that contains the most words.
(3) Count the number of lines containing "a" and the number of lines containing "b".
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object work02 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("work02").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // Load the data
    val user1: RDD[String] = sc.textFile("E://aaa//testdata.txt", 1)
    // 1. Filter the lines containing "Spark" and count them
    val sparkLines = user1.filter(_.contains("Spark"))
    println(sparkLines.count())
    // 2. Number of words in the line with the most words.
    // Note: the original version incremented a mutable var inside map(),
    // which gives wrong line indices when partitions run in parallel;
    // mapping each line to its word count and taking max() avoids that.
    val maxWords = user1.map(_.split(" ").length).max()
    println(maxWords)
    // 3. Count the lines containing "a" and the lines containing "b"
    val aCount = user1.filter(_.contains("a")).count()
    val bCount = user1.filter(_.contains("b")).count()
    println(s"lines with a: $aCount, lines with b: $bCount")
    sc.stop()
  }
}
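As a quick cross-check, the same three computations can be run on a plain Scala `List` by inlining the ten lines of testdata.txt; this is a minimal sketch that needs no Spark or input file (the object name `Verify` is arbitrary):

```scala
object Verify {
  def main(args: Array[String]): Unit = {
    // The ten lines of testdata.txt, inlined so no file or cluster is needed
    val lines = List(
      "At a high level",
      "every Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster",
      "The main abstraction Spark provides is a resilient distributed dataset (RDD)",
      "which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel",
      "RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system)",
      "or an existing Scala collection in the driver program",
      "and transforming it",
      "Users may also ask Spark to persist an RDD in memory",
      "allowing it to be reused efficiently across parallel operations. Finally",
      "RDDs automatically recover from node failures"
    )
    println(lines.count(_.contains("Spark")))   // lines containing "Spark": 3
    println(lines.map(_.split(" ").length).max) // most words in one line: 22
    println(lines.count(_.contains("a")))       // lines containing "a": 10
    println(lines.count(_.contains("b")))       // lines containing "b": 4
  }
}
```

The `filter`/`map`/`max`/`count` calls here mirror the RDD operations in the solution above, so the printed values are the expected outputs of the Spark program.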