Implementing Spark Word Count in Scala

In the Spark shell, where sc is the pre-built SparkContext, word count is a single RDD pipeline: split each line into tokens, map each token to a (token, 1) pair, and sum the counts per key with reduceByKey. (The MovieLens 1M movies.dat file uses "::" as its field delimiter, which is why that is the split pattern here.)

sc.textFile("./ml-1m/movies.dat").flatMap(line => line.split("::")).map(word => (word, 1))
  .reduceByKey((v1, v2) => v1 + v2).collect
// The same job; _ + _ is Scala's placeholder shorthand for (v1, v2) => v1 + v2.
sc.textFile("./ml-1m/movies.dat").flatMap(line => line.split("::")).map(word => (word, 1))
  .reduceByKey(_ + _).collect
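
Outside the shell there is no pre-built sc, so a self-contained job has to create its own session. Below is a minimal standalone sketch of the same pipeline (not from the original post); the object name WordCount, the local[*] master, and the input path are illustrative assumptions.

import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // Build a session; local[*] runs on all local cores (illustrative choice).
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    val counts = sc.textFile("./ml-1m/movies.dat")  // assumed path, as in the post
      .flatMap(_.split("::"))        // split each line into tokens
      .map(token => (token, 1))      // pair every token with a count of 1
      .reduceByKey(_ + _)            // sum the counts for each token

    // Print the ten most frequent tokens.
    counts.sortBy(_._2, ascending = false).take(10).foreach(println)

    spark.stop()
  }
}

On a real dataset, prefer take or saveAsTextFile over collect, since collect materializes every (token, count) pair in driver memory.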
