Environment: Spark 1.6.1 + Hadoop 2.6.4 + Scala 2.10.6
Goal: develop a word-count application in Scala, package it, and run it successfully on the Spark cluster.
Steps:
1. Develop the code locally
1.1 Configure the POM
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.xhj.scalafirst</groupId>
    <artifactId>scalafirst</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.10.6</scala.version>
        <spark.version>1.6.1</spark.version>
        <hadoop.version>2.6.4</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <!-- Compiles the Scala sources -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-make:transitive</arg>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- Builds a fat (uber) jar so the job can be shipped to the cluster as a single file -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <!-- Drop signature files from signed dependencies; they break the merged jar -->
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <!-- Sets the jar's Main-Class; must match the object written in step 1.2 -->
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>scalapackage.testspark.SparkWC</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
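For the plugins above to find the sources, the project should follow the layout declared in <sourceDirectory> and <testSourceDirectory>. A sketch of the expected tree (SparkWC.scala is the file written in the next step):

scalafirst/
├── pom.xml
└── src/
    ├── main/scala/scalapackage/testspark/SparkWC.scala
    └── test/scala/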
1.2 Write the code
package scalapackage.testspark

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import scala.collection.mutable

/**
 * Created by Germmy on 2018/5/3.
 */
object SparkWC {
  def main(args: Array[String]): Unit = {
    // Do not set a master here; on the cluster it is supplied by spark-submit.
    // Re-enable setMaster("local[*]") only for local debugging.
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkWC") //.setMaster("local[*]")
    val sc: SparkContext = new SparkContext(sparkConf)
    // args(0): input path; args(1): output path
    val lines: RDD[String] = sc.textFile(args(0))
    // Split each line into words, count each word, and sort by count (ascending)
    val counts: RDD[(String, Int)] = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).sortBy(_._2)
    // For local debugging, collect and print instead of writing to the file system:
    // val buffer: mutable.Buffer[(String, Int)] = counts.collect().toBuffer
    // println(buffer)
    counts.saveAsTextFile(args(1))
    sc.stop()
  }
}
Notes:
1) setMaster("local[*]") runs Spark locally, using as many worker threads as there are CPU cores to simulate a cluster. Use it while debugging locally, then comment it out before packaging for the real Spark cluster, where the master is supplied by spark-submit (see the sketch after these notes).
2) The program takes two arguments: args(0) is the path to read the input from, args(1) is the path the result is written to.
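For note 1), a minimal local-debugging sketch. The SparkWCLocal object and the file:///tmp paths are illustrative, not part of the project:

package scalapackage.testspark

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical local-mode variant for debugging in the IDE; it is never
// referenced by the cluster job, so it can simply be left out of the jar.
object SparkWCLocal {
  def main(args: Array[String]): Unit = {
    // local[*] = run in-process with one worker thread per CPU core
    val conf = new SparkConf().setAppName("SparkWCLocal").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val counts = sc.textFile("file:///tmp/words.txt") // illustrative input path
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false) // most frequent word first
    counts.collect().foreach(println) // print instead of saveAsTextFile
    sc.stop()
  }
}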
2. Package the application
2.1 Run the command
mvn clean package
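If the build succeeds, the shade plugin writes the fat jar to target/scalafirst-1.0-SNAPSHOT.jar (the name follows the artifactId and version in the POM). To meet the goal of running on the cluster, copy the jar to a node with Spark installed and submit it. A sketch, assuming a standalone master at spark://master:7077; the master URL and HDFS paths are placeholders to replace with your own:

spark-submit \
  --class scalapackage.testspark.SparkWC \
  --master spark://master:7077 \
  scalafirst-1.0-SNAPSHOT.jar \
  hdfs://master:9000/wordcount/in \
  hdfs://master:9000/wordcount/out

The last two arguments become args(0) and args(1) from step 1.2. The output directory must not exist beforehand, or saveAsTextFile will fail; afterwards the result can be inspected with hdfs dfs -cat on the part-* files in the output directory.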
Done.