Compiling Spark 1.6.1

Abstract: Spark can be compiled with:

Maven,

sbt,

IntelliJ IDEA

ref: Spark 1.0.0 source compilation and deployment package generation

 

  Also, if you want to load the Spark project into Eclipse, you first need to generate an 'eclipse project' using one of the solutions below:

1. mvn eclipse:eclipse [optional]

2. ./sbt/sbt clean compile package

   or run sbt/sbt and then enter 'eclipse' at the prompt

   E.g., if you want to build a Hadoop 2.5.2 jar, you can run:

mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.2 -DskipTests clean package

 YARN/Hadoop build profile: 2.4 (-Pyarn -Phadoop-2.4)

 YARN/Hadoop version: 2.5.2 (-Dhadoop.version=2.5.2)

 Hive: enabled by default
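The same profile/version pattern covers other Hadoop lines. The variants below are a sketch based on the standard Spark 1.6 Maven profiles (`-Phadoop-2.6`, `-Phive`, `-Phive-thriftserver`); the exact Hadoop point versions shown are illustrative:

```shell
# Build against Hadoop 2.6.x instead (profile from Spark 1.6's build docs):
mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package

# Explicitly include Hive and the JDBC thrift server in the assembly:
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.5.2 -Phive -Phive-thriftserver -DskipTests clean package
```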

[INFO] Building jar: /home/user/build-spark/spark-1.6.1/external/kafka-assembly/target/spark-streaming-kafka-assembly_2.10-1.6.1-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [01:11 min]
[INFO] Spark Project Test Tags ............................ SUCCESS [ 33.845 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 33.728 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 13.160 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 10.966 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 16.310 s]
[INFO] Spark Project Core ................................. SUCCESS [04:29 min]
[INFO] Spark Project Bagel ................................ SUCCESS [  6.629 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 23.603 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 49.411 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:15 min]
[INFO] Spark Project SQL .................................. SUCCESS [01:53 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:02 min]
[INFO] Spark Project Tools ................................ SUCCESS [  2.954 s]
[INFO] Spark Project Hive ................................. SUCCESS [01:39 min]
[INFO] Spark Project Docker Integration Tests ............. SUCCESS [ 23.671 s]
[INFO] Spark Project REPL ................................. SUCCESS [ 12.683 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  7.344 s]
[INFO] Spark Project YARN ................................. SUCCESS [ 21.928 s]
[INFO] Spark Project Assembly ............................. SUCCESS [02:05 min]
[INFO] Spark Project External Twitter ..................... SUCCESS [ 12.928 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 20.712 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 10.053 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [  3.930 s]
[INFO] Spark Project External MQTT ........................ SUCCESS [01:49 min]
[INFO] Spark Project External MQTT Assembly ............... SUCCESS [  8.549 s]
[INFO] Spark Project External ZeroMQ ...................... SUCCESS [ 11.317 s]
[INFO] Spark Project External Kafka ....................... SUCCESS [ 24.706 s]
[INFO] Spark Project Examples ............................. SUCCESS [03:14 min]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [  9.263 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:38 min
[INFO] Finished at: 2016-03-24T15:58:43+08:00
[INFO] Final Memory: 207M/6072M
After the build succeeds, you can verify it by running a bundled example:

./bin/spark-submit --master local --class org.apache.spark.examples.JavaWordCount examples/target/spark-examples_2.10-1.6.1.jar CHANGES.txt

note:

- spark.ui.port is the port on which the driver (spark-shell, or any app launched via spark-submit) serves the web UI showing jobs, stages, executors, environment, etc. (completed apps expose the same pages). Related logs are printed at startup:

15/11/11 15:12:45 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/11/11 15:12:45 INFO SparkUI: Started SparkUI at http://192.168.1.138:4040
15/11/11 15:12:45 INFO Executor: Starting executor ID driver on host localhost
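If 4040 is already taken, the driver falls back to 4041, 4042, and so on; you can also pin the port explicitly. A minimal sketch using the standard `spark.ui.port` config key (the port value 4050 is just an example):

```shell
# Pin the web UI to a specific port instead of the default 4040:
./bin/spark-shell --conf spark.ui.port=4050
```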

 

  Also, if you want to make a distribution tarball, below is a working example:

./make-distribution.sh -Phadoop-2.4 -Dhadoop.version=2.5.2 -DskipTests
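Note that without further flags the script only assembles a dist/ directory. A sketch using the `--tgz` and `--name` options from make-distribution.sh's usage text (the name suffix here is arbitrary):

```shell
# Produce an actual spark-1.6.1-bin-hadoop2.5.2.tgz archive:
./make-distribution.sh --name hadoop2.5.2 --tgz -Phadoop-2.4 -Dhadoop.version=2.5.2 -DskipTests
```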

 

If you want to build Spark or import the project into an IDE, some steps are listed here [1].

ref:

http://blog.csdn.net/yunlong34574/article/details/39213503

https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-IDESetup

http://blog.csdn.net/chenxingzhen001/article/details/25901237

 Apache Spark build

[1] IDE setup


Reprinted from leibnitz.iteye.com/blog/2243334