Compiling the Hadoop Source and Importing It into Eclipse

      After the Hadoop 2 release came out, I got the sudden urge to compile the Hadoop 2 code and import it into Eclipse, so it would be ready for source study whenever I found the time. Below is a summary of the build process and how to import the result into Eclipse. The prerequisites are:

  1. Hadoop version: the early-access release-2.0.3-alpha; SVN address: http://svn.apache.org/repos/asf/hadoop/common/tags
  2. JDK 1.6
  3. Ant and Ivy: download Ant, configure its environment variables, and put the downloaded Ivy jar into Ant's lib directory, i.e. %ANT_HOME%/lib
  4. Maven 3.0
  5. Install Protocol Buffers from http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz: unpack the download, then copy protoc.exe into Cygwin's bin directory (%cygwin_home%/bin)
  6. Install Cygwin on Windows: download it from http://www.cygwin.com/ and add it to the PATH (a quick check of the whole toolchain is sketched below)
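
Once everything is installed, a quick sanity check from a Cygwin shell confirms that each tool is on the PATH and at the version this setup assumes (the expectations in the comments are mine; yours may differ):

$ java -version      # expect 1.6.x
$ ant -version       # the Ivy jar should already be in %ANT_HOME%/lib
$ mvn -version       # expect 3.0.x
$ protoc --version   # expect libprotoc 2.4.1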

The following changes need to be made before building:

1. The source tree is so large that mvn runs out of memory during compilation, so the build heap has to be raised: open the mvn.bat file in Maven's bin directory and add set MAVEN_OPTS=-Xms128m -Xmx1024m.
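For example, near the top of %MAVEN_HOME%\bin\mvn.bat (the exact placement is a judgment call; anywhere before the script launches Maven works):

@REM raise the build heap so the large Hadoop source tree compiles without OutOfMemoryError
set MAVEN_OPTS=-Xms128m -Xmx1024m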

2. During the build, Maven's exec mojo cannot run sh script files directly, e.g. <executable>saveVersion.sh</executable>. The workaround is to create a file named saveVersion.bat in the release-2.0.3-alpha\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-common directory (the same directory as its pom.xml) whose content simply invokes the saveVersion.sh script, as shown below.
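The entire saveVersion.bat is a one-liner that passes the two build arguments through to the shell script via Cygwin's sh:

@REM saveVersion.bat - delegate to the shell script through Cygwin
sh scripts\saveVersion.sh %1 %2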

3. Modify the release-2.0.3-alpha\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-common\scripts\saveVersion.sh file: change the user=` command-substitution line to a hard-coded user=hdfs, because on Windows the user name cannot be resolved, which makes compilation of the generated package-info class fail. The edit is sketched below.
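A sketch of the edit, assuming the stock script derives the user from whoami (any fixed name works in place of hdfs):

# before: fails on Windows because the user name cannot be resolved
user=`whoami`
# after: hard-code a name so the generated package-info class compiles
user=hdfs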

Finally, build and package following the Hadoop wiki (http://wiki.apache.org/hadoop/EclipseEnvironment):

$ mvn install -DskipTests
$ mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true

Note: This may take a while the first time, as all libraries are fetched from the internet, and the whole build is performed.

 

In Eclipse

After the above, do the following to finally have projects in Eclipse ready and waiting for you to go on that scratch-itching development spree:

For Common

  • File -> Import...
  • Choose "Existing Projects into Workspace"
  • Select the hadoop-common-project directory as the root directory
  • Select the hadoop-annotations, hadoop-auth, hadoop-auth-examples and hadoop-common projects
  • Click "Finish"
  • File -> Import...
  • Choose "Existing Projects into Workspace"
  • Select the hadoop-assemblies directory as the root directory
  • Select the hadoop-assemblies project
  • Click "Finish"
  • To get the projects to build cleanly:
      • Add target/generated-test-sources/java as a source directory for hadoop-common
      • You may have to add then remove the JRE System Library to avoid errors due to access restrictions

For HDFS

  • File -> Import...
  • Choose "Existing Projects into Workspace"
  • Select the hadoop-hdfs-project directory as the root directory
  • Select the hadoop-hdfs project
  • Click "Finish"

For MapReduce

  • File -> Import...
  • Choose "Existing Projects into Workspace"
  • Select the hadoop-mapreduce-project directory as the root directory
  • Select the hadoop-mapreduce-project project
  • Click "Finish"

 

Reposted from lbxhappy.iteye.com/blog/1853729