The "HelloWorld" of MapReduce: Shall We?

I believe most programmers can't help cracking a smile when they see the word "HelloWorld"! Although I left campus long ago, every time I see it I can't help but think back to the days of hammering out code without a care in the world with my "partners in crime" back in college...
Seems I've drifted off topic a bit (awkward face). Having read the previous post on the principles, aren't your hands itching to try it out?
https://blog.csdn.net/Forever_ck/article/details/84589932
Below we'll look at the "HelloWorld" of MapReduce, namely WordCount.
First, the requirement: count how many times each word appears in a pile of files.
Analysis:
First we need to prepare some data, then follow the MapReduce programming conventions and write a Mapper, a Reducer, and a Driver.

1. Writing the Mapper class

package com.ck;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{ 
   
   Text k = new Text();
   IntWritable v = new IntWritable(1);

 @Override
 protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
 
   // Get one line of input
   String line = value.toString();
   // Split the line into words on spaces
   String[] words = line.split(" ");
   // Emit a (word, 1) pair for each word
   for (String word : words) {
   	k.set(word);
   	context.write(k, v);
   }
 }  
}    
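Stripped of the Hadoop types, the map step is nothing more than "split the line, emit (word, 1) for each token". As a rough plain-Java sketch (the class and method names here are just for illustration, not part of the Hadoop API):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MapSketch {
    // Plain-Java sketch of the map logic: split one line on spaces
    // and emit a (word, 1) pair for every token.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        for (String word : line.split(" ")) {
            out.add(new SimpleEntry<>(word, 1));
        }
        return out;
    }

    public static void main(String[] args) {
        // One input line produces one (word, 1) pair per word,
        // including duplicates; the framework groups them later.
        System.out.println(map("hello world hello")); // [hello=1, world=1, hello=1]
    }
}
```

Note that duplicates are emitted as-is: the mapper does no counting at all, it just tags every word with a 1 and leaves the grouping to the shuffle phase.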

2. Writing the Reducer class

 package com.ck;
 
 import java.io.IOException;
 import org.apache.hadoop.io.IntWritable; 
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapreduce.Reducer;    

 public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable>{
 
 @Override
 protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
 
    // Sum the counts for this word
    int sum = 0;
    for (IntWritable count : values) { 
      sum += count.get();
    }
    // Emit (word, total)
    context.write(key, new IntWritable(sum));
  } 
}
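The reduce step is equally simple once the Hadoop types are peeled away: for each key, sum the list of 1s that the shuffle phase grouped together. A minimal plain-Java sketch (names here are illustrative only):

```java
import java.util.List;

public class ReduceSketch {
    // Plain-Java sketch of the reduce logic: given all the counts
    // collected for one word, add them up.
    static int reduce(List<Integer> counts) {
        int sum = 0;
        for (int c : counts) {
            sum += c;
        }
        return sum;
    }

    public static void main(String[] args) {
        // After the shuffle, the reducer is handed e.g. ("hello", [1, 1]).
        System.out.println(reduce(List.of(1, 1))); // 2
    }
}
```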

3. Writing the Driver class

package com.ck;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat; 
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat; 

public class WordcountDriver {
  public static void main(String[] args) throws IOException, ClassNotFoundException,InterruptedException {
  
    //1. Load the configuration and create the job
    Configuration configuration = new Configuration(); 
    Job job = Job.getInstance(configuration);
    
    //2. Set the jar load path
    job.setJarByClass(WordcountDriver.class);
    
    //3. Set the Mapper and Reducer classes
    job.setMapperClass(WordcountMapper.class); 
    job.setReducerClass(WordcountReducer.class);
    
    //4. Set the map output key/value types
    job.setMapOutputKeyClass(Text.class); 
    job.setMapOutputValueClass(IntWritable.class);
    
    //5. Set the final (reduce) output key/value types
    job.setOutputKeyClass(Text.class); 
    job.setOutputValueClass(IntWritable.class);
   
    //6. Set the input and output paths
    FileInputFormat.setInputPaths(job, new Path(args[0])); 
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    //7. Submit the job and wait for it to finish
    boolean result = job.waitForCompletion(true);
    System.exit(result ? 0 : 1);
  }
 }
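Before packaging a jar and submitting to a cluster, it can help to convince yourself of what the whole pipeline computes. The three phases (map, shuffle-grouping, reduce) collapse into a few lines of plain Java; this is just a local simulation sketch, not how Hadoop itself runs:

```java
import java.util.Map;
import java.util.TreeMap;

public class WordCountLocal {
    // Local simulation of the full pipeline: map (split each line
    // into words), shuffle (group by word), reduce (sum the counts).
    static Map<String, Integer> wordCount(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            for (String word : line.split(" ")) {
                // merge() plays the role of shuffle + reduce in one step
                counts.merge(word, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] input = {"hello world", "hello hadoop"};
        System.out.println(wordCount(input)); // {hadoop=1, hello=2, world=1}
    }
}
```

A TreeMap is used so the output comes out sorted by key, which mirrors how MapReduce delivers keys to the reducer in sorted order.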

And that's it: a complete MapReduce WordCount. Go give it a try! If you don't want to test it on a cluster, running it locally works too, but make sure you have a Hadoop environment installed on Windows first!

Reposted from blog.csdn.net/Forever_ck/article/details/84590812