ahxcjxzxh posted on 2016-12-6 10:01:01

hadoop mapreduce: wrong key class

  Today, while running a Hadoop job, I hit the following exception:
  java.io.IOException: wrong key class: class org.apache.hadoop.io.Text is not class org.apache.hadoop.io.IntWritable
  My Mapper and Reducer are shown below:

public static class MyMapper extends Mapper<Object, Text, IntWritable, Text> {

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // map logic elided in the original post; temp_key is an IntWritable and
        // temp_value a Text, matching the declared map output types
        context.write(temp_key, temp_value);
    }
}

public static class MyReducer extends Reducer<IntWritable, Text, Text, IntWritable> {

    private Text result = new Text();

    public void reduce(IntWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // reduce logic elided in the original post; note the output types
        // (Text, IntWritable) are the reverse of the input types
        context.write(result, key);
    }
}
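This reducer's output types (Text, IntWritable) are the reverse of the map output types (IntWritable, Text). Hadoop's intermediate spill files record the map-output key class and reject any record whose key is of a different class, which is exactly where the exception above comes from. A self-contained toy analogue of that check (the classes below are hypothetical stand-ins, not Hadoop's actual API):

```java
import java.io.IOException;

// Toy stand-in for Hadoop's intermediate-file writer: it remembers the
// expected key class and rejects records keyed by any other class.
class IntermediateWriter {
    private final Class<?> expectedKeyClass;

    IntermediateWriter(Class<?> expectedKeyClass) {
        this.expectedKeyClass = expectedKeyClass;
    }

    void append(Object key, Object value) throws IOException {
        if (key.getClass() != expectedKeyClass) {
            throw new IOException("wrong key class: " + key.getClass().getName()
                    + " is not " + expectedKeyClass.getName());
        }
        // ... a real writer would serialize the record here ...
    }
}

public class WrongKeyClassDemo {
    public static void main(String[] args) throws IOException {
        // Integer stands in for IntWritable, String for Text.
        IntermediateWriter writer = new IntermediateWriter(Integer.class);
        try {
            writer.append(1, "ok");    // key class matches: accepted
            writer.append("oops", 1);  // key class is String: rejected
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```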
  My main method is as follows:
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
        System.err.println("Usage: KeansDmeo02 <in> <out>");
        System.exit(2);
    }
    Job job = new Job(conf, "myjob");
    job.setJarByClass(TestDemo.class);
    job.setMapperClass(MyMapper.class);
    job.setMapOutputKeyClass(IntWritable.class);
    job.setMapOutputValueClass(Text.class);
    job.setCombinerClass(MyReducer.class); // comment out this line to avoid the exception
    job.setReducerClass(MyReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
  With this configuration, running the job raises the wrong key class exception; the fix is to comment out the setCombinerClass line in main. The reason: a combiner runs on map output and its own output is fed back into the shuffle as map output, so its output key/value classes must equal the map output classes (IntWritable, Text). MyReducer emits (Text, IntWritable), so when it is used as a combiner the framework sees a Text key where it expects an IntWritable and throws. For reference, see http://blog.pfa-labs.com/2010/01/first-stab-at-hadoop-and-map-reduce.html , which describes the analogous wrong value class exception.
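If a combiner is actually wanted here, it has to be a separate Reducer whose output classes match the map output classes. A sketch of such a class (the name MyCombiner and the identity pass-through body are hypothetical, not from the original post; this is a job-code fragment that depends on the Hadoop libraries, not a standalone program):

```java
// A combiner must emit the same (key, value) classes the mapper emitted,
// because its output re-enters the shuffle as map output.
public static class MyCombiner
        extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    public void reduce(IntWritable key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text value : values) {
            context.write(key, value); // types match the map output
        }
    }
}
```

Then in main, register it instead of MyReducer: job.setCombinerClass(MyCombiner.class);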

  