import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;

public class WordCount {
    // Mapper: emits (word, 1) for every whitespace-separated token in the input line.
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sums the counts collected for each word.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        // Standard job wiring from the classic WordCount example (the original
        // listing was cut off here); using Reduce as the combiner is consistent
        // with the nonzero Combine counters in the job log below.
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
P.S.: Do not create the output directory in advance; otherwise WordCount will fail at startup with an error saying the output path already exists. That is why the command line below uses output1 as the output directory.
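Alternatively, the driver itself can delete a stale output path before submitting the job. A minimal sketch using the FileSystem API (this cleanup step is not part of the original program; it would go in main() before JobClient.runJob(conf)):

    // Requires: import org.apache.hadoop.fs.FileSystem;
    // args[1] is the output directory passed on the command line.
    FileSystem fs = FileSystem.get(conf);
    Path out = new Path(args[1]);
    if (fs.exists(out)) {
        fs.delete(out, true); // true = delete recursively
    }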
3.6 Running the job
Change to the /usr/george/dev/wkspace/hadoop/wordcount/classes directory and run:
[root@localhost classes]# hadoop jar WordCount.jar WordCount wordcount/input wordcount/output1
11/12/02 05:53:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:53:59 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/12/02 05:53:59 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/12/02 05:53:59 INFO mapred.FileInputFormat: Total input paths to process : 2
11/12/02 05:54:00 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: number of splits:2
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/12/02 05:54:00 INFO mapreduce.Job: Running job: job_201112020429_0003
11/12/02 05:54:01 INFO mapreduce.Job: map 0% reduce 0%
11/12/02 05:54:20 INFO mapreduce.Job: map 50% reduce 0%
11/12/02 05:54:23 INFO mapreduce.Job: map 100% reduce 0%
11/12/02 05:54:29 INFO mapreduce.Job: map 100% reduce 100%
11/12/02 05:54:32 INFO mapreduce.Job: Job complete: job_201112020429_0003
11/12/02 05:54:32 INFO mapreduce.Job: Counters: 33
FileInputFormatCounters
  BYTES_READ=54
FileSystemCounters
  FILE_BYTES_READ=132
  FILE_BYTES_WRITTEN=334
  HDFS_BYTES_READ=274
  HDFS_BYTES_WRITTEN=65
Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
Job Counters
  Data-local map tasks=2
  Total time spent by all maps waiting after reserving slots (ms)=0
  Total time spent by all reduces waiting after reserving slots (ms)=0
  SLOTS_MILLIS_MAPS=24824
  SLOTS_MILLIS_REDUCES=6870
  Launched map tasks=2
  Launched reduce tasks=1
Map-Reduce Framework
  Combine input records=12
  Combine output records=12
  Failed Shuffles=0
  GC time elapsed (ms)=291
  Map input records=4
  Map output bytes=102
  Map output records=12
  Merged Map outputs=2
  Reduce input groups=10
  Reduce input records=12
  Reduce output records=10
  Reduce shuffle bytes=138
  Shuffled Maps =2
  Spilled Records=24
  SPLIT_RAW_BYTES=220
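The counters block above can also be read programmatically once runJob returns. A hedged sketch against the old mapred API used in this example (iterating over the groups avoids hard-coding internal counter names, which vary across Hadoop versions):

    // JobClient.runJob blocks until the job finishes and returns a RunningJob.
    RunningJob job = JobClient.runJob(conf);
    Counters counters = job.getCounters();
    for (Counters.Group group : counters) {
        System.out.println(group.getDisplayName());
        for (Counters.Counter counter : group) {
            System.out.println("  " + counter.getDisplayName() + "=" + counter.getCounter());
        }
    }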
3.7 Inspecting the output directory
[root@localhost classes]# hadoop fs -ls wordcount/output1
11/12/02 05:54:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:55:00 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
-rw-r--r-- 1 root supergroup 0 2011-12-02 05:54 /user/root/wordcount/output1/_SUCCESS
-rw-r--r-- 1 root supergroup 65 2011-12-02 05:54 /user/root/wordcount/output1/part-00000
[root@localhost classes]# hadoop fs -cat /user/root/wordcount/output1/part-00000
11/12/02 05:56:05 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:56:05 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
You 1
are 2
china 1
hello,i 1
i 1
love 2
ok 1
ok? 1
word 1
you 1
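Note that You and you, and ok and ok?, are counted separately, and hello,i survives as a single token: StringTokenizer splits only on whitespace, and the mapper does no normalization. A minimal variant of the tokenizer loop in Map.map() that lower-cases tokens and strips punctuation (an illustrative change, not part of the original program):

    StringTokenizer tokenizer = new StringTokenizer(value.toString());
    while (tokenizer.hasMoreTokens()) {
        // Lower-case the token, then keep only letters and digits.
        String token = tokenizer.nextToken().toLowerCase().replaceAll("[^a-z0-9]", "");
        if (!token.isEmpty()) {
            word.set(token);
            output.collect(word, one);
        }
    }

With this change, You/you would collapse into one count, ok? would merge with ok, and hello,i would become helloi (a smarter splitter could separate it into two words).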