网中网 posted on 2016-12-11 10:51:09

Hadoop in Action: Preprocessing with Chained MapReduce Jobs

Environment: VMware 8.0 and Ubuntu 11.04
Step 1: Create a project named HadoopTest. The directory structure is shown in the figure below:



Step 2: Create a start.sh script under /home/tanglg1987. Each time the virtual machine starts, it should delete everything under /tmp and reformat the NameNode. The script is as follows:




sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
hadoop namenode -format
hadoop datanode -format
start-all.sh
hadoop fs -mkdir input
hadoop dfsadmin -safemode leave


Step 3: Make start.sh executable and start the Hadoop pseudo-distributed cluster:



chmod 777 /home/tanglg1987/start.sh
./start.sh


The execution process is as follows:

Step 4: Upload a local file to HDFS

Create ChainMapper.txt under /home/tanglg1987 with the following tab-separated content (KeyValueTextInputFormat splits each line on the first tab, so the id becomes the key and the rest of the line becomes the value):



100	tom	90
101	mary	85
102	kate	60

Upload the local file to HDFS:



hadoop fs -put /home/tanglg1987/ChainMapper.txt input
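
Optionally, you can double-check that the file landed in HDFS before running the job (a quick sanity check, assuming the default HDFS home directory /user/tanglg1987):

hadoop fs -ls input
hadoop fs -cat input/ChainMapper.txt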

Step 5: Create ChainMapperDemo.java with the following code:



package com.baison.action;

import java.io.IOException;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
import org.apache.hadoop.mapred.lib.*;

public class ChainMapperDemo {

    // First mapper in the chain: drops the record whose key is "100".
    public static class Map00 extends MapReduceBase implements
            Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                Reporter reporter) throws IOException {
            Text ft = new Text("100");
            if (!key.equals(ft)) {
                output.collect(key, value);
            }
        }
    }

    // Second mapper in the chain: drops the record whose key is "101".
    public static class Map01 extends MapReduceBase implements
            Mapper<Text, Text, Text, Text> {
        public void map(Text key, Text value, OutputCollector<Text, Text> output,
                Reporter reporter) throws IOException {
            Text ft = new Text("101");
            if (!key.equals(ft)) {
                output.collect(key, value);
            }
        }
    }

    // Identity reducer: writes out every record that survives the mapper chain.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values,
                OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            while (values.hasNext()) {
                output.collect(key, values.next());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt",
                "hdfs://localhost:9100/user/tanglg1987/output" };
        JobConf conf = new JobConf(ChainMapperDemo.class);
        conf.setJobName("ChainMapperDemo");
        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // Chain the two mappers: the output of Map00 becomes the input of Map01.
        JobConf mapAConf = new JobConf(false);
        ChainMapper.addMapper(conf, Map00.class, Text.class, Text.class, Text.class,
                Text.class, true, mapAConf);
        JobConf mapBConf = new JobConf(false);
        ChainMapper.addMapper(conf, Map01.class, Text.class, Text.class, Text.class,
                Text.class, true, mapBConf);

        conf.setReducerClass(Reduce.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(arg[0]));
        FileOutputFormat.setOutputPath(conf, new Path(arg[1]));
        JobClient.runJob(conf);
    }
}
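
The tutorial runs the job through the Eclipse plugin's Run On Hadoop action, which is why the log below warns that no job jar is set. If you prefer the command line, a minimal sketch looks like the following; the classpath entry hadoop-0.20.2-core.jar and the jar name chainmapperdemo.jar are assumptions, adjust them to your installation:

mkdir classes
javac -classpath /home/tanglg1987/hadoop-0.20.2/hadoop-0.20.2-core.jar -d classes ChainMapperDemo.java
jar -cvf chainmapperdemo.jar -C classes .
hadoop jar chainmapperdemo.jar com.baison.action.ChainMapperDemo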

Step 6: Run On Hadoop. The run log is as follows:
12/10/17 21:05:53 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/17 21:05:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/17 21:05:53 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.JobClient: Running job: job_local_0001
12/10/17 21:05:54 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/17 21:05:54 INFO mapred.MapTask: numReduceTasks: 1
12/10/17 21:05:54 INFO mapred.MapTask: io.sort.mb = 100
12/10/17 21:05:54 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/17 21:05:54 INFO mapred.MapTask: record buffer = 262144/327680
12/10/17 21:05:54 INFO mapred.MapTask: Starting flush of map output
12/10/17 21:05:54 INFO mapred.MapTask: Finished spill 0
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner: hdfs://localhost:9100/user/tanglg1987/input/ChainMapper.txt:0+35
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.Merger: Merging 1 sorted segments
12/10/17 21:05:54 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 16 bytes
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/17 21:05:54 INFO mapred.LocalJobRunner:
12/10/17 21:05:54 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/17 21:05:54 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/17 21:05:54 INFO mapred.LocalJobRunner: reduce > reduce
12/10/17 21:05:54 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/17 21:05:55 INFO mapred.JobClient: map 100% reduce 100%
12/10/17 21:05:55 INFO mapred.JobClient: Job complete: job_local_0001
12/10/17 21:05:55 INFO mapred.JobClient: Counters: 15
12/10/17 21:05:55 INFO mapred.JobClient: FileSystemCounters
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_READ=36152
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_READ=70
12/10/17 21:05:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=73202
12/10/17 21:05:55 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=12
12/10/17 21:05:55 INFO mapred.JobClient: Map-Reduce Framework
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input groups=1
12/10/17 21:05:55 INFO mapred.JobClient: Combine output records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map input records=3
12/10/17 21:05:55 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/17 21:05:55 INFO mapred.JobClient: Reduce output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Spilled Records=2
12/10/17 21:05:55 INFO mapred.JobClient: Map output bytes=12
12/10/17 21:05:55 INFO mapred.JobClient: Map input bytes=35
12/10/17 21:05:55 INFO mapred.JobClient: Combine input records=0
12/10/17 21:05:55 INFO mapred.JobClient: Map output records=1
12/10/17 21:05:55 INFO mapred.JobClient: Reduce input records=1
Step 7: View the result set. Map00 drops the record whose key is 100 and Map01 then drops the record whose key is 101, so only the record with key 102 reaches the identity reducer. The run result is as follows:

102	kate	60
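
You can read the output directly from HDFS (assuming the default single-reducer output file name part-00000):

hadoop fs -cat output/part-00000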


