神甫 posted on 2017-12-16 20:38:53

Running the official wordcount example on Hadoop 2.7.3 in pseudo-distributed mode
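For context, a log like the one below is typically produced by a command sequence along these lines. This is a sketch, not the poster's exact commands: the input file names (file1.txt, file2.txt), the HDFS paths, and the output2 directory are inferred from the log, and the example jar path assumes a default Hadoop 2.7.3 layout under HADOOP_HOME.

```shell
# Assumed setup: HADOOP_HOME points at a Hadoop 2.7.3 install and HDFS is
# running on localhost:9000, matching the paths that appear in the log below.

# Put the two input files (file1.txt and file2.txt, per the log) into HDFS
hdfs dfs -mkdir -p /user/test/input
hdfs dfs -put file1.txt file2.txt /user/test/input

# Run the official wordcount example; output goes to /user/test/output2
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar \
  wordcount /user/test/input /user/test/output2

# Inspect the result after the job finishes
hdfs dfs -cat /user/test/output2/part-r-00000
```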

17/01/08 16:25:32 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
  17/01/08 16:25:32 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
  17/01/08 16:25:32 INFO input.FileInputFormat: Total input paths to process : 2
  17/01/08 16:25:33 INFO mapreduce.JobSubmitter: number of splits:2
  17/01/08 16:25:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local247232145_0001
  17/01/08 16:25:33 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
  17/01/08 16:25:33 INFO mapreduce.Job: Running job: job_local247232145_0001
  17/01/08 16:25:33 INFO mapred.LocalJobRunner: OutputCommitter set in config null
  17/01/08 16:25:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
  17/01/08 16:25:33 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
  17/01/08 16:25:33 INFO mapred.LocalJobRunner: Waiting for map tasks
  17/01/08 16:25:33 INFO mapred.LocalJobRunner: Starting task: attempt_local247232145_0001_m_000000_0
  17/01/08 16:25:33 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
  17/01/08 16:25:33 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
  17/01/08 16:25:33 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/test/input/file2.txt:0+70
  17/01/08 16:25:35 INFO mapreduce.Job: Job job_local247232145_0001 running in uber mode : false
  17/01/08 16:25:35 INFO mapreduce.Job: map 0% reduce 0%
  17/01/08 16:25:35 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
  17/01/08 16:25:35 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
  17/01/08 16:25:35 INFO mapred.MapTask: soft limit at 83886080
  17/01/08 16:25:35 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
  17/01/08 16:25:35 INFO mapred.MapTask: kvstart = 26214396; length = 6553600

  17/01/08 16:25:37 INFO mapred.MapTask: Map output collector
  17/01/08 16:25:37 INFO mapred.LocalJobRunner:
  17/01/08 16:25:37 INFO mapred.MapTask: Starting flush of map output
  17/01/08 16:25:37 INFO mapred.MapTask: Spilling map output
  17/01/08 16:25:37 INFO mapred.MapTask: bufstart = 0; bufend = 114; bufvoid = 104857600
  17/01/08 16:25:37 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214356(104857424); length = 41/6553600
  17/01/08 16:25:38 INFO mapred.MapTask: Finished spill 0
  17/01/08 16:25:38 INFO mapred.Task: Task:attempt_local247232145_0001_m_000000_0 is done. And is in the process of committing
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: map
  17/01/08 16:25:38 INFO mapred.Task: Task 'attempt_local247232145_0001_m_000000_0' done.
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local247232145_0001_m_000000_0
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: Starting task: attempt_local247232145_0001_m_000001_0
  17/01/08 16:25:38 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
  17/01/08 16:25:38 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
  17/01/08 16:25:38 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/test/input/file1.txt:0+37
  17/01/08 16:25:38 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
  17/01/08 16:25:38 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
  17/01/08 16:25:38 INFO mapred.MapTask: soft limit at 83886080
  17/01/08 16:25:38 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
  17/01/08 16:25:38 INFO mapred.MapTask: kvstart = 26214396; length = 6553600

  17/01/08 16:25:38 INFO mapred.MapTask: Map output collector
  17/01/08 16:25:38 INFO mapred.LocalJobRunner:
  17/01/08 16:25:38 INFO mapred.MapTask: Starting flush of map output
  17/01/08 16:25:38 INFO mapred.MapTask: Spilling map output
  17/01/08 16:25:38 INFO mapred.MapTask: bufstart = 0; bufend = 65; bufvoid = 104857600
  17/01/08 16:25:38 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
  17/01/08 16:25:38 INFO mapred.MapTask: Finished spill 0
  17/01/08 16:25:38 INFO mapred.Task: Task:attempt_local247232145_0001_m_000001_0 is done. And is in the process of committing
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: map
  17/01/08 16:25:38 INFO mapred.Task: Task 'attempt_local247232145_0001_m_000001_0' done.
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: Finishing task: attempt_local247232145_0001_m_000001_0
  17/01/08 16:25:38 INFO mapred.LocalJobRunner: map task executor complete.
  17/01/08 16:25:38 INFO mapreduce.Job: map 100% reduce 0%
  17/01/08 16:25:39 INFO mapred.LocalJobRunner: Waiting for reduce tasks
  17/01/08 16:25:39 INFO mapred.LocalJobRunner: Starting task: attempt_local247232145_0001_r_000000_0
  17/01/08 16:25:39 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
  17/01/08 16:25:39 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
  17/01/08 16:25:39 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6bef8a0a
  17/01/08 16:25:39 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
  17/01/08 16:25:39 INFO reduce.EventFetcher: attempt_local247232145_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
  17/01/08 16:25:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local247232145_0001_m_000000_0 decomp: 98 len: 102 to MEMORY
  17/01/08 16:25:39 INFO reduce.InMemoryMapOutput: Read 98 bytes from map-output for attempt_local247232145_0001_m_000000_0

  17/01/08 16:25:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of
  17/01/08 16:25:39 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local247232145_0001_m_000001_0 decomp: 68 len: 72 to MEMORY
  17/01/08 16:25:39 INFO reduce.InMemoryMapOutput: Read 68 bytes from map-output for attempt_local247232145_0001_m_000001_0

  17/01/08 16:25:39 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of
  17/01/08 16:25:39 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
  17/01/08 16:25:39 INFO mapred.LocalJobRunner: 2 / 2 copied.
  17/01/08 16:25:39 INFO reduce.MergeManagerImpl: finalMerge called with 2 in-memory map-outputs and 0 on-disk map-outputs
  17/01/08 16:25:40 WARN io.ReadaheadPool: Failed readahead on ifile
  EBADF: Bad file descriptor
  at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
  at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
  at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
  at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
  17/01/08 16:25:40 INFO mapred.Merger: Merging 2 sorted segments

  17/01/08 16:25:40 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total
  17/01/08 16:25:40 INFO reduce.MergeManagerImpl: Merged 2 segments, 166 bytes to disk to satisfy reduce memory limit
  17/01/08 16:25:40 INFO reduce.MergeManagerImpl: Merging 1 files, 168 bytes from disk
  17/01/08 16:25:40 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
  17/01/08 16:25:40 INFO mapred.Merger: Merging 1 sorted segments

  17/01/08 16:25:40 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total
  17/01/08 16:25:40 INFO mapred.LocalJobRunner: 2 / 2 copied.
  17/01/08 16:25:40 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
  17/01/08 16:25:40 INFO mapred.Task: Task:attempt_local247232145_0001_r_000000_0 is done. And is in the process of committing
  17/01/08 16:25:40 INFO mapred.LocalJobRunner: 2 / 2 copied.
  17/01/08 16:25:40 INFO mapred.Task: Task attempt_local247232145_0001_r_000000_0 is allowed to commit now
  17/01/08 16:25:40 INFO output.FileOutputCommitter: Saved output of task 'attempt_local247232145_0001_r_000000_0' to hdfs://localhost:9000/user/test/output2/_temporary/0/task_local247232145_0001_r_000000
  17/01/08 16:25:40 INFO mapred.LocalJobRunner: reduce > reduce
  17/01/08 16:25:40 INFO mapred.Task: Task 'attempt_local247232145_0001_r_000000_0' done.
  17/01/08 16:25:40 INFO mapred.LocalJobRunner: Finishing task: attempt_local247232145_0001_r_000000_0
  17/01/08 16:25:40 INFO mapred.LocalJobRunner: reduce task executor complete.
  17/01/08 16:25:40 INFO mapreduce.Job: map 100% reduce 100%
  17/01/08 16:25:40 INFO mapreduce.Job: Job job_local247232145_0001 completed successfully
  17/01/08 16:25:41 INFO mapreduce.Job: Counters: 35
  File System Counters
  FILE: Number of bytes read=889201
  FILE: Number of bytes written=1745401
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=284
  HDFS: Number of bytes written=82
  HDFS: Number of read operations=22
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=5
  Map-Reduce Framework
  Map input records=3
  Map output records=18
  Map output bytes=179
  Map output materialized bytes=174
  Input split bytes=224
  Combine input records=18
  Combine output records=14
  Reduce input groups=11
  Reduce shuffle bytes=174
  Reduce input records=14
  Reduce output records=11
  Spilled Records=28
  Shuffled Maps =2
  Failed Shuffles=0
  Merged Map outputs=2
  GC time elapsed (ms)=117
  Total committed heap usage (bytes)=457912320
  Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
  File Input Format Counters
  Bytes Read=107
  File Output Format Counters
  Bytes Written=82