[iyunv@master conf]# hadoop
Warning: $HADOOP_HOME is deprecated.
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the DFS filesystem
....
done
3. Set the JAVA_HOME path for Hadoop:
[iyunv@slave01 conf]# vi hadoop-env.sh
# The java implementation to use. Required.
export JAVA_HOME=/usr/jdk1.6.0_45
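Rather than opening vi by hand, the same edit can be scripted. A minimal sketch: the demo file name `hadoop-env.sh.demo` and its contents are stand-ins created here for illustration, so point the sed command at the real conf/hadoop-env.sh on your nodes instead.

```shell
# Demo copy of the stock commented-out line Hadoop 1.x ships in hadoop-env.sh;
# replace FILE with your real conf/hadoop-env.sh.
FILE=hadoop-env.sh.demo
printf '# export JAVA_HOME=/usr/lib/j2sdk1.5-sun\n' > "$FILE"

# Uncomment the line and point it at this cluster's JDK location.
sed -i 's|^# *export JAVA_HOME=.*|export JAVA_HOME=/usr/jdk1.6.0_45|' "$FILE"
cat "$FILE"
```

Remember the edit has to be made on every node (master and both slaves), since each daemon reads its local copy of hadoop-env.sh.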
[iyunv@master usr]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[iyunv@master usr]#
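Note that `service iptables stop` only clears the rules until the next reboot. On CentOS 6 (an assumption, but it matches the SysV-init tooling in the transcript above), the firewall can also be disabled permanently; this is a one-time root config step per node:

```shell
# Stop iptables now and keep it from starting again on reboot (run as root).
service iptables stop
chkconfig iptables off

# Verify: every runlevel should now show "off".
chkconfig --list iptables
```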
Side note (the firewall was left running on the slaves):
[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.
Number of Maps = 10
Samples per Map = 100
13/09/08 02:17:05 INFO hdfs.DFSClient: Exception in createBlockOutputStream 192.168.70.102:50010 java.net.NoRouteToHostException: No route to host
13/09/08 02:17:05 INFO hdfs.DFSClient: Abandoning blk_9160013073143341141_4460
13/09/08 02:17:05 INFO hdfs.DFSClient: Excluding datanode 192.168.70.102:50010
13/09/08 02:17:05 INFO hdfs.DFSClient: Exception in createBlockOutputStream 192.168.70.103:50010 java.net.NoRouteToHostException: No route to host
13/09/08 02:17:05 INFO hdfs.DFSClient: Abandoning blk_-1734085534405596274_4461
13/09/08 02:17:05 INFO hdfs.DFSClient: Excluding datanode 192.168.70.103:50010
13/09/08 02:17:05 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/PiEstimator_TMP_3_141592654/in/part0 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
Disabling the firewall on the slaves fixed it. Also, prefer IP addresses over hostnames in the configuration where possible.
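Since the missed step was on the slaves, it is convenient to disable the firewall on all of them from the master in one go. A sketch, assuming root ssh access to the slave hostnames used elsewhere in this setup:

```shell
# Disable iptables on each slave from the master (assumes root ssh works).
for h in slave01.hadoop slave02.hadoop; do
  ssh root@"$h" 'service iptables stop && chkconfig iptables off'
done
```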
Starting the cluster
[iyunv@master usr]# su hadoop
[hadoop@master usr]$ start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-master.hadoop.out
slave01.hadoop: starting datanode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave01.hadoop.out
slave02.hadoop: starting datanode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave02.hadoop.out
The authenticity of host 'master.hadoop (192.168.70.101)' can't be established.
RSA key fingerprint is 6c:e0:d7:22:92:80:85:fb:a6:d6:a4:8f:75:b0:96:7e.
Are you sure you want to continue connecting (yes/no)? yes
master.hadoop: Warning: Permanently added 'master.hadoop,192.168.70.101' (RSA) to the list of known hosts.
master.hadoop: starting secondarynamenode, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-master.hadoop.out
starting jobtracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-master.hadoop.out
slave02.hadoop: starting tasktracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave02.hadoop.out
slave01.hadoop: starting tasktracker, logging to /usr/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave01.hadoop.out
[hadoop@master usr]$
The logs show the startup order: namenode (master) → datanode (slave01, slave02) → secondarynamenode (master) → jobtracker (master) → finally tasktracker (slave01, slave02). You can confirm each daemon is running with `jps` on every node.
[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100
The first argument (10) is the number of map tasks to run; the second (100) is the number of sample points each map takes.
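What each map task computes is a quarter-circle Monte-Carlo estimate: sample points in the unit square and count how many land inside the circle. A toy single-process sketch (an assumption for illustration: it uses plain pseudo-random sampling, whereas Hadoop's PiEstimator draws its sample points from a Halton sequence):

```shell
# Single-process sketch of what the 10 map tasks compute in parallel:
# of n points in the unit square, the fraction inside the quarter circle
# approaches pi/4, so 4 * inside / n approaches pi.
awk 'BEGIN {
  srand(1); n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x * x + y * y <= 1) inside++
  }
  printf "Estimated value of Pi is %.6f\n", 4 * inside / n
}'
```

As in the job output below, more samples tighten the estimate; with only 100 samples per map the result (3.148) is still fairly rough.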
Normal output:
[hadoop@master hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar pi 10 100
Warning: $HADOOP_HOME is deprecated.
Number of Maps = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
13/09/08 02:21:50 INFO mapred.FileInputFormat: Total input paths to process : 10
13/09/08 02:21:52 INFO mapred.JobClient: Running job: job_201309080221_0001
13/09/08 02:21:53 INFO mapred.JobClient: map 0% reduce 0%
13/09/08 02:24:06 INFO mapred.JobClient: map 10% reduce 0%
13/09/08 02:24:07 INFO mapred.JobClient: map 20% reduce 0%
13/09/08 02:24:21 INFO mapred.JobClient: map 30% reduce 0%
13/09/08 02:24:28 INFO mapred.JobClient: map 40% reduce 0%
13/09/08 02:24:31 INFO mapred.JobClient: map 50% reduce 0%
13/09/08 02:24:32 INFO mapred.JobClient: map 60% reduce 0%
13/09/08 02:24:38 INFO mapred.JobClient: map 70% reduce 0%
13/09/08 02:24:41 INFO mapred.JobClient: map 80% reduce 13%
13/09/08 02:24:44 INFO mapred.JobClient: map 80% reduce 23%
13/09/08 02:24:45 INFO mapred.JobClient: map 100% reduce 23%
13/09/08 02:24:47 INFO mapred.JobClient: map 100% reduce 26%
13/09/08 02:24:53 INFO mapred.JobClient: map 100% reduce 100%
13/09/08 02:24:54 INFO mapred.JobClient: Job complete: job_201309080221_0001
13/09/08 02:24:54 INFO mapred.JobClient: Counters: 30
13/09/08 02:24:54 INFO mapred.JobClient: Job Counters
13/09/08 02:24:54 INFO mapred.JobClient: Launched reduce tasks=1
13/09/08 02:24:54 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=638017
13/09/08 02:24:54 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/09/08 02:24:54 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/09/08 02:24:54 INFO mapred.JobClient: Launched map tasks=10
13/09/08 02:24:54 INFO mapred.JobClient: Data-local map tasks=10
13/09/08 02:24:54 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=44458
13/09/08 02:24:54 INFO mapred.JobClient: File Input Format Counters
13/09/08 02:24:54 INFO mapred.JobClient: Bytes Read=1180
13/09/08 02:24:54 INFO mapred.JobClient: File Output Format Counters
13/09/08 02:24:54 INFO mapred.JobClient: Bytes Written=97
13/09/08 02:24:54 INFO mapred.JobClient: FileSystemCounters
13/09/08 02:24:54 INFO mapred.JobClient: FILE_BYTES_READ=226
13/09/08 02:24:54 INFO mapred.JobClient: HDFS_BYTES_READ=2460
13/09/08 02:24:54 INFO mapred.JobClient: FILE_BYTES_WRITTEN=623419
13/09/08 02:24:54 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
13/09/08 02:24:54 INFO mapred.JobClient: Map-Reduce Framework
13/09/08 02:24:54 INFO mapred.JobClient: Map output materialized bytes=280
13/09/08 02:24:54 INFO mapred.JobClient: Map input records=10
13/09/08 02:24:54 INFO mapred.JobClient: Reduce shuffle bytes=280
13/09/08 02:24:54 INFO mapred.JobClient: Spilled Records=40
13/09/08 02:24:54 INFO mapred.JobClient: Map output bytes=180
13/09/08 02:24:54 INFO mapred.JobClient: Total committed heap usage (bytes)=1414819840
13/09/08 02:24:54 INFO mapred.JobClient: CPU time spent (ms)=377130
13/09/08 02:24:54 INFO mapred.JobClient: Map input bytes=240
13/09/08 02:24:54 INFO mapred.JobClient: SPLIT_RAW_BYTES=1280
13/09/08 02:24:54 INFO mapred.JobClient: Combine input records=0
13/09/08 02:24:54 INFO mapred.JobClient: Reduce input records=20
13/09/08 02:24:54 INFO mapred.JobClient: Reduce input groups=20
13/09/08 02:24:54 INFO mapred.JobClient: Combine output records=0
13/09/08 02:24:54 INFO mapred.JobClient: Physical memory (bytes) snapshot=1473769472
13/09/08 02:24:54 INFO mapred.JobClient: Reduce output records=0
13/09/08 02:24:54 INFO mapred.JobClient: Virtual memory (bytes) snapshot=4130349056
13/09/08 02:24:54 INFO mapred.JobClient: Map output records=20
Job Finished in 184.973 seconds
Estimated value of Pi is 3.14800000000000000000
[hadoop@master hadoop-1.2.1]$
For what happens when the slave firewalls are not disabled, see the side note under step 10.