Posted by chenqb on 2016-12-5 08:56:06

Hadoop 2.5.2 Installation Log

  1. Prepare the virtual environment for the Hadoop cluster.
You can choose VirtualBox or VMware. VirtualBox had some issues on my laptop, so I chose VMware 10.0.

Then you need the following software tools. You'd better have them ready before you start installing the environment.

Maven 3.1.1 (Linux)
JDK 1.7.0_72
protoc 2.5.0 (needed to re-compile Hadoop 2.5.2)
https://code.google.com/p/protobuf/downloads/list

tar -xvf protobuf-2.5.0.tar.bz2 
cd protobuf-2.5.0 
./configure --prefix=/opt/xxxxx/protoc/ 
make && make install
 
yum install gcc 
yum install gcc-c++
yum install make

yum install cmake 
yum install openssl-devel 
yum install ncurses-devel
If you don't have these tools, you will more or less run into compile problems.

2. Install the JDK and Maven, and configure Maven.
I am in China, so the foreign Maven central repository is sometimes unavailable, or too slow for me, so I configure a mirror Maven server in China. The <mirror> element below goes into the <mirrors> section of Maven's settings.xml, and the <profile> element goes into the <profiles> section:
<mirror>
  <id>nexus-osc</id>
  <mirrorOf>*</mirrorOf>
  <name>Nexusosc</name>
  <url>http://maven.oschina.net/content/groups/public/</url>
</mirror>

<profile> 
       <id>jdk-1.7</id> 
       <activation> 
         <jdk>1.7</jdk> 
       </activation> 
       <repositories> 
         <repository> 
           <id>nexus</id> 
           <name>local private nexus</name> 
           <url>http://maven.oschina.net/content/groups/public/</url> 
           <releases> 
             <enabled>true</enabled> 
           </releases> 
           <snapshots> 
             <enabled>false</enabled> 
           </snapshots> 
         </repository> 
       </repositories> 
       <pluginRepositories> 
         <pluginRepository> 
           <id>nexus</id> 
          <name>local private nexus</name> 
           <url>http://maven.oschina.net/content/groups/public/</url> 
           <releases> 
             <enabled>true</enabled> 
           </releases> 
           <snapshots> 
             <enabled>false</enabled> 
           </snapshots> 
         </pluginRepository> 
       </pluginRepositories> 
     </profile>
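
Before building, it helps to confirm that the JDK, Maven, and protoc are all on the PATH. A minimal sketch, assuming the install locations used above (the paths are only examples; adjust them to your own layout):

export JAVA_HOME=/opt/jdk1.7.0_72            # example install path
export MAVEN_HOME=/opt/apache-maven-3.1.1    # example install path
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:/opt/xxxxx/protoc/bin:$PATH
java -version       # should report 1.7.0_72
mvn -version        # should report Maven 3.1.1 and the JDK above
protoc --version    # should report libprotoc 2.5.0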

When everything above is ready, download Hadoop from the official Apache site. Note: download the source version. I use version 2.5.2. Then run the following from the top of the extracted hadoop-2.5.2-src directory:

mvn clean package -Pdist,native -DskipTests -Dtar

The build process will take roughly 30-60 minutes depending on your PC.

If one of the Maven modules fails, you can rebuild from that module instead of restarting the whole build, to save time.
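
A minimal sketch of resuming from the failed module (the module name below is only an example; use whichever module actually failed):

# resume the reactor build starting at the failed module
mvn package -Pdist,native -DskipTests -Dtar -rf :hadoop-hdfs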

At last, you will see a success screen like the one below.

main:
     $ tar cf hadoop-2.5.2.tar hadoop-2.5.2
     $ gzip -f hadoop-2.5.2.tar
    
     Hadoop dist tar available at: /root/hadoopsrc/srcdir/hadoop-2.5.2-src/hadoop-dist/target/hadoop-2.5.2.tar.gz
    
Executed tasks

--- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
Building jar: /root/hadoopsrc/srcdir/hadoop-2.5.2-src/hadoop-dist/target/hadoop-dist-2.5.2-javadoc.jar
------------------------------------------------------------------------
Reactor Summary:

Apache Hadoop Main ................................ SUCCESS
Apache Hadoop Project POM ......................... SUCCESS
Apache Hadoop Annotations ......................... SUCCESS
Apache Hadoop Assemblies .......................... SUCCESS
Apache Hadoop Project Dist POM .................... SUCCESS
Apache Hadoop Maven Plugins ....................... SUCCESS
Apache Hadoop MiniKDC ............................. SUCCESS
Apache Hadoop Auth ................................ SUCCESS
Apache Hadoop Auth Examples ....................... SUCCESS
Apache Hadoop Common .............................. SUCCESS
Apache Hadoop NFS ................................. SUCCESS
Apache Hadoop Common Project ...................... SUCCESS
Apache Hadoop HDFS ................................ SUCCESS
Apache Hadoop HttpFS .............................. SUCCESS
Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS
Apache Hadoop HDFS-NFS ............................ SUCCESS
Apache Hadoop HDFS Project ........................ SUCCESS
hadoop-yarn ....................................... SUCCESS
hadoop-yarn-api ................................... SUCCESS
hadoop-yarn-common ................................ SUCCESS
hadoop-yarn-server ................................ SUCCESS
hadoop-yarn-server-common ......................... SUCCESS
hadoop-yarn-server-nodemanager .................... SUCCESS
hadoop-yarn-server-web-proxy ...................... SUCCESS
hadoop-yarn-server-applicationhistoryservice ...... SUCCESS
hadoop-yarn-server-resourcemanager ................ SUCCESS
hadoop-yarn-server-tests .......................... SUCCESS
hadoop-yarn-client ................................ SUCCESS
hadoop-yarn-applications .......................... SUCCESS
hadoop-yarn-applications-distributedshell ......... SUCCESS
hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS
hadoop-yarn-site .................................. SUCCESS
hadoop-yarn-project ............................... SUCCESS
hadoop-mapreduce-client ........................... SUCCESS
hadoop-mapreduce-client-core ...................... SUCCESS
hadoop-mapreduce-client-common .................... SUCCESS
hadoop-mapreduce-client-shuffle ................... SUCCESS
hadoop-mapreduce-client-app ....................... SUCCESS
hadoop-mapreduce-client-hs ........................ SUCCESS
hadoop-mapreduce-client-jobclient ................. SUCCESS
hadoop-mapreduce-client-hs-plugins ................ SUCCESS
Apache Hadoop MapReduce Examples .................. SUCCESS
hadoop-mapreduce .................................. SUCCESS
Apache Hadoop MapReduce Streaming ................. SUCCESS
Apache Hadoop Distributed Copy .................... SUCCESS
Apache Hadoop Archives ............................ SUCCESS
Apache Hadoop Rumen ............................... SUCCESS
Apache Hadoop Gridmix ............................. SUCCESS
Apache Hadoop Data Join ........................... SUCCESS
Apache Hadoop Extras .............................. SUCCESS
Apache Hadoop Pipes ............................... SUCCESS
Apache Hadoop OpenStack support ................... SUCCESS
Apache Hadoop Client .............................. SUCCESS
Apache Hadoop Mini-Cluster ........................ SUCCESS
Apache Hadoop Scheduler Load Simulator ............ SUCCESS
Apache Hadoop Tools Dist .......................... SUCCESS
Apache Hadoop Tools ............................... SUCCESS
Apache Hadoop Distribution ........................ SUCCESS
------------------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------------------
Total time: 30:50.549s
Finished at: Tue Dec 09 07:31:56 PST 2014
Final Memory: 81M/243M
------------------------------------------------------------------------
#

----------------------------------------------
Next, install Hadoop. Extract the tarball produced by the build into /opt/hadoop:
mkdir -p /opt/hadoop
tar -xzf hadoop-2.5.2.tar.gz -C /opt/hadoop

Create the hadoop user (the exact commands are listed near the end of this post).

You must grant ownership of the following folders to the hadoop user:
chown -R hadoop:hadoop /hadoop /opt/hadoop

Switch to the hadoop user.
Configure the following 7 files for the Hadoop cluster.
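For a typical Hadoop 2.5.x cluster these are usually the following files, all under etc/hadoop/ (this list is an assumption based on the standard Hadoop 2.x layout):
hadoop-env.sh
yarn-env.sh
slaves
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml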

Create these folders under the Hadoop install directory:
tmp
dfs/name
dfs/data
Be noted:
the three folders must be mapped to the corresponding properties in the config files (see the sketch below).
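
A minimal sketch of creating the folders and of the standard property names they are usually mapped to (the base path below simply follows the layout used in this post):

mkdir -p /opt/hadoop/hadoop-2.5.2/tmp
mkdir -p /opt/hadoop/hadoop-2.5.2/dfs/name
mkdir -p /opt/hadoop/hadoop-2.5.2/dfs/data
# these paths are then referenced in the config files, typically as:
#   hadoop.tmp.dir          -> .../tmp       (core-site.xml)
#   dfs.namenode.name.dir   -> .../dfs/name  (hdfs-site.xml)
#   dfs.datanode.data.dir   -> .../dfs/data  (hdfs-site.xml)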

Next, shut down the master server and clone it to slave1 and slave2.

Then start all three servers.

Configure the hostname and the network on each machine.
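
For example, /etc/hosts on all three machines might look like this (the master IP matches the web UI address used later in this post; the slave IPs are placeholders, adjust them to your own network):

192.168.23.129  master
192.168.23.130  slave1
192.168.23.131  slave2
# also set each machine's own hostname, e.g. on a CentOS/RHEL 6-style system:
#   hostname master        (and HOSTNAME=master in /etc/sysconfig/network)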

Configure passwordless SSH login from the master to the two slaves:
master ---> slave1
master ---> slave2
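
A typical key-setup sequence, run as the hadoop user on the master (just a sketch; the raw commands used for this setup are collected near the end of this post):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate a key pair with no passphrase
ssh-copy-id hadoop@slave1                   # append the public key on slave1
ssh-copy-id hadoop@slave2                   # append the public key on slave2
ssh hadoop@slave1 hostname                  # verify login works without a password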

Next, make sure iptables is shut down.

For my test environment, I execute the commands below as the root user:
chkconfig iptables off     # disables iptables permanently (it will not start at boot)
chkconfig iptables on      # re-enables it
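
Note that chkconfig only controls whether the service starts at boot; to stop the running firewall right away on a CentOS/RHEL 6-style system (which the use of chkconfig implies), you would also run:

service iptables stop      # stop the firewall for the current session
service iptables status    # verify it is no longer running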

Format HDFS on the master (run from the Hadoop bin directory):
./hdfs namenode -format

Then test the installation.
master:
$ ./start-dfs.sh
Starting namenodes on
master: starting namenode, logging to /opt/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /opt/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /opt/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes
master: starting secondarynamenode, logging to /opt/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-master.out
$ jps
2440 SecondaryNameNode
2539 Jps
2274 NameNode
$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/hadoop-2.5.2/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /opt/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /opt/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-slave2.out
$ jps
2440 SecondaryNameNode
2660 Jps
2274 NameNode
2584 ResourceManager
$ pwd
/opt/hadoop/hadoop-2.5.2/sbin
$ cd ..
$

slave1:
$ ls
bin  dfs  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share  tmp
$ rm -rf tmp/
$ rm -rf dfs/
$ ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
$ jps
2146 Jps
2079 DataNode
$ jps
2213 Jps
2079 DataNode
2182 NodeManager
$

slave2:
$ jps
2080 DataNode
2147 Jps
$ jps
2270 Jps
2080 DataNode
2183 NodeManager
$


Check the cluster nodes (YARN ResourceManager web UI):
http://192.168.23.129:8088/cluster/nodes

Check the status of HDFS and every node (NameNode web UI):
http://192.168.23.129:50070/dfshealth.html

./stop-dfs.sh
./stop-yarn.sh

Create the hadoop group and user (as root):
# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop

SSH key setup (in working order):
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ cat id_rsa.pub > authorized_keys
$ chmod go-rw ~/.ssh/authorized_keys
$ scp * hadoop@slave1:/opt/hadoop/xxxxx

done!

My QQ: 735028566
  http://www.iyunv.com/topic/1136809
  http://my.oschina.net/fhzsy/blog/363045
  http://www.iyunv.com/topic/1136947
  Run a jar file (the WordCount example):
  $ hadoop jar /opt/jack.jar org.apache.hadoop.t1.WordCount  /jackdemodir/wordcount/input /jackdemodir/wordcount/output1
15/08/01 22:44:35 INFO client.RMProxy: Connecting to ResourceManager at hadoopmaster/192.168.1.50:8032
15/08/01 22:44:37 INFO input.FileInputFormat: Total input paths to process : 1
15/08/01 22:44:37 INFO mapreduce.JobSubmitter: number of splits:1
15/08/01 22:44:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1438494222950_0001
15/08/01 22:44:38 INFO impl.YarnClientImpl: Submitted application application_1438494222950_0001
15/08/01 22:44:38 INFO mapreduce.Job: The url to track the job: http://hadoopmaster:8088/proxy/application_1438494222950_0001/
15/08/01 22:44:38 INFO mapreduce.Job: Running job: job_1438494222950_0001
15/08/01 22:44:47 INFO mapreduce.Job: Job job_1438494222950_0001 running in uber mode : false
15/08/01 22:44:47 INFO mapreduce.Job:  map 0% reduce 0%
15/08/01 22:44:55 INFO mapreduce.Job:  map 100% reduce 0%
15/08/01 22:45:01 INFO mapreduce.Job:  map 100% reduce 100%
15/08/01 22:45:02 INFO mapreduce.Job: Job job_1438494222950_0001 completed successfully
15/08/01 22:45:02 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=571
                FILE: Number of bytes written=212507
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=463
                HDFS: Number of bytes written=385
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=5427
                Total time spent by all reduces in occupied slots (ms)=4297
                Total time spent by all map tasks (ms)=5427
                Total time spent by all reduce tasks (ms)=4297
                Total vcore-seconds taken by all map tasks=5427
                Total vcore-seconds taken by all reduce tasks=4297
                Total megabyte-seconds taken by all map tasks=5557248
                Total megabyte-seconds taken by all reduce tasks=4400128
        Map-Reduce Framework
                Map input records=1
                Map output records=55
                Map output bytes=556
                Map output materialized bytes=571
                Input split bytes=128
                Combine input records=55
                Combine output records=45
                Reduce input groups=45
                Reduce shuffle bytes=571
                Reduce input records=45
                Reduce output records=45
                Spilled Records=90
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=131
                CPU time spent (ms)=1910
                Physical memory (bytes) snapshot=462319616
                Virtual memory (bytes) snapshot=1765044224
                Total committed heap usage (bytes)=275251200
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=335
        File Output Format Counters
                Bytes Written=385
  Check the result:
  $ hadoop fs -cat /jackdemodir/wordcount/output1/part-r-00000
350     1
ASF     1
Abdera  1
Apache? 1
Are     1
From    1
Open    2
Source  2
The     1
Zookeeper,      1
a       2
all-volunteer   1
and     3
are     3
by      1
chances 1
cover   1
develops,       1
experience      1
find    1
for     1
going   1
here.   1
if      1
in      1
incubates       1
industry        1
initiatives     1
it      1
leading 1
looking 1
more    1
of      1
powered 1
projects        1
range   1
rewarding       1
software,       1
stewards,       1
technologies.   1
than    1
that    1
to      2
wide    1
you     3