Posted by xywuyiba6 on 2018-10-30 09:37:24

Hadoop 1.0 Installation (Quick Version)

  Prerequisites:
  Make sure iptables is stopped and SELinux is disabled on every machine.
  1. Prepare the hardware
  One namenode and three datanodes:
  namenode 192.168.137.100
  datanode1 192.168.137.101
  datanode2 192.168.137.102
  datanode3 192.168.137.103
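  Later steps refer to these machines by hostname (namenode, datanode1, ...), which assumes name resolution is set up; one way, not shown in the original post, is matching /etc/hosts entries on all four machines:
  192.168.137.100 namenode
  192.168.137.101 datanode1
  192.168.137.102 datanode2
  192.168.137.103 datanode3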
  2. Create a hadoop user on all 4 machines (any other username works too)
  useradd hadoop
  3. Install JDK 1.6 on all 4 machines
  After installation, JAVA_HOME is /jdk
  Configure the environment variable:
  vim /etc/bashrc
  export JAVA_HOME=/jdk
  scp -r /jdk* datanode1:/
  scp -r /jdk* datanode2:/
  scp -r /jdk* datanode3:/
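  The snippet above only exports JAVA_HOME; to run java directly from the shell, its bin directory also needs to be on PATH. A minimal sketch, assuming the JDK really is unpacked at /jdk (append to /etc/bashrc alongside JAVA_HOME):
  export PATH=$JAVA_HOME/bin:$PATH
  source /etc/bashrc
  java -version   # should report the 1.6 JDK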
  4. Set up mutual passwordless SSH trust among the 4 machines
  Be sure to set
  /home/hadoop/.ssh
  and everything under it to permission mode 700 on every node.
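  The post gives no commands for the trust setup itself; a minimal sketch, run as the hadoop user and assuming RSA keys:
  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # key pair with no passphrase
  ssh-copy-id hadoop@namenode                # repeat for datanode1, datanode2, datanode3
  chmod -R 700 ~/.ssh                        # the permissions the post insists on
  ssh datanode1 hostname                     # should log in with no password prompt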
  5. Install Hadoop
  tar zxvf hadoop-1.0.4.tar.gz
  Install to /hadoop
  Set the permission mode of /hadoop to 755
  vim /hadoop/conf/hadoop-env.sh
  export JAVA_HOME=/jdk
  vim /hadoop/conf/core-site.xml
  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hadoop/tmp</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode:9000</value>
    </property>
  </configuration>
  vim /hadoop/conf/mapred-site.xml
  <configuration>
    <property>
      <name>mapred.job.tracker</name>
      <value>namenode:9001</value>
    </property>
  </configuration>
  vim /hadoop/conf/hdfs-site.xml
  <configuration>
    <property>
      <name>dfs.name.dir</name>
      <value>/home/hadoop/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/home/hadoop/data</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>
  </configuration>
  vim /hadoop/conf/masters
  192.168.137.100
  vim /hadoop/conf/slaves
  192.168.137.101
  192.168.137.102
  192.168.137.103
  Copy the configured Hadoop to the datanodes:
  cd /
  scp -r hadoop datanode1:/hadoop
  scp -r hadoop datanode2:/hadoop
  scp -r hadoop datanode3:/hadoop
  6. Install ZooKeeper
  tar zxvf zookeeper-3.3.4.tar.gz
  Install to /zookeeper
  cd /zookeeper/conf
  cp zoo_sample.cfg zoo.cfg
  vim zoo.cfg
  Add:
  dataDir=/zookeeper-data
  dataLogDir=/zookeeper-log
  server.1=namenode:2888:3888
  server.2=datanode1:2888:3888
  server.3=datanode2:2888:3888
  server.4=datanode3:2888:3888
  Create /zookeeper-data
  mkdir /zookeeper-data
  Create /zookeeper-log
  mkdir /zookeeper-log
  Create the file /zookeeper-data/myid
  vim /zookeeper-data/myid
  1
  (write 2 in it on datanode1)
  (write 3 in it on datanode2)
  (write 4 in it on datanode3)
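  The post does not show distributing or starting ZooKeeper; assuming the same /zookeeper layout on every node, the remaining steps would look like:
  scp -r /zookeeper datanode1:/          # repeat for datanode2 and datanode3
  /zookeeper/bin/zkServer.sh start       # run on each of the 4 nodes
  /zookeeper/bin/zkServer.sh status      # one node reports leader, the rest follower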
  10. Install Hive
  tar zxvf hive-0.8.0.tar.gz
  into /hive
  vim /hive/bin/hive-config.sh
  export HADOOP_HOME=/hadoop
  export PATH=$HADOOP_HOME/bin:$PATH
  export HIVE_HOME=/hive
  export PATH=$HIVE_HOME/bin:$PATH
  export JAVA_HOME=/jdk
  export JRE_HOME=/jdk/jre
  export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
  vim /etc/bashrc
  export HIVE_HOME=/hive
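  Hive also needs its warehouse directories in HDFS before first use; a minimal sketch, run as the hadoop user once HDFS is up (i.e., after step 11), assuming the default warehouse paths:
  hadoop fs -mkdir /tmp
  hadoop fs -mkdir /user/hive/warehouse
  hadoop fs -chmod g+w /tmp /user/hive/warehouse
  hive -e 'show tables;'   # quick smoke test of the CLI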
  11. Start Hadoop
  Format HDFS and start the system:
  su hadoop
  cd /hadoop/bin
  ./hadoop namenode -format
  ./start-dfs.sh
  ./start-mapred.sh
  http://192.168.137.100:50070 to view the HDFS namenode
  http://192.168.137.100:50030 to view the MapReduce JobTracker
  http://192.168.137.101:50060 to view the TaskTracker on datanode1
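  To confirm the daemons actually started, jps (shipped with the JDK) should list these processes on a healthy Hadoop 1.x cluster:
  jps   # on namenode: NameNode, SecondaryNameNode, JobTracker
  jps   # on each datanode: DataNode, TaskTracker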
  12. Common commands
  hadoop fs -mkdir direc
  hadoop fs -ls
  hadoop fs -cp file:///tmp/test.file /user/hadoop/direc
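  A few more everyday HDFS commands (standard Hadoop 1.x shell usage, not from the original post):
  hadoop fs -put /tmp/test.file /user/hadoop/direc   # upload, equivalent to the cp above
  hadoop fs -cat /user/hadoop/direc/test.file        # print a file
  hadoop fs -rmr /user/hadoop/direc                  # recursive delete (1.x syntax)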
