Posted by xiaoyu28 on 2018-10-29 09:50:04

Spark and Zeppelin in Practice, Part 1: Installing Hadoop

  1. Install JDK
  JDK 8 download address: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
  Install after downloading:

  rpm -ivh jdk-8u112-linux-x64.rpm
  Set the JDK environment variables:

  export JAVA_HOME=/usr/java/jdk1.8.0_112
  export CLASSPATH=$JAVA_HOME/lib/tools.jar
  export PATH=$JAVA_HOME/bin:$PATH
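  A quick sanity check that the JDK is wired up (an added verification, assuming the paths above):

  source /etc/profile        # or ~/.bash_profile, wherever the exports live
  java -version              # should report java version "1.8.0_112"
  echo $JAVA_HOME            # should print /usr/java/jdk1.8.0_112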
  2. Install Hadoop
  1. DNS binding
  vi /etc/hosts and add one line, as follows (here my Master node IP is set to 192.168.80.100):

  192.168.80.100 IMM-SJJ01-Server18
  2. Passwordless SSH login

  cd /home/data/.ssh
  ssh-keygen -t rsa
  cat id_rsa.pub >> authorized_keys
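  If SSH still prompts for a password after this, key-file permissions are the usual culprit; a minimal check (an added step, assuming the same user and .ssh directory as above):

  chmod 700 ~/.ssh
  chmod 600 ~/.ssh/authorized_keys
  ssh localhost              # should log in without a password prompt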
  3. Install Hadoop

  # http://hadoop.apache.org/releases.html
  wget http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz

  cd /home/game/soft
  tar zxvf hadoop-2.7.3.tar.gz
  ln -s /home/game/soft/hadoop-2.7.3 /home/game/soft/hadoop
  4. Configuration
  1) Set the Hadoop environment variables

  vim ~/.bash_profile   # or /etc/profile
  export HADOOP_HOME=/home/game/soft/hadoop
  export PATH=$HADOOP_HOME/bin:$PATH

  echo $HADOOP_HOME
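  Optionally, putting sbin on the PATH as well lets the start/stop scripts run without absolute paths (an addition here; the original invokes them by full path below), after which the install can be verified:

  export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
  source ~/.bash_profile
  hadoop version             # should report Hadoop 2.7.3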
  2) Edit hadoop-env.sh

  vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh
  # change: export JAVA_HOME=${JAVA_HOME}
  # to:
  export JAVA_HOME=/usr/java/jdk1.8.0_112
  3) Edit /etc/hosts (as in step 1 above)
  4) Edit core-site.xml

  cd $HADOOP_HOME
  cp ./share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml ./etc/hadoop/core-site.xml
  cp ./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml ./etc/hadoop/hdfs-site.xml
  cp ./share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml ./etc/hadoop/yarn-site.xml
  cp ./share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml ./etc/hadoop/mapred-site.xml

  vim $HADOOP_HOME/etc/hadoop/core-site.xml
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.80.100:19000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/game/hadoop/tmp</value>
  </property>
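  Note: fs.default.name is the deprecated spelling of this key; it still works in Hadoop 2.x, but the current equivalent is:

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.80.100:19000</value>
  </property>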
  5) Edit hdfs-site.xml

  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>192.168.80.100:19001</value>
  </property>

  <property>
    <name>dfs.namenode.http-address</name>
    <value>0.0.0.0:10070</value>
  </property>
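  Because dfs.namenode.rpc-address takes precedence over the port given in fs.default.name, the NameNode serves RPC on 19001, which is why the hdfs:// URIs later in this post use 19001. This can be confirmed with:

  hdfs getconf -confKey dfs.namenode.rpc-address   # should print 192.168.80.100:19001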
  6) Edit mapred-site.xml

  cp mapred-site.xml.template mapred-site.xml
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  7) Edit yarn-site.xml

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:18088</value>
  </property>
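  One setting this walkthrough never shows, but which MapReduce on YARN requires, is the shuffle auxiliary service. Whether the file copied from yarn-default.xml in step 4 already carries it depends on that version's defaults, so it is worth checking that yarn-site.xml contains:

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>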
  5. Startup
  1) Format the NameNode
  cd $HADOOP_HOME/bin
  ./hdfs namenode -format
  2) Start HDFS
  /home/game/soft/hadoop/sbin/start-dfs.sh
  
  Run jps to check that the daemons started:
  16704 DataNode
  16545 NameNode
  16925 SecondaryNameNode
  hdfs dfs -ls hdfs://192.168.80.100:19001/
  3) Start YARN
  /home/game/soft/hadoop/sbin/start-yarn.sh
  $jps
  17427 NodeManager
  19668 ResourceManager
  yarn node -list
  yarn node -status <NodeId>   # NodeId as shown by 'yarn node -list'
  4) Web UIs
  192.168.80.100:10070 (HDFS NameNode, per dfs.namenode.http-address)
  192.168.80.100:18088 (YARN ResourceManager, per yarn.resourcemanager.webapp.address)
  6. Upload test
  hadoop fs -mkdir -p hdfs://192.168.80.100:19001/test/
  hadoop fs -copyFromLocal ./test.txt hdfs://192.168.80.100:19001/test/
  hadoop fs -ls hdfs://192.168.80.100:19001/
  hadoop fs -put /opt/program/userall20140828 hdfs://192.168.80.100:19001/tmp/tvbox/
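  To confirm the upload landed, read the file back (an added check, using the test.txt from above):

  hadoop fs -ls hdfs://192.168.80.100:19001/test/
  hadoop fs -cat hdfs://192.168.80.100:19001/test/test.txt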
