Posted by 舒畅 on 2018-10-28 14:51:26

Installing a Hadoop Cluster (Multi Cluster)

  Environment
  This document installs a Hadoop cluster with one master serving as the NameNode and one slave serving as a DataNode:
  (1) master:

  os: CentOS
  ip: 172.16.101.58
  user: root
  hadoop-2.9.0.tar.gz
  (2) slave:

  os: CentOS
  ip: 172.16.101.59
  user: root
  hadoop-2.9.0.tar.gz
  Prerequisites
  (1) Java is installed on both master and slave, with the environment variables configured;
  (2) hadoop-2.9.0.tar.gz has been unpacked on the master node, with the environment variables configured;
  (3) This document installs everything as the root user, so root on the master must be able to ssh to the slave node as root without a password (a minimal setup sketch follows below).
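  For reference, one common way to set up the passwordless login required in (3); the key type and paths are assumptions, adjust them to your environment:
  # Generate a key pair on the master (accept the defaults, empty passphrase)
  # ssh-keygen -t rsa
  # Install the public key on the slave
  # ssh-copy-id root@172.16.101.59
  # Verify: this should print the slave hostname without prompting for a password
  # ssh root@172.16.101.59 hostname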
  Configuring the cluster files
  Run the following on the master node (this document first edits the configuration files on the master, then copies them to the slave nodes via scp).
  (1) slaves file: write the hostname or IP address of every host that will act as a DataNode into this file, one per line. It defaults to localhost, which is why in the pseudo-distributed setup the single node acts as both NameNode and DataNode.
  # cat slaves
  172.16.101.59
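  The scp command and the logs later in this document refer to the nodes by hostname (sht-sgmhadoopdn-01 and sht-sgmhadoopdn-02), so both machines need to resolve those names; a simple way, assuming the hostname-to-IP mapping inferred from the logs, is an /etc/hosts entry on every node:
  # cat /etc/hosts
  172.16.101.58   sht-sgmhadoopdn-01
  172.16.101.59   sht-sgmhadoopdn-02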
  (2) core-site.xml
  # cat /usr/local/hadoop-2.9.0/etc/hadoop/core-site.xml
  <configuration>
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.16.101.58:9000</value>
    </property>
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop-2.9.0/tmp</value>
      <description>A base for other temporary directories.</description>
    </property>
  </configuration>
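  fs.defaultFS is the address that DataNodes and clients use to reach the NameNode RPC port. To confirm the value Hadoop actually picked up, the standard getconf subcommand can be used as a quick sanity check:
  # hdfs getconf -confKey fs.defaultFS
  hdfs://172.16.101.58:9000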
  (3) hdfs-site.xml
  # cat /usr/local/hadoop-2.9.0/etc/hadoop/hdfs-site.xml
  <configuration>
    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>172.16.101.58:50090</value>
    </property>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/usr/local/hadoop-2.9.0/tmp/dfs/name</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/usr/local/hadoop-2.9.0/tmp/dfs/data</value>
    </property>
  </configuration>
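  dfs.replication is set to 1 here because the cluster has only one DataNode; with the default of 3, every block would be reported as under-replicated. After the cluster is up, the effective replication of stored blocks can be checked with the standard fsck tool, for example:
  # hdfs fsck / -files -blocks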
  (4) mapred-site.xml
  # cat /usr/local/hadoop-2.9.0/etc/hadoop/mapred-site.xml
  <configuration>
    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>172.16.101.58:10020</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>172.16.101.58:19888</value>
    </property>
  </configuration>
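  Setting mapreduce.framework.name to yarn makes jobs run on the YARN cluster instead of the local runner; the two jobhistory addresses are the RPC and web ports of the JobHistory server started later in this document. Once that daemon is running, its web endpoint can be probed with any HTTP client (curl shown here only as an example check):
  # curl -s http://172.16.101.58:19888/ws/v1/history/info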
  (5) yarn-site.xml
  # cat /usr/local/hadoop-2.9.0/etc/hadoop/yarn-site.xml
  <configuration>
    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>172.16.101.58</value>
    </property>
    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>
  </configuration>
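  yarn.nodemanager.aux-services makes every NodeManager load the shuffle service that reduce tasks fetch map output through; without it, MapReduce jobs hang in the shuffle phase. Some setups also spell out the implementing class explicitly, which is optional in 2.9.0 and shown here only for completeness:
    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>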
  After the configuration is done, copy /usr/local/hadoop-2.9.0 from the master to each slave node. If you previously ran in pseudo-distributed mode, delete the old temporary files before switching to cluster mode.
  # cd /usr/local
  # rm -rf ./hadoop-2.9.0/tmp
  # rm -rf ./hadoop-2.9.0/logs
  # tar -zcf hadoop-2.9.0.master.tar.gz -C /usr/local hadoop-2.9.0
  # scp hadoop-2.9.0.master.tar.gz sht-sgmhadoopdn-02:/usr/local/
  On the slave node:
  # tar -zxf /usr/local/hadoop-2.9.0.master.tar.gz -C /usr/local
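  A detail that often trips up remote startup: start-dfs.sh launches the slave daemons over ssh, and JAVA_HOME from an interactive profile may not be visible in that non-interactive shell. If daemons fail to start on the slave, set it explicitly in hadoop-env.sh on every node (the JDK path below is a placeholder; use your actual location):
  # grep JAVA_HOME /usr/local/hadoop-2.9.0/etc/hadoop/hadoop-env.sh
  export JAVA_HOME=/usr/java/latest    # placeholder path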
  Starting the Hadoop cluster
  On the master node:
  # The first startup requires formatting HDFS; later startups do not
  # hdfs namenode -format
  # start-dfs.sh
  # start-yarn.sh
  # mr-jobhistory-daemon.sh start historyserver
  # jps
  20289 JobHistoryServer
  19730 ResourceManager
  18934 NameNode
  19163 SecondaryNameNode
  20366 Jps
  On the slave node:
  # jps
  32147 DataNode
  535 Jps
  32559 NodeManager
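  jps confirms that the DataNode and NodeManager processes exist on the slave; to confirm the NodeManager has actually registered with the ResourceManager, the standard yarn CLI can be run from the master as an optional check (it should list sht-sgmhadoopdn-02 in RUNNING state):
  # yarn node -list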
  On the master node:
  # hdfs dfsadmin -report
  Configured Capacity: 75831140352 (70.62 GB)
  Present Capacity: 21246287872 (19.79 GB)
  DFS Remaining: 21246263296 (19.79 GB)
  DFS Used: 24576 (24 KB)
  DFS Used%: 0.00%
  Under replicated blocks: 0
  Blocks with corrupt replicas: 0
  Missing blocks: 0
  Missing blocks (with replication factor 1): 0
  Pending deletion blocks: 0
  -------------------------------------------------
  Live datanodes (1):                                                             # number of live slaves
  Name: 172.16.101.59:50010 (sht-sgmhadoopdn-02)
  Hostname: sht-sgmhadoopdn-02
  Decommission Status : Normal
  Configured Capacity: 75831140352 (70.62 GB)
  DFS Used: 24576 (24 KB)
  Non DFS Used: 50732867584 (47.25 GB)
  DFS Remaining: 21246263296 (19.79 GB)
  DFS Used%: 0.00%
  DFS Remaining%: 28.02%
  Configured Cache Capacity: 0 (0 B)
  Cache Used: 0 (0 B)
  Cache Remaining: 0 (0 B)
  Cache Used%: 100.00%
  Cache Remaining%: 0.00%
  Xceivers: 1
  Last contact: Wed Dec 27 11:08:46 CST 2017
  Last Block Report: Wed Dec 27 11:02:01 CST 2017
  Web console
  NameNode: http://172.16.101.58:50070
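  The NameNode web UI shows the same information as dfsadmin -report; it can also be queried programmatically through the NameNode's JMX endpoint, shown here as an optional check:
  # curl -s 'http://172.16.101.58:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'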
  Running a distributed MapReduce example job
  # hdfs dfs -mkdir -p /user/root/input
  # hdfs dfs -put /usr/local/hadoop-2.9.0/etc/hadoop/*.xml input
  # hadoop jar /usr/local/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar grep input output 'dfs[a-z.]+'

  17/12/27 11:25:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  17/12/27 11:25:34 INFO client.RMProxy: Connecting to ResourceManager at /172.16.101.58:8032
  17/12/27 11:25:36 INFO input.FileInputFormat: Total input files to process : 9
  17/12/27 11:25:36 INFO mapreduce.JobSubmitter: number of splits:9
  17/12/27 11:25:37 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
  17/12/27 11:25:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1514343869308_0001
  17/12/27 11:25:38 INFO impl.YarnClientImpl: Submitted application application_1514343869308_0001
  17/12/27 11:25:38 INFO mapreduce.Job: The url to track the job: http://sht-sgmhadoopdn-01:8088/proxy/application_1514343869308_0001/
  17/12/27 11:25:38 INFO mapreduce.Job: Running job: job_1514343869308_0001
  17/12/27 11:25:51 INFO mapreduce.Job: Job job_1514343869308_0001 running in uber mode : false
  17/12/27 11:25:51 INFO mapreduce.Job:  map 0% reduce 0%
  17/12/27 11:26:14 INFO mapreduce.Job:  map 11% reduce 0%
  17/12/27 11:26:15 INFO mapreduce.Job:  map 67% reduce 0%
  17/12/27 11:26:29 INFO mapreduce.Job:  map 100% reduce 0%
  17/12/27 11:26:32 INFO mapreduce.Job:  map 100% reduce 100%
  17/12/27 11:26:34 INFO mapreduce.Job: Job job_1514343869308_0001 completed successfully
  17/12/27 11:26:34 INFO mapreduce.Job: Counters: 50
  ......
  # hdfs dfs -cat output/*

  17/12/27 11:30:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  1    dfsadmin
  1    dfs.replication
  1    dfs.namenode.secondary.http
  1    dfs.namenode.name.dir
  1    dfs.datanode.data.dir
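  One caveat when re-running the example: Hadoop refuses to start a job whose output directory already exists, so remove it first:
  # hdfs dfs -rm -r output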
  You can also view detailed job information in a browser through the console:
  ResourceManager: http://172.16.101.58:8088
  Stopping the Hadoop cluster
  On the master node:
  # stop-yarn.sh
  # stop-dfs.sh
  # mr-jobhistory-daemon.sh stop historyserver
  References:
  http://www.powerxing.com/install-hadoop-cluster/
  http://hadoop.apache.org/docs/r2.9.0/hadoop-project-dist/hadoop-common/ClusterSetup.html
