Posted by kaola4549 on 2017-6-23 11:50:29

Building a hadoop-2.6.0.tar.gz cluster (5 nodes)

  Preface
  I put a great deal of effort into this post, and it has been refined and reorganized repeatedly over quite some time. Thanks to the bloggers whose posts I referenced! Fellow readers are welcome to bookmark and repost it, with a link back to the original.
  A few questions and a few lessons learned!
  a. Use NAT, bridged, or host-only mode?

   Answer: host-only, bridged and NAT all work.
  b. Use a static IP or DHCP?
  Answer: static
  c. Do not think snapshots and clones are unimportant; used a little more flexibly than most people do, these small tricks save a lot of time and greatly reduce mistakes.
  d. Make good use of scripting languages such as Python or shell.
  For example the scp -r command, or deploy.conf (a configuration file), deploy.sh (a shell script that copies files) and runRemoteCmd.sh (a shell script that runs commands on remote nodes).
  e. The VMware Tools enhancement package matters; alternatively, use rz to upload and sz to download.
  f. What most people commonly use:




   Xmanager Enterprise (installation steps)
  What is needed:
  1. VMware-workstation-full-11.1.2.61471.1437365244.exe
  2. CentOS-6.5-x86_64-bin-DVD1.iso
  3. jdk-7u79-linux-x64.tar.gz
  4. hadoop-2.6.0.tar.gz
   Machine plan:
  192.168.80.31   ----------------master
  192.168.80.32   ----------------slave1
  192.168.80.33   ----------------slave2
  Directory plan:
  Data produced by all NameNode nodes          /data/dfs/name
  Data produced by all DataNode nodes          /data/dfs/data
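  For reference, these two directories are what the HDFS properties dfs.namenode.name.dir and dfs.datanode.data.dir usually point at. A minimal hdfs-site.xml sketch (illustrative only; it is not the exact configuration used later in this series):

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data/dfs/data</value>
</property>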
  Step 1: Install the VMware Workstation virtual machine; here I use VMware Workstation 11.
  For details, see ->

    Downloading VMware Workstation 11

          Installing VMware Workstation 11

          Configuration after installing VMware Workstation 11
  Step 2: Install the CentOS system; here I use version 6.5. Recommended (commonly used in production environments).
  For details, see ->

          CentOS 6.5 installation in detail

          Network configuration after installing CentOS 6.5

          Setting a static IP on CentOS 6.5 (works for both NAT and bridged mode)

          Switching between the command-line and graphical interfaces on CentOS

          Network interfaces eth0, eth1 ... ethn

          Uninstalling OpenJDK and installing the Sun JDK on CentOS 6.5, plus environment variable configuration
  Step 3: Install the VMware Tools enhancement package.
  For details, see ->

      Illustrated guide to installing VMware Tools for Ubuntukylin-14.04-desktop in VMware
  Step 4: Prepare a few small adjustments (learn to use snapshots and clones, and take snapshots at sensible points according to your own needs)
  For details, see ->

   CentOS common commands, snapshots and clones revealed

    Creating user groups, users and user passwords, and deleting user groups and users (works for CentOS and Ubuntu)
  Workflow of this post:
  1. Build a 5-node hadoop distributed mini-cluster -- preparation (allocate 1 GB of memory or more to djt11, djt12, djt13, djt14, djt15)
  2. Build a 5-node hadoop distributed mini-cluster -- preparation (network connection, static IP addresses and snapshots for djt11, djt12, djt13, djt14, djt15)
  3. Build a 5-node hadoop distributed mini-cluster -- preparation (remote access to djt11, djt12, djt13, djt14, djt15)
  4. Build a 5-node hadoop distributed mini-cluster -- preparation (host plan, software plan and directory plan for djt11, djt12, djt13, djt14, djt15)
  Supplement: what happens if the user plan and the directory plan are executed in the wrong order? See below -- strongly recommended not to do this
  5. Build a 5-node hadoop distributed mini-cluster -- preparation (environment checks before cluster installation on djt11, djt12, djt13, djt14, djt15)
  6. Build a 5-node hadoop distributed mini-cluster -- preparation (passwordless SSH configuration before cluster installation on djt11, djt12, djt13, djt14, djt15)
  7. Build a 5-node hadoop distributed mini-cluster -- preparation (jdk installation before cluster installation on djt11, djt12, djt13, djt14, djt15)
  8. Build a 5-node hadoop distributed mini-cluster -- preparation (using the djt11 script tools before cluster installation on djt11, djt12, djt13, djt14, djt15)
  9. Build a 5-node hadoop distributed mini-cluster -- preparation (ZooKeeper installation before cluster installation on djt11, djt12, djt13, djt14, djt15)
  10. Build a 5-node hadoop distributed mini-cluster -- preparation (hadoop cluster environment setup on djt11, djt12, djt13, djt14, djt15) -- to be continued
  1. Build a 5-node hadoop distributed mini-cluster -- preparation (allocate 1 GB of memory or more to djt11, djt12, djt13, djt14, djt15)
    For this step, as a foundation see "CentOS 6.5 installation in detail".





  Virtual machine name: djt11
  Location: D:\SoftWare\Virtual Machines\CentOS\CentOS 6.5\djt11






  If you are installing this 5-node Hadoop mini-cluster on a laptop with 4 GB of RAM, it is still best to give each node 1 GB during installation, which makes installing easier. For actual use afterwards, 0.5 GB per node is enough.

  NAT

  (Are some screenshots missing here?)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194014316-842587777.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194017301-1088516306.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194024285-2109522800.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194031332-1257364476.png
http://images2015.cnblogs.com/blog/855959/201703/855959-20170321180756674-1042059041.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194033660-422419661.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194037160-961764737.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194039613-224154508.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194042285-938396919.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194045238-1728619090.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194047973-1652841690.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194050832-316931483.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908194053457-1570354273.png
  2. Build a 5-node hadoop distributed mini-cluster -- preparation (network connection, static IP addresses and snapshots for djt11, djt12, djt13, djt14, djt15)
  Note that each of the 5 nodes has its own MAC address and UUID; they are all different.
  2.1 For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203845144-1293291786.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203855848-1480280132.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203854348-574014247.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203905863-857176936.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203910379-1567440645.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203914285-825256363.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203919066-700486454.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203922863-2067360426.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203952551-740319150.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908203957004-1319265119.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204006066-558424094.png
  That is, the IP was successfully changed from the original 192.168.80.137 (obtained dynamically) to the static 192.168.80.11.
  The above is djt11 at 192.168.80.11.
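  The screenshots above edit the network interface configuration. As a reference, a minimal sketch of the static settings in /etc/sysconfig/network-scripts/ifcfg-eth0 on CentOS 6 might look like the following (the gateway and DNS values are assumptions for this NAT network -- use your own):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.80.11
NETMASK=255.255.255.0
GATEWAY=192.168.80.2
DNS1=192.168.80.2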
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204021754-1235066818.png



vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204029223-1617946733.png



vi /etc/resolv.conf
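  For reference, the DNS configuration edited here is usually just one or two nameserver lines; a minimal /etc/resolv.conf sketch (the address is an assumption -- use the DNS server of your own network):

nameserver 192.168.80.2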
  2.2 For djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204036894-633598623.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204040066-1512508851.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204043691-1736129424.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204048426-620640668.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204052894-611828582.png
  That is, the IP was successfully changed from the original 192.168.80.** (obtained dynamically) to the static 192.168.80.12.
  The above is djt12 at 192.168.80.12.
  Note that the steps for djt12 are shown in a more condensed form; notes get clearer and simpler the more of them you take.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204107504-1777932661.png



vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204115863-1237407755.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204122926-661262182.png



vi /etc/resolv.conf
  2.3 For djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204133551-2007824398.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204138363-438564353.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204142535-225049760.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204152957-274124993.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204156707-1712643474.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204216519-70114421.png



vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204223301-830820118.png



vi /etc/resolv.conf
  2.4 For djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204230082-1699858650.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204233504-1574410057.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204240598-988218502.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204244894-1044167562.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204251801-1399231849.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204300879-1597449290.png



vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204306629-590105311.png



vi /etc/resolv.conf
  2.5 For djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204314394-1763215776.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204318410-2009814470.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204325191-1799662074.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204514379-462291909.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204519473-209811939.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204526613-1158949243.png



vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204535629-831226555.png



vi   /etc/resolv.conf
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204537191-2014626863.png
  3. Build a 5-node hadoop distributed mini-cluster -- preparation (remote access to djt11, djt12, djt13, djt14, djt15)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204614160-1664779940.png



C:\Windows\System32\drivers\etc\hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204625879-1427406278.png



192.168.80.11   djt11
192.168.80.12   djt12
192.168.80.13   djt13
192.168.80.14   djt14
192.168.80.15   djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204630348-74131977.png
  3.1 For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204638723-830813984.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204644066-309864353.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204648723-1806561822.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204653785-1123298356.png
  3.2 For djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204701723-57221365.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204705301-1050634271.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204709254-204385021.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204713066-802230993.png
  3.3 For djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204722019-1696378729.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204725926-1236529221.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204730519-1168154023.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204734129-1609010138.png
  3.4 For djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204746676-154863603.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204751332-170168418.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204754519-1822788752.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204803660-26362549.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204806879-1621981427.png
  3.5 For djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204815191-1481687961.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204819551-1161957855.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204824504-1079081097.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204828301-851761571.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204832848-623702811.png
  Overall, it looks like this:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908204841082-717804638.png
  ***************************** This part can be skipped ***********************
  Supplement: what happens if the user plan and the directory plan are executed in the wrong order (i.e. directory plan first, user plan second)? See below -- strongly recommended not to do this.
  Strongly recommended: do not do it this way!!!!
  =>   So remember: user planning first, then directory planning!!!

    User planning
  The hadoop user group and hadoop user on each node have to be created by yourself; this was already covered for the single-node setup, so I will not spend your time on it here.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205234551-193762529.png
  In turn, do the user planning for djt11, djt12, djt13, djt14 and djt15: create the hadoop user group and the hadoop user.

 Directory planning
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205333723-945948007.png
  First,
  jumping straight into the directory plan and only then the user plan. (This is the most elementary mistake! zhouls) (What follows is wrong -- readers, just skim it) (skip ahead to step 4)
  At the very beginning there is only the root user and the hadoop user has not been created yet, so /home contains only lost+found. Jumping straight in: mkdir hadoop, and then inside it mkdir app data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205638644-1646035184.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205644129-2035952658.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205651848-1833452008.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211026801-1558804148.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211033363-552493419.png
  Then,
  errors are reported -- but is that not clearly because a mistake was made? It eventually leads to the following problems.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211047316-183063389.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211051598-123321078.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211055801-124214762.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211044238-253561231.png
  So,
  how do we fix it??
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211114426-1290251110.png
    http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211126738-1417392050.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211135113-717709594.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211144816-1059174215.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211150473-1246392053.png
  Although this can salvage not having created the hadoop user in advance, what about the password?
  This step still fails, because there is no password.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211207707-2044069084.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211211519-1495417183.png
  *****************************
   4. Build a 5-node hadoop distributed mini-cluster -- preparation (host plan, software plan and directory plan for djt11, djt12, djt13, djt14, djt15)
  Host plan
    http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211431723-1449412404.png
  If your resources are limited, just build 3 nodes. Originally the NameNode was a single point of failure; since Hadoop 2.0 a hot standby has been available to guard against downtime.
  Here we use 5 hosts to configure the Hadoop cluster.
  Keep the number of JournalNode and ZooKeeper nodes odd -- keep this in mind; the minimum is no fewer than 3 nodes.

  Software plan
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211645801-1497258175.png
  Here we implement both NameNode HA (hot standby) and ResourceManager HA. Before Hadoop 2.0 this did not exist; Hadoop 2.2.0 implemented only NameNode HA; Hadoop 2.4.0 implemented both NameNode and ResourceManager HA but was not very stable -- which is why we use Hadoop 2.6.0 here.
  User plan
     http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211659316-1470105708.png
  In turn, do the user planning for djt11, djt12, djt13, djt14 and djt15: create the hadoop user group and the hadoop user.
  groupadd hadoop
  // create the hadoop user group
  useradd -g hadoop hadoop
  // create the hadoop user and add it to the hadoop group
  passwd hadoop
  // set the password for the hadoop user
  Extended knowledge:
  usermod -a -G <group> <user>
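  For example (an illustrative command, not a step used below; "alice" is a hypothetical existing user), to add an existing user to the hadoop group as a supplementary group:

  usermod -a -G hadoop alice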
  If instead you go straight to
  useradd hadoop,
  passwd hadoop,
  then the hadoop user group is created automatically.
  As a result, you get the following.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211751144-314447271.png
  Or,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211807160-442645456.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211811238-356528084.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211856566-1683716995.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211900176-1108474220.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211904379-526624194.png



Create the user group first, then create the user.
# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop
Changing password for user hadoop.
New password: (enter the password you want for the hadoop user)
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: (enter the same password again)
passwd: all authentication tokens updated successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211916254-1814859122.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211919488-1023951309.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211922613-1795788769.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211926754-294837029.png



Create the user group first, then create the user.
# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop
Changing password for user hadoop.
New password: (enter the password you want for the hadoop user)
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: (enter the same password again)
passwd: all authentication tokens updated successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211937113-1643977543.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908211943176-1804772959.png



# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop
Changing password for user hadoop.
New password: (enter the password you want for the hadoop user)
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: (enter the same password again)
passwd: all authentication tokens updated successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212043191-1654060432.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212047301-871731047.png



# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop
Changing password for user hadoop.
New password: (enter the password you want for the hadoop user)
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: (enter the same password again)
passwd: all authentication tokens updated successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212056613-1756939192.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212101785-2047012667.png



# groupadd hadoop
# useradd -g hadoop hadoop
# passwd hadoop
Changing password for user hadoop.
New password: (enter the password you want for the hadoop user)
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: (enter the same password again)
passwd: all authentication tokens updated successfully.

Directory planning
  For the single-node hadoop cluster:
  all software directories are under /usr/java/;
  the data directories are /data/dfs/name, /data/dfs/data and
  /data/tmp;
  the log directory is /usr/java/hadoop/logs/,
  where startup logs live in /usr/java/hadoop/logs
  and job run logs live under /usr/java/hadoop/logs/userlogs/.
  For the 5-node hadoop cluster:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212114957-1239452193.png
  Previously things lived under /usr/java/hadoop; from now on they live in /home/hadoop/app/hadoop (the proper way),
  along with /home/hadoop/app/zookeeper and /home/hadoop/app/jdk1.7.0_79.
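  As a quick reference, the layout described above amounts to something like the following (a sketch of the intended directory structure, run as the hadoop user; the actual creation steps are shown later):

$ mkdir -p /home/hadoop/app     # software: hadoop, zookeeper, jdk1.7.0_79
$ mkdir -p /home/hadoop/data    # data and log directories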
  For djt11:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212301676-194332313.png
  For djt12:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212435941-252419137.png
  For djt13:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212444238-482017633.png
  For djt14:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212734051-1230505312.png
  For djt15:
  after the hadoop user is created, /home/hadoop is generated automatically;
  all software directories go under /home/hadoop/app/;
  all data and log directories go under /home/hadoop/data.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212747551-1202984337.png
     5. Build a 5-node hadoop distributed mini-cluster -- preparation (environment checks before cluster installation on djt11, djt12, djt13, djt14, djt15)

Environment checks before installing the cluster
  Before installing the cluster, we need to check its environment.

Clock synchronization
  The system time on every node must match the current time; check the current system time first.
  If the system time does not match the current time, do the following.
  For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908212955473-2066165000.png



# date
# cd /usr/share/zoneinfo/
# ls
# cd Asia/
#
  // replace the current time zone with Shanghai
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213011754-2144068039.png



#cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
cp: overwrite `/etc/localtime'? y
#
  We need ntp to synchronize the time.
  In turn, do the clock synchronization and install the ntp command on djt11, djt12, djt13, djt14 and djt15.
  We can synchronize the current system date and time with NTP (Network Time Protocol).
  # yum install -y ntp
  // if the ntp command does not exist, install ntp online
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213001691-148837837.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213052676-482604091.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213110816-699665344.png
  # ntpdate pool.ntp.org
  // run this command to synchronize the date and time
  # date
  // check the current system time
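  If you want the clocks to stay synchronized afterwards, one common approach (optional; not part of the steps in this post) is a root crontab entry that re-runs ntpdate every hour:

# crontab -e
0 * * * * /usr/sbin/ntpdate pool.ntp.org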
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213121988-1771795166.png
  For djt12
  # cd /usr/share/zoneinfo/Asia/
  # ls
  # cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  cp: overwrite `/etc/localtime'? y
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213131379-531275237.png
  # pwd
  # yum -y install ntp
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213143098-922457457.png
  The ntp command was installed successfully on djt12.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213154269-962800602.png
  # ntpdate pool.ntp.org
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213213379-384100309.png
  For djt13
  # cd /usr/share/zoneinfo/Asia/
  # ls
  # cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  cp: overwrite `/etc/localtime'? y
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213225269-1807726711.png
  # pwd
  # yum -y install ntp
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213233019-552809549.png
  The ntp command was installed successfully on djt13.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213241738-1294855641.png
  # ntpdate pool.ntp.org
  # date
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213256082-486960969.png
  For djt14
  # cd /usr/share/zoneinfo/Asia/
  # ls
  # cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  cp: overwrite `/etc/localtime'? y
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213315019-481983915.png
  # pwd
  # yum -y install ntp
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213333769-1214731608.png
  The ntp command was installed successfully on djt14.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213342894-882075152.png
  # ntpdate pool.ntp.org
  # date
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213352066-863138884.png
  For djt15
  # cd /usr/share/zoneinfo/Asia/
  # ls
  # cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  cp: overwrite `/etc/localtime'? y
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213402426-765022426.png
  # pwd
  # yum -y install ntp
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213411207-913164447.png
  The ntp command was installed successfully on djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213418832-1766073798.png
  # ntpdate pool.ntp.org
  # date
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213443348-107521020.png

hosts file check
  The hosts file on every node must map each static IP to its hostname.
  In turn, configure the hostname-to-IP mapping on djt11, djt12, djt13, djt14 and djt15.

For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213547644-1836147295.png



# ifconfig
# vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213559816-993060922.png



#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.80.11   djt11
192.168.80.12   djt12
192.168.80.13   djt13
192.168.80.14   djt14
192.168.80.15   djt15

For djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213623629-2099159229.png



# ifconfig
# vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213642363-1600714275.png



#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.80.12   djt12
192.168.80.11   djt11
192.168.80.13   djt13
192.168.80.14   djt14
192.168.80.15   djt15

For djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213654691-716316054.png



# ifconfig
# vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213705176-1733601946.png



#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.80.13   djt13
192.168.80.11   djt11
192.168.80.12   djt12
192.168.80.14   djt14
192.168.80.15   djt15

For djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213716738-1793958332.png



# ifconfig
# vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213724269-2108485850.png



#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.80.14   djt14
192.168.80.11   djt11
192.168.80.12   djt12
192.168.80.13   djt13
192.168.80.15   djt15

For djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213739363-1585317006.png



# ifconfig
# vi /etc/hosts
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213746457-364468871.png



#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.80.15   djt15
192.168.80.11   djt11
192.168.80.12   djt12
192.168.80.13   djt13
192.168.80.14   djt14

Disable the firewall
  The firewall must be turned off on every node.
  In turn, disable the firewall on djt11, djt12, djt13, djt14 and djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213804035-983241134.png



# service iptables status
# chkconfig iptables off
// disable the firewall permanently (so it does not start at boot)
# service iptables stop   // stop the firewall for the current session
# service iptables status
iptables: Firewall is not running.
// check the firewall status
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213816957-96771114.png



# service iptables status
# chkconfig iptables off
// disable the firewall permanently (so it does not start at boot)
# service iptables stop   // stop the firewall for the current session
# service iptables status
iptables: Firewall is not running.
// check the firewall status
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213828301-501834925.png



# service iptables status
# chkconfig iptables off
// disable the firewall permanently (so it does not start at boot)
# service iptables stop   // stop the firewall for the current session
# service iptables status
iptables: Firewall is not running.
// check the firewall status
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213837941-846366278.png



# service iptables status
# chkconfig iptables off
// disable the firewall permanently (so it does not start at boot)
# service iptables stop   // stop the firewall for the current session
# service iptables status
iptables: Firewall is not running.
// check the firewall status
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908213847051-1252743280.png



# service iptables status
# chkconfig iptables off
// disable the firewall permanently (so it does not start at boot)
# service iptables stop   // stop the firewall for the current session
# service iptables status
iptables: Firewall is not running.
// check the firewall status
  6. Build a 5-node hadoop distributed mini-cluster -- preparation (passwordless SSH configuration before cluster installation on djt11, djt12, djt13, djt14, djt15)

Configure passwordless SSH communication
  1. Passwordless access of each machine to itself
  For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214024769-1617156102.png



# su hadoop
$ cd
$ cd .ssh
$ mkdir .ssh
$ ssh-keygen -t rsa
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): press Enter
Enter passphrase (empty for no passphrase): press Enter
Enter same passphrase again: press Enter
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214035426-445833666.png



$ pwd
$ cd .ssh
$ ls
$ cat id_rsa.pub >> authorized_keys
$ ls
$ cat authorized_keys
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214043504-8231820.png



$ cd ..
$ chmod 700 .ssh
$ chmod 600 .ssh/*
$ ls -al
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214051816-903323568.png



$ ssh djt11
$ su root
# yum -y install openssh-clients
  The ssh command (openssh-clients) was installed successfully on djt11.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214131285-1130525582.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214142519-425616534.png



# su hadoop
$ ssh djt11
Are you sure you want to continue connecting (yes/no)? yes
$ ssh djt11
  For djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214202441-387200962.png



# su hadoop
$ cd
$ cd .ssh
$ mkdir .ssh
$ ssh-keygen -t rsa
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): press Enter
Enter passphrase (empty for no passphrase): press Enter
Enter same passphrase again: press Enter
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214212644-1853531681.png



$ pwd
$ cd .ssh
$ ls
$ cat id_rsa.pub >> authorized_keys
$ ls
$ cat authorized_keys
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214221098-1350244202.png



$ cd ..
$ chmod 700 .ssh
$ chmod 600 .ssh/*
$ ls -al
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214229988-132875249.png
  The ssh command (openssh-clients) was installed successfully on djt12.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214238894-333040852.png



$ ssh djt12
$ su root
# yum -y install openssh-clients
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214258832-2101684376.png



# su hadoop
$ ssh djt12
Are you sure you want to continue connecting (yes/no)? yes
$ ssh djt12
  For djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214319473-1950035680.png



# su hadoop
$ cd
$ cd .ssh
$ mkdir .ssh
$ ssh-keygen -t rsa
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214329348-2100360079.png



$ pwd
$ cd .ssh
$ ls
$ cat id_rsa.pub >> authorized_keys
$ ls
$ cat authorized_keys
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214341863-1906984483.png



$ cd ..
$ chmod 700 .ssh
$ chmod 600 .ssh/*
$ ls -al
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214350691-1804380899.png



$ ssh djt13
$ su root
# yum -y install openssh-clients
  The ssh command (openssh-clients) was installed successfully on djt13.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214530926-1050574379.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214540316-1068949550.png



# su hadoop
$ ssh djt13
Are you sure you want to continue connecting (yes/no)? yes
$ ssh djt13
  For djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214751144-318348697.png



# su hadoop
$ cd
$ cd .ssh
$ mkdir .ssh
$ ssh-keygen -t rsa
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214800191-850706056.png



$ pwd
$ cd .ssh
$ ls
$ cat id_rsa.pub >> authorized_keys
$ ls
$ cat authorized_keys
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214807316-339558738.png



$ cd ..
$ chmod 700 .ssh
$ chmod 600 .ssh/*
$ ls -al
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214814879-1584182800.png



$ ssh djt14
$ su root
# yum -y install openssh-clients
  The ssh command (openssh-clients) was installed successfully on djt14.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214822457-1127590856.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214831410-1273314255.png



# su hadoop
$ ssh djt14
Are you sure you want to continue connecting (yes/no)? yes
$ ssh djt14
  For djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214909098-489680730.png



# su hadoop
$ cd
$ cd .ssh
$ mkdir .ssh
$ ssh-keygen -t rsa
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214916848-1991867590.png



$ pwd
$ cd .ssh
$ ls
$ cat id_rsa.pub >> authorized_keys
$ ls
$ cat authorized_keys
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214922769-2030189976.png



$ cd ..
$ chmod 700 .ssh
$ chmod 600 .ssh/*
$ ls -al
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214929019-2085479519.png



$ ssh djt15
$ su root
# yum -y install openssh-clients
  The ssh command (openssh-clients) was installed successfully on djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214942113-1264768372.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908214948582-1070705975.png



# su hadoop
$ ssh djt15
Are you sure you want to continue connecting (yes/no)? yes
$ ssh djt15
  All nodes in the cluster have to perform the operations above; do this in turn for djt11, djt12, djt13, djt14 and djt15.
  That is, at this point step 1 -- passwordless access of each machine to itself -- has been set up successfully.
  2. Now, how do we set up passwordless access between the machines?
  First, append the public key id_rsa.pub of every node to the authorized_keys file on djt11.
  cat ~/.ssh/id_rsa.pub | ssh hadoop@djt11 'cat >> ~/.ssh/authorized_keys' -- every node needs to run this command.
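  As an alternative sketch of the same aggregation (not the exact steps shown below), you could also pull every node's key from djt11 itself; each ssh will still ask for that node's hadoop password at this point:

$ for h in djt12 djt13 djt14 djt15
> do
>   ssh hadoop@$h 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys
> done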
  2.1 Link djt12 with djt11, djt13 with djt11, djt14 with djt11, and djt15 with djt11
  2.1.1 Passwordless access between djt12 and djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215016082-132205082.png



$ cat ~/.ssh/id_rsa.pub | ssh hadoop@djt11 'cat >> ~/.ssh/authorized_keys'
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt11's password: (the password of the hadoop user on djt11, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215024754-1383672635.png



$ cd .ssh
$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
$ cat authorized_keys
  About known_hosts: this file records the host key that the remote side presents when you connect to it. On every subsequent connection the host key currently presented is checked against the recorded one, which is a simple way to verify that the connection has not been spoofed.
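  As a side note (not a step in this post), if a recorded host key ever becomes stale -- for example after a VM is recreated -- it can be removed with:

$ ssh-keygen -R djt11    # forget the host key recorded for djt11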
  2.1.2 Passwordless access between djt13 and djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215037223-1503907909.png



$ cat ~/.ssh/id_rsa.pub | ssh hadoop@djt11 'cat >> ~/.ssh/authorized_keys'
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt11's password: (the password of the hadoop user on djt11, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215055051-1494636384.png



$ cd .ssh
$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
$ cat authorized_keys
  About known_hosts: this file records the host key that the remote side presents when you connect to it. On every subsequent connection the host key currently presented is checked against the recorded one, which is a simple way to verify that the connection has not been spoofed.
  2.1.3 Passwordless access between djt14 and djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215121519-472705139.png



$ cat ~/.ssh/id_rsa.pub | ssh hadoop@djt11 'cat >> ~/.ssh/authorized_keys'
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt11's password: (the password of the hadoop user on djt11, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215129379-1621134679.png



$ cd .ssh
$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
$ cat authorized_keys
  2.1.4 Passwordless access between djt15 and djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215138488-1699662755.png



$ cat ~/.ssh/id_rsa.pub | ssh hadoop@djt11 'cat >> ~/.ssh/authorized_keys'
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt11's password: (the password of the hadoop user on djt11, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215146082-1341026597.png



$ cd .ssh
$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
$ cat authorized_keys
  That completes the following work:
  linking djt12 with djt11, djt13 with djt11, djt14 with djt11, and djt15 with djt11.
  This part is the key point!!!
  2.2 Distribute the authorized_keys file on djt11 to all nodes
  2.2.1 Distribute the authorized_keys file on djt11 to node djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215204644-2131563974.png



$ ls
$ scp -r authorized_keys hadoop@djt12:~/.ssh/
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt12's password: (the password of the hadoop user on djt12, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215212644-657525036.png



$ cd .ssh
$ ls
$ cat authorized_keys
  2.2.2 Distribute the authorized_keys file on djt11 to node djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215230879-1870091153.png



$ ls
$ scp -r authorized_keys hadoop@djt13:~/.ssh/
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt13's password: (the password of the hadoop user on djt13, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215242894-1077520679.png



$ cd .ssh
$ ls
$ cat authorized_keys
  2.2.3 Distribute the authorized_keys file on djt11 to node djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215259316-732681775.png



$ ls
$ scp -r authorized_keys hadoop@djt14:~/.ssh/
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt14's password: (the password of the hadoop user on djt14, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215307379-717442671.png



$ cd .ssh
$ ls
$ cat authorized_keys
  2.2.4 Distribute the authorized_keys file on djt11 to node djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215351691-1882456071.png



$ ls
$ scp -r authorized_keys hadoop@djt15:~/.ssh/
Are you sure you want to continue connecting (yes/no)? yes
hadoop@djt15's password: (the password of the hadoop user on djt15, which is hadoop)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215400082-1902065705.png



$ cd .ssh
$ ls
$ cat authorized_keys
  Access each node from every other node over ssh; if every connection works without a password, the ssh configuration is successful.
  Now, accessing one another works without passwords.
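  A quick way to spot-check this from any node (a small sketch; every ssh below should print the remote hostname without asking for a password):

$ for h in djt11 djt12 djt13 djt14 djt15; do ssh $h hostname; done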
  Starting from djt11,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215524066-1620970044.png



$ ssh djt12
$ exit
$ ssh djt13
$ exit
$ ssh djt14
$ exit
$ ssh djt15
$ exit
  Starting from djt12.
  Note: this is the first time djt12 connects to djt11 here,
  the first time djt12 connects to djt13,
  and the first time djt12 connects to djt14 (hence the host-key confirmation prompts).
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215538426-1220677318.png



$ ssh djt11
$ exit
Connection to djt11 closed.
$ ssh djt13
Are you sure you want to continue connecting (yes/no)? yes
$ exit
Connection to djt13 closed.
$ ssh djt14
Are you sure you want to continue connecting (yes/no)? yes
$ exit
Connection to djt14 closed.
$
  Note: this is the first time djt12 connects to djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215720160-819991922.png



$ ssh djt15
Are you sure you want to continue connecting (yes/no)? yes
Connection to djt15 closed.
  Starting from djt13,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215749754-1411812763.png



$ ssh djt11
$ exit
$ ssh djt12
$ exit
$ ssh djt12
$ exit
$ ssh djt14
$ exit
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215801332-671044103.png



$ ssh djt15
$ exit
  Starting from djt14,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215812613-1510101593.png



$ ssh djt11
$ exit
$ ssh djt12
$ exit
$ ssh djt13
$ exit
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215834754-1484999027.png



$ ssh djt15
$ exit
  Starting from djt15,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908215950363-1518886793.png



$ ssh djt11
$ exit
$ ssh djt12
$ exit
$ ssh djt13
$ exit
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220047644-2028637111.png



$ ssh djt14
$ exit
  7. Build a 5-node hadoop distributed mini-cluster -- preparation (jdk installation before cluster installation on djt11, djt12, djt13, djt14, djt15)
  jdk installation
  Actually, for this jdk installation step you only need to upload the jdk to djt11 and then use the command introduced later, as follows:



deploy.sh jdk1.7.0_79/ /home/hadoop/app/ slave
  or



sh deploy.sh jdk1.7.0_79/ /home/hadoop/app/ slave
  That is all; it is equivalent to uploading the jdk with rz on every node separately and then configuring the environment variables on each of them.
  For djt11
  1. Upload the locally downloaded jdk1.7 to the /home/hadoop/app directory on node djt11.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220338394-781749168.png



$ cd
$ file /bin/ls
$ cd /home/hadoop/app/
$ ls
$ su root
# yum -y install lrzsz
  The rz/sz commands (lrzsz) were installed successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220624098-1019961433.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220634160-938505330.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220633941-1031207939.png



# su hadoop
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220646551-1088783103.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220652316-231023067.png



$ ls
$ tar zxvf jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220700832-870429423.png



$ ls
$ rm jdk-7u79-linux-x64.tar.gz
$ ls
$ pwd
  2. Add the jdk environment variables.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220713707-447790173.png



$ su root
# vi /etc/pro
# vi /etc/profile
  Keep scrolling down.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220720691-1462291629.png
  Append at the very end.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220734723-1874947190.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220745144-1935425149.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  3. Check whether the jdk was installed successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220759973-1780112597.png



# source /etc/profile
# java -version
  If you see the output above, the jdk on node djt11 has been installed successfully.
  4. Then copy the jdk installation from djt11 to the other nodes.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220815894-1971851369.png



$ deploy.sh jdk1.7.0_79 /home/hadoop/app/ slave
  Here slave is the tag for djt12, djt13, djt14 and djt15.
  That is, djt11 is the master, and djt12, djt13, djt14 and djt15 are the slaves.
  However, deploy.sh lives in the /home/hadoop/tools directory, which is created later on,
  so this small step 4 is postponed until then.
  For now, nodes djt12, djt13, djt14 and djt15 simply repeat the jdk setup done on djt11. Repeat it!!!
  For djt12
  1. Upload the locally downloaded jdk1.7 to the /home/hadoop/app directory on node djt12.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220832176-1487849455.png



$ cd
$ file /bin/ls
$ cd /home/hadoop/app/
$ ls
$ su root
# yum -y install lrzsz
  The rz/sz commands (lrzsz) were installed successfully on djt12.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220840519-1531977753.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220854176-136456598.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220859582-1567551570.png



# su hadoop
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220906473-1073889697.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220912019-2127988549.png



$ ls
$ tar zxvf jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220921348-2030504090.png



$ ls
$ rm jdk-7u79-linux-x64.tar.gz
$ ls
$ pwd
  2. Add the jdk environment variables.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220937519-1194000180.png



$ su root
# vi /etc/profile
  Keep scrolling down.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220944082-1931945273.png
  Append at the very end.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908220952691-1741495175.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221000738-1245474431.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  Seeing this result means the jdk on node djt12 has been installed successfully.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221008004-425670274.png



# source /etc/profile
# java -version
  Nodes djt13, djt14 and djt15 simply repeat the jdk setup done on djt11. Repeat it!!!
  For djt13
  1. Upload the locally downloaded jdk1.7 to the /home/hadoop/app directory on node djt13.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221025207-1990347424.png



$ cd
$ file /bin/ls
$ cd /home/hadoop/app/
$ ls
$ su root
# yum -y install lrzsz
  The rz/sz commands (lrzsz) were installed successfully on djt13.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221032754-865221549.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221041269-127895345.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221049035-1962258071.png



# su hadoop
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221057582-597800136.png



$ ls
# su hadoop
$ tar zxvf jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221107598-369955062.png



$ ls
$ rm jdk-7u79-linux-x64.tar.gz
$ ls
$ pwd
  2. Add the jdk environment variables.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221121410-1045547118.png



$ su root
# vi /etc/profile
  Keep scrolling down.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221131129-1021668519.png
  Append at the very end.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221209410-2127860698.png
  JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
  CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
  PATH=$JAVA_HOME/bin:$PATH
  export JAVA_HOME CLASSPATH PATH
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221229598-608079892.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221422082-544409825.png



# source /etc/profile
# java -version
  If you see the output above, the jdk on node djt13 has been installed successfully.
  Nodes djt14 and djt15 simply repeat the jdk setup done on djt11. Repeat it!!!
  For djt14
  1. Upload the locally downloaded jdk1.7 to the /home/hadoop/app directory on node djt14.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221443223-864599140.png



$ cd
$ file /bin/ls
$ cd /home/hadoop/app/
$ ls
$ su root
# yum -y install lrzsz
  The rz/sz commands (lrzsz) were installed successfully on djt14.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221607894-137482408.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221618019-92952979.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221621863-749571104.png



# su hadoop
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221631457-312696425.png



$ ls
$ tar zxvf jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221640113-849172626.png



$ ls
$ rm jdk-7u79-linux-x64.tar.gz
$ ls
$ pwd
  2. Add the jdk environment variables.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908221716894-1482165692.png



$ su root
# vi /etc/profile
  Keep scrolling down.
  Append at the very end.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908222511629-562848446.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908222520160-1797290000.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908222724504-508425732.png
  If you see the output above, the jdk on node djt14 has been installed successfully.
  Node djt15 simply repeats the jdk setup done on djt11. Repeat it!!!
  For djt15
  1. Upload the locally downloaded jdk1.7 to the /home/hadoop/app directory on node djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908222940316-191869064.png



$ cd
$ file /bin/ls
$ cd /home/hadoop/app/
$ ls
$ su root
# yum -y install lrzsz
  The rz/sz commands (lrzsz) were installed successfully on djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908222954426-586399018.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223701348-1217392937.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223702863-1238878926.png



# su hadoop
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223737473-1651520545.png



$ ls
$ tar zxvf jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223745707-118200490.png



$ ls
$ rm jdk-7u79-linux-x64.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223754160-823676127.png



$ ls
$ pwd
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223805379-1247387085.png



$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223933769-735634388.png
  Append at the very end.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223943707-546038487.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908223954254-2087715774.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224005519-1910448873.png



# source /etc/profile
# java -version
  If you see the output above, the jdk on node djt15 has been installed successfully.
  At this point, the pre-installation jdk setup on djt11, djt12, djt13, djt14 and djt15 is complete.
  8. Build a 5-node hadoop distributed mini-cluster -- preparation (using the djt11 script tools before cluster installation on djt11, djt12, djt13, djt14, djt15)
  Using the script tools
  1. Create the /home/hadoop/tools directory on node djt11.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224056332-1526062733.png



$ cd
$ ls
$ mkdir tools
$ ls
$ pwd
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224105457-1839762170.png



$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224113535-1280119144.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:/home/hadoop/tools:$PATH
export JAVA_HOME CLASSPATH PATH
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224122269-42386840.png



$ source /etc/profile
  Upload the local script files to the /home/hadoop/tools directory. If you can understand these scripts you can also write them yourself; if not, just use them as they are and catch up on the related Linux knowledge bit by bit later.
  Now write the contents of runRemoteCmd.sh, deploy.sh and deploy.conf yourself.
  The contents of deploy.conf:



djt11,all,namenode,zookeeper,resourcemanager,
djt12,all,slave,namenode,zookeeper,resourcemanager,
djt13,all,slave,datanode,zookeeper,
djt14,all,slave,datanode,zookeeper,
djt15,all,slave,datanode,zookeeper,

  The contents of deploy.sh:




#!/bin/bash
#set -x
# deploy.sh: copy a file or directory to every host that carries the given tag in deploy.conf
if [ $# -lt 3 ]
then
  echo "Usage: ./deploy.sh srcFile(or Dir) descFile(or Dir) MachineTag"
  echo "Usage: ./deploy.sh srcFile(or Dir) descFile(or Dir) MachineTag confFile"
  exit
fi

src=$1
dest=$2
tag=$3
# use /home/hadoop/tools/deploy.conf unless a config file is given as the 4th argument
if [ 'a'$4'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$4
fi

if [ -f $confFile ]
then
  if [ -f $src ]
  then
    # source is a regular file: scp it to every host whose deploy.conf line contains ,$tag,
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'`
    do
      scp $src $server":"${dest}
    done
  elif [ -d $src ]
  then
    # source is a directory: scp -r it to every matching host
    for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'`
    do
      scp -r $src $server":"${dest}
    done
  else
    echo "Error: No source file exists"
  fi
else
  echo "Error: Please specify a config file, or run deploy.sh with deploy.conf in the same directory"
fi


The contents of runRemoteCmd.sh:



#!/bin/bash
#set -x
# runRemoteCmd.sh: run a command over ssh on every host that carries the given tag in deploy.conf
if [ $# -lt 2 ]
then
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit
fi

cmd=$1
tag=$2
# use /home/hadoop/tools/deploy.conf unless a config file is given as the 3rd argument
if [ 'a'$3'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$3
fi

if [ -f $confFile ]
then
  for server in `cat $confFile|grep -v '^#'|grep ','$tag','|awk -F',' '{print $1}'`
  do
    echo "*******************$server***************************"
    ssh $server "source /etc/profile; $cmd"
  done
else
  echo "Error: Please specify a config file, or run runRemoteCmd.sh with deploy.conf in the same directory"
fi
  
  How do you create the files above?
  Here, in hindsight, I recommend using Notepad++.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224158566-1493887038.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224233582-993175146.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224237816-1856167601.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224249441-1081957693.png
  In deploy.conf,
  the alias all stands for all nodes.
  You can see that we use djt11 and djt12 as the NameNodes.
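  For reference, a tag simply selects every host whose deploy.conf line contains that tag between commas. A quick sketch of the matching the two scripts perform (run in the directory holding deploy.conf); with the deploy.conf above, the slave tag resolves to:

$ grep -v '^#' deploy.conf | grep ',slave,' | awk -F',' '{print $1}'
djt12
djt13
djt14
djt15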
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224300910-1249021317.png



$ cat deploy.conf
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224308879-519206252.png



$ cat deploy.sh
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224313769-269297680.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224325785-2069070560.png



$ cat runRemoteCmd.sh
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224329676-944791313.png
  These three files make it much easier to build the hadoop distributed cluster; how to use them is shown in the operations that follow.
  If we want to run the scripts directly, we also need to give them execute permission.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224351582-1472623232.png



$ ls
$ deploy.sh
$ chmod u+x deploy.sh
$ chmod u+x runRemoteCmd.sh
  Check it:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224432832-1403882082.png



$ ls -al
  5. At the same time, we need to add the /home/hadoop/tools directory to the PATH.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224448551-867998459.png
  Append
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224634504-891341845.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224656160-467711390.png



PATH=/home/hadoop/tools:$PATH
export PATH
  or
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224714113-370699248.png



PATH=$JAVA_HOME/bin:/home/hadoop/tools:$PATH
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224728504-643428470.png



# source /etc/profile
# su hadoop
$ ls
  These tools only need to exist on djt11;
  on the other nodes, the PATH in /etc/profile naturally lacks the tools entry.
  djt11:
  PATH=$JAVA_HOME/bin:/home/hadoop/tools:$PATH
  djt12, djt13, djt14, djt15:
  PATH=$JAVA_HOME/bin:$PATH
  Test it:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224745926-748875715.png



$ deploy.sh
$ runRemoteCmd.sh
  Earlier we created the software installation directory /home/hadoop/app on each node by hand. Here is a quick way of creating it that is well worth learning:
  on node djt11, use the runRemoteCmd.sh script to create the software installation directory /home/hadoop/app on all the nodes in one go.



$ runRemoteCmd.sh "mkdir /home/hadoop/app" slave
or
$ sh runRemoteCmd.sh "mkdir /home/hadoop/app" slave
  We can then see on all nodes that the /home/hadoop/app directory has been created successfully.






  Since djt11 already has the directory, the command targets the slaves: djt12, djt13, djt14 and djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224831879-645091313.png
  If it had not been created beforehand, it would look like this:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224845363-190140110.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224852160-2114054196.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224858098-1765764814.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224903254-276175522.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224908176-1673236696.png



$ scp -r jdk1.7.0_79/ hadoop@djt12:/home/hadoop/app/
  This also works: copy jdk1.7.0_79 from djt11 directly to djt12, djt13, djt14 and djt15.
  Of course, we already set each node up individually earlier; here we pretend that djt12, djt13, djt14 and djt15 do not have the jdk yet.
  The drawback of doing it this way is that it is rather slow.
  Copying the jdk from djt11 to djt13, djt11 to djt14 and djt11 to djt15 works the same way and is not repeated here.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224918988-450445597.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224926379-1136373417.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224936301-1710205772.png



$ deploy.sh jdk1.7.0_79/ /home/hadoop/app/ slave
  Or,
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224944113-2112965327.png



$ sh deploy.sh jdk1.7.0_79/ /home/hadoop/app/ slave
  Usually the script is used for distribution; it is fast.
  You will see 4 of these distribution progress screens, one each for djt12, djt13, djt14 and djt15.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908224952816-1211326325.png
  Distribution finished
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225003488-15015734.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225010426-145541131.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225016004-1688603168.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225021160-1160373981.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225025410-1585613342.png
  Then set up the environment variables for the script tools.
  For djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225045519-831512916.png



$ ls
$ su root
$ vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225053035-2139052972.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:/home/hadoop/tools:$PATH
export JAVA_HOME CLASSPATH PATH
  For djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225107598-396095327.png



$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225115738-767826033.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  For djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225131863-521083199.png



$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225210441-460137438.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  For djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225224441-1444517300.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225234410-398058913.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  For djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225244207-169605262.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225252269-1527465608.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
  Here, check the jdk version information on djt11, djt12, djt13, djt14 and djt15 all at once.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225307676-712614274.png



$ runRemoteCmd.sh "java -version" all
  or
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225326941-2044702541.png



$ sh runRemoteCmd.sh "java -version" all
  9. Build a 5-node hadoop distributed mini-cluster -- preparation (ZooKeeper installation before cluster installation on djt11, djt12, djt13, djt14, djt15)
  ZooKeeper installation
  1. Upload the locally downloaded zookeeper-3.4.6.tar.gz package to the /home/hadoop/app directory on node djt11.



$ ls
$ rz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225433910-1926550821.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225446676-1765969320.png



$ tar zxvf zookeeper-3.4.6.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225452035-1300077313.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225522535-1078255454.png



$ ls
$ rm zookeeper-3.4.6.tar.gz
$ mv zookeeper-3.4.6 zookeeper
$ ls
  2. Modify the configuration file in ZooKeeper.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225538207-90317982.png



$ ls
$ cd zookeeper/
$ pwd
$ ls
$ cd conf/
$ ls
$ cp zoo_sample.cfg zoo.cfg
$ vi zoo.cfg
  This is the zoo_sample.cfg template; we need to change it to match our own cluster.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225548926-628436902.png



# example sakes.
dataDir=/home/hadoop/data/zookeeper/zkdata
dataLogDir=/home/hadoop/data/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
#server.
server.1=djt11:2888:3888
server.2=djt12:2888:3888
server.3=djt13:2888:3888
server.4=djt14:2888:3888
server.5=djt15:2888:3888
  Explanation:
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225602879-386498518.png



dataDir=/home/hadoop/data/zookeeper/zkdata      // data directory
dataLogDir=/home/hadoop/data/zookeeper/zkdatalog      // log directory
# the port at which the clients will connect
clientPort=2181   // default client port
#server.<service id>=<hostname>:<port for synchronization and communication between ZooKeeper nodes>:<election port (for electing the leader)>
server.1=djt11:2888:3888
server.2=djt12:2888:3888
server.3=djt13:2888:3888
server.4=djt14:2888:3888
server.5=djt15:2888:3888
  3. Use the remote command deploy.sh to copy the ZooKeeper installation directory to the other nodes.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225636707-1589995938.png



$ deploy.sh zookeeper /home/hadoop/app/slave
slave是djt12、djt13、djt14、djt15
$ pwd
$ cd /home/hadoop/app/
$ ls
$ deploy.sh zookeeper /home/hadoop/app/ slave
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225642082-1377356365.png
  djt11 has finished distributing the zookeeper installation directory to djt12, djt13, djt14, and djt15.
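  As with runRemoteCmd.sh, deploy.sh comes from the earlier script-tools step. For reference only, a minimal sketch of such a script might look like this (again assuming a deploy.conf with "hostname tag1 tag2 ..." lines):
#!/bin/bash
# deploy.sh <source> <target-dir> <tag>  -- a sketch only, not the original script.
# Copies <source> with scp -r to <target-dir> on every host in deploy.conf tagged <tag>.
src=$1
dest=$2
tag=$3
confFile=/home/hadoop/tools/deploy.conf    # assumed location of the host list

if [ $# -lt 3 ]; then
  echo "usage: deploy.sh <source> <target-dir> <tag>"
  exit 1
fi

grep -v '^#' "$confFile" | while read host tags; do
  if echo "$tags" | grep -qw "$tag"; then
    echo "*** copying $src to $host:$dest ***"
    scp -r "$src" "$host:$dest"
  fi
done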
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225656410-54731705.png
  Check djt12, which received the zookeeper directory from djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225705269-1657288713.png
  Check djt13, which received the zookeeper directory from djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225742957-1762432640.png
  Check djt14, which received the zookeeper directory from djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225803176-358858557.png
  Check djt15, which received the zookeeper directory from djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225812988-167658751.png
  4. Use the remote runRemoteCmd.sh script to create the directories on all nodes:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225827176-840523486.png



$ runRemoteCmd.sh "mkdir -p /home/hadoop/data/zookeeper/zkdata" all
// create the data directory
$ runRemoteCmd.sh "mkdir -p /home/hadoop/data/zookeeper/zkdata" all
$ cd ..
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata/
$ ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225835707-1161284804.png



# ls
# cd /home/hadoop/data/zookeeper/zkdata
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225844223-519777223.png



# ls
# cd /home/hadoop/data/zookeeper/zkdata
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225852207-2040103321.png



# ls
# cd /home/hadoop/data/zookeeper/zkdata
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225900676-237465159.png



# ls
# cd /home/hadoop/data/zookeeper/zkdata
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225917707-2121034965.png



$ runRemoteCmd.sh "mkdir -p /home/hadoop/data/zookeeper/zkdatalog" all
// create the log directory
$ pwd
$ cd
$ ls
$ cd app/
$ ls
$ pwd
$ runRemoteCmd.sh "mkdir -p /home/hadoop/data/zookeeper/zkdatalog" all
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225926426-2136637072.png



# pwd
# cd /home/hadoop/data/zookeeper/zkdatalog
# ls
# cd /home/hadoop/data/zookeeper/
# ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225934441-1380298139.png



# pwd
# cd /home/hadoop/data/zookeeper/zkdatalog
# ls
# cd /home/hadoop/data/zookeeper/
# ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908225942316-591797325.png



# pwd
# cd /home/hadoop/data/zookeeper/zkdatalog
# ls
# cd /home/hadoop/data/zookeeper/
# ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230004473-1515317257.png



# pwd
# cd /home/hadoop/data/zookeeper/zkdatalog
# ls
# cd /home/hadoop/data/zookeeper/
# ls
  5. Then on djt11, djt12, djt13, djt14, and djt15 respectively, go into the zkdata directory and create a file named myid whose contents are 1, 2, 3, 4, and 5 respectively. Here we take djt11 as an example.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230021348-408104517.png



$ pwd
$ cd
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata
$ pwd
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230029644-1224521038.png



$ pwd
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230035504-1819577177.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230052035-1443667657.png



1   // enter the number 1
This corresponds to
zookeeper id 1 for djt11
  Or: echo "1" > myid

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230057613-2082677509.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230106769-1863820308.png



$ pwd
$ ls
$ cd zkdata
$ ls
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230117894-1825630784.png



2   // enter the number 2
This corresponds to
zookeeper id 2 for djt12
  Or: echo "2" > myid

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230123738-927670487.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230132441-731413215.png



$ pwd
$ ls
$ cd zkdata
$ ls
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230145191-1006515479.png



3   // enter the number 3
This corresponds to
zookeeper id 3 for djt13
  Or: echo "3" > myid

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230154926-1747569313.png



$ pwd
$ ls
$ cd zkdata
$ ls
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230203223-344299342.png



$ pwd
$ ls
$ cd zkdata
$ ls
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230211441-1370546984.png



4   // enter the number 4
This corresponds to
zookeeper id 4 for djt14
  Or: echo "4" > myid

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230217051-866212579.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230225504-1931611092.png



$ pwd
$ ls
$ cd zkdata
$ ls
$ vi myid
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230234316-853670060.png



5   // enter the number 5
This corresponds to
zookeeper id 5 for djt15
  Or: echo "5" > myid

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230242129-1041601061.png
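  Because the myid value differs per node, it cannot be written with a single runRemoteCmd.sh call. A short loop run from djt11 can do all five at once; this is just a sketch and assumes the passwordless ssh set up earlier (including djt11 to itself):
i=1
for host in djt11 djt12 djt13 djt14 djt15; do
  ssh $host "echo $i > /home/hadoop/data/zookeeper/zkdata/myid"   # writes 1..5 on djt11..djt15
  i=$((i+1))
done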
  6. Configure the Zookeeper environment variables.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230332566-1744428885.png



$ pwd
$ cd /home/hadoop/app/
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230405629-677538134.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230423754-1423262408.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:/home/hadoop/tools:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230431988-1761023243.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230442254-1512920878.png



$ pwd
$ cd /home/hadoop/app/
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230447223-234277407.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230505410-1416096268.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230515144-1354298839.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230525394-682987550.png



$ pwd
$ cd /home/hadoop/app/
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230530801-1367097643.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230540801-1149819538.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230549301-2039902453.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230557957-295169436.png



$ pwd
$ cd /home/hadoop/app/
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230606879-1979993644.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230614301-2095337041.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230623941-1432589425.png



$ pwd
$ cd /home/hadoop/app/
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230629223-1594927560.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230640816-1889148804.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230649207-1447490781.png



# source /etc/profile
  7. Start Zookeeper on the djt11 node.
  Starting Zookeeper on the djt11 node
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230707988-1442577767.png



# pwd
$ ls
$ cd zookeeper/
$ pwd
$ ls
$ bin/zkServer.sh start
// start Zookeeper
$ jps
$ bin/zkServer.sh status
// check the Zookeeper running status
$ bin/zkServer.sh stop
// stop Zookeeper
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
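  As an optional extra check beyond jps, you can open the Zookeeper client shell (zkCli.sh ships in the same bin/ directory), type ls / at its prompt to confirm the server answers, and then quit to leave:
$ bin/zkCli.sh -server djt11:2181    // then type: ls /   and   quit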
  8. Use the runRemoteCmd.sh script to start Zookeeper on all the nodes.
  Start zookeeper on all nodes: djt11, djt12, djt13, djt14, djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230733113-1516948176.png



$ pwd
$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" zookeeper
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230744691-1653062469.png



# ls
# su hadoop
$ cd
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata
$ ls
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230829051-585370156.png



# ls
# su hadoop
$ cd
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata
$ ls
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908230940113-23082702.png



# ls
# su hadoop
$ cd
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata
$ ls
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231051051-917526494.png



# ls
# su hadoop
$ cd
$ ls
$ cd data/
$ ls
$ cd zookeeper/
$ ls
$ cd zkdata
$ ls
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
  9. Check whether the QuorumPeerMain process has started on all the nodes.
  Checking the QuorumPeerMain process on every node, which is the zookeeper process on each node.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231144410-1905103355.png



$ pwd
$ runRemoteCmd.sh "jps" zookeeper
10. Check the status of all Zookeeper nodes.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231157941-1939538160.png



$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh status" zookeeper

  10. Building a 5-node Hadoop distributed mini-cluster -- preparation (setting up the Hadoop cluster environment on djt11, djt12, djt13, djt14, djt15), continued

Hadoop cluster environment setup
  1 HDFS installation and configuration
  1. Upload the downloaded apache hadoop-2.6.0.tar.gz package to the /home/hadoop/app directory on the djt11 node.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231323144-1164295143.png



$ pwd
$ cd ..
$ ls
$ pwd
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231333113-372955805.png



$ tar zxvf hadoop-2.6.0.tar.gz
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231338691-141751382.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231350082-468044015.png



$ ls
$ rm hadoop-2.6.0.tar.gz
$ ls
$ mv hadoop-2.6.0 hadoop
$ ls
$



2. Switch to the /home/hadoop/app/hadoop/etc/hadoop/ directory and modify the configuration files.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231416629-940388194.png



$ ls
$ cd hadoop/
$ pwd
$ ls
$ cd etc/
$ cd hadoop/

Or
$ cd /home/hadoop/app/hadoop/etc/hadoop/

Configure HDFS
  1. Configure hadoop-env.sh
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231439285-224766885.png



$ vi hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908231545269-1714477340.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232335269-1969765406.png



export JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
  Alternatively, create the file yourself first and adapt it from the bundled template; learning to do this by hand is important!
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232346004-1170528526.png
  Then upload it with rz.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232350910-128129221.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232357613-28358248.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232401254-1569995015.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232411285-100575041.png
  2. Configure core-site.xml
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232425160-1312284253.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232435285-1018794524.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232450363-62734265.png
  Explanation



$ vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster1</value>
</property>
< this value is the default HDFS path, here named cluster1 >
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/data/tmp</value>
</property>
< hadoop's temporary directory; multiple directories can be configured separated by commas; the data directory must be created by us >
<property>
<name>ha.zookeeper.quorum</name>
<value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
< configure Zookeeper to manage HDFS >
</configuration>
  Use the following version



<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster1</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/data/tmp</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>       <value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
</configuration>

Note: do not include the descriptive text in the real file.
2181 is Zookeeper's default client port.
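  Since hadoop.tmp.dir points at /home/hadoop/data/tmp and, as noted in the explanation, that data directory has to be created by us, one way to create it on every node ahead of time is the same runRemoteCmd.sh helper (a suggestion, not a step from the original walkthrough):
$ runRemoteCmd.sh "mkdir -p /home/hadoop/data/tmp" all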

  Alternatively, create the file yourself first and adapt it from the bundled template; learning to do this by hand is important!
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232501410-798355339.png
  Then upload it with rz.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232509363-193390626.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232516957-2132158186.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232522723-852866718.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232528910-535497423.png
  3. Configure hdfs-site.xml
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232547691-1978835134.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232552551-1806126948.png
  Explanation



$ vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
< block replication factor of 3 >
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
< permission checking is disabled (false) >
<property>
<name>dfs.nameservices</name>
<value>cluster1</value>
</property>
< the nameservice; its value must match fs.defaultFS. With namenode HA there are two namenodes, and cluster1 is the single entry point exposed to clients >
<property>
<name>dfs.ha.namenodes.cluster1</name>
<value>djt11,djt12</value>
</property>
< the namenodes belonging to nameservice cluster1; these are logical names, any non-duplicate names will do >
<property>
<name>dfs.namenode.rpc-address.cluster1.djt11</name>
<value>djt11:9000</value>
</property>
< djt11 RPC address >
<property>
<name>dfs.namenode.http-address.cluster1.djt11</name>
<value>djt11:50070</value>
</property>
< djt11 HTTP address >
<property>
<name>dfs.namenode.rpc-address.cluster1.djt12</name>
<value>djt12:9000</value>
</property>
< djt12 RPC address >
<property>
<name>dfs.namenode.http-address.cluster1.djt12</name>
<value>djt12:50070</value>
</property>
< djt12 HTTP address >
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
< enable automatic failover >
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://djt11:8485;djt12:8485;djt13:8485;djt14:8485;djt15:8485/cluster1</value>
</property>
< specify the journalnodes >
<property>
<name>dfs.client.failover.proxy.provider.cluster1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
< the class responsible for performing the failover when cluster1 has a failure >
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/data/journaldata/jn</value>
</property>
< the local disk path where the JournalNode cluster stores its data while sharing the namenode edits >
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>10000</value>
</property>
< default fencing (split-brain) configuration >
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
</configuration>
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232614285-1060662955.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232621223-894747661.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232627035-781007044.png
  Use the following version:



<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>cluster1</value>
</property>
<property>
<name>dfs.ha.namenodes.cluster1</name>
<value>djt11,djt12</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster1.djt11</name>
<value>djt11:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster1.djt11</name>
<value>djt11:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address.cluster1.djt12</name>
<value>djt12:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.cluster1.djt12</name>
<value>djt12:50070</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://djt11:8485;djt12:8485;djt13:8485;djt14:8485;djt15:8485/cluster1</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.cluster1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/data/journaldata/jn</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>10000</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
</configuration>
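  After saving hdfs-site.xml you can sanity-check that Hadoop picks the values up with hdfs getconf; this is an optional extra step, run from any directory on djt11:
$ /home/hadoop/app/hadoop/bin/hdfs getconf -confKey dfs.nameservices              // expected: cluster1
$ /home/hadoop/app/hadoop/bin/hdfs getconf -confKey dfs.ha.namenodes.cluster1     // expected: djt11,djt12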
  Alternatively, create the file yourself first and adapt it from the bundled template; learning to do this by hand is important!
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232634629-466999594.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232640785-772496108.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232645504-80288316.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232655566-1489846808.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232700926-1578510370.png
  4. Configure slaves
  $ vi slaves
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232714285-1533736303.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232718832-1631335995.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232731926-1905155009.png



djt13
djt14
djt15
  djt11 and djt12 act as namenodes.
  djt13, djt14, and djt15 act as datanodes.
  Alternatively, create the file yourself first and adapt it from the bundled template; learning to do this by hand is important!
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232741738-1298890017.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232747269-1596201774.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232750894-489723893.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232755644-1143979534.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232800207-225910930.png
  5. Distribute the hadoop installation package to all the nodes.
  Four identical progress screens will appear; this takes a while, so please be patient…
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232813863-1843081880.png



$ deploy.sh hadoop /home/hadoop/app/ slave
$ pwd
$ cd /home/hadoop/app/
$ ls
$ deploy.sh hadoop /home/hadoop/app/ slave
  Distribution complete
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232824254-1361078743.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232833410-944393163.png



$ jps
$ cd /home/hadoop/app/
$ ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232842941-660826218.png



$ jps
$ cd /home/hadoop/app/
$ ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232923160-658615826.png



$ jps
$ cd /home/hadoop/app/
$ ls
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232931301-317706361.png



$ jps
$ cd /home/hadoop/app/
$ ls
  6. Set the environment variables for the hadoop installation.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232944676-1340600911.png



$ vi /etc/profile
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908232945223-1034752172.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233007160-1394898301.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
HADOOP_HOME=/home/hadoop/app/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:/home/hadoop/tools:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233012879-2066509380.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233025785-1995172609.png



$ pwd
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233031129-885396210.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233044832-2008175380.png




JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
HADOOP_HOME=/home/hadoop/app/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233054238-2133318420.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233101926-1946793146.png



$ pwd
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233107723-1276319954.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233118098-818739697.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
HADOOP_HOME=/home/hadoop/app/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233127707-360717417.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233137535-1255090627.png



$ pwd
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233142879-2087192133.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233156457-863325144.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
HADOOP_HOME=/home/hadoop/app/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233209988-21846050.png



# source /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233217832-862392817.png



$ pwd
$ ls
$ su root
# vi /etc/profile
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233223191-1433327200.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233319613-553417788.png



JAVA_HOME=/home/hadoop/app/jdk1.7.0_79
ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
HADOOP_HOME=/home/hadoop/app/hadoop
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH ZOOKEEPER_HOME HADOOP_HOME
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233330238-1361740099.png



# source /etc/profile

Startup order after the HDFS configuration is complete
  1. Start all the Zookeeper instances
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233345051-1103983789.png



$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" zookeeper
# su hadoop
$ ls
$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" zookeeper
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233355285-1110842007.png



$ runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh status" zookeeper
  Check the zookeeper startup status on each node
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233409863-1996326918.png



# su hadoop
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233417410-2144095231.png



# su hadoop
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233425801-1416819701.png



# su hadoop
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233434519-391590036.png



# su hadoop
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233515660-2122835354.png



# su hadoop
$ jps
  As you can see, QuorumPeerMain is the zookeeper process.
  2. Start the journalnode on each node
  Check the journalnode process on djt11
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233532160-1991397409.png



$ runRemoteCmd.sh "/home/hadoop/app/hadoop/sbin/hadoop-daemon.sh start journalnode" all
$ ls
$ runRemoteCmd.sh "/home/hadoop/app/hadoop/sbin/hadoop-daemon.sh start journalnode" all

Or, on each node individually: sbin/hadoop-daemon.sh start journalnode
  Check the journalnode process on djt12
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233525598-246339688.png
  Check the journalnode process on djt13
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233541238-1916084613.png
  Check the journalnode process on djt14
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233550644-1058287794.png
  Check the journalnode process on djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233600691-1975703190.png
  3. Run on the primary node (djt11)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233617941-2114563044.png



$ bin/hdfs namenode -format    // format the namenode
$ jps
$ bin/hdfs namenode -format
  The namenode on djt11 has been formatted
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233641019-396306688.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233703176-799845924.png



$ bin/hdfs zkfc -formatZK    // format the HA state in Zookeeper

  HA formatting complete
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233711707-949190979.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233746785-368198155.png



$ bin/hdfs namenode    // start the namenode (runs in the foreground)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908233751832-2101634228.png
  Don't worry: the process stays in the foreground here, which is correct, because it is waiting for the next step.
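  A side note (an alternative, not what this walkthrough does): the same namenode could instead be started as a background daemon, in which case there would be nothing to Ctrl+C in step 5:
$ sbin/hadoop-daemon.sh start namenode    // background alternative to the foreground bin/hdfs namenode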
  4. Run on the standby node (djt12)
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234002707-1017515739.png



$ bin/hdfs namenode -bootstrapStandby   // sync the metadata from the active namenode to the standby; this is also a namenode format for the standby
$ bin/hdfs namenode -bootstrapStandby
  The namenode on djt12 has been formatted
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234012598-1115490895.png
  5. Stop hadoop: press Ctrl+C on djt11 to stop the foreground namenode
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234023441-1899160740.png



$ runRemoteCmd.sh "/home/hadoop/app/hadoop/sbin/hadoop-daemon.sh stop journalnode" all      
// then stop the journalnode on each node

  A note: why can the HDFS processes be started with one command while the QuorumPeerMain process is still running?
  Because HDFS and Zookeeper are independent of each other.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234112644-342252427.png
  6. Start all the HDFS processes with one command
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234133285-2112918588.png



$ sbin/start-dfs.sh
After a successful start, stop one of the namenodes and then start it again to observe the failover behavior.
$ pwd
$ jps
$ sbin/start-dfs.sh
$ jps
4853 DFSZKFailoverController
4601 JournalNode
4403 NameNode
These are the processes started by HDFS.


2478 QuorumPeerMain
This is the process started by Zookeeper.

4926 Jps
This process (Jps) was already there.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234128691-1455761066.png



$ pwd
$ jps
3092 Jps
2851 JournalNode
2779 NameNode
3045 DFSZKFailoverController
1793 QuorumPeerMain
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234152098-22707249.png



$ pwd
$ jps
2273 Jps
2205 JournalNode
2119 DataNode
2205 QuorumPeerMain
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234205394-1758330440.png



$ pwd
$ jps
2140 Jps
2074 JournalNode
1988 DataNode
1522 QuorumPeerMain
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234214504-511640891.png



$ pwd
$ jps
2134 Jps
2066 JournalNode
1980 DataNode
1514 QuorumPeerMain
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234219129-1879610758.png
  7. Verify that the startup succeeded
  Check the namenode status through the web UI.
  http://djt11:50070
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234234113-1258103318.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234238535-1945136795.png
  http://djt11:50070/dfshealth.html#tab-overview
  http://djt12:50070
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234253723-401685556.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234259457-1766054436.png
  http://djt12:50070/dfshealth.html#tab-overview
  Or
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234313723-1480855486.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234320394-1596088124.png
  Upload a file to HDFS
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234349598-103994918.png



$ hdfs dfs -ls /    // list the HDFS root directory
$ vi djt.txt    // create a djt.txt file locally
hadoop dajiangtai
hadoop dajiangtai
hadoop dajiangtai
$ hdfs dfs -mkdir /test   // create a directory on hdfs
$ hdfs dfs -put djt.txt /test   // upload a file to hdfs
$ hdfs dfs -ls /test    // check whether djt.txt was uploaded successfully
  If the operations above complete without problems, HDFS is configured correctly.
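  A quick way to confirm the content actually landed in HDFS (an optional extra check) is to cat the file back:
$ hdfs dfs -cat /test/djt.txt    // should print the three "hadoop dajiangtai" lines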



$ hdfs dfs -ls /   
$ hdfs dfs -mkdir /test   // create a directory on hdfs
$ hdfs dfs -ls /   
$ ls
$ vi djt.txt    // create a djt.txt file locally
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234359488-533259059.png



hadoop dajiangtai
hadoop dajiangtai
hadoop dajiangtai
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234410707-761797084.png



$ hdfs dfs -ls /test    // check whether djt.txt was uploaded successfully
If the operations above complete without problems, HDFS is configured correctly.



http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234420019-1727475197.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234427082-301942693.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234434129-13809213.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234440019-117127103.png
  A note here: which namenode ends up active and which ends up standby is not fixed; it is decided by the election.
  If djt12 is active and djt11 is standby, and you want djt11 active and djt12 standby instead:
  kill the namenode process on djt12 and try it.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234456394-1156561296.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234502269-1685072932.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234506926-1496797089.png
  Then check again:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234517426-97271064.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234521785-131859051.png
  Now djt11 is active and djt12 is standby.
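  Instead of reading the web UI, the namenode HA state can also be queried from the command line; the IDs djt11 and djt12 here are the namenode names defined in dfs.ha.namenodes.cluster1 above:
$ bin/hdfs haadmin -getServiceState djt11    // prints active or standby
$ bin/hdfs haadmin -getServiceState djt12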

2 YARN installation and configuration
  1. Configure mapred-site.xml
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234543113-464012358.png



$ pwd
$ ls
$ cd etc/hadoop/
$ ls
$ cp mapred-site.xml.template mapred-site.xml
$ ls
  Explanation



$ vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
< specify YARN as the runtime for mapreduce; this is a difference from hadoop 1.x >
</configuration>
  
  http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234559144-675460420.png



$ pwd
$ vi mapred-site.xml

http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234607691-2133103046.png



<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
  2. Configure yarn-site.xml
  Explanation



$ vi yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.connect.retry-interval.ms</name>
<value>2000</value>
</property>
< retry/timeout interval >
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
< enable HA >
<property>
<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
< enable automatic failover >
<property>
<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarn-rm-cluster</value>
</property>
< name the yarn cluster yarn-rm-cluster >
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
< name the ResourceManagers rm1 and rm2 >
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>djt11</value>
</property>
< hostname of ResourceManager rm1 >
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>djt12</value>
</property>
< hostname of ResourceManager rm2 >
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
< enable resourcemanager recovery >
<property>
<name>yarn.resourcemanager.zk.state-store.address</name>
<value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
< Zookeeper addresses >
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
< Zookeeper addresses >
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>djt11:8032</value>
</property>
< rm1 address/port >
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name>
<value>djt11:8034</value>
</property>
< rm1 scheduler address/port >
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>djt11:8088</value>
</property>
< rm1 webapp address/port >
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>djt12:8032</value>
</property>
< rm2 address/port >
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>djt12:8034</value>
</property>
< rm2 scheduler address/port >
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>djt12:8088</value>
</property>
< rm2 webapp address/port >
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
< the shuffle service required to run MapReduce >
</configuration>
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234628316-1915731467.png



$ pwd
$ vi yarn-site.xml
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234714926-1247909065.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234722910-783430575.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234728848-1892689253.png
  Use the following version:



<configuration>
<property>
<name>yarn.resourcemanager.connect.retry-interval.ms</name>
<value>2000</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarn-rm-cluster</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>djt11</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>djt12</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.zk.state-store.address</name>
<value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>djt11:2181,djt12:2181,djt13:2181,djt14:2181,djt15:2181</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm1</name>
<value>djt11:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm1</name>
<value>djt11:8034</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm1</name>
<value>djt11:8088</value>
</property>
<property>
<name>yarn.resourcemanager.address.rm2</name>
<value>djt12:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address.rm2</name>
<value>djt12:8034</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address.rm2</name>
<value>djt12:8088</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Distribute the mapred-site.xml and yarn-site.xml just created on djt11 to djt12, djt13, djt14, and djt15
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234718723-67485726.png



$ pwd
$ deploy.sh mapred-site.xml /home/hadoop/app/hadoop/etc/hadoop/ slave
$ deploy.sh yarn-site.xml /home/hadoop/app/hadoop/etc/hadoop/ slave



  3. Start YARN
  1. Run on the djt11 node.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234801629-574284580.png



$ pwd
$ cd ..
$ cd ..
$ pwd
$ ls
$ sbin/start-yarn.sh

$ jps
4403 NameNode
4601 JournalNode
9380 Jps
2478 QuorumPeerMain
9318 ResourceManager
4853 DFSZKFailoverController
$

The YARN process is ResourceManager
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234810457-2030339459.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234819066-609836087.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234824269-94036011.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234829926-1034701249.png
  2. Run on the djt12 node.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234847769-1644988437.png



$ sbin/yarn-daemon.sh start resourcemanager
$ jps
6018 NameNode
2851 JournalNode
6411 Jps
3045 DFSZKFailoverController
1793 QuorumPeerMain
6384 ResourceManager
$
  3. Check the ResourceManager state
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234911973-2095044137.png



$ bin/yarn rmadmin -getServiceState rm1
$ bin/yarn rmadmin -getServiceState rm2

$ jps
12615 Jps
4601 JournalNode
11548 NameNode
2478 QuorumPeerMain
12499 DFSZKFailoverController
11067 ResourceManager
$ bin/yarn rmadmin -getServiceState rm1
$ bin/yarn rmadmin -getServiceState rm2
$
  That is, the ResourceManager on djt11 (rm1) is active,
  and the ResourceManager on djt12 (rm2) is standby.
  Also open the web UIs.
  http://djt11:8088
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234923426-480941694.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234928457-849045593.png
  
  http://djt12:8088
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234936301-961795858.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234943863-414015358.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908234949910-1111994561.png
  That is, the ResourceManager on djt11 (rm1) is active,
  and the ResourceManager on djt12 (rm2) is standby.
  Stop one of the resourcemanagers and then start it again, and watch how the web UI changes during the process.
  For example, if we kill the resourcemanager on djt11:
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235019301-1299083240.png



$ sbin/yarn-daemon.sh stop resourcemanager
$ jps
4601 JournalNode
12744 Jps
11548 NameNode
2478 QuorumPeerMain
12499 DFSZKFailoverController
11067 ResourceManager
$ pwd
/home/hadoop/app/hadoop
$ ls
$ sbin/yarn-daemon.sh stop resourcemanager
stopping resourcemanager
resourcemanager did not stop gracefully after 5 seconds: killing with kill -9
$ jps
4601 JournalNode
11548 NameNode
2478 QuorumPeerMain
12813 Jps
12499 DFSZKFailoverController
$



http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235030613-1935298539.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235037644-489736269.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235041879-341477008.png

  That is, if djt11 is shut down, djt12 takes over.
  If djt12 is shut down, djt11 takes over.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235055348-584139046.png
  Here, for convenience in the later steps, we bring djt11 back up and shut djt12 down; either way works.
  4. Wordcount example test
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235116160-1396587133.png



$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /test/djt.txt /test/out/
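  Once the job finishes, the counts can be read straight from HDFS; the part-r-00000 name assumes the default single-reducer output of the example job:
$ hdfs dfs -ls /test/out/
$ hdfs dfs -cat /test/out/part-r-00000    // expected: dajiangtai 3 and hadoop 3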
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235121988-689103753.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235129723-2146569813.png

At this point, kill the namenode on djt12 so that djt11 becomes active and djt12 becomes standby.


http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235140629-2142360552.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235147254-835888332.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235155223-430226039.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235200316-1738648616.png
  If the run above completes without exceptions, YARN is installed successfully.
  !!!   At this point, the 5-node Hadoop distributed cluster is fully set up !!!





With that, the 5-node Hadoop cluster is complete: we use Zookeeper to coordinate the cluster, and we have hot standby for both the namenode and the ResourceManager.
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908235222207-259992939.png
http://images2015.cnblogs.com/blog/855959/201609/855959-20160908205232363-241473072.png