2018-07-09 installment: Expanding Hadoop from Single-Node Pseudo-Distributed to a Multi-Node Distributed Cluster
Note: this installment follows on from the 2018-07-08 installment, "Hadoop Single-Node Pseudo-Distributed Cluster Configuration" [compiled by the author from multiple sources and repeatedly verified].
1. Server Preparation
-- Add two more servers
IP Address       Hostname           Remarks
192.168.1.201    hadoop-server01    existing
192.168.1.202    hadoop-server02    new
192.168.1.203    hadoop-server03    new
2. Configure the JDK
# mkdir -p /usr/local/apps
# ll /usr/local/apps/
total 4
drwxr-xr-x. 8 uucp 143 4096 Apr 10 2015 jdk1.7.0_80
# pwd
/usr/local/apps/jdk1.7.0_80/bin
# vi /etc/profile
export JAVA_HOME=/usr/local/apps/jdk1.7.0_80
export PATH=$PATH:$JAVA_HOME/bin
# source /etc/profile
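The two profile lines can be sanity-checked before touching the real /etc/profile. A minimal sketch, using a throwaway temp file with the same exports (the JDK path mirrors the install location above; adjust it to yours):

```shell
# Write the same exports to a disposable profile and source it,
# then confirm JAVA_HOME resolves as expected.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export JAVA_HOME=/usr/local/apps/jdk1.7.0_80
export PATH=$PATH:$JAVA_HOME/bin
EOF
. "$profile"
echo "$JAVA_HOME"
```

On the real servers, `java -version` after `source /etc/profile` gives the same confirmation.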
3. Configure Passwordless SSH Login
Run the following commands on hadoop-server01:
# ssh-copy-id hadoop-server02
# ssh-copy-id hadoop-server03
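`ssh-copy-id` assumes hadoop-server01 already has a key pair (one was presumably created during the pseudo-distributed setup; otherwise run `ssh-keygen` first). As a hypothetical helper, the per-node invocations can be generated from a host list and reviewed before running (pipe the output to `sh` to execute them; `new_nodes` is a name introduced here for illustration):

```shell
# Print one ssh-copy-id command per new node so the list can be
# reviewed first; each will prompt once for that node's password.
new_nodes="hadoop-server02 hadoop-server03"
for host in $new_nodes; do
  echo "ssh-copy-id $host"
done
```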
4. Copy the Hadoop Installation Directory to the Other Servers
# scp -r /usr/local/apps/hadoop-2.4.1/ hadoop-server02:/usr/local/apps/
# scp -r /usr/local/apps/hadoop-2.4.1/ hadoop-server03:/usr/local/apps/
-- On hadoop-server02 and hadoop-server03, delete the tmp/ directory under the Hadoop installation (it holds stale HDFS metadata from the pseudo-distributed setup)
-- on hadoop-server02
# cd /usr/local/apps/hadoop-2.4.1
# rm -rf tmp/
-- on hadoop-server03
# cd /usr/local/apps/hadoop-2.4.1/
# rm -rf tmp/
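The copy-then-clean steps above repeat once per new node, so they generalize to a loop. A sketch that prints each command (scp plus the follow-up tmp/ cleanup over ssh) for review rather than running it; pipe to `sh` to execute:

```shell
# Emit the scp and remote cleanup commands for every new node.
for host in hadoop-server02 hadoop-server03; do
  echo "scp -r /usr/local/apps/hadoop-2.4.1/ $host:/usr/local/apps/"
  echo "ssh $host rm -rf /usr/local/apps/hadoop-2.4.1/tmp"
done
```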
5. Edit the slaves File
-- On node hadoop-server01
# cd /usr/local/apps/hadoop-2.4.1/etc/hadoop/
# cat slaves
hadoop-server01
hadoop-server02
hadoop-server03
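The slaves file can be regenerated mechanically from the host list. A sketch that writes to a temp file (`slaves_file` is a name introduced here; the real target is /usr/local/apps/hadoop-2.4.1/etc/hadoop/slaves):

```shell
# Build the slaves file listing every DataNode host, one per line,
# then report the line count as a quick check.
slaves_file=$(mktemp)
cat > "$slaves_file" <<'EOF'
hadoop-server01
hadoop-server02
hadoop-server03
EOF
wc -l < "$slaves_file"
```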
6. Configure the Environment for the Startup Scripts
-- On node hadoop-server01
# vi /etc/profile
export JAVA_HOME=/usr/local/apps/jdk1.7.0_80
export PATH=$PATH:$JAVA_HOME/bin
unset i
unset -f pathmunge
export HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# source /etc/profile
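Adding HADOOP_HOME's bin and sbin to PATH is what lets start-dfs.sh and start-yarn.sh run without full paths. A sketch that verifies the exports take effect, exercised against a throwaway file with the same lines rather than /etc/profile itself:

```shell
# Source the Hadoop exports from a disposable profile and confirm
# the sbin directory landed on PATH.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
. "$profile"
case ":$PATH:" in
  *":$HADOOP_HOME/sbin:"*) echo "PATH ok" ;;
  *) echo "PATH missing $HADOOP_HOME/sbin" ;;
esac
```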
7. Start the Services
-- node 1
# start-dfs.sh
# start-yarn.sh
# jps
4352 ResourceManager
4209 SecondaryNameNode
4062 DataNode
4634 NodeManager
3943 NameNode
-- node 2
# jps
2731 NodeManager
2631 DataNode
-- node 3
# jps
2703 NodeManager
2603 DataNode
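The jps listings above can be checked mechanically instead of by eye. A sketch with the node-1 output embedded as sample text (`sample` and `up` are names introduced here; on a live node, substitute `"$(jps)"` for the sample):

```shell
# Count how many of the expected node-1 daemons appear in the listing.
sample='4352 ResourceManager
4209 SecondaryNameNode
4062 DataNode
4634 NodeManager
3943 NameNode'
up=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  printf '%s\n' "$sample" | grep -q " $d\$" && up=$((up + 1))
done
echo "$up of 5 daemons up"
```

On nodes 2 and 3 only DataNode and NodeManager are expected, matching their listings above.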