Hadoop 0.20.2 Fully Distributed Cluster Setup
Three servers, with the following IPs:
192.168.11.131
192.168.11.132
192.168.11.133
Set the hostname on each server.
master:
# hostnamectl set-hostname master
Set the other two to slave1 and slave2 in the same way.
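The later steps address the nodes by hostname, which assumes each server can resolve `master`, `slave1`, and `slave2`. One way to wire that up (run as root on all three servers; the mapping uses the IPs listed above):

```shell
# append the hostname mappings to /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
192.168.11.131 master
192.168.11.132 slave1
192.168.11.133 slave2
EOF
```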
Disable SELinux and the firewall on every server:
# vi /etc/sysconfig/selinux
SELINUX=enforcing --> SELINUX=disabled
# setenforce 0
# systemctl stop firewalld
# systemctl disable firewalld
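The interactive edit above can also be applied non-interactively; a sketch, assuming the stock CentOS 7 `SELINUX=enforcing` line in the config file:

```shell
# run as root on each server
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux  # persistent, takes effect on reboot
setenforce 0                                                            # switch to permissive immediately
systemctl stop firewalld
systemctl disable firewalld
```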
Replace the yum repository:
# mkdir apps
Upload wget-1.14-15.el7.x86_64.rpm to the server, then install it:
# rpm -ivh wget-1.14-15.el7.x86_64.rpm
# cd /etc/yum.repos.d/
# wget http://mirrors.aliyun.com/repo/Centos-7.repo
# mv Centos-7.repo CentOS-Base.repo
# scp CentOS-Base.repo root@192.168.11.132:/etc/yum.repos.d/
# scp CentOS-Base.repo root@192.168.11.133:/etc/yum.repos.d/
Run on every server:
# yum clean all
# yum makecache
# yum update
NTP time synchronization:
master acts as the NTP server; configure it as follows:
# yum install -y ntp
NTP server:
Set the time on master manually first:
# date -s "2018-05-27 23:03:30"
# vi /etc/ntp.conf
Below the commented restrict line, add two lines:
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 11
Comment out the default pool servers:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# systemctl start ntpd.service
# systemctl enable ntpd.service
slave1 and slave2 act as NTP clients; configure them as follows:
# vi /etc/ntp.conf
As before, add two lines below the commented restrict line:
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
server 192.168.11.131
fudge 127.127.1.0 stratum 11
Comment out the same four default pool servers:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# systemctl start ntpd.service
# systemctl enable ntpd.service
Syncing the time fails:
# ntpdate 192.168.11.131
25 Jun 07:39:15 ntpdate: the NTP socket is in use, exiting
Fix:
# lsof -i:123
-bash: lsof: command not found
# yum install -y lsof
# lsof -i:123
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
ntpd    1819  ntp   16u  IPv4  33404      0t0  UDP *:ntp
ntpd    1819  ntp   17u  IPv6  33405      0t0  UDP *:ntp
ntpd    1819  ntp   18u  IPv4  33410      0t0  UDP localhost:ntp
ntpd    1819  ntp   19u  IPv4  33411      0t0  UDP slave1:ntp
ntpd    1819  ntp   20u  IPv6  33412      0t0  UDP localhost:ntp
ntpd    1819  ntp   21u  IPv6  33413      0t0  UDP slave1:ntp
# kill -9 1819
Update the time again:
# ntpdate 192.168.11.131
24 Jun 23:37:27 ntpdate: step time server 192.168.11.131 offset -28828.363808 sec
# date
Sun Jun 24 23:37:32 CST 2018
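Instead of re-running `ntpdate` by hand, the slaves could sync periodically; a cron sketch (my addition, not part of the original setup, and only workable if ntpd is not running on the slave, since ntpdate cannot bind port 123 otherwise, as the error above showed):

```shell
# on slave1/slave2: crontab -e, then add an hourly sync against master
0 * * * * /usr/sbin/ntpdate 192.168.11.131 >/dev/null 2>&1
```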
useradd:
# groupadd hduser
# useradd -g hduser hduser
# passwd hduser
Passwordless SSH authentication:
Generate authorized_keys on every node:
# su hduser
$ cd
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:KfyLZTsN3U89CbFAoOsrkI9YRz3rdKR4vr/75R1A7eE hduser@master
The key's randomart image is:
+-------+
| .o. |
| .. ..|
| .. ..oo |
| o o.o.oo .|
| o +.S. . ..E.|
| + o.B... . oo.|
|o = =.=o + ..|
| . . o *oo. o o .|
| oo==+. . . |
+---------+
$ cd .ssh/
$ cp id_rsa.pub authorized_keys
All nodes then authenticate each other:
master:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave2
Verify:
$ ssh slave1
Last failed login: Wed Jun 27 04:55:44 CST 2018 from 192.168.11.131 on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Wed Jun 27 04:50:05 2018
$ exit
logout
Connection to slave1 closed.
$ ssh slave2
Last login: Wed Jun 27 04:51:53 2018
$
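slave1 and slave2 repeat the same exchange below; a loop sketch that any node can run against all peers (hostnames from this cluster; ssh-copy-id prompts once for each peer's hduser password):

```shell
# run as hduser on each node; ssh-copy-id is idempotent,
# so re-running it does not duplicate keys
for host in master slave1 slave2; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "hduser@$host"
done
```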
slave1:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@master
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave2
slave2:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@master
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave1
Upload the packages:
$ cd src
$ ll
total 356128
-rw-r--r-- 1 root root  44575568 Jun 16 17:24 hadoop-0.20.2.tar.gz
-rw-r--r-- 1 root root 288430080 Mar 16  2016 jdk1.7.0_79.tar
Configure the JDK:
$ tar -xf jdk1.7.0_79.tar -C ..
$ cd ..
$ vi .bashrc
Add:
export JAVA_HOME=/home/hduser/jdk1.7.0_79
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
$ source .bashrc
$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
Configure the other two nodes the same way.
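"The same way" can be scripted from master; a sketch, assuming the JDK was extracted to ~/jdk1.7.0_79 and the passwordless SSH set up earlier works:

```shell
# copy the extracted JDK and the shell profile to both slaves
for host in slave1 slave2; do
  scp -r ~/jdk1.7.0_79 "$host":~/
  scp ~/.bashrc "$host":~/
done
```

After copying, run `source .bashrc` and `java -version` on each slave to confirm.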
Configure Hadoop:
Extract on every node:
$ tar -zxf hadoop-0.20.2.tar.gz -C ..
master:
$ pwd
/home/hduser/hadoop-0.20.2/conf
$ vi hadoop-env.sh
export JAVA_HOME=/home/hduser/jdk1.7.0_79
$ vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
$ vi masters
#localhost
master
$ vi slaves
#localhost
slave1
slave2
Copy the configuration files to the other two nodes:
$ scp hadoop-env.sh slave1:~/hadoop-0.20.2/conf/
$ scp core-site.xml slave1:~/hadoop-0.20.2/conf/
$ scp hdfs-site.xml slave1:~/hadoop-0.20.2/conf/
$ scp mapred-site.xml slave1:~/hadoop-0.20.2/conf/
$ scp masters slave1:~/hadoop-0.20.2/conf/
$ scp slaves slave1:~/hadoop-0.20.2/conf/
$ scp hadoop-env.sh slave2:~/hadoop-0.20.2/conf/
$ scp core-site.xml slave2:~/hadoop-0.20.2/conf/
$ scp hdfs-site.xml slave2:~/hadoop-0.20.2/conf/
$ scp mapred-site.xml slave2:~/hadoop-0.20.2/conf/
$ scp masters slave2:~/hadoop-0.20.2/conf/
$ scp slaves slave2:~/hadoop-0.20.2/conf/
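The twelve scp commands above collapse into a nested loop; a sketch run from the conf directory on master:

```shell
# push every changed conf file to both slaves
cd ~/hadoop-0.20.2/conf
for host in slave1 slave2; do
  for f in hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml masters slaves; do
    scp "$f" "$host:~/hadoop-0.20.2/conf/"
  done
done
```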
Format the filesystem:
$ cd ../bin
$ ./hadoop namenode -format
Start the services:
$ ./start-all.sh
On master:
$ jps
1681 JobTracker
1780 Jps
1618 SecondaryNameNode
1480 NameNode
On slave1:
$ jps
1544 Jps
1403 DataNode
1483 TaskTracker
On slave2:
$ jps
1494 TaskTracker
1414 DataNode
1555 Jps
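With all daemons up, HDFS can be smoke-tested from master. These are stock hadoop 0.20.2 commands, though the `/input` path is my own example, not from the original:

```shell
$ cd ~/hadoop-0.20.2/bin
$ ./hadoop dfsadmin -report          # should list 2 live datanodes (slave1, slave2)
$ ./hadoop fs -mkdir /input          # example directory
$ ./hadoop fs -put ../conf/core-site.xml /input/
$ ./hadoop fs -ls /input             # the uploaded file should appear
```

The NameNode web UI at http://master:50070 and the JobTracker UI at http://master:50030 give the same health picture in a browser.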