Hadoop 2.7.0 Detailed Installation Guide
1. Hostnames and hosts file: do this on all three machines. Set each machine's name in /etc/hostname, and map the names to IPs in /etc/hosts. The cluster layout:
Master 192.168.0.182
Slave1 192.168.0.183
Slave2 192.168.0.184
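For reference, the matching /etc/hosts entries on every node look like this (IP first, then hostname):
192.168.0.182 Master
192.168.0.183 Slave1
192.168.0.184 Slave2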
2. Passwordless SSH login
Generate a key pair: ssh-keygen -t rsa -P ''
Push the public key: ssh-copy-id user@ip_address
Do Master-to-Slave first.
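A concrete run on the Master, assuming the cluster account is root (substitute your actual user):
ssh-keygen -t rsa -P ''
ssh-copy-id root@192.168.0.183
ssh-copy-id root@192.168.0.184
Afterwards, ssh root@192.168.0.183 should log in without prompting for a password.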
3. Install the JDK: simply unpack the download and set the environment variables (do this on the Master and all Slaves)
(1) Download "jdk-7u79-linux-x64.gz" and put it in /home/java (any directory works)
(2) Unpack it: tar -zxvf jdk-7u79-linux-x64.gz
(3) Edit /etc/profile and append:
export JAVA_HOME=/home/java/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
(4) Apply the changes: source /etc/profile
(5) Run java -version to confirm the installation
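If the variables are picked up, java -version should print something like:
java version "1.7.0_79"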
4. Install Hadoop 2.7.0: unpack it only on the Master, then copy the result to the Slaves
(1) Download "hadoop-2.7.0.tar.gz" and put it in /home/hadoop
(2) Unpack it: tar -xzvf hadoop-2.7.0.tar.gz
(3) Under /home/hadoop, create the data directories that the configuration below refers to: tmp, dfs, dfs/name and dfs/data
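A single command that creates them all:
mkdir -p /home/hadoop/tmp /home/hadoop/dfs/name /home/hadoop/dfs/data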
5. Configure core-site.xml under /home/hadoop/hadoop-2.7.0/etc/hadoop:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.182:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
6. Configure hdfs-site.xml under /home/hadoop/hadoop-2.7.0/etc/hadoop:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.0.182:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
7. Configure mapred-site.xml under /home/hadoop/hadoop-2.7.0/etc/hadoop:
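This file does not exist in a fresh unpack; create it from the bundled template first (run inside /home/hadoop/hadoop-2.7.0/etc/hadoop):
cp mapred-site.xml.template mapred-site.xml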
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.0.182:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.0.182:19888</value>
    </property>
</configuration>
8. Configure yarn-site.xml under /home/hadoop/hadoop-2.7.0/etc/hadoop:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.0.182:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.0.182:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.0.182:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.0.182:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.0.182:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
9. Set JAVA_HOME in hadoop-env.sh and yarn-env.sh under /home/hadoop/hadoop-2.7.0/etc/hadoop; without it the daemons will not start:
export JAVA_HOME=/home/java/jdk1.7.0_79
10. Edit the slaves file under /home/hadoop/hadoop-2.7.0/etc/hadoop: delete the default localhost and add the two Slave nodes:
192.168.0.183
192.168.0.184
11. Copy the configured Hadoop tree to the same location on each node with scp:
scp -r /home/hadoop 192.168.0.183:/home/
scp -r /home/hadoop 192.168.0.184:/home/
12. Start Hadoop on the Master; the Slave daemons start automatically. Change into /home/hadoop/hadoop-2.7.0
(1) Format the NameNode: bin/hdfs namenode -format
(2) Start everything: sbin/start-all.sh, or start the pieces separately: sbin/start-dfs.sh and sbin/start-yarn.sh
(3) To stop: sbin/stop-all.sh
(4) Run jps to verify; the expected processes are listed below
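With everything running, jps on the Master should show NameNode, SecondaryNameNode and ResourceManager, while jps on each Slave should show DataNode and NodeManager.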
13. Web access: open the required ports first, or simply turn the firewall off
(1) Stop the firewall: systemctl stop firewalld.service
(2) Open http://192.168.0.182:8088/ in a browser (YARN ResourceManager UI)
(3) Open http://192.168.0.182:50070/ in a browser (HDFS NameNode UI)
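Note that systemctl stop firewalld.service only lasts until the next reboot; also run systemctl disable firewalld.service if the firewall should stay off permanently.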