[Experience Share] Detailed Hadoop Installation and Configuration

Posted on 2016-12-3 11:23:14
Hadoop version: hadoop-2.2.0-cdh5.0.0-beta-1
JDK version: jdk-7u40-linux-x64
Environment:
10.95.3.100 master1
10.95.3.101 master2
10.95.3.103 slave1
10.95.3.104 slave2
Note: this installation does not configure HDFS HA. The NameNode and SecondaryNameNode both run on master1, and the other three nodes act as DataNodes.
Installation steps (JDK installation omitted):
1. Add the hadoop user
   Log in to master1 as root.
   Add the user: useradd -d /home/hadoop -m hadoop
   Set a password for the hadoop user: passwd hadoop
2. Configure hosts
   Log in to each machine as root and edit /etc/hosts (vi /etc/hosts) as follows:

#127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.95.3.100              master1
10.95.3.101              master2
10.95.3.103              slave1
10.95.3.104              slave2


3. Configure passwordless SSH from master1 to each machine
   Log in to master1 as the hadoop user, go to /home/hadoop, and run:
   ssh-keygen -t rsa
   Press Enter at the passphrase prompt (do not set a passphrase). This generates the two files id_rsa and id_rsa.pub under /home/hadoop/.ssh.
   Then run, one at a time:
   ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master1
   ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@master2
   ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave1
   ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@slave2
This adds master1's public key to the authorized_keys file on each machine. Verify that you can ssh from master1 to every machine without being prompted for a password.
   Note the required permissions:
   sudo chmod 755 .ssh/
   sudo chmod 644 .ssh/authorized_keys
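The four ssh-copy-id invocations above all follow one pattern. A small sketch that generates the command for every node in the environment table (node names taken from above):

```python
# Generate the ssh-copy-id command for each node in the cluster
nodes = ['master1', 'master2', 'slave1', 'slave2']
for node in nodes:
    # Each command appends master1's public key to that node's authorized_keys
    print("ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@%s" % node)
```

Piping this output through a shell would run all four copies in one go, provided every node already has the hadoop user and password login enabled.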
4. Extract hadoop-2.2.0-cdh5.0.0-beta-1.tar.gz, using /dp/hadoop as the Hadoop install directory.
   Set the environment variables: vi ~/.bashrc

export HADOOP_HOME=/dp/hadoop
export HADOOP_PID_DIR=/dp/hadoop_pid_dir
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=/dp/hadoop
export PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$ANT_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$FLUME_HOME/bin:$SQOOP_HOME/bin:$OOZIE_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:.


5. Go to /dp/hadoop/etc/hadoop and edit the following configuration files.
core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master1</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>10080</value>
</property>
<property>
<name>fs.trash.checkpoint.interval</name>
<value>10080</value>
</property>
<property>
<name>topology.script.file.name</name>
<value>/dp/hadoop/etc/hadoop/rack.py</value>
</property>
<property>
<name>topology.script.number.args</name>
<value>6</value>
</property>
<property>
<name>hadoop.security.group.mapping</name>
<value>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</value>
</property>
<!--
<property>
<name>hadoop.native.lib</name>
<value>false</value>
<description>Should native hadoop libraries, if present, be used.</description>
</property>
-->
<!--
<property>
<name>ha.zookeeper.quorum</name>
<value>master:2181,slave2:2181,slave6:2181</value>
</property>      
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
-->
</configuration>
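fs.trash.interval and fs.trash.checkpoint.interval are specified in minutes; a quick check that the value above keeps deleted files in the trash for one week:

```python
# fs.trash.interval is in minutes; 10080 minutes = 7 days
trash_interval_min = 10080
days = trash_interval_min / 60.0 / 24.0
print(days)
```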


hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>16m</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/dp/data/hadoop</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>master1:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master1:50090</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>1000000</value>
</property>
<property>
<name>dfs.balance.bandwidthPerSec</name>
<value>104857600</value>
<description>
Specifies the maximum amount of bandwidth that each datanode
can utilize for the balancing purpose in term of
the number of bytes per second.
</description>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>/dp/hadoop/etc/hadoop/excludes</value>
<description>Names a file that contains a list of hosts that are
not permitted to connect to the namenode.  The full pathname of the
file must be specified.  If the value is empty, no hosts are
excluded.</description>
</property>

Note: the excludes file referenced above must be created manually.
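dfs.blocksize is set to 16m above, far below the usual 128 MB default of this Hadoop generation. An HDFS file occupies ceil(file_size / block_size) blocks, so small blocks multiply the block count the NameNode must track. A quick illustration with a hypothetical 100 MiB file:

```python
import math

block_size = 16 * 1024 * 1024   # dfs.blocksize = 16m, as configured above
file_size = 100 * 1024 * 1024   # hypothetical 100 MiB file
num_blocks = int(math.ceil(float(file_size) / block_size))
print(num_blocks)
```

With the 128 MB default the same file would occupy a single block.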
mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master1:19888</value>
</property>
<!--        <property>
<name>mapreduce.history.server.delegationtoken.renewer</name>
<value>true</value>
</property>
-->
<property>
<name>mapreduce.output.fileoutputformat.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.type</name>
<value>BLOCK</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
</configuration>


yarn-site.xml

<configuration>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master1:8031</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master1:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master1:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master1:8088</value>
</property>
<property>
<name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
<value>10000</value>
</property>
<property>
<description>Classpath for typical applications.</description>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,
$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,
$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/dp/data/yarn/local</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/dp/data/yarn/logs</value>
</property>
<property>
<description>Where to aggregate logs</description>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/dp/data/yarn/logs</value>
</property>
<property>
<name>yarn.app.mapreduce.am.staging-dir</name>
<value>/user</value>
</property>
<property>
<description>Amount of physical memory, in MB, that can be allocated
for containers.</description>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
</configuration>
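With yarn.nodemanager.resource.memory-mb = 2048, the number of minimum-size containers a node can run is bounded by the scheduler's minimum allocation (1024 MB is the usual default for yarn.scheduler.minimum-allocation-mb; that default is an assumption here, since the file above does not set it):

```python
node_memory_mb = 2048        # yarn.nodemanager.resource.memory-mb, as set above
min_allocation_mb = 1024     # assumed default of yarn.scheduler.minimum-allocation-mb
max_containers = node_memory_mb // min_allocation_mb
print(max_containers)
```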


6. Configure rack awareness: add the files rack.data and rack.py under /dp/hadoop/etc/hadoop.
rack.data

default /rack/default
10.95.3.101 master2 /rack/rack1
10.95.3.103 slave1  /rack/rack1
10.95.3.104 slave2  /rack/rack1


rack.py

#!/bin/env python
import sys, os, time

# Locate rack.data next to this script's real path
pwd = os.path.realpath(__file__)
rack_file = os.path.dirname(pwd) + "/rack.data"

# Each rack.data line: one or more host names/IPs followed by a rack path
rack_list = [l.strip().split() for l in open(rack_file).readlines()
             if len(l.strip().split()) > 1]
rack_map = {}
for item in rack_list:
    for host in item[:-1]:
        rack_map[host] = item[-1]
rack_map['default'] = rack_map.get('default', '/default/rack')

# Map every argument (host name or IP) to its rack, falling back to the default
rack_result = [rack_map.get(av, rack_map['default']) for av in sys.argv[1:]]
print ' '.join(rack_result)

# Log each invocation for debugging
f = open('/tmp/rack.log', 'a+')
f.writelines("[%s] %s\n" % (time.strftime("%F %T"), str(sys.argv)))
f.close()
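The lookup rack.py performs can be exercised in isolation. A minimal, self-contained re-implementation using the rack.data entries above (an illustration, not the script itself):

```python
# rack.data contents from above, inlined for illustration
RACK_DATA = """\
default /rack/default
10.95.3.101 master2 /rack/rack1
10.95.3.103 slave1  /rack/rack1
10.95.3.104 slave2  /rack/rack1
"""

def resolve_racks(rack_data, hosts):
    # Build host -> rack map; a line may list several names/IPs for one rack
    rack_map = {}
    for line in rack_data.splitlines():
        parts = line.split()
        if len(parts) > 1:
            for host in parts[:-1]:
                rack_map[host] = parts[-1]
    default = rack_map.get('default', '/default/rack')
    # Unknown hosts fall back to the default rack
    return [rack_map.get(h, default) for h in hosts]

# The NameNode passes host names or IPs as arguments
print(' '.join(resolve_racks(RACK_DATA, ['slave1', '10.95.3.104', '10.0.0.9'])))
```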


7. Edit the slaves file:

master2
slave1
slave2


8、将master1上配置好的hadoop拷贝到其他节点上去
9、执行start-dfs.sh启动hadoop,注意如果防火墙没关闭,有可能造成DN连接不上NN
10、执行hadoop dfsadmin -report命令,查看hadoop是否正常启动
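In a healthy cluster the report should show all three DataNodes alive. A sketch of checking this programmatically, using a hypothetical excerpt of the report (output format assumed from Hadoop 2.x):

```python
import re

# Hypothetical excerpt of `hadoop dfsadmin -report` output; with the three
# DataNodes from the slaves file up, we expect "3" here
report = """Configured Capacity: 52844687360 (49.22 GB)
Present Capacity: 45019357184 (41.93 GB)
Datanodes available: 3 (3 total, 0 dead)
"""
m = re.search(r"Datanodes available:\s*(\d+)", report)
print(int(m.group(1)))
```

If the count is lower than expected, check the firewall and the DataNode logs for connection errors to the NameNode.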
