[Experience Sharing] Hadoop SecondaryNameNode Backup and Recovery

  Synchronize the clocks on all servers:
  yum install ntp
  ntpdate ntp.fudan.edu.cn
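  If the cluster has more than a couple of machines, a small loop can push the same time source to every node. This is only a sketch: nn0001 and snn0001 are the hosts from the configuration below, while dn0001 and dn0002 are hypothetical DataNode names.
  # run from an admin host with passwordless ssh to every node;
  # dn0001/dn0002 are placeholders for your own DataNodes
  for host in nn0001 snn0001 dn0001 dn0002; do
      ssh "$host" "ntpdate ntp.fudan.edu.cn"
  done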
  hdfs-site.xml configuration
  If this property is not configured, Hadoop defaults to 0.0.0.0:50090. That is fine when the NameNode and SecondaryNameNode run on the same server, but if they are deployed on separate machines and the value is not set, Hadoop will look for the SecondaryNameNode on the NameNode host itself and the checkpoint will fail.
<property>
        <name>dfs.secondary.http.address</name>
        <value>snn0001:50090</value>
</property>

<property>
        <name>dfs.http.address</name>
        <value>nn0001:50070</value>
</property>
  1. edits and fsimage
  The NameNode records every user operation on the FileSystem in the edits log file.
  Each time the NameNode restarts, it first reads the HDFS state from the fsimage image file and then merges the edits log into fsimage.
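  On this generation of Hadoop (0.20.x/1.x) both files sit under ${dfs.name.dir}/current. A quick way to look at them, assuming dfs.name.dir points at /hadoop/dfs/namenode as in the recovery section below:
  # list the NameNode metadata directory
  ls -lh /hadoop/dfs/namenode/current/
  # typically shows: VERSION  edits  fsimage  fstime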
  2. checkpoint
  Two parameters control the SecondaryNameNode checkpoint:
  fs.checkpoint.period is the interval between two consecutive checkpoints; the default is 3600 s.
  fs.checkpoint.size is the maximum size of the edits file; once it is exceeded a checkpoint is triggered regardless of the interval. The default is 64 MB. (Both parameters can be set explicitly, as shown below.)
  The SecondaryNameNode can be started via the NameNode's start-dfs.sh,
  or run manually with ./hadoop secondarynamenode -checkpoint or ./hadoop secondarynamenode -checkpoint force.
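  For reference, both parameters can be written out explicitly; on this Hadoop generation they belong in core-site.xml, and the values below simply spell out the defaults.
<property>
        <name>fs.checkpoint.period</name>
        <value>3600</value>
</property>

<property>
        <name>fs.checkpoint.size</name>
        <value>67108864</value>
</property>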
  3. Recovering the data:
  Set up a server configured the same way as the NameNode.
  Create the dfs.name.dir directory. Note: it must not contain a legal fsimage, otherwise the import will fail; the NameNode checks that the image under fs.checkpoint.dir is consistent but never modifies it.
  Note: dfs.name.dir and ${hadoop.tmp.dir}/dfs/namesecondary can be backed up over NFS.
  Create the directories /hadoop/dfs/namenode and /hadoop/dfs/secondarynamenode.
  Run ./hadoop namenode -importCheckpoint; the NameNode reads the checkpoint files and saves them into dfs.name.dir (the whole flow is sketched below).
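  A sketch of the recovery flow on the replacement machine, assuming dfs.name.dir is /hadoop/dfs/namenode and fs.checkpoint.dir is /hadoop/dfs/secondarynamenode as above. The scp of the namesecondary directory is just one way to get the checkpoint over (an NFS mount works equally well), and /hadoop/tmp stands in for ${hadoop.tmp.dir}.
  # 1. empty metadata directory; it must NOT contain a legal fsimage
  mkdir -p /hadoop/dfs/namenode
  # 2. bring the SecondaryNameNode's checkpoint into fs.checkpoint.dir
  mkdir -p /hadoop/dfs/secondarynamenode
  scp -r snn0001:/hadoop/tmp/dfs/namesecondary/* /hadoop/dfs/secondarynamenode/
  # 3. import the checkpoint; the NameNode writes it into dfs.name.dir
  ./hadoop namenode -importCheckpoint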
  The following error may appear:
  12/01/24 00:02:56 WARN mortbay.log: /getimage: java.io.IOException: GetImage failed. java.net.ConnectException: Connection refused
  This happens when dfs.secondary.http.address has not been configured in hdfs-site.xml; the configuration shown above takes care of it.
  The next error occurs because the proportion of intact blocks has not reached the 0.9990 threshold Hadoop requires, so the NameNode stays in safe mode.
  [screenshot: the NameNode safe-mode warning described above]
 
  Set dfs.replication to 2, reformat the NameNode, upload the data again, and then rerun ./hadoop namenode -importCheckpoint.
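  While sorting out the replication problem it can help to check the safe-mode state directly. Note that forcing the NameNode out of safe mode does not repair missing replicas; it only lifts the read-only lock.
  ./hadoop dfsadmin -safemode get     # report whether safe mode is ON or OFF
  ./hadoop dfsadmin -safemode leave   # force the NameNode out of safe mode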
  
 
  In 0.21.0, checkpointing can also be done with a Checkpoint Node or a Backup Node.
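  In that release the two roles are started through the hdfs script rather than the secondarynamenode command; roughly (check the 0.21.0 docs for the exact invocation on your build):
  bin/hdfs namenode -checkpoint   # start a Checkpoint Node
  bin/hdfs namenode -backup       # start a Backup Node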
  Below is the official documentation's description of the SecondaryNameNode:
  The NameNode stores modifications to the file system as a log appended to a native file system file (edits). When a NameNode starts up, it reads HDFS state from an image file (fsimage) and then applies edits from the edits log file. It then writes new HDFS state to the fsimage and starts normal operation with an empty edits file. Since NameNode merges fsimage and edits files only during start up, the edits log file could get very large over time on a busy cluster. Another side effect of a larger edits file is that next restart of NameNode takes longer.
  The secondary NameNode merges the fsimage and the edits log files periodically and keeps edits log size within a limit. It is usually run on a different machine than the primary NameNode since its memory requirements are on the same order as the primary NameNode. The secondary NameNode is started by bin/start-dfs.sh on the nodes specified in conf/masters file.
  The start of the checkpoint process on the secondary NameNode is controlled by two configuration parameters.

  • fs.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints, and
  • fs.checkpoint.size, set to 64MB by default, defines the size of the edits log file that forces an urgent checkpoint even if the maximum checkpoint delay is not reached.
  The secondary NameNode stores the latest checkpoint in a directory which is structured the same way as the primary NameNode's directory. So that the check pointed image is always ready to be read by the primary NameNode if necessary.
  The latest checkpoint can be imported to the primary NameNode if all other copies of the image and the edits files are lost. In order to do that one should:

  • Create an empty directory specified in the dfs.name.dir configuration variable;
  • Specify the location of the checkpoint directory in the configuration variable fs.checkpoint.dir;
  • and start the NameNode with -importCheckpoint option.
  The NameNode will upload the checkpoint from the fs.checkpoint.dir directory and then save it to the NameNode directory(s) set in dfs.name.dir. The NameNode will fail if a legal image is contained in dfs.name.dir. The NameNode verifies that the image in fs.checkpoint.dir is consistent, but does not modify it in any way.
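  In configuration terms, the two directories the procedure refers to would look roughly like this, reusing the example paths from step 3 above (dfs.name.dir goes in hdfs-site.xml and fs.checkpoint.dir in core-site.xml on this generation of Hadoop):
<!-- hdfs-site.xml -->
<property>
        <name>dfs.name.dir</name>
        <value>/hadoop/dfs/namenode</value>
</property>

<!-- core-site.xml -->
<property>
        <name>fs.checkpoint.dir</name>
        <value>/hadoop/dfs/secondarynamenode</value>
</property>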
