
[Experience Sharing] Hadoop Study Notes 33: HBase Bulk Load Usage (Translation)

Posted on 2016-12-8 07:18:34
1. Source
  http://hbase.apache.org/book.html#arch.bulk.load
Quoted below:
9.8. Bulk Loading
9.8.1. Overview
HBase includes several methods of loading data into tables. The most straightforward method is to either use the TableOutputFormat class from a MapReduce job, or use the normal client APIs; however, these are not always the most efficient methods.

The bulk load feature uses a MapReduce job to output table data in HBase's internal data format, and then directly loads the generated StoreFiles into a running cluster. Using bulk load will use less CPU and network resources than simply using the HBase API.

9.8.2. Bulk Load Limitations
As bulk loading bypasses the write path, the WAL doesn’t get written to as part of the process. Replication works by reading the WAL files so it won’t see the bulk loaded data – and the same goes for the edits that use Put.setWriteToWAL(true). One way to handle that is to ship the raw files or the HFiles to the other cluster and do the other processing there.

9.8.3. Bulk Load Architecture
The HBase bulk load process consists of two main steps.

9.8.3.1. Preparing data via a MapReduce job
The first step of a bulk load is to generate HBase data files (StoreFiles) from a MapReduce job using HFileOutputFormat. This output format writes out data in HBase's internal storage format so that they can be later loaded very efficiently into the cluster.

In order to function efficiently, HFileOutputFormat must be configured such that each output HFile fits within a single region. In order to do this, jobs whose output will be bulk loaded into HBase use Hadoop's TotalOrderPartitioner class to partition the map output into disjoint ranges of the key space, corresponding to the key ranges of the regions in the table.

HFileOutputFormat includes a convenience function, configureIncrementalLoad(), which automatically sets up a TotalOrderPartitioner based on the current region boundaries of a table.
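The partitioning idea behind configureIncrementalLoad() can be pictured with a small sketch. This is illustrative Python, not HBase code: the real implementation is Hadoop's TotalOrderPartitioner in Java, and the region boundaries below are invented for the example.

```python
import bisect

# Hypothetical region layout for a table with four regions:
# (-inf, "g"), ["g", "m"), ["m", "t"), ["t", +inf)
# Only the split points are stored; the first region's start is implicit.
region_start_keys = ["g", "m", "t"]

def partition(row_key: str) -> int:
    """Map a row key to the index of the region (and hence the
    reducer / output HFile) whose key range contains it."""
    # bisect_right counts how many split points are <= row_key,
    # which is exactly the index of the containing region.
    return bisect.bisect_right(region_start_keys, row_key)

# Each reducer receives a disjoint, sorted slice of the key space,
# so every output HFile fits entirely within one region.
assignments = {k: partition(k) for k in ["apple", "grape", "melon", "zebra"]}
```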

9.8.3.2. Completing the data load
After the data has been prepared using HFileOutputFormat, it is loaded into the cluster using completebulkload. This command line tool iterates through the prepared data files, and for each one determines the region the file belongs to. It then contacts the appropriate Region Server which adopts the HFile, moving it into its storage directory and making the data available to clients.

If the region boundaries have changed during the course of bulk load preparation, or between the preparation and completion steps, the completebulkload utility will automatically split the data files into pieces corresponding to the new boundaries. This process is not optimally efficient, so users should take care to minimize the delay between preparing a bulk load and importing it into the cluster, especially if other clients are simultaneously loading data through other means.
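The splitting behaviour can be sketched as follows. Each prepared file covers a contiguous key range; if region boundaries have moved so that the range now straddles one or more boundaries, the file must be cut at each boundary that falls inside it. This is a conceptual Python sketch under invented key ranges, not the actual HBase logic:

```python
import bisect

def split_points(file_first_key: str, file_last_key: str,
                 region_boundaries: list) -> list:
    """Return the region boundaries falling strictly inside the HFile's
    key range; completebulkload must split the file at each of them."""
    lo = bisect.bisect_right(region_boundaries, file_first_key)
    hi = bisect.bisect_left(region_boundaries, file_last_key)
    return region_boundaries[lo:hi]

# An HFile prepared against the old boundaries covers ["h", "p"], but
# the table has since been split at "k", "m", and "n" -- the file now
# spans four regions and must be cut into four pieces:
new_boundaries = ["k", "m", "n"]
cuts = split_points("h", "p", new_boundaries)  # ['k', 'm', 'n']
```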

9.8.4. Importing the prepared data using the completebulkload tool
After a data import has been prepared, either by using the importtsv tool with the "importtsv.bulk.output" option or by some other MapReduce job using the HFileOutputFormat, the completebulkload tool is used to import the data into the running cluster.

The completebulkload tool simply takes the output path where importtsv or your MapReduce job put its results, and the table name to import into. For example:

$ hadoop jar hbase-VERSION.jar completebulkload [-c /path/to/hbase/config/hbase-site.xml] /user/todd/myoutput mytable
The -c config-file option can be used to specify a file containing the appropriate hbase parameters (e.g., hbase-site.xml) if not supplied already on the CLASSPATH (In addition, the CLASSPATH must contain the directory that has the zookeeper configuration file if zookeeper is NOT managed by HBase).

Note: If the target table does not already exist in HBase, this tool will create the table automatically.

This tool will run quickly, after which point the new data will be visible in the cluster.

9.8.5. See Also
For more information about the referenced utilities, see Section 15.1.11, “ImportTsv” and Section 15.1.12, “CompleteBulkLoad”.

See How-to: Use HBase Bulk Loading, and Why for a recent blog on current state of bulk loading.

9.8.6. Advanced Usage
Although the importtsv tool is useful in many cases, advanced users may want to generate data programmatically, or import data from other formats. To get started doing so, dig into ImportTsv.java and check the JavaDoc for HFileOutputFormat.

The import step of the bulk load can also be done programmatically. See the LoadIncrementalHFiles class for more information.
2. Translation
  9.8. Bulk Loading
9.8.1. Overview
HBase has several ways to load data into a table. The most direct is to call TableOutputFormat from a MapReduce job, or to use the ordinary HBase client APIs.
However, these are not always the most efficient methods.
Bulk Load writes the data out as files in HBase's internal storage format, then loads those files into a running cluster.
9.8.2. Bulk Load Limitations
Because Bulk Load bypasses the normal write path (writing to the MemStore and WAL, then flushing the MemStore to HFiles), nothing is written to the WAL during the process. Since replication works by reading the WAL files, replication will never see any data produced by Bulk Load.
9.8.3. How Bulk Load Works
It consists of the following two steps.
9.8.3.1
A MapReduce job uses HFileOutputFormat to generate HBase data files (StoreFiles). Files in this format follow HBase's internal file organization, so they can later be loaded into the cluster very efficiently.
To work efficiently, HFileOutputFormat must be configured so that each output HFile fits within a single region.
To achieve this, the MapReduce job uses Hadoop's TotalOrderPartitioner class to partition the map output into disjoint key ranges corresponding to the regions of the table.
HFileOutputFormat also provides a very convenient method, configureIncrementalLoad(), which automatically sets up a TotalOrderPartitioner based on the table's current region boundaries.
9.8.3.2
After the data has been prepared with HFileOutputFormat, the command-line tool completebulkload loads it into the cluster. The tool iterates over the prepared data files and determines which region each file belongs to. It then contacts the corresponding Region Server, which adopts the HFile by moving it into its storage directory and making the data available to clients.
If the region boundaries change while the data is being prepared, or between preparation and loading, HBase automatically splits the data files to match the new boundaries. This process is quite inefficient, especially when other clients are loading data at the same time, so take care to minimize the time between generating the data files and loading them into the cluster.
9.8.4
Once the data has been prepared, whether with importtsv or with a MapReduce job using HFileOutputFormat, completebulkload is used to import it into the cluster.
completebulkload is invoked with the output path used by importtsv (or your MapReduce job) and the name of the target table.
For example: $ hadoop jar hbase-VERSION.jar completebulkload /user/todd/myoutput mytable
The command runs very quickly, and afterwards the new data is visible in the cluster.
3. Summary
  1. Use HBase's importtsv tool, or a MapReduce job with HFileOutputFormat, to convert HDFS files into StoreFiles.
  2. Use completebulkload to load the StoreFiles (HFiles) into the HBase cluster.
