[Experience Sharing] Hadoop Interview Notes

Getting started:

Know the overall MapReduce flow: map, shuffle, reduce.

Know what the combiner and partitioner do, and how to enable compression.

Set up a Hadoop cluster; know which services run on the master and which on the slaves.

HDFS: how replicas are placed and located.

Release line: 0.20.2 -> 0.20.203 -> 0.20.205, 0.21, 0.23, 1.0.

Differences between the old and new APIs.


Advanced:

Hadoop parameter tuning. Cluster level: JVM settings, map/reduce slots; job level: number of reducers, memory, whether to use a combiner, whether to use compression (see the sketch after this list).

Basic Pig Latin and Hive syntax.

Setting up HBase and ZooKeeper.
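
A minimal sketch of a few of the job-level knobs listed above, using the classic pre-YARN JobConf API. The property names (mapred.child.java.opts, the TaskTracker slot settings) are the old mapred-era ones and vary between Hadoop versions, so treat them as assumptions to verify against your distribution.

import org.apache.hadoop.mapred.JobConf;

public class TuningSketch {
    public static void configure(JobConf conf) {
        // Job level: reducer count and intermediate compression
        conf.setNumReduceTasks(8);            // number of reduce tasks for this job
        conf.setCompressMapOutput(true);      // compress intermediate map output

        // Per-task JVM memory (assumed classic property name)
        conf.set("mapred.child.java.opts", "-Xmx1024m");

        // Cluster level (set in mapred-site.xml, shown here only as a reminder):
        //   mapred.tasktracker.map.tasks.maximum    - map slots per TaskTracker
        //   mapred.tasktracker.reduce.tasks.maximum - reduce slots per TaskTracker
    }
}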


Latest developments:

Follow the Cloudera and Hortonworks blogs.

The next-generation MR2 (YARN) framework.

High availability; NameNode: avoid the single point of failure.

Stream processing systems: Storm (from Twitter).


Practice algorithms:

wordcount

Dictionary anagrams (words in a dictionary built from the same letters)

  
Translate the SQL statement select count(x) from a group by b; into MapReduce (sketched below)
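
A minimal sketch (old mapred API) of how that statement becomes a MapReduce job: the map emits (b, 1) for every record with a non-null x, and the reducer (also usable as a combiner) sums the 1s per key. The tab-separated two-column layout is an assumption for illustration; wordcount follows the same pattern with words as keys.

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class GroupByCount {
    // Map: emit (b, 1) for every record whose x column is present
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);
        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, LongWritable> out, Reporter reporter)
                throws IOException {
            String[] cols = line.toString().split("\t");   // assumed layout: b <tab> x
            if (cols.length > 1 && !cols[1].isEmpty()) {    // count(x) ignores null x
                out.collect(new Text(cols[0]), ONE);
            }
        }
    }

    // Reduce (and combiner): sum the 1s for each distinct b
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, LongWritable, Text, LongWritable> {
        public void reduce(Text key, Iterator<LongWritable> values,
                           OutputCollector<Text, LongWritable> out, Reporter reporter)
                throws IOException {
            long sum = 0;
            while (values.hasNext()) sum += values.next().get();
            out.collect(key, new LongWritable(sum));
        }
    }
}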

  


  
A classic question:

  
Given 100 million uniformly distributed integers, find the optimal algorithm for obtaining the 1K largest numbers.

(Ignore memory limits and external storage I/O for now; the algorithm with the lowest time complexity counts as optimal.)


My first idea: split the data into blocks, say 10,000 blocks of 10,000 numbers each, and find the maximum of each block. Take the top 1K of those 10,000 block maxima; every block whose maximum falls in the remaining 9K can be discarded, and the top 1K of the whole set is then found among the blocks whose maxima made the top 1K. That shrinks the original problem to 1/10 of its size.


Questions:

1. What is the optimal time complexity of this blocking approach?

2. How should the data be blocked for the best result? For example, one could also use 100,000 blocks of 1,000 numbers each, shrinking the problem to 1/100 of its size, yet the overall complexity is not actually reduced.

3. Is there a better, more optimal way to solve this problem?
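
One common answer to question 3: a single pass with a size-K min-heap keeps the K largest values seen so far, giving O(n log K) time and O(K) extra space, and a quickselect-style partition brings the expected time down to O(n). A minimal sketch of the heap variant (the random input is just a stand-in for the 100 million integers):

import java.util.PriorityQueue;
import java.util.Random;

public class TopK {
    // Returns a min-heap holding the k largest values in data
    static PriorityQueue<Integer> topK(int[] data, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(k);   // min-heap by default
        for (int v : data) {
            if (heap.size() < k) {
                heap.offer(v);
            } else if (v > heap.peek()) {   // beats the current k-th largest
                heap.poll();
                heap.offer(v);
            }
        }
        return heap;
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000).toArray();  // stand-in data set
        System.out.println("1000th largest = " + topK(data, 1000).peek());
    }
}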


  


  

Q1. Name the most common InputFormats defined in Hadoop. Which one is the default?
The following are the most common InputFormats defined in Hadoop:
- TextInputFormat
- KeyValueInputFormat
- SequenceFileInputFormat
TextInputFormat is the default.

Q2. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
TextInputFormat: reads lines of text files and provides the byte offset of the line as the key and the actual line as the value to the Mapper.
KeyValueInputFormat: reads text files and parses lines into (key, value) pairs. Everything up to the first tab character is sent as the key to the Mapper and the remainder of the line is sent as the value.

Q3. What is an InputSplit in Hadoop?
When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper to process. Each such chunk is called an InputSplit.

Q4. How is the splitting of files invoked in the Hadoop framework?
It is invoked by the Hadoop framework, which calls the getSplits() method of the InputFormat class (such as FileInputFormat) configured for the job.

Q5. Consider this case scenario: in an M/R system,
  - HDFS block size is 64 MB
  - Input format is FileInputFormat
  - We have 3 files of size 64 KB, 65 MB and 127 MB
How many input splits will be made by the Hadoop framework?
Hadoop will make 5 splits, as follows:
- 1 split for the 64 KB file (smaller than one block)
- 2 splits for the 65 MB file (64 MB + 1 MB)
- 2 splits for the 127 MB file (64 MB + 63 MB)

Q6. What is the purpose of the RecordReader in Hadoop?
The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

Q7. After the Map phase finishes, the hadoop framework does "Partitioning, Shuffle and sort". Explain what happens in this phase?
- Partitioning
Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine, for all of its output (key, value) pairs, which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same.

- Shuffle
After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.

- Sort
Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer.

Q9. If no custom partitioner is defined in Hadoop, then how is data partitioned before it is sent to the reducer?
The default partitioner computes a hash value for the key and assigns the partition based on this result

Q10. What is a Combiner
The Combiner is a "mini-reduce" process which operates only on data generated by a mapper. The Combiner receives as input all data emitted by the Mapper instances on a given node. The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

Q11. Give an example scenario where a combiner can be used and where it cannot be used
There can be several examples following are the most common ones
- Scenario where you can use combiner
Getting list of distinct words in a file

- Scenario where you cannot use a combiner
Calculating mean of a list of numbers
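
The reason the mean breaks a naive combiner: a combiner may run zero, one, or several times, so the operation must be associative and commutative and its output must be a valid reducer input. Averaging partial averages over unequal-sized groups changes the result, while combining (sum, count) pairs does not. A small stand-alone illustration in plain Java (outside Hadoop):

public class CombinerPitfall {
    static double mean(double... xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // True mean of 1..6 is 3.5
        System.out.println(mean(1, 2, 3, 4, 5, 6));              // 3.5

        // A "combiner" that pre-averages unequal partitions is wrong
        System.out.println(mean(mean(1, 2), mean(3, 4, 5, 6)));  // 3.0, not 3.5

        // Safe alternative: combine (sum, count) pairs, divide only in the reducer
        double sum = (1 + 2) + (3 + 4 + 5 + 6);
        int count = 2 + 4;
        System.out.println(sum / count);                         // 3.5
    }
}
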
Q12. What is job tracker
Job Tracker is the service within Hadoop that runs Map Reduce jobs on the cluster

Q13. What are some typical functions of Job Tracker
The following are some typical tasks of Job Tracker
- Accepts jobs from clients
- It talks to the NameNode to determine the location of the data
- It locates TaskTracker nodes with available slots at or near the data
- It submits the work to the chosen Task Tracker nodes and monitors progress of each task by receiving heartbeat signals from Task tracker

Q14. What is task tracker
A Task Tracker is a node in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from the JobTracker.

Q15. What is the relationship between Jobs and Tasks in Hadoop?
One job is broken down into one or many tasks in Hadoop.

Q16. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?
It will restart the task on some other TaskTracker; only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.

Q17. Hadoop achieves parallelism by dividing the tasks across many nodes; it is possible for a few slow nodes to rate-limit the rest of the program and slow the program down. What mechanism does Hadoop provide to combat this?
Speculative Execution

Q18. How does speculative execution work in Hadoop?
The JobTracker makes different TaskTrackers process the same input. When tasks complete, they announce this fact to the JobTracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the TaskTrackers to abandon those tasks and discard their outputs. The Reducers then receive their inputs from whichever Mapper completed successfully first.

Q19. Using the command line in Linux, how will you
- see all jobs running in the hadoop cluster: hadoop job -list
- kill a job: hadoop job -kill jobid

Q20. What is Hadoop Streaming
Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations.


Q21. What is the characteristic of the Streaming API that makes it flexible enough to run MapReduce jobs in languages like Perl, Ruby, awk, etc.?
Hadoop Streaming allows you to use arbitrary programs for the Mapper and Reducer phases of a MapReduce job by having both Mappers and Reducers receive their input on stdin and emit output (key, value) pairs on stdout.

Q22. What is the Distributed Cache in Hadoop?
The Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.

Q23. What is the benefit of the Distributed Cache? Why can't we just have the file in HDFS and have the application read it?
Because the Distributed Cache is much faster. It copies the file to every TaskTracker at the start of the job; if a TaskTracker then runs 10 or 100 mappers or reducers, they all use the same local copy. If instead you write code in the MR job to read the file from HDFS, every mapper reads it from HDFS separately, so a TaskTracker running 100 map tasks would read the file 100 times. HDFS is also not very efficient when used this way.
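
A minimal sketch of the classic usage (old filecache/mapred API; newer releases expose the same idea through Job.addCacheFile, and the HDFS path below is a made-up example):

import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class CacheSketch {
    // At job-configuration time: ship an HDFS file to every task node
    public static void configureJob(JobConf conf) throws Exception {
        DistributedCache.addCacheFile(new URI("/user/demo/lookup.txt"), conf);  // hypothetical path
    }

    // Inside a Mapper/Reducer's configure(JobConf): local copies on this node
    public static Path[] localCopies(JobConf conf) throws Exception {
        return DistributedCache.getLocalCacheFiles(conf);
    }
}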

Q24. What mechanism does the Hadoop framework provide to synchronize changes made to the Distributed Cache during runtime of the application?
This is a trick question. There is no such mechanism; the Distributed Cache is read-only by design for the duration of job execution.

Q25. Have you ever used Counters in Hadoop? Give an example scenario.
Anybody who claims to have worked on a Hadoop project is expected to have used counters; a typical scenario is counting malformed or skipped records for data-quality checks.

Q26. Is it possible to provide multiple inputs to Hadoop? If yes, then how can you give multiple directories as input to a Hadoop job?
Yes. The input format class provides methods (for example FileInputFormat.addInputPath and setInputPaths) to add multiple directories as input to a Hadoop job.
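
A minimal sketch of registering several input directories with the old mapred API (the paths are placeholders):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class MultiInputSketch {
    public static void configure(JobConf conf) {
        // Each call appends another directory to the job's input list
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2016-12-01"));
        FileInputFormat.addInputPath(conf, new Path("/data/logs/2016-12-02"));
        // Or several at once, comma-separated
        FileInputFormat.setInputPaths(conf, "/data/a,/data/b");
    }
}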

Q27. Is it possible to have Hadoop job output go to multiple directories? If yes, then how?
Yes, by using the MultipleOutputs class.
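
A minimal sketch with the new-API MultipleOutputs (org.apache.hadoop.mapreduce.lib.output); the named output "errors" and the routing rule are assumptions made up for illustration:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MultiOutputSketch {
    public static void configure(Job job) {
        // Register a named output before the job runs
        MultipleOutputs.addNamedOutput(job, "errors", TextOutputFormat.class,
                                       Text.class, LongWritable.class);
    }

    public static class Reduce extends Reducer<Text, LongWritable, Text, LongWritable> {
        private MultipleOutputs<Text, LongWritable> mos;

        protected void setup(Context context) {
            mos = new MultipleOutputs<Text, LongWritable>(context);
        }

        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long sum = 0;
            for (LongWritable v : values) sum += v.get();
            if (key.toString().startsWith("ERROR")) {      // route some keys to the named output
                mos.write("errors", key, new LongWritable(sum));
            } else {
                context.write(key, new LongWritable(sum));
            }
        }

        protected void cleanup(Context context) throws IOException, InterruptedException {
            mos.close();
        }
    }
}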

Q28. What will a hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it
- warn you and continue
- throw an exception and exit
The hadoop job will throw an exception and exit.

Q29. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question. You cannot set it directly: the number of map tasks is determined by the number of input splits.

Q30. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically by calling the setNumReduceTasks method of the JobConf class, or set it up as a configuration setting.
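
A minimal sketch of both routes (the property name mapred.reduce.tasks is the classic pre-YARN one, so verify it against your version):

import org.apache.hadoop.mapred.JobConf;

public class ReducerCount {
    public static void configure(JobConf conf) {
        conf.setNumReduceTasks(10);   // programmatic
        // Or as a configuration setting, e.g. passed on the command line
        // when the driver uses ToolRunner: -D mapred.reduce.tasks=10
    }
}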


1. Write a web crawler.

Requirements: 1. The URL pattern of the pages to crawl must be configurable.

   2. The crawl depth must be customizable.

   3. Crawled pages can be handed to a later-invoked program for storage (i.e., via an event hook).

2. There is a large batch of URLs to crawl. URL parsing and n-level (multi-depth) fetching are already implemented on the server side. Given a number of servers and an ever-growing set of client machines, an existing mechanism already keeps the URL workload balanced across the servers; crawl tasks are requested from the servers by the clients, which then carry them out.

Design a distributed framework that performs single-level URL fetching and lets every server obtain client resources as evenly as possible.

Note: servers may go down.




1. Design a system that can extract data in a specified format from a constantly growing set of heterogeneous data sources.

Requirements: 1. The results should give a rough indication of extraction quality, so that the extraction method can be continuously improved.

   
2. Because the data sources differ, provide a flexibly configurable program framework.

   
3. The data sources may include MySQL, SQL Server, etc.

   
4. The system should support continuous mining, i.e., repeated passes that extract more information.

2. Write a tool that, given different document templates, generates regular expressions for extracting formatted data.

