Posted by ienki on 2017-12-18 14:30:32

Part 2: Nginx access log analysis based on Hadoop

Code:

# pv_day.py
#!/usr/bin/env python
# coding=utf-8

from mrjob.job import MRJob

from nginx_accesslog_parser import NginxLineParser


class PvDay(MRJob):

    nginx_line_parser = NginxLineParser()

    def mapper(self, _, line):
        # Parse one access-log line; time_local is assumed to render as
        # "YYYY-MM-DD HH:MM:SS", so the first field is the day.
        self.nginx_line_parser.parse(line)
        day, _ = str(self.nginx_line_parser.time_local).split()
        yield day, 1  # one hit for this day

    def reducer(self, key, values):
        # Sum the 1s emitted for each day -> daily PV
        yield key, sum(values)


def main():
    PvDay.run()


if __name__ == '__main__':
    main()


Code explanation:
  The job class inherits from MRJob and defines the job's steps.
  A "step" consists of a mapper, a combiner, and a reducer. Each of them is optional, but at least one must be defined.
  The mapper() method takes a key and a value (in this example the key is ignored and each log line arrives as the value) and yields key-value pairs.
  The reducer() method takes a key and an iterable of values and yields key-value pairs (in this example it sums the values for each key, i.e. the page views per day).
  (The nginx_accesslog_parser module imported above is not included in the post; a sketch of its assumed interface follows below.)
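
A minimal sketch of what nginx_accesslog_parser might look like, assuming the default nginx "combined" log format and that all the jobs here need is a time_local attribute whose str() form is "YYYY-MM-DD HH:MM:SS". The regex and field names are illustrative, not the author's actual module:

# nginx_accesslog_parser.py -- illustrative sketch, not the original module
import re
from datetime import datetime

# Matches the default nginx "combined" log format; only the fields
# needed here are captured.
LINE_RE = re.compile(
    r'(?P<remote_addr>\S+) \S+ \S+ \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\S+)'
)


class NginxLineParser(object):

    def parse(self, line):
        m = LINE_RE.match(line)
        if m is None:
            raise ValueError('unparseable log line: %r' % line)
        # nginx writes e.g. "27/Dec/2016:10:00:01 +0800"; drop the timezone
        # and parse, so that str(time_local) yields "2016-12-27 10:00:01".
        raw = m.group('time_local').split()[0]
        self.time_local = datetime.strptime(raw, '%d/%b/%Y:%H:%M:%S')
        self.remote_addr = m.group('remote_addr')
        self.request = m.group('request')
        self.status = int(m.group('status'))
        return self

With such an interface, str(parser.time_local).split() gives ['2016-12-27', '10:00:01'], which is exactly what the mapper above relies on.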

Running the job in different ways:
  Basic mode (pass the log file as an argument):


# python3 pv_day.py access_all.log-20161227
No configs found; falling back on auto-configuration
Creating temp directory /tmp/pv_day.root.20161228.022837.113256
Running step 1 of 1...
Streaming final output from /tmp/pv_day.root.20161228.022837.113256/output...
"2016-12-27"    47783
"2016-12-26"    299427
Removing temp directory /tmp/pv_day.root.20161228.022837.113256...

  Standard-input (stdin) mode; this way only the first file is accepted:

# python3 pv_day.py < access_all.log-20161227
No configs found; falling back on auto-configuration
Creating temp directory /tmp/pv_day.root.20161228.024431.884434
Running step 1 of 1...
reading from STDIN
Streaming final output from /tmp/pv_day.root.20161228.024431.884434/output...
"2016-12-27"    47783
"2016-12-26"    299427
Removing temp directory /tmp/pv_day.root.20161228.024431.884434...
  

  Mixed mode (file arguments plus "-" for standard input):

python3 pv_day.py input1.txt input2.txt - < input3.txt  

  Distributed mode:
  By default, mrjob runs the job in a single Python process. That is convenient for debugging, but it is not real distributed computation!
  For distributed execution, use the -r/--runner option: -r inline (the default), -r local, -r hadoop, or -r emr.


# python pv_day.py -r hadoop hdfs://my_home/input.txt  

  Another approach (a two-step job that also counts an overall total and sorts the output by count):

# cat pv_day1.py
#!/usr/bin/env python
# coding=utf-8

from mrjob.job import MRJob
from mrjob.step import MRStep

from nginx_accesslog_parser import NginxLineParser


class PvDay(MRJob):

    nginx_line_parser = NginxLineParser()

    def mapper(self, _, line):
        self.nginx_line_parser.parse(line)
        day, _ = str(self.nginx_line_parser.time_local).split()
        yield day, 1      # per-day count
        yield 'total', 1  # overall total

    def reducer_sum(self, key, values):
        # Emit (count, key) under a single None key so the next step
        # sees all totals in one reducer call.
        yield None, (sum(values), key)

    def reducer_sort(self, _, values):
        # Sort all (count, key) pairs by count, descending.
        for count, dt in sorted(values, reverse=True):
            yield dt, count

    def steps(self):
        return [
            MRStep(mapper=self.mapper,
                   reducer=self.reducer_sum),
            MRStep(reducer=self.reducer_sort),
        ]


def main():
    PvDay.run()


if __name__ == '__main__':
    main()


  Execution result:

# python3 pv_day1.py access_all.log-20161227
No configs found; falling back on auto-configuration
Creating temp directory /tmp/pv_day1.root.20161228.061455.974823
Running step 1 of 2...
Running step 2 of 2...
Streaming final output from /tmp/pv_day1.root.20161228.061455.974823/output...
"total"    347210
"2016-12-26"    299427
"2016-12-27"    47783
Removing temp directory /tmp/pv_day1.root.20161228.061455.974823...
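
For a quick sanity check without any Hadoop cluster, the job can also be driven from Python with the inline runner. A minimal sketch, assuming a 2016-era mrjob (0.5.x): it uses sandbox() and make_runner() together with stream_output()/parse_output_line(); newer mrjob releases replace the last two with cat_output()/parse_output(). The file name and the two sample log lines are made up for illustration; adjust them to whatever format your parser expects.

# test_pv_day1.py -- illustrative local test, not part of the original post
from io import BytesIO

from pv_day1 import PvDay

SAMPLE = (
    b'1.2.3.4 - - [26/Dec/2016:10:00:01 +0800] "GET / HTTP/1.1" 200 612 "-" "curl"\n'
    b'1.2.3.4 - - [27/Dec/2016:10:00:02 +0800] "GET /x HTTP/1.1" 200 99 "-" "curl"\n'
)


def test_pv_day():
    job = PvDay(['-r', 'inline', '-'])  # '-' means read from stdin
    job.sandbox(stdin=BytesIO(SAMPLE))  # feed the sample lines as stdin
    with job.make_runner() as runner:
        runner.run()
        results = dict(job.parse_output_line(line)
                       for line in runner.stream_output())
    assert results['total'] == 2        # one hit on each of two days
    assert results['2016-12-26'] == 1


if __name__ == '__main__':
    test_pv_day()
    print('ok')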
  