Posted by hailai on 2018-8-31 09:24:18

A simple test of Hadoop with shell and Perl - Biotech

  To run a simple test of Hadoop, I used two machines: one acting as both namenode and datanode, and the other as a datanode only.
  Task:
  Count the number of accesses from each IP in a web access log.
  The log looks like this:
  61.135.189.75 - - "GET /robots.txt HTTP/1.1" 200 556 "-" "-"
  61.135.189.75 - - "GET /portal.php HTTP/1.1" 200 13929 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
  121.14.98.63 - - "POST /bbs/api/manyou/my.php HTTP/1.0" 200 150 "http://cless.bnu.edu.cn/bbs/api/manyou/my.php" "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9.1.9) Gecko/20100315 Firefox/3.5.9"
  66.249.66.73 - - "GET /syshtml/?action-tag-tagname-%B3%CC%D0%F2 HTTP/1.1" 200 797 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
  157.55.33.19 - - "GET /dp3.php HTTP/1.1" 200 12219 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"
  The file is about 84 MB, so admittedly a fairly small dataset...
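  The jobs below were run through Hadoop Streaming, which pipes input splits to the mapper's stdin and the grouped map output to the reducer's stdin. The post does not show the exact submit command, but a typical invocation looks roughly like the sketch below; the streaming jar path (which varies by Hadoop version), the HDFS input/output paths, and the file names are placeholders, not the ones actually used here.
  # hypothetical Hadoop Streaming submission; jar path and HDFS paths are placeholders
  hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
      -input  /logs/access.log \
      -output /logs/ip_counts \
      -mapper  mapper.pl \
      -reducer reducer.pl \
      -file mapper.pl \
      -file reducer.pl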
  1. First, a Perl mapper and reducer:
  mapper.pl
  #!/usr/bin/perl -w
  use strict;

  # Count occurrences of the first field (the client IP) on each input line.
  my %hash;
  while (<STDIN>) {
      chomp;
      my ($ip) = split /\s+/;
      if (!exists $hash{$ip}) {
          $hash{$ip} = 1;
      }
      else {
          $hash{$ip} = $hash{$ip} + 1;
      }
  }
  # Emit "ip<TAB>count" pairs for the reducer.
  while (my ($ip, $count) = each %hash) {
      print "$ip\t$count\n";
  }
  reducer.pl
  #!/usr/bin/perl -w
  use strict;

  # Sum the per-mapper counts for each IP.
  my %last;
  while (<STDIN>) {
      chomp;
      my ($ip, $num) = split /\s+/;
      if (!exists $last{$ip}) {
          $last{$ip} = $num;
      }
      else {
          $last{$ip} = $last{$ip} + $num;
      }
  }
  # Print IPs sorted by total access count, descending.
  foreach my $ip (sort { $last{$b} <=> $last{$a} } keys %last) {
      print "$ip\t$last{$ip}\n";
  }
  This run took about 2 min 8 s.
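  Because streaming just glues stdin and stdout together, the scripts can be sanity-checked locally with an ordinary pipe before submitting the job. A minimal sketch, assuming the log sits in a local file called access.log (a placeholder name):
  # emulate the streaming flow locally; access.log is a placeholder file name
  cat access.log | perl mapper.pl | sort | perl reducer.pl | head
  The sort step stands in for the shuffle phase that groups mapper output by key before it reaches the reducer.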
  2. Next, a shell mapper.sh, still using the reducer.pl above as the reducer:
  mapper.sh
  #!/bin/sh
  # Count accesses per IP (first field) and emit "ip count" pairs.
  awk '{ if (!($1 in a)) a[$1] = 1; else a[$1] += 1 } END { for (i in a) print i, a[i] }'
  This version took about 55 s.
  3. Finally, both mapper.sh and reducer.sh written in shell.
  Note that reducer.sh does not sort the output by access count (a sort pipeline for that is sketched after this step).
  mapper.sh
  #!/bin/sh
  # Count accesses per IP (first field) and emit "ip count" pairs.
  awk '{ if (!($1 in a)) a[$1] = 1; else a[$1] += 1 } END { for (i in a) print i, a[i] }'
  reducer.sh
  #!/bin/sh
  # Sum the per-mapper "ip count" pairs and print the total for each IP.
  awk '{ if (!($1 in a)) a[$1] = $2; else a[$1] += $2 } END { for (i in a) print i, a[i] }'
  This one took 54 s, so roughly the same as the previous run.
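  To get the descending order that reducer.pl produced, the shell output can simply be sorted afterwards. A minimal sketch, assuming the job result has been pulled into a local file called ip_counts.txt (a placeholder name):
  # sort "ip count" pairs by count, descending; ip_counts.txt is a placeholder name
  sort -k2,2nr ip_counts.txt | head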
  So for this job, the Perl version clearly comes out the less efficient of the two...
