[Experience Sharing] Collecting nginx Access Logs with LogStash

Posted on 2016-12-24 08:56:26
  Reference: https://medium.com/devops-programming/b01bd0876e82

[Figure: Kibana web interface]



Shipping nginx access logs to LogStash

A centralized web interface for grepping and filtering logs.





At Commando.io, we’ve always wanted a web interface to allow us to grep and filter through our nginx access logs in a friendly manner. After researching a bit, we decided to go with LogStash and use Kibana as the web front-end for ElasticSearch.
LogStash is a free and open source tool for managing events and logs. You can use it to collect logs, parse them, and store them for later.
First, let’s set up our centralized log server. This server will listen for events, using Redis as a broker, and send the events to ElasticSearch.
The following guide assumes that you are running CentOS 6.4 x64.

Centralized Log Server

cd $HOME
# Get ElasticSearch 0.90.1, add as a service, and autostart
sudo yum -y install java-1.7.0-openjdk
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.1.zip
unzip elasticsearch-0.90.1.zip
rm -rf elasticsearch-0.90.1.zip
mv elasticsearch-0.90.1 elasticsearch
sudo mv elasticsearch /usr/local/share
cd /usr/local/share
sudo chmod 755 elasticsearch
cd $HOME
curl -L http://github.com/elasticsearch/elasticsearch-servicewrapper/tarball/master | tar -xz
sudo mv *servicewrapper*/service /usr/local/share/elasticsearch/bin/
rm -Rf *servicewrapper*
sudo /usr/local/share/elasticsearch/bin/service/elasticsearch install
sudo service elasticsearch start
sudo chkconfig elasticsearch on
# Add the required prerequisite remi yum repository
sudo rpm --import http://rpms.famillecollet.com/RPM-GPG-KEY-remi
sudo rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo sed -i '0,/enabled=0/s//enabled=1/' /etc/yum.repos.d/remi.repo
# Install Redis and autostart
sudo yum -y install redis
sudo service redis start
sudo chkconfig redis on
# Install LogStash
wget http://logstash.objects.dreamhost.com/release/logstash-1.1.13-flatjar.jar
sudo mkdir --parents /usr/local/bin/logstash
sudo mv logstash-1.1.13-flatjar.jar /usr/local/bin/logstash/logstash.jar
# Create LogStash configuration file
cd /etc
sudo touch logstash.conf
Use the following LogStash configuration for the centralized server:

# Contents of /etc/logstash.conf
input {
  redis {
    host => "127.0.0.1"
    port => 6379
    type => "redis-input"
    data_type => "list"
    key => "logstash"
    format => "json_event"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}
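Since `format => "json_event"` tells this input to take each Redis list entry as an already-serialized event, any producer that pushes well-formed JSON onto the `logstash` list will be indexed. Here is a minimal Python sketch of what such an event could look like; the `@`-prefixed field names follow the pre-1.2 LogStash event schema, and the helper name and sample values are hypothetical, not part of the original article:

```python
import json
from datetime import datetime, timezone

def make_json_event(message, event_type="nginx_access",
                    source="file://web-01/var/log/nginx/access.log"):
    """Build a LogStash 1.1.x-style json_event (pre-1.2 schema
    uses @-prefixed top-level fields)."""
    return {
        "@message": message,
        "@type": event_type,
        "@source": source,
        "@tags": [],
        "@fields": {},
        "@timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
    }

event = make_json_event(
    '1.2.3.4 - - [24/Dec/2016:08:56:26 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0"'
)
payload = json.dumps(event)

# Pushing onto the "logstash" list is what the redis input above consumes.
# Uncomment against a live Redis instance (pip install redis):
# import redis
# redis.StrictRedis(host="127.0.0.1", port=6379).rpush("logstash", payload)
```

This is also a handy way to smoke-test the centralized server end to end before wiring up real shippers.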
Finally, let’s start LogStash on the centralized server:

/usr/bin/java -jar /usr/local/bin/logstash/logstash.jar agent --config /etc/logstash.conf -w 1
In production, you’ll most likely want to set up a service for LogStash instead of starting it manually each time. The following init.d service script should do the trick (it is what we use).
Woo Hoo, if you’ve made it this far, give yourself a big round of applause. Maybe grab a frosty adult beverage.
Now, let’s setup each nginx web server.

Nginx Servers

cd $HOME
# Install Java
sudo yum -y install java-1.7.0-openjdk
# Install LogStash
wget http://logstash.objects.dreamhost.com/release/logstash-1.1.13-flatjar.jar
sudo mkdir --parents /usr/local/bin/logstash
sudo mv logstash-1.1.13-flatjar.jar /usr/local/bin/logstash/logstash.jar
# Create LogStash configuration file
cd /etc
sudo touch logstash.conf
Use the following LogStash configuration for each nginx server:

# Contents of /etc/logstash.conf
input {
  file {
    type => "nginx_access"
    path => ["/var/log/nginx/**"]
    exclude => ["*.gz", "error.*"]
    discover_interval => 10
  }
}
filter {
  grok {
    type => "nginx_access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  redis {
    host => "hostname-of-centralized-log-server"
    data_type => "list"
    key => "logstash"
  }
}
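The `%{COMBINEDAPACHELOG}` grok pattern works here because nginx’s default access log uses the same combined format as Apache. To show which fields it pulls out of each line, here is a rough Python equivalent; the regex is a hand-written approximation for illustration, not the actual grok pattern:

```python
import re

# Hand-rolled approximation of grok's %{COMBINEDAPACHELOG}:
# client IP, ident, auth user, timestamp, request, status, bytes,
# referrer, and user agent become named capture groups.
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)(?: (?P<httpversion>\S+))?" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('93.184.216.34 - - [24/Dec/2016:08:56:26 +0000] '
        '"GET /index.html HTTP/1.1" 200 612 '
        '"http://example.com/" "Mozilla/5.0"')

fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["response"], fields["request"])
```

Each named group becomes a separate field on the event, which is what makes filtering by status code or client IP in Kibana possible later on.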
Start LogStash on each nginx server:

/usr/bin/java -jar /usr/local/bin/logstash/logstash.jar agent --config /etc/logstash.conf -w 2
Kibana - A Beautiful Web Interface

At this point, you’ve got your nginx web servers shipping their access logs to a centralized log server via Redis. The centralized log server is churning away, processing the events from Redis and storing them into ElasticSearch.
All that is left is to set up a web interface to interact with the data in ElasticSearch. The clear choice for this is Kibana. Even though LogStash comes with its own web interface, it is highly recommended to use Kibana instead. In fact, the folks that maintain LogStash recommend Kibana and are going to be deprecating their web interface in the near future. Moral of the story… use Kibana.
On your centralized log server, get and install Kibana.

cd $HOME
# Install Ruby
sudo yum -y install ruby
# Install Kibana
wget https://github.com/rashidkpc/Kibana/archive/v0.2.0.zip
unzip v0.2.0.zip
rm -rf v0.2.0.zip
sudo mv Kibana-0.2.0 /srv/kibana
# Edit Kibana configuration file
cd /srv/kibana
sudo nano KibanaConfig.rb
# Set Elasticsearch = "localhost:9200"
sudo gem install bundler
sudo bundle install
# Start Kibana
ruby kibana.rb
Simply open up your browser and navigate to http://hostname-of-centralized-log-server:5601 and you should see the Kibana interface load right up.

Lastly, just like for ElasticSearch, you’ll probably want Kibana to run as a service and autostart. Again, here is our init.d service script that we use.
Congratulations, you’re now shipping your nginx access logs like a boss to ElasticSearch and using the Kibana web interface to grep and filter them.

Interested in automating this entire install of ElasticSearch, Redis, LogStash, and Kibana on your infrastructure? We can help! Commando.io is a web-based interface for managing servers and running remote executions over SSH. Request a beta invite today, and start managing servers easily online.
