
Deploying an ELK Log Analysis System on CentOS 7

ELK Overview

What is ELK
  ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, has since been added; it is a lightweight log collection agent that uses few resources and is well suited to gathering logs on each server and forwarding them to Logstash, which is also what the vendor recommends.
  Elasticsearch is a real-time, full-text search and analytics engine that collects, analyzes, and stores data. It is a scalable distributed system that exposes its functionality through REST and Java APIs, and it is built on top of the Apache Lucene search library.
  Logstash is a tool for collecting, parsing, and filtering logs. It supports almost any type of log, including system logs, error logs, and custom application logs. It can receive logs from many sources, such as syslog, messaging systems (for example RabbitMQ), and JMX, and it can output data in several ways, including email, websockets, and Elasticsearch.
  Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It uses Elasticsearch's REST interface to retrieve data, and it lets users build custom dashboards over their own data as well as query and filter it in ad-hoc ways.
  ELK architecture:
http://i2.运维网.com/images/blog/201808/21/acd0a9559782cca412daca8fced8a3c8.png
As the diagram shows, logstash first collects the logs of the relevant services on each node server, such as Nginx, system logs, and Redis runtime logs, then filters them (regular expressions can be used) and sends the result to elasticsearch; elasticsearch builds the corresponding indices from the log data, and kibana finally presents the results in an organized way. This is the basic ELK workflow.
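The filtering mentioned above is normally done with Logstash filter plugins such as grok, which matches each log line against regex-based patterns and splits it into fields. The snippet below is only an illustrative sketch: COMBINEDAPACHELOG is a stock grok pattern, but the input path and index name are placeholders chosen for this example, not values from this article.

input {
  file {
    path => "/var/log/nginx/access.log"   # placeholder log file
    type => "nginx"
  }
}
filter {
  grok {
    # parse a combined-format access log line into fields (clientip, verb, response, ...)
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.58.147:9200"]
    index => "nginx-%{+YYYY.MM.dd}"
  }
}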

Test Environment

IP                  Software
192.168.58.147      elasticsearch, logstash, kibana, httpd
192.168.58.147      elasticsearch
192.168.58.157      logstash

Implementation Steps

Install elasticsearch
  This time we set up two elasticsearch nodes for distributed search and storage. First configure the yum repository and install elasticsearch with yum. Note that the elasticsearch server needs more than 2 GB of memory.

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
#import the GPG signing key
# vim /etc/yum.repos.d/elasticsearch.repo
#create the repo file with the following content

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
# yum install elasticsearch -y
#install the elasticsearch package with yum
  Install the Java environment, directly with yum

# yum install java -y
#check the Java environment with java -version
# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
#java is now at the latest available version
  Edit the elasticsearch configuration file

# cd /etc/elasticsearch/
# vim elasticsearch.yml
http://i2.运维网.com/images/blog/201808/21/c4e025ab763bbd60a93550d2fec62fd0.png
http://i2.运维网.com/images/blog/201808/21/38082ab4a4d4a7e2a7b334592c0b01c7.png
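The exact changes made in the screenshots above are not reproduced here; the following is only a sketch of the options typically edited in elasticsearch.yml for this kind of setup. The cluster and node names are assumptions, not values taken from this article; path.data matches the /data/es-data directory created in the next step.

cluster.name: my-elk-cluster        # same value on every node in the cluster (assumed name)
node.name: node-1                   # unique per node (assumed name)
path.data: /data/es-data            # data directory created below
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0               # listen on all interfaces
http.port: 9200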
Create the data directory and set its ownership

# mkdir -p /data/es-data
# chown -R elasticsearch:elasticsearch /data/es-data/
  Start the service and check whether port 9200 is listening

# systemctl start elasticsearch.service
# netstat -ntap | grep 9200
tcp6       0      0 :::9200               :::*                  LISTEN      90165/java
#port 9200 is now open
  Sometimes the es service fails to start; the logs under /var/log/elasticsearch/ will show something like this
http://i2.运维网.com/images/blog/201808/21/54babaa9e0f22f1b86719a0600426226.png
In that case, edit the /etc/security/limits.conf file
http://i2.运维网.com/images/blog/201808/21/389d20bc05ee6225fb96dc33beaeb133.png
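The lines added in the screenshot are not visible here; a common fix for this kind of elasticsearch startup failure is to raise the limits for the elasticsearch user, roughly as below. Treat these values as an assumption rather than what the article actually used, and restart the elasticsearch service afterwards.

# vim /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536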
Test access to http://192.168.58.147:9200
http://i2.运维网.com/images/blog/201808/21/f6e33c7d49356ab702cceea0f09ae425.png
We can also interact with it using JSON for a quick test

# curl -i -XGET 'http://192.168.58.147:9200/_count?pretty' -d '{
> "query": {
>   "match_all": {}
> }
> }'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
"count" : 0,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
}
}
#test succeeded
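The count is 0 because nothing has been indexed yet. If you want to see a non-empty result, you can index a test document first; the index, type, and field names below are made up purely for illustration.

# curl -XPUT 'http://192.168.58.147:9200/test-index/test-type/1?pretty' -d '{"name": "elk", "message": "hello elasticsearch"}'
#repeat the _count query afterwards and the count should now be non-zero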
  Neither of the interaction methods above is particularly friendly; by installing the head plugin we get a much friendlier way to access elasticsearch.

# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
....(output omitted)
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/share/elasticsearch/plugins/head
  With the head plugin installed, continue testing by visiting http://192.168.58.147:9200/_plugin/head/
http://i2.运维网.com/images/blog/201808/21/273463a98ea22cc8dd6a6eadebcdb271.png
http://i2.运维网.com/images/blog/201808/21/42f874d8d7a127386ef820899031d36c.png
http://i2.运维网.com/images/blog/201808/21/add59dce8508771bb191c704a9003137.png
http://i2.运维网.com/images/blog/201808/21/2e53310896988f67d0fa43d2bc170053.png
  Next we create a second elasticsearch node to build an es cluster. Install elasticsearch and the Java environment on another virtual machine, modify the configuration file, and finally start the es service on node 2.
http://i2.运维网.com/images/blog/201808/21/1ecf3aed59e4d2b796c4f92e351cb40f.png
http://i2.运维网.com/images/blog/201808/21/a55ce2f8fa6125aac4a6ea1cd5234edb.png
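Again, the configuration shown in these screenshots is not reproduced here; a sketch of what node 2 typically needs is given below. The cluster/node names and the unicast host list are assumptions, the key point being that cluster.name must match node 1 and discovery must be able to find it.

cluster.name: my-elk-cluster                            # must match node 1 (assumed name)
node.name: node-2                                       # unique per node (assumed name)
path.data: /data/es-data
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.58.147"]    # point discovery at node 1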

# systemctl start elasticsearch.service
# netstat -ntap | grep 9200
tcp6       0      0 :::9200               :::*                  LISTEN      2194/java         
  If we now visit http://192.168.58.147:9200/_plugin/head/ again, we will see two nodes.
http://i2.运维网.com/images/blog/201808/21/01c0121f8171d29a3ab740fa478ea25d.png
Here we introduce another plugin, kopf

# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
-> Installing lmenezes/elasticsearch-kopf...
Trying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip ...
....(output omitted)
Verifying https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed kopf into /usr/share/elasticsearch/plugins/kopf
  After installing it, visit http://192.168.58.147:9200/_plugin/kopf
http://i2.运维网.com/images/blog/201808/21/6a8263cf667665a4b946c09704f6979f.png

Install logstash
  Configure the yum repo file

# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
#import the package signing key
# vim /etc/yum.repos.d/logstash.repo

[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
# yum install logstash -y
#install the logstash package
  Now we can test logstash

# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
abc123
2018-08-21T14:07:37.666Z www1.yx.com abc123
test
2018-08-21T14:07:46.156Z www1.yx.com test
#whatever we type is echoed straight back on the next line
  Press Ctrl+C to exit, then try a different output format

# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
abc123
{
"message" => "abc123",
"@version" => "1",
"@timestamp" => "2018-08-21T14:09:18.094Z",
"host" => "www1.yx.com"
}
#this is the detailed (rubydebug) output format, which shows much more information
  In the same way, we can send what we type to elasticsearch.

# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.58.147:9200"] } }'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Settings: Default filter workers: 1
Logstash startup completed
abc123
test123
123456
  Then go to http://192.168.58.147:9200/_plugin/head/
http://i2.运维网.com/images/blog/201808/21/393db6bbd53ae95f23351569bfc37172.png
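Besides the head page, the newly created logstash index can also be checked from the command line; the query below is a standard elasticsearch API and is shown here only as an optional verification step.

# curl 'http://192.168.58.147:9200/_cat/indices?v'
#a logstash-YYYY.MM.dd index should appear in the list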
Collect system logs with logstash

# ln -s /opt/logstash/bin/logstash /usr/bin/
# vim file.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.58.147:9200"]
    index => "system-%{+YYYY.MM.dd}"
  }
}
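The article does not show the exact start command; with the symlink created above, a typical way to run this configuration (file name as used above) would be:

# logstash -f file.conf
#optionally validate the file first with: logstash -f file.conf --configtest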
  After starting logstash, visit http://192.168.58.147:9200/_plugin/head/ again.
http://i2.运维网.com/images/blog/201808/21/ceedbebcbdda52ae553719959b2648a4.png
  Next we collect the logs of multiple services; modify file.conf.

input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/httpd/access_log"
    type => "httpd"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.58.147:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "httpd" {
    elasticsearch {
      hosts => ["192.168.58.147:9200"]
      index => "httpd-%{+YYYY.MM.dd}"
    }
  }
}
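After updating the file, restart logstash with it, and generate a few requests against the local httpd so that the access log actually has entries to ship. The curl target below assumes httpd serves on the 192.168.58.147 address from the environment table.

# logstash -f file.conf
# curl -s http://192.168.58.147/ -o /dev/null
#repeat a few times so /var/log/httpd/access_log gets new entries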
  Then visit http://192.168.58.147:9200/_plugin/head/ again.
http://i2.运维网.com/images/blog/201808/21/79a14526367506adf683d1e725ab2dde.png

Install kibana
  Download kibana

# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
--2018-08-21 23:02:18--  https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
Resolving download.elastic.co (download.elastic.co)... 54.235.171.120, 54.225.214.74, 54.225.221.128, ...
Connecting to download.elastic.co (download.elastic.co)|54.235.171.120|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 30408272 (29M)
Saving to: 'kibana-4.3.1-linux-x64.tar.gz'
100%[==================================================>] 30,408,272   512KB/s   in 82s
2018-08-21 23:03:43 (361 KB/s) - 'kibana-4.3.1-linux-x64.tar.gz' saved
  Extract kibana into the target directory

# tar zxvf kibana-4.3.1-linux-x64.tar.gz -C /opt/
  Rename the extracted directory to kibana

# mv /opt/kibana-4.3.1-linux-x64/ /opt/kibana/
  Edit the kibana configuration file

# vim /opt/kibana/config/kibana.yml
http://i2.运维网.com/images/blog/201808/21/066a9f9fb5b01190647b99cd61a725f6.png
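The values changed in the screenshot are not readable here; for kibana 4.3 the settings usually touched in kibana.yml are roughly the following. The host and URL values are assumptions based on the addresses used in this article.

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.58.147:9200"
kibana.index: ".kibana"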
Start kibana

# /opt/kibana/bin/kibana
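Note that this command runs kibana in the foreground. If you want it to keep running after the shell is closed, one simple option (not shown in the original article) is:

# nohup /opt/kibana/bin/kibana > /var/log/kibana.log 2>&1 &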
  Then visit http://192.168.58.147:5601.
http://i2.运维网.com/images/blog/201808/21/ad04810f53ae4aa016872355ee183e47.png
http://i2.运维网.com/images/blog/201808/21/38a48b37f590b10ee9e993e379495c49.png
http://i2.运维网.com/images/blog/201808/21/113d31923db0fb457a92d2ae0a1d9d3e.png
http://i2.运维网.com/images/blog/201808/21/ddb3d7fd82a15e7e44c2200487cfc9db.png


