ELK (elasticsearch, logstash, kibana) + filebeat: deployment and practice
1. What ELK is
ELK stands for:
elasticsearch:
a distributed, highly scalable, near-real-time search and analytics engine; es for short.
logstash:
an open-source server-side data processing pipeline that ingests data from multiple sources at once, transforms it, and then ships it to your favorite "stash", such as elasticsearch.
kibana:
an open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and easily perform advanced data analysis and present the results in charts.
Together, these three components are what is commonly called ELK.
2. Quick ELK deployment and configuration
1) Environment
CentOS 7; this article deploys the 7.x series:
172.16.0.213 elasticsearch
172.16.0.217 elasticsearch
172.16.0.219 elasticsearch kibana
kibana only needs to be installed on one of the nodes.
2) Configure the official yum repository
Configure the repo on all three nodes:
$ cat /etc/yum.repos.d/elast.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
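Before installing, you can import the GPG key and confirm the packages are visible from the repo; a quick sanity check:
$ rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
$ yum makecache
$ yum list elasticsearch kibana logstash filebeat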
3) Install
$ cat /etc/hosts
172.16.0.213 ickey-elk-213
172.16.0.217 ickey-elk-217
172.16.0.219 ickey-elk-219
$ yum install elasticsearch -y
4) Configure
$ cat /etc/elasticsearch/elasticsearch.yml
cluster.name: elk_test                    # cluster name
node.name: ickey-elk-217                  # node name; set per node
node.master: true
node.data: true
path.data: /var/log/elasticsearch/data
path.logs: /var/log/elasticsearch/logs
network.host: 172.16.0.217                # this node's IP
transport.tcp.port: 9300
transport.tcp.compress: true
http.port: 9200
http.max_content_length: 100mb
bootstrap.memory_lock: true
discovery.seed_hosts: ["172.16.0.213","172.16.0.217","172.16.0.219"]
cluster.initial_master_nodes: ["172.16.0.213","172.16.0.217","172.16.0.219"]
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
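Note that bootstrap.memory_lock: true only takes effect if the service is allowed to lock memory; on a systemd-based install this usually needs a LimitMEMLOCK override (a minimal sketch, assuming the standard rpm layout):
$ mkdir -p /etc/systemd/system/elasticsearch.service.d
$ cat > /etc/systemd/system/elasticsearch.service.d/override.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
$ systemctl daemon-reload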
Adjust the elasticsearch JVM heap in /etc/elasticsearch/jvm.options:
-Xms4g
-Xmx4g
These set the initial and maximum heap size; keep the two values equal. As a rule of thumb (and per Elastic's own guidance), give the heap no more than about half of system memory, leaving the rest for the OS page cache.
Now start elasticsearch:
$ systemctl start elasticsearch
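Once all three nodes are started, it is worth verifying that they actually joined one cluster, for example:
$ curl -s 'http://172.16.0.219:9200/_cluster/health?pretty'   # expect "number_of_nodes" : 3 and status green
$ curl -s 'http://172.16.0.219:9200/_cat/nodes?v'             # one row per node; the elected master is marked with *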
5) Install kibana
On 219 only:
$ yum install kibana -y
Configure:
$ cat /etc/kibana/kibana.yml | egrep -v "(^$|^#)"
server.port: 5601
server.host: "172.16.0.219"
server.name: "ickey-elk-219"
elasticsearch.hosts: ["http://172.16.0.213:9200","http://172.16.0.217:9200","http://172.16.0.219:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "pass"
elasticsearch.requestTimeout: 40000
logging.dest: /var/log/kibana/kibana.log   # log destination; by default logs end up in /var/log/messages
i18n.locale: "zh-CN"                       # Chinese UI
Full configuration reference:
https://www.elastic.co/guide/cn/kibana/current/settings.html
3. Logstash installation, configuration, and practice
With es (storage and search) and kibana (visualization and interactive browsing) installed and configured above, the data collection side is handled by logstash and the beats; here we use logstash and filebeat.
logstash is the heavyweight collector: its configuration is more involved, but the collection behavior is highly customizable. Besides the installation, the common configurations are collected below:
1) Install
Via the same yum repo as above:
$ yum install logstash -y
logstash needs a JDK, so install and configure Java first; version 1.8 or later is fine.
Here jdk-8u211-linux-x64.rpm is installed:
$ cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/latest
export JAVA_BIN=${JAVA_HOME}/bin
export PATH=${PATH}:${JAVA_HOME}/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
export JRE_HOME=/usr/java/latest
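Reload the profile and confirm the JDK is on the PATH before generating the service scripts:
$ source /etc/profile.d/java.sh
$ java -version    # should report 1.8.x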
After installation, run /usr/share/logstash/bin/system-install to generate the service scripts.
On CentOS 6 the service is managed with:
initctl status|start|stop|restart logstash
On CentOS 7:
systemctl restart logstash
2) Practical configurations
Collecting nginx logs (on the nginx server):
$ cat /etc/logstash/conf.d/nginx-172.16.0.14.conf
input {
  file {
    path => ["/var/log/nginx/test.log"]
    codec => json
    sincedb_path => "/var/log/logstash/null"
    discover_interval => 15
    stat_interval => 1
    start_position => "beginning"
  }
}
filter {
  date {
    locale => "en"
    timezone => "Asia/Shanghai"
    match => [ "timestamp", "ISO8601", "yyyy-MM-dd'T'HH:mm:ssZZ" ]
  }
  mutate {
    convert => [ "upstreamtime", "float" ]
  }
  mutate {
    gsub => ["message", "\x", "\\x"]
  }
  if [user_agent] {
    useragent {
      prefix => "remote_"
      source => "user_agent"
    }
  }
  if [request] {
    ruby {
      init => "@kname = ['method1','uri1','verb']"
      code => "
        new_event = LogStash::Event.new(Hash[@kname.zip(event.get('request').split(' '))])
        new_event.remove('@timestamp')
        new_event.remove('method1')
        event.append(new_event)
      "
      remove_field => [ "request" ]
    }
  }
  geoip {
    source => "clientRealIp"
    target => "geoip"
    database => "/tmp/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [
      "[geoip][coordinates]", "float",
      "upstream_response_time", "float",
      "responsetime", "float",
      "body_bytes_sent", "integer",
      "bytes_sent", "integer"
    ]
  }
}
output {
  elasticsearch {
    hosts => ["172.16.0.219:9200"]
    index => "logstash-nginx-%{+YYYY.MM.dd}"
    workers => 1
    template_overwrite => true
  }
}
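Before starting logstash, the pipeline file can be syntax-checked with the built-in test flag:
$ /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/nginx-172.16.0.14.conf
...
Configuration OK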
Note that nginx must write its access log in a matching JSON format:
log_format logstash '{"@timestamp":"$time_iso8601",'
                    '"@version":"1",'
                    '"host":"$server_addr",'
                    '"size":$body_bytes_sent,'
                    '"domain":"$host",'
                    '"method":"$request_method",'
                    '"url":"$uri",'
                    '"request":"$request",'
                    '"status":"$status",'
                    '"referer":"$http_referer",'
                    '"user_agent":"$http_user_agent",'
                    '"body_bytes_sent":"$body_bytes_sent",'
                    '"bytes_sent":"$bytes_sent",'
                    '"clientRealIp":"$clientRealIp",'
                    '"forwarded_for":"$http_x_forwarded_for",'
                    '"responsetime":"$request_time",'
                    '"upstreamhost":"$upstream_addr",'
                    '"upstream_response_time":"$upstream_response_time"}';
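Also note that $clientRealIp is not a built-in nginx variable; it has to be defined elsewhere in your nginx config (for example via a map over $http_x_forwarded_for, or the realip module). After adding the log_format, validate and reload nginx:
$ nginx -t && nginx -s reload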
Configuring logstash to receive syslog:
$ cat /etc/logstash/conf.d/rsyslog-tcp.conf
input {
  syslog {
    type => "system-syslog"
    host => "172.16.0.217"
    port => 1514
  }
}
filter {
  if [type] == "system-syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["172.16.0.217:9200"]
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
      #workers => 1
      template_overwrite => true
    }
  }
}
Each client then needs a forwarding rule at the end of /etc/rsyslog.conf:
$ tail -n 1 /etc/rsyslog.conf
*.* @172.16.0.217:1514
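Restart rsyslog on the client and emit a test message; it should then show up in the logstash-system-syslog-* index:
$ systemctl restart rsyslog
$ logger "elk syslog forwarding test"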
Configuring a collector for hardware device logs:
[yunwei@ickey-elk-217 ~]$ cat /etc/logstash/conf.d/hardware.conf
input {
  syslog {
    type => "hardware-syslog"
    host => "172.16.0.217"
    port => 514
  }
}
filter {
  if [type] == "hardware-syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  if [type] == "hardware-syslog" {
    elasticsearch {
      hosts => ["172.16.0.217:9200"]
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}
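One caveat: logstash normally runs as the unprivileged logstash user, and binding a port below 1024 such as 514 may require extra privileges (how you grant them depends on your setup). Verify the listeners after starting:
$ ss -tulnp | egrep ':(514|1514)'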
4. Filebeat installation, configuration, and practice
1) Overview
filebeat was originally refactored from the logstash-forwarder source code. In other words, filebeat is the new logstash-forwarder, and the first choice for the shipper role in the Elastic Stack.
(Figure from the official documentation, omitted here: filebeat.png, showing how es, logstash, filebeat, kafka, and redis relate to each other.)
2) Install
From the same yum repo as above:
$ yum install filebeat -y
3) Configuration: collecting runtime and php-fpm error logs
[root@ickey-app-api-52 yunwei]# cat /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/wwwroot/*.ickey.cn/runtime/logs/*.log
  fields:
    type: "runtime"
  json.message_key: log
  json.keys_under_root: true
- type: log
  enabled: true
  paths:
    - /var/log/php-fpm/www-error.log
  fields:
    type: "php-fpm"
#============================= Filebeat modules ===============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 2
#============================== Kibana =====================================
setup.kibana:
  host: "172.16.0.219:5601"
#============================= Elastic Cloud ==================================
output.elasticsearch:
  hosts: ["172.16.0.213:9200","172.16.0.217:9200","172.16.0.219:9200"]
  indices:
    - index: "php-fpm-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "php-fpm"
    - index: "runtime-log-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "runtime"
  pipelines:
    - pipeline: "php-error-pipeline"
      when.equals:
        fields.type: "php-fpm"
#================================ Processors =====================================
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#================================ Logging =====================================
logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
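filebeat can check both this configuration and connectivity to the configured output before being started:
$ filebeat test config
Config OK
$ filebeat test output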
Notes:
The php-fpm error.log format looks like this:
[29-Oct-2019 11:33:01 PRC] PHP Fatal error: Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917
We want to extract the timestamp, the "PHP Fatal error" message, and the file and line where the error occurred. Collecting with logstash we would define a grok filter; with filebeat shipping straight to elasticsearch, the parsing is done by an elasticsearch ingest pipeline instead: filebeat forwards the raw event to elasticsearch, and the pipeline's grok processor reshapes it into the fields we want.
So the following needs to be set up in elasticsearch (here done from node 213):
[root@ickey-elk-213 ~]# cat phperror-pipeline.json
{
  "description": "php error log pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{DATA:datatime} PHP .*: %{DATA:errorinfo} in %{DATA:error-url} on line %{NUMBER:error-line}"]
      }
    }
  ]
}
Apply it:
$ curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/_ingest/pipeline/php-error-pipeline' -d@phperror-pipeline.json
Query it:
$ curl -H 'Content-Type: application/json' -XGET 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
Delete it:
$ curl -H 'Content-Type: application/json' -XDELETE 'http://localhost:9200/_ingest/pipeline/php-error-pipeline'
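The pipeline can also be dry-run against the sample log line from above via the _simulate endpoint, to confirm the grok pattern extracts the expected fields:
$ curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_ingest/pipeline/php-error-pipeline/_simulate' -d '
{
  "docs": [
    { "_source": { "message": "[29-Oct-2019 11:33:01 PRC] PHP Fatal error: Call to a member function getBSECollection() on null in /var/html/wwwroot/framework/Excel5.php on line 917" } }
  ]
}'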
Collecting database (MySQL error) logs:
filebeat.inputs:
- type: log
  paths:
    - /var/log/mysql/mysql.err
  fields:
    type: "mysqlerr"
  exclude_lines: ['Note']          # skip [Note] entries
  multiline.pattern: '^[0-9]{4}.*'
  multiline.negate: true
  multiline.match: after
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 2
setup.kibana:
  host: "172.16.0.219:5601"
output.elasticsearch:
  hosts: ["172.16.0.213:9200"]
  indices:
    - index: "mysql-err-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "mysqlerr"
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
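The multiline settings above glue continuation lines onto the entry that begins with a date (e.g., 2019-10-29 ...), so a multi-line MySQL error becomes a single event. With the config in place, enable and start the shipper:
$ systemctl enable filebeat
$ systemctl start filebeat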
5. Installing and configuring elasticsearch-head
elasticsearch-head is an open-source web UI for viewing and operating on es indices graphically.
1) Install
$ git clone https://github.com/mobz/elasticsearch-head.git
$ cd elasticsearch-head
$ npm install grunt --save --registry=https://registry.npm.taobao.org
└─┬ grunt@1.0.1
  ... (output omitted) ...
  ├── path-is-absolute@1.0.1
  └── rimraf@2.2.8
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
$ npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated http2@3.3.7: Use the built-in module in node 9.0.0 or newer, instead
[ ............] - fetchMetadata: verb afterAdd /root/.npm/debug/2.6.9/package/package.json written
This step takes a while to finish.
2) Configure it to start as a service
$ cat /usr/bin/elasticsearch-head
#!/bin/bash
# chkconfig: - 25 75
# description: starts and stops the elasticsearch-head
data="cd /usr/local/src/elasticsearch-head/; nohup npm run start > /dev/null 2>&1 &"
START() {
    eval $data && echo -e "elasticsearch-head start\033[32m ok\033[0m"
}
STOP() {
    ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 > /dev/null && echo -e "elasticsearch-head stop\033[32m ok\033[0m"
}
STATUS() {
    PID=$(ps aux | grep grunt | grep -v grep | awk '{print $2}')
}
case "$1" in
start)
    START
    ;;
stop)
    STOP
    ;;
restart)
    STOP
    sleep 3
    START
    ;;
*)
    echo "Usage: elasticsearch-head (start|stop|restart)"
    ;;
esac
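elasticsearch-head talks to es directly from the browser, so es must allow cross-origin requests or the UI will fail to connect; a minimal sketch, appended to elasticsearch.yml on each node:
$ cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF
$ systemctl restart elasticsearch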
Then browse to:
http://172.16.0.219:9100