千家信息网

ELK 7.4 Quick Start: Collecting Data

Published: 2025-02-01 · Last updated: 2025-02-01 · Author: 千家信息网 editor

Author's blog: http://xsboke.blog.51cto.com

                    ------- Thanks for reading; questions and feedback are welcome

Contents:

  • Components used
  • Environment
  • Web server configuration
  • Elasticsearch configuration
  • Accessing Elasticsearch/Kibana through Nginx
  • Extension: filebeat input configuration
  • Troubleshooting

Component overview and roles

filebeat (collect logs) -> logstash (filter/format) -> elasticsearch (store) -> kibana (visualize)

# My take: both logstash and filebeat can collect logs and ship them directly to
# elasticsearch. logstash simply does more (e.g. filtering and formatting), while
# filebeat is lighter than logstash and therefore collects logs faster.
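The chain above can be pictured with ordinary shell pipes. This is purely an illustrative analogy, not part of the ELK setup: `printf` plays the log source, `grep`/`sed` play logstash's filter/format role, the output file plays elasticsearch, and `cat` plays kibana.

```shell
# Illustrative analogy only: collect -> filter/format -> store -> display
printf 'GET /index 200\nGET /missing 404\n' \
  | grep ' 404$' \
  | sed 's/^/error: /' > /tmp/pipeline-demo.log   # "store"
cat /tmp/pipeline-demo.log                         # "display"
# -> error: GET /missing 404
```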

Environment

# Based on ELK 7.4, using Nginx log collection as the example.
centos7.2-web               172.16.100.251      nginx/filebeat/logstash
centos7.2-elasticsearch     172.16.100.252      elasticsearch/kibana

Web server configuration

1. Install Nginx
   yum -y install yum-utils
   vim /etc/yum.repos.d/nginx.repo
    [nginx-stable]
    name=nginx stable repo
    baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
    gpgcheck=1
    enabled=1
    gpgkey=https://nginx.org/keys/nginx_signing.key
    module_hotfixes=true
    [nginx-mainline]
    name=nginx mainline repo
    baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
    gpgcheck=1
    enabled=1
    gpgkey=https://nginx.org/keys/nginx_signing.key
    module_hotfixes=true
   yum-config-manager --enable nginx-mainline
   yum -y install nginx
   nginx
2. Configure the JDK
   tar zxf jdk-8u202-linux-x64.tar.gz
   mv jdk1.8.0_202 /usr/local/jdk1.8
   vim /etc/profile
    export JAVA_HOME=/usr/local/jdk1.8
    export JRE_HOME=/usr/local/jdk1.8/jre
    export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
    export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
   source /etc/profile
   # Without this symlink, logstash will still fail because it cannot find the OpenJDK java binary
   ln -s /usr/local/jdk1.8/bin/java /usr/bin/java
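One pitfall worth flagging here: a misspelled variable such as `$JAVE_HOME` (a typo for `$JAVA_HOME` that is easy to make in `/etc/profile`) expands to an empty string in the shell, silently dropping the JDK's bin directory from PATH. A minimal demonstration:

```shell
# Hypothetical demo: an unset (misspelled) variable expands to "",
# so the PATH entry would become "/bin" instead of the JDK's bin directory.
unset JAVE_HOME
export JAVA_HOME=/usr/local/jdk1.8
echo "typo:    '$JAVE_HOME/bin'"     # -> typo:    '/bin'
echo "correct: '$JAVA_HOME/bin'"     # -> correct: '/usr/local/jdk1.8/bin'
```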
3. Install and configure filebeat
   curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.0-x86_64.rpm
   rpm -vi filebeat-7.4.0-x86_64.rpm
   vim /etc/filebeat/filebeat.yml
    filebeat.inputs:
      - type: log
        enabled: true
        paths:
          - /var/log/nginx/access.log   # log file to watch
        tags: ["access"]                # tags make collecting multiple logs possible
      - type: log
        enabled: true
        paths:
          - /var/log/nginx/error.log
        tags: ["error"]
    output.logstash:
      hosts: ["localhost:5044"]   # the logstash config file listens on this port
   # Comment out "output.elasticsearch", otherwise enabling the logstash module fails with:
   # Error initializing beat: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')
   # Enabling the logstash module actually just edits "/etc/filebeat/modules.d/logstash.yml"
   filebeat modules enable logstash
4. Install logstash
   rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
   vim /etc/yum.repos.d/logstash.repo
    [logstash-7.x]
    name=Elastic repository for 7.x packages
    baseurl=https://artifacts.elastic.co/packages/7.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
   yum -y install logstash
   ln -s /usr/share/logstash/bin/logstash /usr/local/bin/
   # Selected logstash.yml settings:
   #   path.data:                  data directory
   #   config.reload.automatic:    whether to reload config files automatically
   #   config.reload.interval:     how often to poll for config changes
   #   http.host:                  host to listen on
   #   http.port:                  port
   # Put your pipeline config under /etc/logstash/conf.d/
   vim /etc/logstash/conf.d/nginx.conf
    input {
        beats {
            port => 5044
        }
    }
    output {
        if "access" in [tags] {    # route each log to its own index by checking the tag
            elasticsearch {
                hosts => ["172.16.100.252:9200"]
                index => "nginx-access-%{+YYYY.MM.dd}"   # index names must be lowercase
                sniffing => true
                template_overwrite => true
            }
        }
        if "error" in [tags] {
            elasticsearch {
                hosts => ["172.16.100.252:9200"]
                index => "nginx-error-%{+YYYY.MM.dd}"
                sniffing => true
                template_overwrite => true
            }
        }
    }
   systemctl daemon-reload
   systemctl enable logstash
   systemctl start logstash
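As an aside on the index naming above: the `%{+YYYY.MM.dd}` reference resolves to the event's date, and Elasticsearch rejects upper-case index names. A small, purely illustrative shell sketch of the equivalent expansion (the `NGINX-ACCESS` string is a made-up example):

```shell
# `date` approximates what %{+YYYY.MM.dd} produces for today's events,
# and tr shows the lowercasing an index name must already have.
idx="NGINX-ACCESS-$(date +%Y.%m.%d)"
echo "$idx" | tr '[:upper:]' '[:lower:]'   # e.g. nginx-access-2019.10.08
```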
5. Firewall configuration
   firewall-cmd --permanent --add-port=80/tcp
   firewall-cmd --reload

Elasticsearch configuration

1. Configure the JDK
   tar zxf jdk-8u202-linux-x64.tar.gz
   mv jdk1.8.0_202 /usr/local/jdk1.8
   vim /etc/profile
    export JAVA_HOME=/usr/local/jdk1.8
    export JRE_HOME=/usr/local/jdk1.8/jre
    export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
    export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
   source /etc/profile
2. Install elasticsearch
   ```
   vim /etc/yum.repos.d/elasticsearch.repo
    [elasticsearch-7.x]
    name=Elasticsearch repository for 7.x packages
    baseurl=https://artifacts.elastic.co/packages/7.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
   yum -y install elasticsearch
   # Edit elasticsearch.yml; key settings:
   #   cluster.name:                   cluster name
   #   node.name:                      node name
   #   path.data:                      data directory
   #   path.logs:                      log directory
   #   bootstrap.memory_lock:          whether to lock memory at startup
   #   network.host:                   bind address (0.0.0.0 = all interfaces)
   #   http.port:                      listen port
   #   discovery.seed_hosts:           cluster hosts
   #   cluster.initial_master_nodes:   initial master nodes
   sed -i "/#cluster.name: my-application/a\cluster.name: my-elk-cluster" /etc/elasticsearch/elasticsearch.yml
   sed -i "/#node.name: node-1/a\node.name: node-1" /etc/elasticsearch/elasticsearch.yml
   sed -i "s/path.data: \/var\/lib\/elasticsearch/path.data: \/data\/elasticsearch/g" /etc/elasticsearch/elasticsearch.yml
   sed -i "/#bootstrap.memory_lock: true/a\bootstrap.memory_lock: false" /etc/elasticsearch/elasticsearch.yml
   sed -i "/#network.host: 192.168.0.1/a\network.host: 0.0.0.0" /etc/elasticsearch/elasticsearch.yml
   sed -i "/#http.port: 9200/a\http.port: 9200" /etc/elasticsearch/elasticsearch.yml
   sed -i '/#discovery.seed_hosts: \["host1", "host2"\]/a\discovery.seed_hosts: \["172.16.100.252"\]' /etc/elasticsearch/elasticsearch.yml
   sed -i '/#cluster.initial_master_nodes: \["node-1", "node-2"\]/a\cluster.initial_master_nodes: \["node-1"\]' /etc/elasticsearch/elasticsearch.yml
   mkdir -p /data/elasticsearch
   chown elasticsearch:elasticsearch /data/elasticsearch
   systemctl daemon-reload
   systemctl enable elasticsearch
   systemctl start elasticsearch
   ```
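The sed one-liners above all rely on sed's `a\` (append) command: they find the commented default and insert the active setting on the line after it, leaving the comment in place as documentation. A self-contained demo on a scratch file (not the real config; requires GNU sed, as on CentOS):

```shell
# Demo of the sed "a\" idiom on a temporary file.
printf '#http.port: 9200\n' > /tmp/sed-demo.yml
sed -i '/#http.port: 9200/a\http.port: 9200' /tmp/sed-demo.yml
cat /tmp/sed-demo.yml
# -> #http.port: 9200
# -> http.port: 9200
```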
3. Install and configure Kibana
   rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
   vim /etc/yum.repos.d/kibana.repo
    [kibana-7.x]
    name=Kibana repository for 7.x packages
    baseurl=https://artifacts.elastic.co/packages/7.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
   yum -y install kibana
   sed -i "/#server.port: 5601/a\server.port: 5601" /etc/kibana/kibana.yml
   sed -i '/#server.host: "localhost"/a\server.host: "0.0.0.0"' /etc/kibana/kibana.yml
   sed -i '/#elasticsearch.hosts: \["http:\/\/localhost:9200"\]/a\elasticsearch.hosts: \["http:\/\/localhost:9200"\]' /etc/kibana/kibana.yml
   sed -i '/#kibana.index: ".kibana"/a\kibana.index: ".kibana"' /etc/kibana/kibana.yml
   systemctl daemon-reload
   systemctl enable kibana
   systemctl start kibana
4. Firewall configuration
   firewall-cmd --permanent --add-port=9200/tcp
   # firewall-cmd --permanent --add-port=9300/tcp   # cluster transport port
   firewall-cmd --permanent --add-port=5601/tcp
   firewall-cmd --reload

Accessing Elasticsearch/Kibana through Nginx (using Nginx to restrict access to Elasticsearch and Kibana)

1. 172.16.100.252
   # Edit the hosts file
   vim /etc/hosts
    172.16.100.252   elk.elasticsearch
   # Install nginx and add the following server blocks
    server {
        listen       80;
        server_name  elk.elasticsearch;
        location / {
            allow 127.0.0.1/32;
            allow 172.16.100.251/32;
            deny all;
            proxy_pass http://127.0.0.1:9200;
        }
    }
    server {
        listen       80;
        server_name  elk.kibana;
        location / {
            allow "the IP allowed to access kibana";   # placeholder
            deny all;
            proxy_pass http://127.0.0.1:5601;
        }
    }
   # Update the elasticsearch config
    network.host: 127.0.0.1
    discovery.seed_hosts: ["elk.elasticsearch"]
   # Update the kibana config
    server.host: "127.0.0.1"
   systemctl restart elasticsearch
   systemctl restart kibana
2. 172.16.100.251
   # Edit the hosts file
   vim /etc/hosts
    172.16.100.252   elk.elasticsearch
   # logstash input/output config
   vim /etc/logstash/conf.d/nginx.conf
    input {
        beats {
            port => 5044
        }
    }
    output {
        if "access" in [tags] {    # route each log to its own index by checking the tag
            elasticsearch {
                hosts => ["elk.elasticsearch:80"]   # the port is required; otherwise 9200 is assumed
                index => "nginx-access-%{+YYYY.MM.dd}"   # index names must be lowercase
                sniffing => false
                template_overwrite => true
            }
        }
        if "error" in [tags] {
            elasticsearch {
                hosts => ["elk.elasticsearch:80"]
                index => "nginx-error-%{+YYYY.MM.dd}"
                sniffing => false
                template_overwrite => true
            }
        }
    }
   systemctl restart logstash

Extension: filebeat input configuration

# Merge multiple physical lines into a single event
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/nginx/access.log
    tags: ["access"]
    multiline.pattern: '^\[[0-9]{4}'   # regex to match; here: lines starting with "[YYYY"
    multiline.negate: true             # lines NOT matching the pattern are continuations
    multiline.match: after             # append them after the preceding matching line

# Collect all logs under a directory, including subdirectories
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - "/var/log/**"
    recursive_glob.enabled: true   # enable recursive globbing
    tags: ["LogAll"]
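To sanity-check the multiline pattern itself, you can grep a sample log: only the first physical line below matches `^\[[0-9]{4}`, so with `negate: true` and `match: after`, filebeat would fold the second line into the preceding event. The two sample lines are made up for illustration.

```shell
# Two physical lines; only the first matches multiline.pattern,
# so filebeat would merge them into one event.
printf '[2019-10-08 12:00:00] request failed\n    java.lang.Exception: boom\n' > /tmp/ml-demo.log
grep -cE '^\[[0-9]{4}' /tmp/ml-demo.log   # -> 1 (one event boundary)
```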

Troubleshooting

# Start filebeat in the foreground so its output goes to the terminal
filebeat -e
# Start logstash in the foreground with the pipeline config
logstash -f /etc/logstash/conf.d/nginx.conf
# Append an arbitrary line to one of the watched logs
echo "1" >> /var/log/nginx/access.log
# Then inspect the filebeat and logstash output to locate the error