
ELK Log Collection Demo


Architecture Goals

Notes

OS: CentOS Linux release 7.5.1804

ELK versions: filebeat-6.8.5-x86_64.rpm, logstash-6.8.5.rpm, elasticsearch-6.8.5.rpm, kibana-6.8.5-x86_64.rpm, kafka_2.11-2.0.0, zookeeper-3.4.12

Address          Hostname           Role (matching the diagram, left to right)
192.168.9.133    test1.xiong.com    nginx + virtual host + filebeat
192.168.9.134    test2.xiong.com    nginx + virtual host + filebeat
192.168.9.135    test3.xiong.com    elasticsearch + kibana + logstash
192.168.9.136    test4.xiong.com    elasticsearch + kibana + logstash
192.168.9.137    test5.xiong.com    redis + logstash (kafka is used here instead of redis)
192.168.9.138    test6.xiong.com    redis + logstash (kafka is used here instead of redis)

In practice you do not need this many machines; four are enough.

1. Configuration

1.1 Hostnames and Base Environment

~]# cat /etc/hosts
192.168.9.133 test1.xiong.com
192.168.9.134 test2.xiong.com
192.168.9.135 test3.xiong.com
192.168.9.136 test4.xiong.com
192.168.9.137 test5.xiong.com
192.168.9.138 test6.xiong.com

# Stop the firewall and disable SELinux
systemctl stop firewalld
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

~]# crontab -l   # time synchronization
*/1 * * * * /usr/sbin/ntpdate pool.ntp.org &>/dev/null

# Install the JDK; hosts 135, 136, 137 and 138 need it
~]# mkdir -p /usr/java
~]# tar xf jdk-8u181-linux-x64.tar.gz -C /usr/java/
cd /usr/java/
ln -sv jdk1.8.0_181/ default
ln -sv default/ jdk

# Raise the open-file limit
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nofile 65536" >> /etc/security/limits.conf

java]# cat /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
java]# source /etc/profile.d/java.sh
java]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
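One caveat: `systemctl stop firewalld` only lasts until the next reboot, and the SELinux edit only takes effect after one. Two extra commands cover both gaps:

systemctl disable firewalld   # keep the firewall off across reboots
setenforce 0                  # switch SELinux to permissive right now, no reboot needed
getenforce                    # should print Permissive (or Disabled after a reboot)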

1.2 Installing ELK on the Server Side

These steps are for hosts 9.135 and 9.136.

# Install the server-side ELK packages
~]# rpm -vih elasticsearch-6.8.5.rpm kibana-6.8.5-x86_64.rpm logstash-6.8.5.rpm

# Edit the configuration; after editing, sync it to the other node.
# Only node.name and network.publish_host differ per host.
~]# cd /etc/elasticsearch
elasticsearch]# grep -v "^#" elasticsearch.yml
cluster.name: myElks                 # cluster name
node.name: test3.xiong.com           # set to each host's own name
path.data: /opt/elasticsearch/data   # data directory
path.logs: /opt/elasticsearch/logs   # log directory
network.host: 0.0.0.0
network.publish_host: 192.168.9.136  # address published to the cluster
# unicast discovery hosts
discovery.zen.ping.unicast.hosts: ["192.168.9.135", "192.168.9.136"]
# minimum master-eligible nodes, calculated as (N/2)+1
discovery.zen.minimum_master_nodes: 2
# enable cross-origin access (needed by the head plugin)
http.cors.enabled: true
http.cors.allow-origin: "*"

# Create the data and log directories; mind the ownership
elasticsearch]# mkdir /opt/elasticsearch/{data,logs} -pv
elasticsearch]# chown elasticsearch.elasticsearch /opt/elasticsearch/ -R

# Edit the unit file
elasticsearch]# vim /usr/lib/systemd/system/elasticsearch.service
# Add the following under [Service]; JAVA_HOME points at the Java home and
# LimitMEMLOCK=infinity removes the locked-memory cap (systemd does not allow
# trailing comments, so keep these lines bare in the unit file):
    Environment=JAVA_HOME=/usr/java/jdk
    LimitMEMLOCK=infinity

elasticsearch]# vim jvm.options  # set the JVM heap to half of RAM, and no more than 30G
    -Xms2g
    -Xmx2g

# Start the service; both hosts need the same configuration (a tool like ansible helps)
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl restart elasticsearch

# Check the listening ports, or run: systemctl status elasticsearch
elasticsearch]# ss -tnl | grep 92
LISTEN     0      128       ::ffff:192.168.9.136:9200                    :::*
LISTEN     0      128       ::ffff:192.168.9.136:9300                    :::*

# Confirm both hosts joined the cluster
elasticsearch]# curl 192.168.9.135:9200/_cat/nodes
192.168.9.136 7 95  1 0.00 0.06 0.11 mdi * test4.xiong.com
192.168.9.135 7 97 20 0.45 0.14 0.09 mdi - test3.xiong.com

# Check which node is master
elasticsearch]# curl 192.168.9.135:9200/_cat/master
fVkp7Ld3RDGmWlGpm6t7kg 192.168.9.136 192.168.9.136 test4.xiong.com
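Besides `_cat/nodes`, the cluster health API gives a quick go/no-go answer; with both nodes up the status should be green:

elasticsearch]# curl 192.168.9.135:9200/_cluster/health?pretty
# look for "status" : "green" and "number_of_nodes" : 2;
# yellow means replica shards are unassigned, usually because a node is missing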
1.2.1 Installing the head Plugin
# Install on hosts 9.135 and 9.136 (configuring just one of them is enough)
1. Install npm
    ]# yum -y install epel-release  # the EPEL repo is required first
    ]# yum -y install npm
2. Install the elasticsearch-head plugin
    ]# cd /usr/local/src/
    ]# git clone git://github.com/mobz/elasticsearch-head.git
    ]# cd /usr/local/src/elasticsearch-head/
    elasticsearch-head]# npm install grunt -save   # generate the runner
    elasticsearch-head]# ll node_modules/grunt     # confirm the files were created
    elasticsearch-head]# npm install
3. Start head
    elasticsearch-head]# nohup npm run start &
    ss -tnl | grep 9100   # confirm the port is listening, then open 9.135:9100 or 9.136:9100 in a browser
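`nohup npm run start &` does not survive a reboot. A minimal systemd unit is sketched below; the unit name and the npm path are assumptions of this sketch, so confirm the path with `which npm` first:

]# cat /usr/lib/systemd/system/es-head.service
[Unit]
Description=elasticsearch-head on port 9100
After=network.target

[Service]
Type=simple
WorkingDirectory=/usr/local/src/elasticsearch-head
# assumed npm location; verify with `which npm`
ExecStart=/usr/bin/npm run start
Restart=always

[Install]
WantedBy=multi-user.target

]# systemctl daemon-reload && systemctl enable es-head && systemctl start es-head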

1.2.2 Configuring Kibana
kibana]# grep -v "^#" kibana.yml | grep -v "^$"
server.port: 5601
server.host: "0.0.0.0"
server.name: "test3.xiong.com"  # on the other host only this name changes
elasticsearch.hosts: ["http://192.168.9.135:9200", "http://192.168.9.136:9200"]
kibana]# systemctl restart kibana
kibana]# ss -tnl | grep 5601   # confirm the port is listening
LISTEN     0      128          *:5601                     *:*
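To check that Kibana is actually healthy and talking to elasticsearch, not just listening, its status endpoint can be queried:

kibana]# curl -s http://192.168.9.135:5601/api/status
# look for "state":"green" in the overall status section of the JSON reply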


1.2.3 Configuring Logstash
logstash]# vim /etc/default/logstash
JAVA_HOME="/usr/java/jdk"   # add the Java environment variable
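With JAVA_HOME set, it is worth confirming logstash can launch at all before any pipelines are written:

logstash]# /usr/share/logstash/bin/logstash --version
logstash 6.8.5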

1.3 nginx + filebeat

Hosts: 192.168.9.133, 9.134

1.3.1 Installation
~]# cat /etc/yum.repos.d/nginx.repo   # configure the nginx yum repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

~]# yum -y install nginx
~]# rpm -vih filebeat-6.8.5-x86_64.rpm
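A quick sanity check that both packages installed cleanly before editing any configuration:

~]# nginx -v                # confirm the nginx version
~]# nginx -t                # syntax-check the stock config
~]# systemctl enable nginx && systemctl start nginx
~]# filebeat version        # confirm filebeat 6.8.5 is in place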
1.3.2 Switching the Access Log to JSON
]# vim /etc/nginx/nginx.conf
http {
    # add a JSON log format
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
}
server {
    # reference it inside the server block
    access_log  /var/log/nginx/default_access.log access_json;
}

~]# vim /etc/nginx/nginx.conf   # both nginx hosts need this
# in the http block; the second server acts as a backup
    upstream kibana {
       server 192.168.9.135:5601 max_fails=3 fail_timeout=30s;
       server 192.168.9.136:5601 backup;
    }

~]# vim /etc/nginx/conf.d/two.conf
server {
    listen 5601;
    server_name 192.168.9.133;  # change to each host's own address
    access_log  /var/log/nginx/kinaba_access.log access_json;
    location / {
       proxy_pass http://kibana;
    }
}
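Because a malformed line reaches logstash tagged `_jsonparsefailure` instead of as parsed fields, it pays to confirm a freshly written entry is valid JSON. A quick check using the python that ships with CentOS 7, assuming the default server block got the access_log line above:

~]# nginx -t && systemctl reload nginx        # apply the new log_format
~]# curl -s -o /dev/null http://127.0.0.1/    # write one access-log entry
~]# tail -1 /var/log/nginx/default_access.log | python -m json.tool   # errors out if the line is not valid JSON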

1.4 logstash + kafka

Hosts: 192.168.9.137, 9.138

1.4.1 Installing Kafka
1. Install JDK 1.8
2. Install kafka and zookeeper
   Note: apart from the listen address, both machines are configured identically
    mv kafka_2.11-2.0.0/ zookeeper-3.4.12/ /opt/hadoop/
    cd /opt/hadoop/
    ln -sv kafka_2.11-2.0.0/ kafka
    ln -sv zookeeper-3.4.12/ zookeeper
    cd /opt/hadoop/kafka/config
    vim server.properties   # set the listen address
        listeners=PLAINTEXT://192.168.9.138:9092
        log.dirs=/opt/logs/kafka_logs
    vim zookeeper.properties
        dataDir=/opt/logs/zookeeper
    # copy /opt/hadoop/zookeeper/conf/zoo_sample.cfg to zoo.cfg
    cp /opt/hadoop/zookeeper/conf/zoo_sample.cfg /opt/hadoop/zookeeper/conf/zoo.cfg
    vim /opt/hadoop/zookeeper/conf/zoo.cfg
        dataDir=/opt/logs/zookeeperDataDir
    mkdir /opt/logs/{zookeeper,kafka_logs,zookeeperDataDir} -pv
    chmod +x /opt/hadoop/zookeeper/bin/*.sh
    chmod +x /opt/hadoop/kafka/bin/*.sh
3. Unit files
cat kafka.service
    [Unit]
    Description=kafka 9092
    # kafka must start after zookeeper
    After=zookeeper.service
    # hard dependency: zookeeper has to be running first
    Requires=zookeeper.service

    [Service]
    Type=simple
    Environment=JAVA_HOME=/usr/java/default
    Environment=KAFKA_PATH=/opt/hadoop/kafka:/opt/hadoop/kafka/bin
    ExecStart=/opt/hadoop/kafka/bin/kafka-server-start.sh /opt/hadoop/kafka/config/server.properties
    ExecStop=/opt/hadoop/kafka/bin/kafka-server-stop.sh
    Restart=always

    [Install]
    WantedBy=multi-user.target
cat zookeeper.service
    [Unit]
    Description=Zookeeper Service
    After=network.target
    ConditionPathExists=/opt/hadoop/zookeeper/conf/zoo.cfg

    [Service]
    Type=forking
    Environment=JAVA_HOME=/usr/java/default
    ExecStart=/opt/hadoop/zookeeper/bin/zkServer.sh start
    ExecStop=/opt/hadoop/zookeeper/bin/zkServer.sh stop
    Restart=always

    [Install]
    WantedBy=multi-user.target
4. Start everything
mv kafka.service zookeeper.service /usr/lib/systemd/system
systemctl daemon-reload
systemctl restart zookeeper kafka
systemctl status zookeeper
systemctl status kafka
ss -tnl
LISTEN      0      50      ::ffff:192.168.9.138:9092        :::*
LISTEN      0      50                        :::2181        :::*
LISTEN      0      50      ::ffff:192.168.9.137:9092        :::*
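Before pointing logstash at the broker, a round trip with the stock console tools proves kafka accepts and serves messages. The topic name `demo-test` below is just a throwaway:

# create a test topic (replication-factor 1 is enough for a smoke test)
~]# /opt/hadoop/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.9.137:2181 --replication-factor 1 --partitions 1 --topic demo-test
# publish one message
~]# echo hello | /opt/hadoop/kafka/bin/kafka-console-producer.sh --broker-list 192.168.9.137:9092 --topic demo-test
# read it back
~]# /opt/hadoop/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.9.137:9092 --topic demo-test --from-beginning --max-messages 1
hello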
1.4.2 Installing Logstash
1. Install logstash
    rpm -ivh logstash-6.8.5.rpm
    # or install via yum by creating a repo file:
    ]# vim /etc/yum.repos.d/logstash.repo
        [logstash-6.x]
        name=Elastic repository for 6.x packages
        baseurl=https://artifacts.elastic.co/packages/6.x/yum
        gpgcheck=1
        gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
        enabled=1
        autorefresh=1
        type=rpm-md
2. Point the logstash startup file at the JDK
    sed -i '1a\JAVA_HOME="/usr/java/jdk"' /etc/default/logstash
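An inline pipeline is the fastest way to confirm this install starts cleanly before writing real config files:

# type a line of text, see it echoed back as a structured event, then Ctrl-C
~]# /usr/share/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'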

2. Log Collection

2.1 Configuring filebeat on the nginx Hosts

# filebeat configuration on the nginx host 192.168.9.133
~]# grep -v "#" /etc/filebeat/filebeat.yml | grep -v "^$"
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/kinaba_access.log  # note: this file needs 755 permissions
  exclude_lines: ['^DBG']
  exclude_files: ['.gz$']
  fields:
    type: kinaba-access-9133
    ip: 192.168.9.133
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
output.logstash:
  hosts: ["192.168.9.137:5044"]
  worker: 2     # two worker threads
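filebeat 6.x has built-in checks for both the config file and the output; running them before restarting saves digging through logs afterwards:

~]# filebeat test config     # validates /etc/filebeat/filebeat.yml
Config OK
~]# filebeat test output     # attempts a real connection to 192.168.9.137:5044
~]# systemctl restart filebeat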

2.2 Configuring Logstash (on the kafka hosts)

~]# cat /etc/logstash/conf.d/nginx-filebeats.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
#  stdout {   # good habit: print with rubydebug to the screen first, then switch to kafka
#    codec => "rubydebug"
#  }
  kafka {
     bootstrap_servers => "192.168.9.137:9092"
     codec => "json"
     topic_id => "logstash-kinaba-nginx-access"
  }
}

# print to the screen: /usr/share/logstash/bin/logstash -f nginx-filebeats.conf
# syntax check:        /usr/share/logstash/bin/logstash -f nginx-filebeats.conf -t
# then restart logstash
# watch the log: tailf /var/log/logstash/logstash-plain.log

# list the topics
~]# /opt/hadoop/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.9.137:2181
logstash-kinaba-nginx-access

# read the topic
~]# /opt/hadoop/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.9.137:9092 --topic logstash-kinaba-nginx-access --from-beginning
{"host":{"architecture":"x86_64","containerized":false,"os":{"version":"7 (Core)","codename":"Core","platform":"centos","family":"redhat","name":"CentOS Linux"},"name":"test1.xiong.com","id":"e70c4e18a6f243c69211533f14283599"},"@timestamp":"2019-12-27T02:06:17.326Z","log":{"file":{"path":"/var/log/nginx/kinaba_access.log"}},"fields":{"type":"kinaba-access-9133","ip":"192.168.9.133"},"message":"{\"@timestamp\":\:\"-\",\"referer\":\"http://192.168.9.133:5601/app/timelion\",\"status\":\"304\"}","source":"/var/log/nginx/kinaba_access.log","@version":"1","offset":83382,"beat":{"version":"6.8.5","hostname":"test1.xiong.com","name":"test1.xiong.com"},"prospector":{"type":"log"},"input":{"type":"log"},"tags":["beats_input_codec_plain_applied"]}

2.3 Logstash on the ELK Nodes

# Host: 192.168.9.135
]# cat /etc/logstash/conf.d/logstash-kinaba-nginx.conf
input {
  kafka {
    bootstrap_servers => "192.168.9.137:9092"
    decorate_events => true
    consumer_threads => 2
    topics => "logstash-kinaba-nginx-access"
    auto_offset_reset => "latest"
  }
}
output {
#  stdout {   # good habit: always print to the screen first
#    codec => "rubydebug"
#  }
  if [fields][type] == "kinaba-access-9133" {
    elasticsearch {
      hosts => ["192.168.9.135:9200"]
      codec => "json"
      index => "logstash-kinaba-access-%{+YYYY.MM.dd}"
    }
  }
}

# print to the screen: /usr/share/logstash/bin/logstash -f logstash-kinaba-nginx.conf
# syntax check:        /usr/share/logstash/bin/logstash -f logstash-kinaba-nginx.conf -t
# watch the log: tailf /var/log/logstash/logstash-plain.log
# then restart logstash

# wait a moment, hit the web UI a few times, then check the indices
~]# curl http://192.168.9.135:9200/_cat/indices
green open logstash-kinaba-access-2019.12.27 AcCjLtCPTryt6DZkl5KbPw 5 1 100 0 327.7kb 131.8kb
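Two more checks close the loop: consumer-group lag on the kafka side (the logstash kafka input joins the group `logstash` by default), and a sample document pulled back out of the new index:

# offsets and lag of the consuming logstash; zero or small lag means it keeps up
~]# /opt/hadoop/kafka/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.9.137:9092 --describe --group logstash
# fetch one document to confirm the JSON fields arrived intact
~]# curl -s 'http://192.168.9.135:9200/logstash-kinaba-access-*/_search?size=1&pretty'

Once documents show up here, create the index pattern logstash-kinaba-access-* in Kibana to browse them.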
