
Collecting Kubernetes Logs with Filebeat

Published: 2024-11-17. Author: 千家信息网 editor

1. Filebeat Overview

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Logstash for indexing.

Filebeat works as follows: when you start Filebeat, it starts one or more inputs that look in the locations you have specified for log data. For each log that Filebeat locates, it starts a harvester. Each harvester reads a single log for new content and sends the new log data to libbeat, which aggregates the events and sends the aggregated data to the output configured for Filebeat.
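The flow described above is visible even in the smallest configuration: one input tells the harvesters where to look, and one output receives what libbeat aggregates. A minimal sketch (the path and host below are placeholders, not from this deployment):

```yaml
# Minimal filebeat.yml: inputs -> harvesters -> libbeat -> output
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log        # each matching file gets its own harvester

output.elasticsearch:
  hosts: ["localhost:9200"]   # libbeat batches events and ships them here
```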

2. Running Filebeat on Kubernetes

Deploy Filebeat as a DaemonSet to ensure there is a running instance on every node of the cluster. The Docker logs host folder (/var/lib/docker/containers) is mounted into the Filebeat container. Filebeat starts an input for the files and begins harvesting them as soon as they appear in the folder.

Here we use the manifest provided officially by Elastic:

```shell
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.5/deploy/kubernetes/filebeat-kubernetes.yaml
```

3. Configuration

By default, Filebeat sends events to an existing Elasticsearch deployment, if present. To specify a different destination, change the following parameters in the manifest file:

```yaml
env:
  - name: ELASTICSEARCH_HOST
    value: elasticsearch
  - name: ELASTICSEARCH_PORT
    value: "9200"
  - name: ELASTICSEARCH_USERNAME
    value: elastic
  - name: ELASTICSEARCH_PASSWORD
    value: changeme
  - name: ELASTIC_CLOUD_ID
    value:
  - name: ELASTIC_CLOUD_AUTH
    value:
```

To output to Logstash instead, use a manifest like the following:

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    # To enable hints based autodiscover, remove `filebeat.config.inputs` configuration and uncomment this:
    #filebeat.autodiscover:
    #  providers:
    #    - type: kubernetes
    #      hints.enabled: true
    processors:
      - add_cloud_metadata:
    #cloud.id: ${ELASTIC_CLOUD_ID}
    #cloud.auth: ${ELASTIC_CLOUD_AUTH}
    #output.elasticsearch:
      #hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}
    output.logstash:
      hosts: ["192.168.0.104:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: log                          # set the input type to log
      paths:
        - /var/lib/docker/containers/*/*.log
      #fields:
        #app: k8s
        #type: docker-log
      fields_under_root: true
      json.keys_under_root: true
      json.overwrite_keys: true
      encoding: utf-8
      fields.sourceType: docker-log      # used to build the index name
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.5.4   # prepare the image in advance; downloading it requires ***
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
```
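The manifest above ships events to Logstash at 192.168.0.104:5044, so Logstash must be listening with a beats input on that port. A minimal pipeline sketch (the filename and the exact settings are illustrative, not taken from the original setup):

```conf
# e.g. a pipeline file such as beats.conf (hypothetical name)
input {
  beats {
    port => 5044                       # must match hosts in output.logstash
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "k8s-log-%{+YYYY.MM.dd}"  # index names must be lowercase
  }
}
```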

Create and run it:
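Assuming the manifest was saved as filebeat-kubernetes.yaml, deployment and a quick health check look roughly like this (a sketch; pod names and counts will differ per cluster):

```shell
# Apply all resources (ConfigMaps, DaemonSet, RBAC, ServiceAccount)
kubectl apply -f filebeat-kubernetes.yaml

# One filebeat pod should be Running on every node
kubectl get pods -n kube-system -l k8s-app=filebeat -o wide

# Tail the logs of the pods to confirm events are being harvested and published
kubectl logs -n kube-system -l k8s-app=filebeat --tail=20
```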

If the Filebeat pod logs show events being harvested and published without errors, the deployment started successfully.

4. Troubleshooting

If it did not start successfully, check the Logstash logs. In my case the error was:

```
[2019-12-20T19:53:14,049][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch.
{:status=>400, :action=>["index", {:_id=>nil, :_index=>"dev-%{[fields][sourceType]}-2019-12-20", :_type=>"doc", :routing=>nil}, #],
:response=>{"index"=>{"_index"=>"dev-%{[fields][sourceType]}-2019-12-20", "_type"=>"doc", "_id"=>nil, "status"=>400,
"error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [dev-%{[fields][sourceType]}-2019-12-20], must be lowercase",
"index_uuid"=>"_na_", "index"=>"dev-%{[fields][sourceType]}-2019-12-20"}}}}
```

The cause: the index name in the Logstash output must not contain uppercase characters. Here the `%{[fields][sourceType]}` placeholder was not resolved (the event did not carry that field), so it was written literally into the index name, which therefore contained uppercase letters.
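The failure can be reproduced without a cluster. The check below is a simplified sketch of Elasticsearch's index-name rules (the real validator also rejects overlong names, `.`/`..`, and a few more characters), enough to show why the unresolved placeholder is rejected:

```python
# Simplified check of Elasticsearch index-name rules (not the full validator).
def is_valid_index_name(name: str) -> bool:
    forbidden = set(' "*\\<|,>/?#:')
    if not name or name != name.lower():
        return False              # must be all lowercase
    if any(c in forbidden for c in name):
        return False              # no special characters
    if name[0] in "-_+":
        return False              # must not start with -, _ or +
    return True

# An unresolved Logstash placeholder ends up verbatim in the index name:
bad = "dev-%{[fields][sourceType]}-2019-12-20"   # uppercase 'T' from sourceType
good = "k8s-log-2019.12.20"

print(is_valid_index_name(bad))    # False
print(is_valid_index_name(good))   # True
```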

My original Logstash conf file:

```conf
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => '%{platform}-%{[fields][sourceType]}-%{+YYYY-MM-dd}'
        template => "/opt/logstash-6.5.2/config/af-template.json"
        template_overwrite => true
    }
}
```

After the change:

```conf
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "k8s-log-%{+YYYY.MM.dd}"
        template => "/opt/logstash-6.5.2/config/af-template.json"
        template_overwrite => true
    }
}
```

Done, with no pitfalls along the way!
