
How to Collect Logs in SpringCloud with Kafka-ELK


This article describes how to collect logs from SpringCloud services with a Kafka-ELK pipeline. The approach has proven practical in our setup, so it is shared here for reference; hopefully you will take something useful away from it.

Once microservice applications are containerized, querying their logs becomes difficult. Container management tools such as Portainer make it easy to view the logs of a single container, but once the number of containers grows, and especially when an application runs multiple instances, finding the right log entry becomes a headache. We therefore adopted a Kafka-Logstash-Elasticsearch-Kibana pipeline to handle logs.

First, the overall log collection approach:

  1. The application sends its logs to the Kafka cluster. A corresponding topic is created in the cluster (see the sketch after this list) and log records are written to it.

  2. Logstash consumes (reads) the log records from Kafka and writes them to Elasticsearch.

  3. Kibana reads from Elasticsearch and renders the corresponding dashboards.
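As a quick illustration of step 1, here is a minimal sketch of creating the topic programmatically with the Kafka AdminClient. This assumes a kafka-clients version that ships AdminClient (0.11 or later); the broker addresses come from the log4j configuration further down, the partition and replication numbers are illustrative, and on older clusters the kafka-topics.sh script does the same job.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateLogTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same broker list as in the log4j configuration below (placeholder addresses)
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.0.0.2:9092,192.0.0.3:9092,192.0.0.4:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic name must match log4j.appender.kafka.topic; 3 partitions and replication factor 2 are illustrative
            NewTopic topic = new NewTopic("api-admin", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}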

The benefits of this approach:

  1. The applications need almost no changes; a bit of configuration is enough to get log collection working.
  2. Writing log records into Kafka is practically never a bottleneck, and Kafka itself scales out easily.
  3. Collected logs are available almost in real time.
  4. The whole pipeline scales well, so bottlenecks are easy to remove; for example, Elasticsearch sharding and scaling are straightforward.

Application-side configuration

On the application side, all we need to do is configure log4j to send its output to Kafka. The example configuration below writes logs both to the console and to Kafka. Note that writing to Kafka requires an appender class such as kafka.producer.KafkaLog4jAppender, which is usually not on the classpath, so I wrote the class locally (KafkaLog4jAppender.java) and referenced it under its own package path.

log4j.rootLogger=INFO,console,kafka

# Send logs to Kafka (use kafka.producer.KafkaLog4jAppender if you rely on the class shipped with Kafka)
log4j.appender.kafka=com.yang.config.KafkaLog4jAppender
log4j.appender.kafka.topic=api-admin
# List the Kafka broker addresses here
log4j.appender.kafka.brokerList=192.0.0.2:9092,192.0.0.3:9092,192.0.0.4:9092
log4j.appender.kafka.compressionType=none
log4j.appender.kafka.syncSend=true
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L %% - %m%n

# Send logs to the console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n
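With this configuration in place, the application code does not change at all: it keeps logging through the ordinary log4j API, and the kafka appender ships every record to the api-admin topic. A minimal sketch (the OrderService class and its method are made up for illustration):

import org.apache.log4j.Logger;

public class OrderService {
    private static final Logger log = Logger.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        // Nothing Kafka-specific here: log4j routes this record both to the console
        // and, through the kafka appender configured above, to the api-admin topic.
        log.info("order created, id=" + orderId);
    }
}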

KafkaLog4jAppender.java

package com.yang.config;

import java.util.Date;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.ConfigException;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.helpers.LogLog;
import org.apache.log4j.spi.LoggingEvent;

public class KafkaLog4jAppender extends AppenderSkeleton {

    private static final String BOOTSTRAP_SERVERS_CONFIG = ProducerConfig.BOOTSTRAP_SERVERS_CONFIG;
    private static final String COMPRESSION_TYPE_CONFIG = ProducerConfig.COMPRESSION_TYPE_CONFIG;
    private static final String ACKS_CONFIG = ProducerConfig.ACKS_CONFIG;
    private static final String RETRIES_CONFIG = ProducerConfig.RETRIES_CONFIG;
    private static final String KEY_SERIALIZER_CLASS_CONFIG = ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG;
    private static final String VALUE_SERIALIZER_CLASS_CONFIG = ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG;
    private static final String SECURITY_PROTOCOL = CommonClientConfigs.SECURITY_PROTOCOL_CONFIG;
    private static final String SSL_TRUSTSTORE_LOCATION = SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG;
    private static final String SSL_TRUSTSTORE_PASSWORD = SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG;
    private static final String SSL_KEYSTORE_TYPE = SslConfigs.SSL_KEYSTORE_TYPE_CONFIG;
    private static final String SSL_KEYSTORE_LOCATION = SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG;
    private static final String SSL_KEYSTORE_PASSWORD = SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG;

    // Values below are injected by log4j from the log4j.appender.kafka.* properties.
    private String brokerList = null;
    private String topic = null;
    private String compressionType = null;
    private String securityProtocol = null;
    private String sslTruststoreLocation = null;
    private String sslTruststorePassword = null;
    private String sslKeystoreType = null;
    private String sslKeystoreLocation = null;
    private String sslKeystorePassword = null;
    private int retries = 0;
    private int requiredNumAcks = Integer.MAX_VALUE;
    private boolean syncSend = false;
    private Producer producer = null;

    public Producer getProducer() { return producer; }
    public String getBrokerList() { return brokerList; }
    public void setBrokerList(String brokerList) { this.brokerList = brokerList; }
    public int getRequiredNumAcks() { return requiredNumAcks; }
    public void setRequiredNumAcks(int requiredNumAcks) { this.requiredNumAcks = requiredNumAcks; }
    public int getRetries() { return retries; }
    public void setRetries(int retries) { this.retries = retries; }
    public String getCompressionType() { return compressionType; }
    public void setCompressionType(String compressionType) { this.compressionType = compressionType; }
    public String getTopic() { return topic; }
    public void setTopic(String topic) { this.topic = topic; }
    public boolean getSyncSend() { return syncSend; }
    public void setSyncSend(boolean syncSend) { this.syncSend = syncSend; }
    public String getSecurityProtocol() { return securityProtocol; }
    public void setSecurityProtocol(String securityProtocol) { this.securityProtocol = securityProtocol; }
    public String getSslTruststoreLocation() { return sslTruststoreLocation; }
    public void setSslTruststoreLocation(String sslTruststoreLocation) { this.sslTruststoreLocation = sslTruststoreLocation; }
    public String getSslTruststorePassword() { return sslTruststorePassword; }
    public void setSslTruststorePassword(String sslTruststorePassword) { this.sslTruststorePassword = sslTruststorePassword; }
    public String getSslKeystoreType() { return sslKeystoreType; }
    public void setSslKeystoreType(String sslKeystoreType) { this.sslKeystoreType = sslKeystoreType; }
    public String getSslKeystoreLocation() { return sslKeystoreLocation; }
    public void setSslKeystoreLocation(String sslKeystoreLocation) { this.sslKeystoreLocation = sslKeystoreLocation; }
    public String getSslKeystorePassword() { return sslKeystorePassword; }
    public void setSslKeystorePassword(String sslKeystorePassword) { this.sslKeystorePassword = sslKeystorePassword; }

    @Override
    public void activateOptions() {
        // Check config parameter validity and build the producer once log4j has set all properties.
        Properties props = new Properties();
        if (brokerList != null)
            props.put(BOOTSTRAP_SERVERS_CONFIG, brokerList);
        if (props.isEmpty())
            throw new ConfigException("The bootstrap servers property should be specified");
        if (topic == null)
            throw new ConfigException("Topic must be specified by the Kafka log4j appender");
        if (compressionType != null)
            props.put(COMPRESSION_TYPE_CONFIG, compressionType);
        if (requiredNumAcks != Integer.MAX_VALUE)
            props.put(ACKS_CONFIG, Integer.toString(requiredNumAcks));
        if (retries > 0)
            props.put(RETRIES_CONFIG, retries);
        if (securityProtocol != null && sslTruststoreLocation != null && sslTruststorePassword != null) {
            props.put(SECURITY_PROTOCOL, securityProtocol);
            props.put(SSL_TRUSTSTORE_LOCATION, sslTruststoreLocation);
            props.put(SSL_TRUSTSTORE_PASSWORD, sslTruststorePassword);
            if (sslKeystoreType != null && sslKeystoreLocation != null && sslKeystorePassword != null) {
                props.put(SSL_KEYSTORE_TYPE, sslKeystoreType);
                props.put(SSL_KEYSTORE_LOCATION, sslKeystoreLocation);
                props.put(SSL_KEYSTORE_PASSWORD, sslKeystorePassword);
            }
        }
        props.put(KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put(VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
        this.producer = getKafkaProducer(props);
        LogLog.debug("Kafka producer connected to " + brokerList);
        LogLog.debug("Logging for topic: " + topic);
    }

    protected Producer getKafkaProducer(Properties props) {
        return new KafkaProducer(props);
    }

    @Override
    protected void append(LoggingEvent event) {
        // Format the event and send it to the configured topic; block on the future when syncSend is true.
        String message = subAppend(event);
        LogLog.debug("[" + new Date(event.getTimeStamp()) + "]" + message);
        Future response = producer.send(new ProducerRecord(topic, message.getBytes()));
        if (syncSend) {
            try {
                response.get();
            } catch (InterruptedException ex) {
                throw new RuntimeException(ex);
            } catch (ExecutionException ex) {
                throw new RuntimeException(ex);
            }
        }
    }

    private String subAppend(LoggingEvent event) {
        return (this.layout == null) ? event.getRenderedMessage() : this.layout.format(event);
    }

    public void close() {
        if (!this.closed) {
            this.closed = true;
            producer.close();
        }
    }

    public boolean requiresLayout() {
        return true;
    }
}
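Before wiring up Logstash it is worth confirming that log lines actually reach the topic. A throwaway consumer like the sketch below simply prints whatever the appender has produced (this assumes kafka-clients 2.x for the Duration-based poll; the group id is arbitrary):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class LogTopicSmokeTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.0.0.2:9092,192.0.0.3:9092,192.0.0.4:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-smoke-test");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("api-admin"));
            // Poll a few times and print whatever log lines the appender has shipped to the topic
            for (int i = 0; i < 5; i++) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(2));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.println(new String(record.value()));
                }
            }
        }
    }
}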

Logstash configuration

Logstash instances may need to be spun up quickly, so we deploy it with Docker, which makes it fast to start and easy to scale.

We use the official Logstash Docker image directly. The simplest way of starting it is shown below as a demonstration; the image documentation describes several other ways to deploy it with Docker.

In the docker startup command, the input block specifies the Kafka cluster addresses and the topic to consume; the output block specifies the Elasticsearch cluster addresses, the index name, and a few Elasticsearch output parameters.

# Startup command
docker run -it -d logstash -e 'input {
  kafka {
    bootstrap_servers => "kafkaIp1:9092,kafkaIp2:9092"
    topics => ["api-admin"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch2:9200","elasticsearch3:9200","elasticsearch4:9200"]
    index => "api-admin-%{+YYYY.MM.dd}"
    flush_size => 20000
    idle_flush_time => 10
    template_overwrite => true
  }
}'
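Once the container is running, one quick way to confirm that documents are being indexed is to query Elasticsearch's _count API for the api-admin-* indices. A small sketch using plain java.net (the host name matches the cluster addresses used above):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndexSmokeTest {
    public static void main(String[] args) throws Exception {
        // Any node of the Elasticsearch cluster will do
        URL url = new URL("http://elasticsearch2:9200/api-admin-*/_count");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // e.g. {"count":12345,...} once Logstash starts indexing
            }
        }
        conn.disconnect();
    }
}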

To start it in Marathon, just add the logstash -e 'input {} output {}' parameters (with the pipeline above) to the Command field. One more note: container orchestration systems (Mesos/Marathon, Kubernetes) may require a health check; the port to use for that is 9600.

That is how log collection with Kafka-ELK works in SpringCloud. Some of these points are likely to come up in day-to-day work; hopefully this article has taught you something new.
