Preface
Because Logstash uses a lot of memory and is relatively inflexible, ELK is gradually being replaced by EFK. The EFK in this article is Elasticsearch + Fluentd + Kafka; strictly speaking, the K should be Kibana, which handles log display. This article only covers the data collection pipeline, so Kibana is not demonstrated.
Background
Architecture
Data collection process
cadvisor collects monitoring data from the containers and sends it to Kafka.
The data flows along this path: cadvisor -> Kafka -> Fluentd -> Elasticsearch
Each service in the pipeline can be scaled horizontally to add capacity to the logging system, as sketched below.
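For example, the fluentd service from the docker-compose.yml below can be scaled with Docker Compose's built-in --scale flag. A minimal sketch; note that the simple kafka input used in this article has no consumer group, so replicas may consume the same messages, and the kafka_group input type of fluent-plugin-kafka is the usual choice for real horizontal scaling:

# run two fluentd replicas alongside the other services
docker-compose up -d --scale fluentd=2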
Configuration file
docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    image: elasticsearch:7.5.1
    environment:
      - discovery.type=single-node   # start Elasticsearch in single-node mode
    ports:
      - 9200:9200

  cadvisor:
    image: google/cadvisor
    # replace 192.168.1.60:9092 with your Kafka service's IP:PORT
    command: -storage_driver=kafka -storage_driver_kafka_broker_list=192.168.1.60:9092 -storage_driver_kafka_topic=kafeidou
    depends_on:
      - elasticsearch

  fluentd:
    image: lypgcs/fluentd-es-kafka:v1.3.2
    volumes:
      - ./:/etc/fluent
      - /var/log/fluentd:/var/log/fluentd
Notes:
- The data generated by cadvisor is sent to the Kafka service on 192.168.1.60, using the topic kafeidou.
- Elasticsearch is started in single-node mode (via the discovery.type=single-node environment variable) to keep the experiment simple.
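Before involving fluentd, you can confirm that cadvisor data is actually reaching Kafka. A sketch using the kafka-console-consumer tool that ships with Kafka (the script's path and name depend on your installation):

# read a few messages from the kafeidou topic to confirm cadvisor is producing
kafka-console-consumer.sh --bootstrap-server 192.168.1.60:9092 \
  --topic kafeidou --from-beginning --max-messages 5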
fluent.conf
#<source>
#  @type http
#  port 8888
#</source>

<source>
  @type kafka
  brokers 192.168.1.60:9092
  format json
  <topic>
    topic kafeidou
  </topic>
</source>
<match **>
  @type copy

  # <store>
  #   @type stdout
  # </store>

  <store>
    @type elasticsearch
    host 192.168.1.60
    port 9200
    logstash_format true
    #target_index_key machine_name
    logstash_prefix kafeidou
    logstash_dateformat %Y.%m.%d
    flush_interval 10s
  </store>
</match>
Notes:
- The copy plugin duplicates the events fluentd receives, which is useful for debugging: a copy can be printed to the console or written to a file. In this configuration the debug store is commented out by default and only the required elasticsearch output is enabled; uncomment the @type stdout block whenever you need to check whether data is actually arriving.
- An http input source is also configured. It is likewise commented out by default and exists for debugging, letting you push data into fluentd by hand.
After uncommenting the http source, you can test it on Linux with the following command:
curl -i -X POST -d 'json={"action":"write","user":"kafeidou"}' http://localhost:8888/mytag
- The target_index_key parameter makes the value of a given field in each record the Elasticsearch index name; for example, this configuration file (with the line uncommented) uses the value of the machine_name field as the ES index, as sketched below.
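A minimal sketch of the elasticsearch store with target_index_key enabled; the index_name fallback is an assumption based on the fluent-plugin-elasticsearch documentation and applies to records that lack the field:

<store>
  @type elasticsearch
  host 192.168.1.60
  port 9200
  # use each record's machine_name value as the ES index name
  target_index_key machine_name
  # assumed fallback index for records without a machine_name field
  index_name fluentd
  flush_interval 10s
</store>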
Start deployment
In the directory containing the docker-compose.yml and fluent.conf files, execute:
docker-compose up -d
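A quick way to confirm all three containers came up, using a standard Docker Compose command:

# list the services and their state; each should show "Up"
docker-compose ps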
Once all the containers are running properly, verify the pipeline by checking whether Elasticsearch contains the expected data, i.e. whether the expected index has been created and how many documents it holds:
[root@master kafka]# curl http://192.168.1.60:9200/_cat/indices?v
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open 55a4a25feff6 Fz_5v3suRSasX_Olsp-4tA 1 1 1 0 4kb 4kb
You can also open http://192.168.1.60:9200/_cat/indices?v directly in a browser to view the results, which is more convenient.
You can see that the machine_name field was used as the index value here: the query shows that an index named 55a4a25feff6 was created, containing one document (docs.count).
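To inspect the collected documents themselves, you can query the index through the standard Elasticsearch search API, using the index name from the output above:

# fetch one document from the machine_name-based index to see its fields
curl 'http://192.168.1.60:9200/55a4a25feff6/_search?pretty&size=1'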
At this point, the kafka -> fluentd -> es log collection process is complete.
Of course, this architecture is not fixed; you could also collect data as fluentd -> kafka -> es. I will not demonstrate that here: it only requires modifying the fluent.conf configuration file so that the es-related and kafka-related configurations swap places, with Kafka becoming the output, as sketched below.
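As a rough sketch of that variant (assuming the fluentd image also ships the fluent-plugin-kafka output plugin), the match section would produce to Kafka with the kafka2 output type instead of writing to Elasticsearch:

<match **>
  @type kafka2
  brokers 192.168.1.60:9092
  default_topic kafeidou
  # serialize events as JSON before producing them to Kafka
  <format>
    @type json
  </format>
</match>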
I encourage you to read more of the official documentation; you can find the fluentd elasticsearch plugin and the fluentd kafka plugin on GitHub or the Fluentd website.
Originally published by Four Coffee Beans; when reproducing, please credit the source.