Deploying an EFK + Kafka Log Collection System with Docker

The log collection pipeline is: fluentd -> kafka -> logstash -> elasticsearch -> kibana

The images used are:

registry.cn-hangzhou.aliyuncs.com/xiao_bai/fluentd-kafka:v1    (fluentd 1.4)
registry.cn-hangzhou.aliyuncs.com/xiao_bai/kafka:v1            (kafka 2.1)
docker.io/zookeeper:3.5
docker.elastic.co/elasticsearch/elasticsearch:6.5.4
docker.elastic.co/logstash/logstash:6.5.4
docker.elastic.co/kibana/kibana:6.5.4
docker.io/mobz/elasticsearch-head:5


Node layout:

192.168.2.244    fluentd, zk, kafka, es, kibana
192.168.2.245    logstash, es-head, zk, kafka, es
192.168.2.246    es-balance, zk, kafka, es

Create the data persistence and configuration directories

mkdir -p /efk/{es,es-balance,kafka,zk,fluentd,logstash}/conf && mkdir /efk/{es,kafka,zk}/data


  • System environment configuration

echo "vm.max_map_count=262144" >> /etc/sysctl.conf && sysctl -p
echo -e "* soft nofile 65536\n* hard nofile 65536" >> /etc/security/limits.conf
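
To confirm the settings are active (the nofile limit only applies to new login sessions), a quick check:

sysctl vm.max_map_count    # should print 262144
ulimit -n                  # should print 65536 after logging in again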


  • ZooKeeper cluster installation

Configuration file (identical on all three machines)

cat > /efk/zk/conf/zoo.cfg << EOF
tickTime=2000
dataDir=/data/zookeeper/
dataLogDir=/data/zookeeper/logs
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.2.244:2888:3888
server.2=192.168.2.245:2888:3888
server.3=192.168.2.246:2888:3888
EOF

Set myid on each machine

echo 1 > /efk/zk/data/myid    # 244, must match the server.N id in zoo.cfg
echo 2 > /efk/zk/data/myid    # 245
echo 3 > /efk/zk/data/myid    # 246

Start the container (run the same command on all three machines)

docker run -d --restart=always --name zk --net=host -v /etc/localtime:/etc/localtime:ro -v /efk/zk/conf/zoo.cfg:/conf/zoo.cfg -v /efk/zk/data/:/data/zookeeper zookeeper:3.5

After the ZooKeeper cluster starts, check each node's role (one node should report Mode: leader, the other two Mode: follower)

docker exec -it zk zkServer.sh status
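
To double-check from the client side, you can run a one-off zkCli.sh command (bundled in the zookeeper image) against any node; once Kafka is up you should also see its znodes such as /brokers here:

docker exec -it zk zkCli.sh -server 192.168.2.244:2181 ls /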



  • Kafka cluster installation

Configuration file (broker.id must be different on each machine; also set listeners to the local machine's IP)

cat > /efk/kafka/conf/server.properties << EOF
broker.id=1
listeners=PLAINTEXT://192.168.2.246:9092
port=9092
log.dirs=/tmp/kafka-logs
message.max.bytes=5242880
default.replication.factor=2
replica.fetch.max.bytes=5242880
zookeeper.connect=192.168.2.244:2181,192.168.2.245:2181,192.168.2.246:2181
EOF

Start the container (run the same command on all three machines)

docker run -d --restart=always --name=kafka --net=host -v /etc/localtime:/etc/localtime:ro -v /efk/kafka/conf/server.properties:/usr/local/kafka_2.11-2.1.0/config/server.properties -v /efk/kafka/data:/tmp/kafka-logs registry.cn-hangzhou.aliyuncs.com/xiao_bai/kafka:v1
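
Once all three brokers are up, you can pre-create the two topics used by the rest of the pipeline and check that replication spans the brokers. This is a sketch: the Kafka bin path is assumed from the image's config mount path above, and a partition count of 1 matches the consumer_threads value used later in the Logstash config.

docker exec -it kafka /usr/local/kafka_2.11-2.1.0/bin/kafka-topics.sh --zookeeper 192.168.2.244:2181 --create --topic docker_log --partitions 1 --replication-factor 2
docker exec -it kafka /usr/local/kafka_2.11-2.1.0/bin/kafka-topics.sh --zookeeper 192.168.2.244:2181 --create --topic sys_log --partitions 1 --replication-factor 2
docker exec -it kafka /usr/local/kafka_2.11-2.1.0/bin/kafka-topics.sh --zookeeper 192.168.2.244:2181 --describe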



  • Elasticsearch cluster installation

Configuration file (change node.name and network.publish_host on each machine)

cat > /efk/es/conf/elasticsearch.yml << EOF
cluster.name: es-cluster
node.name: es-node1
network.bind_host: 0.0.0.0
network.publish_host: 192.168.2.244
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true 
node.data: true  
discovery.zen.ping.unicast.hosts: ["192.168.2.244:9300","192.168.2.245:9300","192.168.2.246:9300"]
discovery.zen.minimum_master_nodes: 2
EOF

Configuration file for the balance (coordinating-only) node

cat > /efk/es-balance/conf/elasticsearch.yml << EOF
cluster.name: es-cluster
node.name: es-node4
network.bind_host: 0.0.0.0
network.publish_host: 192.168.2.246
http.port: 9201
transport.tcp.port: 9301
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.data: false
node.ingest: false
discovery.zen.ping.unicast.hosts: ["192.168.2.244:9300","192.168.2.245:9300","192.168.2.246:9300"]
discovery.zen.minimum_master_nodes: 2
EOF

Start the container (data nodes; run the same command on all three machines)

docker run -d --restart=always --name=es --net=host -e ES_JAVA_OPTS="-Xms2g -Xmx2g" -v /etc/localtime:/etc/localtime:ro -v /efk/es/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /efk/es/data:/usr/share/elasticsearch/data docker.elastic.co/elasticsearch/elasticsearch:6.5.4
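
If the es container exits with a permission error on /usr/share/elasticsearch/data, the bind-mounted host directory is likely not writable by the container's elasticsearch user (uid 1000 in the official image); adjusting ownership on the host usually fixes it:

chown -R 1000:1000 /efk/es/data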

Balance node (on 192.168.2.246)

docker run -d --restart=always --name=es-balance --net=host -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -v /etc/localtime:/etc/localtime:ro -v /efk/es-balance/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml docker.elastic.co/elasticsearch/elasticsearch:6.5.4

Start the es-head plugin (on 192.168.2.245)

docker run -d --name=es-head -p 9100:9100 mobz/elasticsearch-head:5
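
After all four nodes are running, verify the cluster from any host (number_of_nodes should be 4: three data/master nodes plus the balance node):

curl -s 'http://192.168.2.244:9200/_cluster/health?pretty'
curl -s 'http://192.168.2.246:9201/_cat/nodes?v'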



  • Kibana installation

Start the container

docker run -d --name kibana --restart=always -e ELASTICSEARCH_URL=http://192.168.2.246:9201 -v /etc/localtime:/etc/localtime:ro -p 5601:5601 docker.elastic.co/kibana/kibana:6.5.4
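
Once up, Kibana should be reachable in a browser at http://192.168.2.244:5601; a quick liveness check from the shell (the /api/status endpoint returns Kibana's own status JSON):

curl -s 'http://192.168.2.244:5601/api/status'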



  • Logstash installation

Configuration files

consumer_threads should match the number of partitions configured for the Kafka topics.

touch /efk/logstash/conf/logstash.yml
cat > /efk/logstash/conf/kafka-logstash-es.conf << EOF
input {
    kafka {
        bootstrap_servers => "192.168.2.244:9092,192.168.2.245:9092,192.168.2.246:9092"
        group_id => "kafka-logstash"
        topics => ["docker_log","sys_log"]
        consumer_threads => 1
        codec => "json"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.2.244:9200","192.168.2.245:9200","192.168.2.246:9200"]
        index => "logstash-%{+YYYY.MM.dd}"
        codec => "json"
    }
    stdout {
        codec => "rubydebug"
    }
}
EOF

Start the Logstash container

docker run -d --name=logstash --net=host -v /etc/localtime:/etc/localtime:ro -v /efk/logstash/conf/logstash.yml:/usr/share/logstash/config/logstash.yml -v /efk/logstash/conf/kafka-logstash-es.conf:/usr/share/logstash/pipeline/kafka-logstash-es.conf docker.elastic.co/logstash/logstash:6.5.4
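
To verify, watch the Logstash log for a clean pipeline start, and check that logstash-* indices appear in Elasticsearch once messages start arriving from Kafka (i.e. after Fluentd is running):

docker logs -f logstash
curl -s 'http://192.168.2.244:9200/_cat/indices?v'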



  • Fluentd installation

Configuration file (collects container logs and system logs)

The last match block must use **, otherwise warn messages will appear for unmatched tags.

cat > /efk/fluentd/conf/fluent.conf << EOF
<source>
  @type tail
  path /var/log/containers/*/*.log
  pos_file /var/log/containers/containers.log.pos
  tag docker.log 
  <parse>
    @type json
  </parse>
  read_from_head true
</source>
<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  <parse>
    message_format rfc3164
  </parse>
  tag system.log
</source>
<filter **>
  @type record_transformer
  <record>
    hostname "192.168.2.244"
  </record>
</filter>
<match docker.*>
  @type kafka_buffered
  brokers 192.168.2.244:9092,192.168.2.245:9092,192.168.2.246:9092
  default_topic docker_log
  output_data_type json
</match>
<match **>
  @type kafka_buffered
  brokers 192.168.2.244:9092,192.168.2.245:9092,192.168.2.246:9092
  default_topic sys_log
  output_data_type json
</match>
EOF

Collecting system logs requires the rsyslog service to forward to Fluentd (configure the target IP according to each server)

echo '*.* @192.168.2.244:5140' >> /etc/rsyslog.conf && systemctl restart rsyslog

Start the Fluentd container

docker run -d --name=fluentd --net=host -v /etc/localtime:/etc/localtime:ro -v /efk/fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf -v /var/lib/docker/containers/:/var/log/containers registry.cn-hangzhou.aliyuncs.com/xiao_bai/fluentd-kafka:v1
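
A simple end-to-end test: generate a syslog entry on a host whose rsyslog forwards to Fluentd, confirm it reaches the sys_log topic (the Kafka bin path is assumed as above), then check that it shows up under the logstash-* index pattern in Kibana:

logger "efk-pipeline-test"
docker exec -it kafka /usr/local/kafka_2.11-2.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.2.244:9092 --topic sys_log --from-beginning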


References:

https://www.elastic.co/guide

https://github.com/mobz/elasticsearch-head

https://docs.fluentd.org

http://kafka.apache.org/21/documentation.html

http://zookeeper.apache.org/doc/r3.5.4-beta/zookeeperAdmin.html


Reposted from blog.51cto.com/13740724/2387263