Setting Up an ELK + Kafka Log Collection Environment

1. Set up the Elasticsearch environment and test it:
  (1) Remove the old ES container
  (2) Remove the old ES image
  (3) Raise the host's mmap limit by running: sudo sysctl -w vm.max_map_count=655360
  (4) Using an FTP client, edit docker-compose.yml and set mem_limit: 2048M
  (5) On the VM, run: cd /home/px2/envdm/springcloudV2.0/
    then run: docker-compose up -d elasticsearch
  (6) Test ES at http://192.168.115.158:9200; if the response shows the ES version number, the environment is configured correctly.
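Steps (1) through (6) can be sketched as a shell session. The container name `elasticsearch` is an assumption based on the compose service name; check `docker ps -a` and `docker images` for the actual names on your host:

```shell
# Remove the old Elasticsearch container and image (names assumed; verify first)
docker rm -f elasticsearch
docker rmi elasticsearch:6.3.0

# Raise the host's mmap count limit that Elasticsearch requires
sudo sysctl -w vm.max_map_count=655360

# After setting mem_limit: 2048M in docker-compose.yml, recreate the service
cd /home/px2/envdm/springcloudV2.0/
docker-compose up -d elasticsearch

# Verify: the JSON response should include the version number
curl http://192.168.115.158:9200
```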

2. Set up Logstash
  (1) Open the Logstash material
  Steps: enter the Logstash container, then edit
  vi /usr/local/logstash-6.3.0/config/logstash.yml (change the ES connection IP)
  so it reads:
  http.host: "0.0.0.0"
  xpack.monitoring.elasticsearch.url: http://192.168.115.158:9200
  xpack.monitoring.elasticsearch.username: elastic
  xpack.monitoring.elasticsearch.password: changeme
  xpack.monitoring.enabled: false

  (2) vi /usr/local/logstash-6.3.0/bin/logstash.conf
  Update the IPs and add the pipeline configuration:
  input{
    kafka{
      bootstrap_servers => ["192.168.115.158:9092"]
      group_id => "test-consumer-group"
      auto_offset_reset => "latest"
      consumer_threads => 5
      decorate_events => true
      topics => ["dm"]
      type => "bhy"
    }
  }

  output {
    elasticsearch{
      hosts => ["192.168.115.158:9200"]
      index => "dmservice-%{+YYYY.MM.dd}"
    }
    stdout{
      codec => json_lines
    }
  }
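For reference, the `dmservice-%{+YYYY.MM.dd}` index pattern makes Logstash write each day's events to a separate index, and the `json_lines` codec emits one JSON document per line on stdout. A minimal Python sketch of that naming and encoding logic (an illustration only, not the actual Logstash code):

```python
import json
from datetime import datetime, timezone

def index_for(event_time: datetime) -> str:
    """Mimic Logstash's dmservice-%{+YYYY.MM.dd} daily index naming."""
    return "dmservice-" + event_time.strftime("%Y.%m.%d")

def to_json_lines(events) -> str:
    """Mimic the json_lines codec: one JSON document per line."""
    return "\n".join(json.dumps(e) for e in events)

events = [{"type": "bhy", "message": "service started"}]
print(index_for(datetime(2019, 11, 6, tzinfo=timezone.utc)))  # dmservice-2019.11.06
print(to_json_lines(events))
```

Daily indices keep each index small, which makes retention simple: old logs are dropped by deleting whole indices rather than individual documents.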

3. Modify the Kibana configuration file
  (1) Find elasticsearch.url: and change it to "http://192.168.115.158:9200"
  (2) Visit http://192.168.115.158:5601; if Kibana loads without complaining about missing ES data, the setup succeeded.
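The same checks can be made from the command line with curl (the index name assumes the Logstash output configured above is already writing data):

```shell
# Kibana status endpoint: should return JSON describing the Kibana state
curl http://192.168.115.158:5601/api/status

# Confirm Logstash has created a daily index in Elasticsearch
curl 'http://192.168.115.158:9200/_cat/indices?v' | grep dmservice
```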

4. Kafka
  (1) In server.properties, set:
  listeners=PLAINTEXT://0.0.0.0:9092
  advertised.listeners=PLAINTEXT://192.168.115.158:9092

  (2) Consume from the topic to verify (note: this starts a console consumer for testing, not the Kafka broker itself):
  ./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic dm --from-beginning
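To verify the pipeline end to end, produce a test message to the dm topic from a second shell. With Kafka releases of this era, kafka-console-producer.sh takes --broker-list:

```shell
# Produce a JSON test message to the dm topic (run in a second shell)
echo '{"type":"bhy","message":"hello elk"}' | \
  ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic dm

# The console consumer started above should print the message, and
# Logstash should index it into Elasticsearch; confirm with a search:
curl 'http://192.168.115.158:9200/dmservice-*/_search?q=message:hello'
```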


Reposted from www.cnblogs.com/lingboweifu/p/11809540.html