A note on building the ELK log platform

I recently took some time to build a new version of the ELK log platform.

Elasticsearch, Logstash, Kibana, and Filebeat are all version 5.6.

Redis 3.2 is used as a buffer in the middle.

The operating system is CentOS 7.4.

A Java environment must be installed and set up first.

All components are installed by downloading the RPM packages from the official websites and installing them directly.
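
For reference, on CentOS 7 that might look roughly like this (the package filenames are placeholders; use whichever Java 8 JDK you prefer and the exact 5.6.x RPMs you downloaded):

# Java 8 is required; OpenJDK from the CentOS repos is one option
yum install -y java-1.8.0-openjdk
# Install the downloaded RPMs (filenames are illustrative)
rpm -ivh elasticsearch-5.6.0.rpm
rpm -ivh logstash-5.6.0.rpm
rpm -ivh kibana-5.6.0-x86_64.rpm
rpm -ivh filebeat-5.6.0-x86_64.rpm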

Let's go over the configuration and the important points.

1, Elasticsearch

The configuration is not complicated; on a single machine you can run it with the following configuration.

The configuration file is /etc/elasticsearch/elasticsearch.yml

The configuration is as follows; I won't write out the comments.

cluster.name: elasticsearch
node.name: node-1
path.data: /data/elasticsearch/data/
path.logs: /data/elasticsearch/logs/
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.17.3.14"]
discovery.zen.minimum_master_nodes: 1
action.destructive_requires_name: true

Set bootstrap.memory_lock to false, otherwise it will not start.
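
With that in place, Elasticsearch can be started and checked roughly like this (a sketch assuming the systemd unit installed by the RPM and the data/log paths configured above):

# The data and log directories must exist and belong to the elasticsearch user
mkdir -p /data/elasticsearch/data /data/elasticsearch/logs
chown -R elasticsearch:elasticsearch /data/elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
# Verify it answers on the HTTP port configured above
curl http://172.17.3.14:9200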

2, Logstash

After installation, the default configuration files are under /etc/logstash/. Custom configuration files generally go under conf.d; the default configuration files should not be changed.

After installing Logstash I ran into a problem: when starting it, it complained that the java command could not be found. I had already declared the Java environment globally, but it still didn't work. Here is the solution.

First uninstall Logstash, then symlink the java binary into /usr/bin.
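
For example, if the JDK happens to live under /usr/java/default (an assumed path; point it at wherever your java binary actually is):

# Assumed JDK location -- adjust to your actual java path
ln -s /usr/java/default/bin/java /usr/bin/java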

Then reinstall it and generate the startup script. Under CentOS 7 it is generated like this:

/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Under CentOS 6 it is generated like this:

/usr/share/logstash/bin/system-install /etc/logstash/startup.options sysv

Then we can start Logstash.
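
On CentOS 7 that amounts to something like this (using the systemd unit generated above):

systemctl daemon-reload
systemctl start logstash
systemctl status logstash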

We can ship the system messages log to Elasticsearch. The configuration is as follows:

input {
        file {
                type => "flow"
                path => "/var/log/messages"
        }
}


output {
  stdout {
    codec => rubydebug
  }
  if [type] == "flow" {
    elasticsearch {
      index => "flows-%{+YYYY.MM.dd}"
      hosts => "172.17.3.14:9200"
    }
  }
}

Note that the type set in the input and the type checked in the output conditional must be the same.

The stdout block sends events to standard output; it is handy for debugging and is not needed in production.
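
Once events start flowing, a quick way to confirm the index was created is to ask Elasticsearch directly (the _cat API is standard in 5.x):

curl 'http://172.17.3.14:9200/_cat/indices?v'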

Obtain data from Redis and output it to Elasticsearch. The configuration is as follows:

input {
  redis {
    host => "172.17.3.14"
    port => "6379"
    type => "nginx_access"
    db => "0"
    data_type => "list"
    key => "ucenterfront"
  }
}

output {
  stdout {
    codec => rubydebug
  }
  if [type] == "nginx_access" {
    elasticsearch {
      hosts => "172.17.3.14:9200"
      index => "ucenter-front-%{+YYYY.MM.dd}"
    }
  }
}

Note that the key in the input must be the same as the key defined in Filebeat.

The type must be consistent between input and output. The index value defined in the output can be the same as the key above or not; it doesn't have much impact. When creating the index pattern in Kibana, just fill in this index value.

Also note that the type in the input must be the same as the document_type in Filebeat. Mine were inconsistent at first, and no data showed up in Logstash.

Nothing else needs special attention; the defaults are fine.
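
Before starting, the pipeline can be syntax-checked. A sketch, assuming the configuration above was saved as /etc/logstash/conf.d/redis.conf (a filename made up for illustration):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf --config.test_and_exit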

3, Filebeat installation

After installation, the configuration file is under /etc/filebeat.


filebeat.prospectors:
- input_type: log
  paths:
    - /data/logs/nginx/*.log
  document_type: nginx_access
  scan_frequency: 1s
#----------------------------- Redis output --------------------------------
output.redis:
  hosts: ["172.17.3.14:6379"]
  key: "ucenterfront"
  db: 0
  db_topology: 1
  timeout: 5
  reconnect_interval: 1


The document_type above must be consistent with the type in Logstash.

The scan interval (scan_frequency) is changed to 1 second.
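
After starting Filebeat, you can check that events are landing in the Redis list defined above (a quick sketch using redis-cli):

systemctl start filebeat
# The list length should grow as new log lines arrive
redis-cli -h 172.17.3.14 -p 6379 llen ucenterfront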

4, Kibana

Install it directly; the configuration file is /etc/kibana/kibana.yml.

server.host: "172.17.3.14"
elasticsearch.url: "http://localhost:9200"

The configuration is very simple: server.host is the address Kibana listens on.

elasticsearch.url below it is the address of Elasticsearch.
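
Then start Kibana and open it in a browser (a sketch assuming the default Kibana port, 5601):

systemctl start kibana
# Kibana listens on port 5601 by default
# Browse to http://172.17.3.14:5601 and create an index pattern using the index names above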

At this point, the construction is complete.

