A lightweight ELK deployment

Author: Dengcong Cong

  In simple log-analysis scenarios you can work directly in the log files: grep and awk will extract the information you want. At larger scale, though, this approach is inefficient, and problems appear: archived logs grow too big, full-text searches become too slow, and multi-dimensional queries are impractical. What is needed is centralized log management, collecting and aggregating the logs from all servers. The common solution is to build a centralized log collection system in which the logs of every node are collected, managed, and accessed in one place.

Large systems are generally deployed as distributed architectures, with different service modules on different servers. When a problem occurs, the key information is usually spread across several servers and service modules, so building a centralized log system greatly improves the efficiency of fault location.

A complete centralized logging system needs to include the following main features:

  • Collection - collect log data from multiple sources
  • Transmission - transmit the log data stably to the central system
  • Storage - store the log data
  • Analysis - support analysis through a UI
  • Warning - provide error reporting and monitoring mechanisms

ELK offers a complete open-source solution whose components fit together seamlessly and efficiently cover many use cases. It is currently a mainstream logging system.

ELK overview:

ELK is an abbreviation of three open-source projects: Elasticsearch, Logstash, and Kibana. A fourth tool, Filebeat, has since been added: a lightweight log collection agent with a small footprint, suitable for installing on each server to ship logs to Logstash for collection. It is also the officially recommended tool.

Elasticsearch is an open-source distributed search engine that provides three functions: collecting, analyzing, and storing data. Its features include: distributed operation, zero configuration, auto-discovery, automatic index sharding, an index replication mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is mainly a tool for log collection, analysis, and filtering, and supports high-volume data acquisition. It generally works in a client/server architecture: the client side is installed on the hosts whose logs need to be collected, and the server side filters and modifies the logs received from each node, then forwards them concurrently to Elasticsearch.

Kibana is also a free, open-source tool. It provides a friendly web interface for the logs that Logstash has sent to Elasticsearch, helping you summarize, analyze, and search important log data.

Filebeat belongs to the Beats family. Beats currently contains four tools:

    1. Packetbeat (collects network traffic data)
    2. Topbeat (collects system-, process-, and filesystem-level CPU and memory usage data)
    3. Filebeat (collects log file data)
    4. Winlogbeat (collects Windows event log data)
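Since Filebeat is the recommended collector, a minimal filebeat.yml sketch for shipping a log file to Logstash might look like the following. The log path and Logstash host here are assumptions for illustration, not taken from this deployment:

```yaml
filebeat.inputs:             # 6.x key (older 5.x/6.x configs use filebeat.prospectors)
  - type: log
    paths:
      - /var/log/boot.log    # example path; point this at the logs you want shipped
output.logstash:
  hosts: ["x.x.x.x:5044"]    # hypothetical Logstash host; 5044 is the conventional beats port
```

For this to work, the Logstash pipeline would need a matching `beats { port => 5044 }` input instead of (or alongside) the `file` input shown later in this post.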

Official documentation:

Filebeat:

https://www.elastic.co/cn/products/beats/filebeat
https://www.elastic.co/guide/en/beats/filebeat/5.6/index.html

Logstash:
https://www.elastic.co/cn/products/logstash
https://www.elastic.co/guide/en/logstash/5.6/index.html

Kibana:

https://www.elastic.co/cn/products/kibana

https://www.elastic.co/guide/en/kibana/5.5/index.html

Elasticsearch:
https://www.elastic.co/cn/products/elasticsearch
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/index.html

elasticsearch Chinese community:
https://elasticsearch.cn/

 

  Elasticsearch and Logstash require a JDK. If the server does not have one, you need to install it; versions 5.x and above require JDK 1.8.

Install the JDK and set the environment variables:

drwxr-xr-x. 8   10  143       255 Jun 17  2014 jdk1.8.0_11
-rw-r--r--. 1 root root 159019376 Jun  6 11:58 jdk-8u11-linux-x64.tar.gz
[root@test_server java]# cd jdk1.8.0_11/
[root@test_server jdk1.8.0_11]# ll
total 25428
drwxr-xr-x. 2 10 143     4096 Jun 17  2014 bin
-r--r--r--. 1 10 143     3244 Jun 17  2014 COPYRIGHT
drwxr-xr-x. 4 10 143      122 Jun 17  2014 db
drwxr-xr-x. 3 10 143      132 Jun 17  2014 include
-rw-r--r--. 1 10 143  4673670 Jun 17  2014 javafx-src.zip
drwxr-xr-x. 5 10 143      185 Jun 17  2014 jre
drwxr-xr-x. 5 10 143      225 Jun 17  2014 lib
-r--r--r--. 1 10 143       40 Jun 17  2014 LICENSE
drwxr-xr-x. 4 10 143       47 Jun 17  2014 man
-r--r--r--. 1 10 143      159 Jun 17  2014 README.html
-rw-r--r--. 1 10 143      525 Jun 17  2014 release
-rw-r--r--. 1 10 143 21047086 Jun 17  2014 src.zip
-rw-r--r--. 1 10 143   110114 Jun 17  2014 THIRDPARTYLICENSEREADME-JAVAFX.txt
-r--r--r--. 1 10 143   178445 Jun 17  2014 THIRDPARTYLICENSEREADME.txt
[root@test_server jdk1.8.0_11]# 

JDK environment settings (appended to /etc/profile):

[root@test_server jdk1.8.0_11]# tail -f /etc/profile
done

unset i
unset -f pathmunge
JAVA_HOME=/usr/local/java/jdk1.8.0_11
JRE_HOME=${JAVA_HOME}/jre
CLASSPATH=$JAVA_HOME/lib:$JRE_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export PATH JAVA_HOME CLASSPATH JRE_HOME
ulimit -u 4096
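The effect of the PATH line above can be exercised on its own; this is a minimal sketch, assuming the same JDK install path as in the profile:

```shell
# replicate the profile logic: prepend the JDK bin directory to PATH
JAVA_HOME=/usr/local/java/jdk1.8.0_11
JRE_HOME=${JAVA_HOME}/jre
PATH=$JAVA_HOME/bin:$PATH
# the first PATH entry should now be the JDK's bin directory,
# so its java binary wins over any system java
first_entry=$(echo "$PATH" | cut -d: -f1)
echo "$first_entry"
```

After editing /etc/profile on the real server, run `source /etc/profile` so the variables take effect in the current shell before verifying with `java -version`.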

Verify the version information to confirm the installation succeeded:

[root@test_server jdk1.8.0_11]# java -version
java version "1.8.0_11"
Java(TM) SE Runtime Environment (build 1.8.0_11-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.11-b03, mixed mode)
[root@test_server jdk1.8.0_11]# 

Install the ELK software. There are three packages, Elasticsearch, Logstash, and Kibana; make sure their versions are consistent.

[root@test_server elk]# ll
total 89284
drwxr-xr-x  9 elk elk      155 Jun  7 20:53 elasticsearch-6.3.0
-rw-r--r--  1 elk elk 91423553 Jun  6 16:46 elasticsearch-6.3.0.tar.gz
drwxr-xr-x 11 elk elk      229 Jun  6 18:05 kibana-6.3.0-linux-x86_64
drwxr-xr-x 14 elk elk      321 Jun  7 00:32 logstash-6.3.0

Create a user to run ELK:

[root@test_server]# groupadd elk
[root@test_server]# useradd -g elk elk
# directory to store and run the software
[root@test_server]# mkdir /elk
[root@test_server]# chown -R elk:elk /elk
# make sure selinux is disabled
[root@test_server]# getenforce
Disabled

Configure Elasticsearch and start it

The following are my modified entries:
[root@test_server config]# cat elasticsearch.yml | grep -v ^"#"
cluster.name: my-application
node.name: node-1
network.host: xxxx
http.port: 9200
# start elasticsearch as a background task
/elk/elasticsearch-6.3.0/bin/elasticsearch &
# check whether it started successfully
[root@test_server config]# ss -ntl
State      Recv-Q Send-Q      Local Address:Port     Peer Address:Port
LISTEN     0      128                     *:10022               *:*
LISTEN     0      128                     *:80                  *:*
LISTEN     0      100             127.0.0.1:25                  *:*
LISTEN     0      128          118.188.20.5:5601                *:*
LISTEN     0      128                    :::10022              :::*
LISTEN     0      128           ::ffff:xxxx:9200               :::*    // elasticsearch http port
LISTEN     0      128           ::ffff:xxxx:9300               :::*    // elasticsearch transport port
LISTEN     0      100                   ::1:25                 :::*
LISTEN     0      50       ::ffff:127.0.0.1:9600               :::*
[root@test_server config]# 
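Besides checking the listening ports, the REST interface can confirm the cluster is healthy. A sketch, assuming the same `network.host` value as in elasticsearch.yml (the fallback message is only for illustration):

```shell
# query elasticsearch's cluster health endpoint; print a message if unreachable
es_host="x.x.x.x:9200"   # substitute the network.host value from elasticsearch.yml
health=$(curl -s --max-time 5 "http://$es_host/_cluster/health?pretty" || echo "elasticsearch not reachable")
echo "$health"
```

A healthy single-node cluster reports a "status" of green or yellow in the JSON response.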

Configure Logstash

[root@test_server config]# cat logstash.yml|grep -v ^"#"
path.data: /elk/logstash-6.3.0/data
path.config: /elk/logstash-6.3.0/config
# start logstash
[root@test_server config]# /elk/logstash-6.3.0/bin/logstash -f /elk/logstash-6.3.0/config/yourfile.conf &

Logstash log collection configuration file; my syslog configuration below is for reference:

[root@test_server config]# cat syslog.conf 
input {
  file {
    path => "/var/log/boot.log"          
    start_position => "beginning"
    type => "test" 
  }
}

filter {
    grok {
      match => { "message" => "(?:%{SYSLOGTIMESTAMP:timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource}+(?: %{SYSLOGPROG}:|)" }
  }
}

output {
  elasticsearch {
    hosts => "x.x.x.x:9200"
    index => "blog2"
    document_type => "test"
  }
  stdout { codec => rubydebug }  # print events to the console
}
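The grok filter above extracts the syslog timestamp and host from each line. Outside of Logstash, the first pattern can be roughly sanity-checked against a sample line with a plain regex; this is only an illustration, since grok's `SYSLOGTIMESTAMP` pattern is more complete:

```shell
# approximate %{SYSLOGTIMESTAMP} with a basic regex against a sample syslog line
line='Jun  7 20:53:01 test_server systemd: Started Session 42 of user root.'
ts=$(printf '%s\n' "$line" | grep -oE '^[A-Z][a-z]{2} +[0-9]+ [0-9:]{8}')
echo "$ts"
```

Logstash itself can also validate the whole pipeline file before starting, via `bin/logstash -f syslog.conf --config.test_and_exit`.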

Configure Kibana. The following entries are my changes:

[root@test_server elk]# cat kibana-6.3.0-linux-x86_64/config/kibana.yml|grep -Ev ^"#|^$"
server.port: 5601
server.host: "x.x.x.x"
elasticsearch.url: "http://x.x.x.x:9200"
# start kibana
[root@test_server elk]# /elk/kibana-6.3.0-linux-x86_64/bin/kibana &

Put an nginx proxy in front of Kibana to add authentication; add the following to the nginx configuration file:

        include /etc/nginx/default.d/*.conf;
        location / {
            auth_basic "secret";
            auth_basic_user_file /etc/nginx/passwd.db;
            proxy_pass http://x.x.x.x:5601;
            proxy_set_header Host $host:5601;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Via "nginx";
        }
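The `auth_basic_user_file` directive above points at a credentials file that has to be created separately. The usual tool is `htpasswd` (from httpd-tools / apache2-utils); if it is not installed, `openssl` can generate an equivalent entry. The user name `elk` and the password here are examples only:

```shell
# generate an apache-style MD5 (apr1) password entry for nginx basic auth;
# writes to ./passwd.db here -- copy it to /etc/nginx/passwd.db for the config above
printf 'elk:%s\n' "$(openssl passwd -apr1 'ChangeMe123')" > passwd.db
cat passwd.db
```

After placing the file, reload nginx (`nginx -s reload`) and the browser will prompt for these credentials before reaching Kibana on port 5601.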

 


Origin www.cnblogs.com/dengcongcong/p/10990780.html