Logstash uses grok to filter nginx logs (2)

http://www.cnblogs.com/Orgliny/p/5592186.html

In production, nginx logs often use a custom log format. To make searching and statistics convenient in Kibana, we need logstash to parse the message field and store it in structured form, so the messages must be parsed.

  This article uses the grok filter with match regular expressions, customized to fit our own log_format.

1. nginx log format

  The log_format configuration is as follows:

log_format  main  '$remote_addr - $remote_user [$time_local] $http_host $request_method "$uri" "$query_string" '
                  '$status $body_bytes_sent "$http_referer" $upstream_status $upstream_addr $request_time $upstream_response_time '
                  '"$http_user_agent" "$http_x_forwarded_for"' ;

  The corresponding log is as follows:

1.1.1.1 - - [06/Jun/2016:00:00:01 +0800] www.test.com GET "/api/index" "?cms=0&rnd=1692442321" 200 4 "http://www.test.com/?cp=sfwefsc" 200 192.168.0.122:80 0.004 0.004 "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36" "-"

2. Write regular expressions

  There are a number of predefined grok patterns in logstash for us to use. You can view them in the Grok Debugger, or directly in the $logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.0/patterns/ directory.

  The basic definitions live in the grok-patterns file, and we can use its patterns directly. Of course, not all of them fit the nginx fields; for those we need to define custom patterns and load them by specifying patterns_dir.

At the same time, when writing regular expressions you can use the Grok Debugger or Grok Constructor tools to debug faster. When you are not sure which of logstash's patterns to use, the Grok Debugger's Discover feature can also try to match your log line automatically.
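
For example, assuming logstash is installed under /usr/local/logstash (the layout used later in this article), you can look up the bundled definitions straight from the shell:

# list the bundled pattern files
ls /usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.0/patterns/
# print the definitions of a few patterns used below
grep -E '^(IPORHOST|HTTPDATE|QS) ' /usr/local/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-4.0.0/patterns/grok-patterns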

  1) nginx standard log format

    The grok patterns bundled with logstash already include Apache's standard log format:

COMMONAPACHELOG %{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

    For the standard nginx log format, notice that it only appends one more variable, $http_x_forwarded_for, at the end. The grok pattern for the standard nginx log can therefore be defined as:

MAINNGINXLOG %{COMBINEDAPACHELOG} %{QS:x_forwarded_for}
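
    A minimal filter sketch applying this pattern (assuming the MAINNGINXLOG line has been saved to a file under patterns_dir, just as section 3 does for the custom patterns):

filter {
        grok {
                patterns_dir => "/usr/local/logstash/patterns"
                match => { "message" => "%{MAINNGINXLOG}" }
        }
}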

  2) Custom format

    The regular expression matching the log_format above is as follows:

%{IPV4:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] (%{HOSTNAME1:http_host}|-) (%{WORD:request_method}|-) \"(%{URIPATH1:uri}|-|)\" \"(%{URIPARM1:param}|-)\" %{STATUS:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{GREEDYDATA:http_referrer}|-)\" (%{STATUS:upstream_status}|-) (?:%{HOSTPORT1:upstream_addr}|-) (%{BASE16FLOAT:upstream_response_time}|-) (%{STATUS:request_time}|-) \"(%{GREEDYDATA:user_agent}|-)\" \"(%{FORWORD:x_forword_for}|-)\"

    Several of the patterns in it are ones I defined myself:

URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*
URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+
HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)
STATUS ([0-9.]{0,3}[, ]{0,2})+
HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+
FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}
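
  Each custom pattern can be sanity-checked in isolation with an ordinary regex tool before involving logstash. For example, to confirm that the HOSTPORT1 idea matches a comma-separated upstream list, here is a sketch using GNU grep's -P mode, with %{IPV4} and %{POSINT} expanded by hand into a simplified equivalent:

echo "192.168.0.122:80, 192.168.0.123:80" | grep -oP '(\d{1,3}(\.\d{1,3}){3}:\d+[, ]{0,2})+'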

  message is each log line as it is read in; IPORHOST, USERNAME, HTTPDATE and so on are pattern names defined in patterns/grok-patterns. Write the expression by checking each field of the log against those patterns.

  The grok pattern syntax is %{SYNTAX:semantic}: the part before the ":" is a pattern name defined in grok-patterns, and the part after it is a name of your choice for the captured field. The form (?:%{SYNTAX:semantic}|-) is a conditional match: either the pattern matches, or a literal "-" does.
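
  Conceptually, every %{SYNTAX:semantic} reference expands into a named capture group, and grok also accepts such Oniguruma-style captures written directly. As a rough illustration, %{IPV4:remote_addr} behaves like the simplified named capture below (the real IPV4 pattern is stricter):

(?<remote_addr>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})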

  If the log contains double quotes "" or square brackets [], they must be escaped with \.

  The custom patterns in detail:

 URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]<>]* 

 URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;^\\_<>`?\-\[\]]*   Adapted from URIPARAM in grok-patterns: that pattern expects the parameter string to begin with "?", but nginx's $query_string has the "?" already stripped, so we drop it here. We also add the special characters ^ \ _ < > ` that appear in our logs.

 URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%&_\-]*)+ 

 URIPATH1 (?:/[\\A-Za-z0-9$.+!*'(){},~:;=@#% \[\]_<>^\-&?]*)+   Adapted from URIPATH in grok-patterns: URIPATH cannot match a URI containing spaces, so a space is added to the character class, along with the special characters \ [ ] < > ^.

 HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b) 

 HOSTNAME1 \b(?:[0-9A-Za-z_\-][0-9A-Za-z-_\-]{0,62})(?:\.(?:[0-9A-Za-z_\-][0-9A-Za-z-:\-_]{0,62}))*(\.?|\b)   Extends HOSTNAME so that http_host values containing "-" (and "_") still match.

 HOSTPORT %{IPORHOST}:%{POSINT} 

 HOSTPORT1 (%{IPV4}:%{POSINT}[, ]{0,2})+   While matching the upstream_addr field we found that multiple IP addresses can appear, so this pattern matches more than one address (see the note after this list).

 STATUS ([0-9.]{0,3}[, ]{0,2})+   Matches multiple status codes for the cases where upstream_addr contains multiple addresses.

 FORWORD (?:%{IPV4}[,]?[ ]?)+|%{WORD}   Matches x_forword_for when it contains multiple IP addresses.
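
  The reason these patterns must allow repeated, comma-separated values: when nginx retries a request on the next upstream server, $upstream_status, $upstream_addr and $upstream_response_time each become comma-separated lists, and $http_x_forwarded_for likewise accumulates one address per proxy hop. An illustrative fragment of the fields upstream_status, upstream_addr, request_time, upstream_response_time (hypothetical values, not from the log above):

502, 200 192.168.0.122:80, 192.168.0.123:80 0.051 0.003, 0.048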

  With all of the nginx fields defined, you can test them with the Grok Debugger or Grok Constructor tools. To supply the custom patterns in the Grok Debugger, tick the "Add custom patterns" option.

  The match result for the log line above is:

{
  "remote_addr": [
    "1.1.1.1"
  ],
  "user": [
    "-"
  ],
  "log_timestamp": [
    "06/Jun/2016:00:00:01 +0800"
  ],
  "http_host": [
    "www.test.com"
  ],
  "request_method": [
    "GET"
  ],
  "uri": [
    "/api/index"
  ],
  "param": [
    "?cms=0&rnd=1692442321"
  ],
  "http_status": [
    "200"
  ],
  "body_bytes_sent": [
    "4"
  ],
  "http_referrer": [
    "http://www.test.com/?cp=sfwefsc"
  ],
  "port": [
    null
  ],
  "upstream_status": [
    "200"
  ],
  "upstream_addr": [
    "192.168.0.122:80"
  ],
  "upstream_response_time": [
    "0.004"
  ],
  "request_time": [
    "0.004"
  ],
  "user_agent": [
    ""Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36""
  ],
  "client_ip": [
    "2.2.2.2"
  ],
  "x_forword_for": [
    null
  ]
}

3. The logstash configuration file

  Create a directory for the custom patterns:

# mkdir -p /usr/local/logstash/patterns
# vi /usr/local/logstash/patterns/nginx

  Then write the custom patterns above into the file:

URIPARM1 [A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATH1 (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%&_\- ]*)+
URI1 (%{URIPROTO}://)?(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
NGINXACCESS %{IPORHOST:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] %{HOSTNAME:http_host} %{WORD:request_method} \"%{URIPATH1:uri}\" \"%{URIPARM1:param}\" %{BASE10NUM:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) \"(?:%{URI1:http_referrer}|-)\" (%{BASE10NUM:upstream_status}|-) (?:%{HOSTPORT:upstream_addr}|-) (%{BASE16FLOAT:upstream_response_time}|-) (%{BASE16FLOAT:request_time}|-) (?:%{QUOTEDSTRING:user_agent}|-) \"(%{IPV4:client_ip}|-)\" \"(%{WORD:x_forword_for}|-)\"
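
  Before wiring this into the real pipeline, you can exercise the NGINXACCESS pattern interactively with a throwaway stdin config (a sketch; paste a log line at the prompt and inspect the parsed fields in the rubydebug output):

input { stdin { } }
filter {
        grok {
                patterns_dir => "/usr/local/logstash/patterns"
                match => { "message" => "%{NGINXACCESS}" }
        }
}
output { stdout { codec => rubydebug } }

  Run it with /usr/local/logstash/bin/logstash -f test.conf, where test.conf is any temporary file holding the snippet.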

  The contents of the logstash.conf configuration file:

input {
        file {
                path => "/data/nginx/logs/access.log"
                type => "nginx-access"
                start_position => "beginning"
                sincedb_path => "/usr/local/logstash/sincedb"
        }
}
filter {
        if [type] == "nginx-access" {
                grok {
                        patterns_dir => "/usr/local/logstash/patterns"        # path to the custom patterns
                        match => {
                                "message" => "%{NGINXACCESS}"
                        }
                }
                date {
                        match => [ "log_timestamp" , "dd/MMM/YYYY:HH:mm:ss Z" ]
                }
                # urldecode all fields (so Chinese characters display correctly)
                urldecode {
                        all_fields => true
                }
        }
}
output {
        if [type] == "nginx-access" {
                elasticsearch {
                        hosts => ["10.10.10.26:9200"]
                        manage_template => true
                        index => "logstash-nginx-access-%{+YYYY-MM}"
                }
        }

}
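
  Note that in index => "logstash-nginx-access-%{+YYYY-MM}", the %{+YYYY-MM} part is logstash's sprintf date format applied to each event's timestamp, so events are written to one index per month, e.g. logstash-nginx-access-2016-06.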

4. Start logstash, then check whether the logs are written into elasticsearch.
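
  A sketch of the start-up and verification steps (paths follow the /usr/local/logstash layout above; the --configtest flag matches the logstash 2.x era this article appears to target, while newer versions use --config.test_and_exit):

# check the configuration syntax first
/usr/local/logstash/bin/logstash -f /usr/local/logstash/logstash.conf --configtest
# start logstash
/usr/local/logstash/bin/logstash -f /usr/local/logstash/logstash.conf
# confirm the monthly index has appeared in elasticsearch
curl '10.10.10.26:9200/_cat/indices?v'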

