ELK Notes 2.0

1. filebeat configuration:

filebeat reads the log files and writes them to redis. To keep log collection, parsing, and persistence consistent, the same parsing rules are applied to every log of the same type; at collection time two custom fields are added to each event: log_type (the log type) and log_file (the log source).

filebeat.inputs:
- type: log
  paths:
    - /home/public/pm2/channelHandle-out-2.log
  fields:
    log_file: xsj-channelhandle-out
    log_type: a-out-log
  fields_under_root: true
  encoding: utf-8
- type: log
  paths:
    - /home/public/pm2/channelHandle-err-2.log
  fields:
    log_file: xsj-channelhandle-err
    log_type: a-err-log
  fields_under_root: true
  encoding: utf-8

processors:
- drop_event:
    when.not.contains:
      message: "收到"   # keep only lines containing "收到" ("received")

output.redis:
  hosts: ["10.0.1.223:6700", "10.0.1.224:6700"]
  db: 0
  password: "[email protected]"
  key: "%{[log_type]:api}"
  timeout: 5
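The `key: "%{[log_type]:api}"` line routes each event to the Redis list named by its `log_type` field, falling back to `api` when the field is missing. A minimal Python sketch of that routing logic (the sample events and the in-memory "queues" dict standing in for Redis lists are hypothetical):

```python
def redis_key(event, fallback="api"):
    """Mimic filebeat's %{[log_type]:fallback} key expansion:
    use the event's log_type field, or the fallback if absent."""
    return event.get("log_type", fallback)

queues = {}  # list name -> pushed events, standing in for Redis lists

for event in [
    {"log_type": "a-out-log", "message": "收到 {...}"},  # hypothetical sample
    {"message": "no log_type set"},
]:
    queues.setdefault(redis_key(event), []).append(event)

print(sorted(queues))  # ['a-out-log', 'api']
```

Events without the custom field thus still land in a well-known list (`api`) instead of being lost.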

./filebeat -e

2. logstash

logstash parsing rules:

Create multiple configuration files; each file handles the consumption, parsing, and persistence of one log type.

cd /data/logstash/
mkdir logstash.d
touch logstash.d/{channelhandle.conf,wss-nginx.conf}
vim logstash.d/channelhandle.conf

# The configuration is as follows

input {
  redis {
    host => "127.0.0.1"
    port => "6700"
    password => "[email protected]"
    data_type => "list"
    key => "a-out-log"
  }
  redis {
    host => "127.0.0.1"
    port => "6700"
    password => "[email protected]"
    data_type => "list"
    key => "a-err-log"
  }
}

filter {
  mutate {
    rename => { "[host][name]" => "host_name" }
    remove_field => ["ecs", "input", "log", "agent", "host"]
  }
  if [log_type] == "a-out-log" {
    grok {
      match => {
        "message" => [
          "(?<recTime>(\d+-){2}\d+\s+(\d+:){2}\d+).*?收到.*?(?<content>{.*})",
          "^收到.*?(?<content>{.*})"
        ]
      }
    }
    json {
      source => "content"
      remove_field => ["content"]
    }
    mutate {
      remove_field => ["content", "param"]
    }
    date {
      match => ["reqTime", "yyyy-MM-dd HH:mm:ss", "UNIX"]
      timezone => "Asia/Shanghai"
      target => ["@timestamp"]
    }
    if [recTime] {
      date {
        match => ["recTime", "yyyy-MM-dd HH:mm:ss", "UNIX"]
        target => ["recTime"]
      }
      ruby {
        init => "require 'time'"
        code => "duration = (event.get('recTime') - event.get('@timestamp')); event.set('duration', duration)"
      }
      mutate {
        remove_field => ["recTime"]
      }
    }
    geoip {
      source => "ip"
      target => "geoip"
      add_tag => ["agent-ip"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.0.1.221:9200", "10.0.1.222:9200"]
    index => "logstash-%{[log_file]}-%{+YYYYMMdd}"
  }
}
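The first grok pattern captures an optional receive timestamp (`recTime`) plus the JSON payload that follows the marker `收到` ("received"); the ruby filter then stores the gap between receive time and request time as `duration`. A rough, self-contained Python equivalent of those steps, run on a hypothetical log line:

```python
import json
import re
from datetime import datetime

# Simplified form of the first grok pattern: a timestamp, the marker 收到,
# then a JSON body captured as "content".
PATTERN = re.compile(r"(?P<recTime>(\d+-){2}\d+\s+(\d+:){2}\d+).*?收到.*?(?P<content>{.*})")

# Hypothetical log line in the shape the pattern expects.
line = '2019-06-14 10:00:05 收到 {"reqTime": "2019-06-14 10:00:03", "ip": "1.2.3.4"}'

m = PATTERN.search(line)
event = json.loads(m.group("content"))  # the "json" filter step

# The ruby filter step: duration = recTime - reqTime, in seconds.
fmt = "%Y-%m-%d %H:%M:%S"
duration = (datetime.strptime(m.group("recTime"), fmt)
            - datetime.strptime(event["reqTime"], fmt)).total_seconds()
print(duration)  # 2.0
```

This is only an approximation of the pipeline's behavior (no timezone handling, no UNIX-epoch fallback), but it shows what the captured fields look like.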

supervisor process-management configuration file:

[program:logstash]
command=/data/logstash/bin/logstash
autostart=true
autorestart=true
logfile_maxbytes=50MB
logfile_backups=5
environment=JAVA_HOME=/usr/local/jdk
stdout_logfile=/var/log/supervisor/logstash.out.log
stderr_logfile=/var/log/supervisor/logstash.err.log

3. redis

supervisor process-management startup file:

[program:redis]
command=/usr/local/bin/redis-server /data/redis/conf/redis-6700.conf
autostart=true
autorestart=true
logfile_maxbytes=50MB
logfile_backups=5
stdout_logfile=/var/log/supervisor/redis.out.log
stderr_logfile=/var/log/supervisor/redis.err.log

redis master-slave configuration (redis/config/redis-6700.conf):

slaveof 10.0.1.223 6700
masterauth [email protected]
slave-serve-stale-data yes
slave-read-only yes

4. Java environment configuration (elasticsearch / logstash)

[root@server01 src]$ tar -zvxf jdk-8u151-linux-x64.tar.gz -C /data/app/
[root@server01 src]$ ln -s /data/app/jdk1.8.0_151 /data/app/jdk
[root@server01 src]$ cat <<'EOF' >> /etc/profile
export JAVA_HOME=/data/app/jdk
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar
EOF
[root@server01 src]$ source /etc/profile

5. elasticsearch

/etc/sysctl.conf

vm.max_map_count = 655350

/etc/security/limits.conf

*  -  nofile 102400
*  -  nproc 4096

Modify /etc/supervisord.conf:

minfds=102400 

Configure the child-process management file: /etc/supervisord.d/elastic.ini

[program:elasticsearch]
user=elkuser
command=/data/elasticsearch/bin/elasticsearch
environment=ES_HEAP_SIZE=2g
minfds=102400
minprocs=32768
autostart=true
autorestart=true
logfile_maxbytes=50MB
logfile_backups=5
stdout_logfile=/var/log/supervisor/elasticsearch.out.log
stderr_logfile=/var/log/supervisor/elasticsearch.err.log

kibana Chinese localization: in the kibana configuration file (kibana.yml), set i18n.locale: "zh-CN"

6. logstash parsing of the wss reverse-proxy nginx logs:

The access-log format defined in the nginx configuration file:

log_format main '$remote_addr $http_X_Forwarded_For [$time_local] '
                '$upstream_addr "$upstream_response_time" "$request_time" '
                '$http_host $request '
                '"$status" $body_bytes_sent "$http_referer" '
                '"$http_accept_language" "$http_user_agent" ';

nginx JSON log format:

log_format json escape=json  '{"@timestamp":"$time_iso8601",'
                    '"@source":"$server_addr",'
                    '"hostname":"$hostname",'
                    '"ip":"$http_x_forwarded_for",'
                    '"client":"$remote_addr",'
                    '"request_method":"$request_method",'
                    '"scheme":"$scheme",'
                    '"domain":"$server_name",'
                    '"client_host":"$host",'
                    '"referer":"$http_referer",'
                    '"request":"$request_uri",'
                    '"args":"$args",'
                    '"size":$body_bytes_sent,'
                    '"status": $status,'
                    '"responsetime":$request_time,'
                    '"upstreamtime":"$upstream_response_time",'
                    '"upstreamaddr":"$upstream_addr",'
                    '"http_user_agent":"$http_user_agent",'
                    '"https":"$https"'
                    '}';
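With `escape=json`, nginx escapes quotes and backslashes inside variable values, so each access-log line is a valid JSON document that logstash (or any other consumer) can parse directly instead of needing a grok pattern. A quick check in Python against a hypothetical, abridged line in this format:

```python
import json

# Hypothetical line in the json log_format defined above (abridged fields).
line = ('{"@timestamp":"2019-06-14T10:00:00+08:00","hostname":"web01",'
        '"request_method":"GET","request":"/api/v1/ping","size":612,'
        '"status": 200,"responsetime":0.004}')

entry = json.loads(line)
print(entry["status"], entry["request"])  # 200 /api/v1/ping
```

Note that `size`, `status`, and `responsetime` are emitted without quotes in the format string, so they arrive as native JSON numbers and need no later type conversion.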

Custom grok patterns are stored at: /data/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-patterns-core-4.1.2/patterns

WSS_NGINX_GROK
vim wss-nginx

WSS_NGINX_ACCESS %{IPORHOST:clientip} %{NOTSPACE:http_x_forwarded_for} \[%{HTTPDATE:timestamp}\] (?<upstream_addr>%{IPORHOST}:%{NUMBER}) \"%{NUMBER:upstream_response}\" \"%{NUMBER:request_time}\" (?<http_host>%{IPORHOST}:%{NUMBER}) %{NOTSPACE:request} /.*? \"%{NUMBER:status}\" (?:%{NUMBER:sent_bytes}|-) (\".*\"?){2} %{QS:agent}
LEYOU_API_NGINX_GROK %{IPORHOST:clientip}.*?\[%{HTTPDATE:timestamp}\] \"(?<request>%{WORD}) (?<request_url>.*?) .*?\" %{NUMBER:status} (?:%{NUMBER:sent_bytes}|-) (\".*?)\" %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
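These custom patterns are built from grok macros such as `%{IPORHOST}` and `%{HTTPDATE}`, which expand to ordinary regexes. A simplified Python rendering of just the leading captures of `WSS_NGINX_ACCESS` (client IP, X-Forwarded-For, timestamp), run against a hypothetical log line, gives a feel for what grok is doing; the regexes here are approximations, not the full macro expansions:

```python
import re

# Only the first three captures of WSS_NGINX_ACCESS, with the grok macros
# replaced by simplified regexes (an approximation, not the full pattern).
PATTERN = re.compile(
    r"(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) "
    r"(?P<http_x_forwarded_for>\S+) "
    r"\[(?P<timestamp>[^\]]+)\]"
)

# Hypothetical access-log line in the "main" format defined earlier.
line = '10.0.1.50 203.0.113.9 [14/Jun/2019:10:00:00 +0800] 10.0.1.30:8080 ...'
m = PATTERN.match(line)
print(m.group("clientip"), m.group("timestamp"))
```

Testing fragments like this against real lines before installing a pattern file saves a lot of `_grokparsefailure` debugging.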

filebeat configuration:

filebeat.inputs:
- type: log
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    log_file: lc-wss-access
    log_type: wss-nginx-access
  fields_under_root: true
  encoding: utf-8

output.redis:
  hosts: ["10.0.1.223:6700", "10.0.1.224:6700"]
  db: 0
  password: "[email protected]"
  key: "%{[log_type]:xsj_wss}"
  timeout: 5

logstash configuration file:

input {
  redis {
    host => "127.0.0.1"
    port => "6700"
    password => "[email protected]"
    data_type => "list"
    key => "wss-nginx-access"
  }
}

filter {
  mutate {
    rename => { "[host][name]" => "host_name" }
    remove_field => ["ecs", "input", "log", "agent", "host"]
  }
  if [log_type] == "wss-nginx-access" {
    grok {
      match => ["message", "%{WSS_NGINX_ACCESS}"]
      overwrite => ["message"]
      remove_tag => ["_grokparsefailure"]
    }
    mutate {
      # grok captures everything as strings; convert the numeric fields
      convert => ["status", "integer"]
      convert => ["sent_bytes", "integer"]
      convert => ["request_time", "float"]
    }
    geoip {
      source => "http_x_forwarded_for"
      target => "geoip"
      add_tag => ["nginx-geoip"]
    }
    date {
      match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
      target => ["@timestamp"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["10.0.1.221:9200", "10.0.1.222:9200"]
    index => "logstash-%{[log_file]}-%{+YYYYMMdd}"
  }
}
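The `date` filter pattern `dd/MMM/YYYY:HH:mm:ss Z` matches nginx's `$time_local` timestamps. Python's `strptime` equivalent is `%d/%b/%Y:%H:%M:%S %z`, which is handy for spot-checking timestamps outside logstash (the sample value is hypothetical):

```python
from datetime import datetime

# Hypothetical $time_local value; %z handles the +0800 offset.
ts = datetime.strptime("14/Jun/2019:10:00:00 +0800", "%d/%b/%Y:%H:%M:%S %z")
print(ts.isoformat())  # 2019-06-14T10:00:00+08:00
```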

Configure pipelines.yml to enable multiple configuration files:

 - pipeline.id: channelhandle
   pipeline.workers: 1 
   path.config: "/data/logstash/logstash.d/channelhandle.conf"
 - pipeline.id: wss-nginx
   pipeline.workers: 1
   path.config: "/data/logstash/logstash.d/wss-nginx.conf"

Map heatmap: the index template that provides the geoip field mapping is at:

/data/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.0.1-java/lib/logstash/outputs/elasticsearch/elasticsearch-template-es7x.json


Reposted from www.cnblogs.com/capable/p/11025997.html