Collecting NGINX status codes, PV/UV, access trends, and top-10 requests with EFK plus a Kafka buffer

EFK flow diagram

(Diagram: Filebeat → Kafka → Logstash → Elasticsearch → Kibana)

I. Prepare the environment

1. Prepare three CentOS 7 virtual machines


2. Disable the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
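These two commands only last until the next reboot; if you also want the firewall and SELinux to stay off permanently, you can additionally run:

[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config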

3. Synchronize the time

[root@localhost ~]# yum -y install ntpdate
[root@localhost ~]# ntpdate pool.ntp.org

4. Rename the three hosts kafka1, kafka2 and kafka3

[root@localhost ~]# hostname kafka1    # on the 136 host
[root@localhost ~]# hostname kafka2    # on the 137 host
[root@localhost ~]# hostname kafka3    # on the 138 host
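Note that hostname only changes the name for the current boot; to make it permanent you can use hostnamectl instead, for example on the 136 host:

[root@localhost ~]# hostnamectl set-hostname kafka1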

5. Add name-resolution entries (on all three hosts)

[root@localhost ~]# vim /etc/hosts
192.168.27.136 kafka1
192.168.27.137 kafka2
192.168.27.138 kafka3

6. Upload the installation packages (split them across the three hosts)

Host 1 (136): JDK, ZooKeeper, Kafka and Elasticsearch packages
Host 2 (137): JDK, ZooKeeper, Kafka, Filebeat and Logstash packages
Host 3 (138): JDK, ZooKeeper, Kafka and Kibana packages

II. Installation and deployment

1. Install the JDK (on all three hosts)

[root@kafka1 src]# rpm -ivh jdk-8u131-linux-x64_.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:jdk1.8.0_131-2000:1.8.0_131-fcs  ################################# [100%]
Unpacking JAR files...
	tools.jar...
	plugin.jar...
	javaws.jar...
	deploy.jar...
	rt.jar...
	jsse.jar...
	charsets.jar...
	localedata.jar...
[root@kafka1 src]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@kafka1 src]# 

2. Install ZooKeeper (on all three hosts)

Unpack and install
[root@kafka3 src]# tar xzf zookeeper-3.4.14.tar.gz 
[root@kafka3 src]# mv zookeeper-3.4.14 /usr/local/zookeeper
[root@kafka3 zookeeper]# cd /usr/local/zookeeper/conf/
[root@kafka3 conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@kafka3 conf]# mv zoo_sample.cfg zoo.cfg 
Configure ZooKeeper (append the three server entries at the end of zoo.cfg; do this on all three hosts)
[root@kafka3 conf]# vim zoo.cfg 

# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=192.168.27.136:2888:3888
server.2=192.168.27.137:2888:3888
server.3=192.168.27.138:2888:3888
Create the data directory, write each node's myid, then start the nodes one by one
[root@kafka1 conf]# mkdir /tmp/zookeeper
[root@kafka1 conf]# echo "1" > /tmp/zookeeper/myid
[root@kafka1 conf]# /usr/local/zookeeper/bin/zkServer.sh start
[root@kafka2 conf]# mkdir /tmp/zookeeper
[root@kafka2 conf]# echo "2" > /tmp/zookeeper/myid
[root@kafka2 conf]# /usr/local/zookeeper/bin/zkServer.sh start
[root@kafka3 conf]# mkdir /tmp/zookeeper
[root@kafka3 conf]# echo "3" > /tmp/zookeeper/myid
[root@kafka3 conf]# /usr/local/zookeeper/bin/zkServer.sh start
Check the ZooKeeper status
[root@kafka1 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka2 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@kafka3 src]# /usr/local/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Mode: leader

3. Install Kafka (on all three hosts)

Unpack and install
[root@kafka1 src]# tar xzf kafka_2.11-2.2.0.tgz 
[root@kafka1 src]# mv kafka_2.11-2.2.0 /usr/local/kafka
Edit the configuration file
[root@kafka1 src]# vim /usr/local/kafka/config/server.properties 

(screenshots of the server.properties edits)
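The exact edits are only shown in the screenshots; as a rough sketch (values assumed from the host layout above, for the broker on 136), the lines that matter are:

broker.id=1                                   # must be unique: 1, 2, 3 on the three brokers
listeners=PLAINTEXT://192.168.27.136:9092     # optional; each broker's own address
log.dirs=/tmp/kafka-logs
zookeeper.connect=192.168.27.136:2181,192.168.27.137:2181,192.168.27.138:2181

On 137 and 138 change broker.id and the listener address accordingly.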

Start Kafka and check the port
[root@kafka1 src]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
[root@kafka1 src]# netstat -nltpu |grep 9092
tcp6       0      0 :::9092                 :::*                    LISTEN      26868/java          

Verify Kafka

Create a topic
[root@kafka1 src]# /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.27.136:2181  --replication-factor 2 --partitions 3 --topic wg007
Created topic wg007.
List topics
[root@kafka1 src]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.27.136:2181
wg007
Run a test producer
[root@kafka1 config]# /usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.27.136:9092 --topic wg007
>宫保鸡丁
Run a test consumer (on another node)
[root@kafka2 config]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.27.136:9092  --topic wg007 --from-beginning
宫保鸡丁

4. Install nginx and Filebeat

Install both on one of the hosts (137 in my setup).
Note: they must live on the same host, because Filebeat reads the nginx log from the local filesystem and cannot collect it from another machine.
Install the EPEL repository first
[root@kafka2 ~]# yum -y install epel-release
Install nginx
[root@kafka2 ~]# yum -y install nginx
Install Filebeat
[root@kafka2 ~]# rpm -ivh filebeat-6.8.12-x86_64.rpm
Start nginx
[root@kafka2 ~]# systemctl start nginx
Install httpd-tools and load-test nginx to generate some access-log entries
[root@kafka2 ~]# yum -y install httpd-tools
[root@kafka2 ~]# ab -n 1000 -c 1000 http://192.168.27.137/
Configure Filebeat
[root@kafka2 ~]# vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log

output.kafka:
  enabled: true
  hosts: ["192.168.27.136:9092","192.168.27.137:9092","192.168.27.138:9092"]
  topic: nginx
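With the kafka output, each access-log line is published to the nginx topic as a JSON document: the raw log line sits in the message field, next to Filebeat metadata such as host, prospector, input and log (exactly the fields the Logstash filter below removes before grok parses message). Abridged, a message looks roughly like:

{"@timestamp":"2020-09-22T13:10:00.000Z", "message":"192.168.27.137 - - [22/Sep/2020:21:10:00 +0800] \"GET / HTTP/1.0\" 200 4833 \"-\" \"ApacheBench/2.3\"", ...}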
  
Restart Filebeat
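The RPM package registers Filebeat as a systemd service, so:

[root@kafka2 ~]# systemctl restart filebeat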

Verify that Kafka is receiving data from Filebeat
First check that the nginx topic exists
[root@kafka2 ~]# /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.27.136:2181
nginx

Then run a test consumer to see the messages
[root@kafka2 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.27.136:9092 --topic nginx --from-beginning

5. Install and configure Elasticsearch

Install it on a single host (136 in my setup)
[root@kafka1 ~]# rpm -ivh elasticsearch-6.6.2.rpm
Configure Elasticsearch
[root@kafka1 ~]# vim /etc/elasticsearch/elasticsearch.yml 

(screenshots of the elasticsearch.yml edits)
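The screenshots are not reproduced here; a minimal sketch of the settings implied by the rest of this post (the cluster name wg007 and node name node-1 appear in the Elasticsearch log further down, and Logstash/Kibana point at 192.168.27.136:9200):

cluster.name: wg007
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.27.136
http.port: 9200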

Start Elasticsearch
[root@kafka1 ~]# systemctl start elasticsearch
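A quick way to confirm it is up:

[root@kafka1 ~]# curl http://192.168.27.136:9200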

6. Install and configure Logstash

Install it on one host (137 in my setup)
[root@kafka2 ~]# rpm -ivh logstash-6.6.0.rpm
Configure Logstash
[root@kafka2 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
        kafka {
                bootstrap_servers => ["192.168.27.136:9092,192.168.27.137:9092,192.168.27.138:9092"]
                group_id => "logstash"
                topics => "nginx"
                consumer_threads => 5
        }
}

filter {
        json {
                source => "message"
        }
        mutate {
                remove_field => ["host","prospector","fields","input","log"]
        }
        grok {
                match => { "message" => "%{NGX}" }
        }
}

output {
        elasticsearch {
                hosts => "192.168.27.136:9200"
                index => "nginx-%{+YYYY.MM.dd}"
        }
}
Add the custom grok pattern (it matches nginx's default combined log format)
[root@kafka2 ~]# vim /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/nginx
NGX %{IPORHOST:client_ip} (%{USER:ident}|- ) (%{USER:auth}|-) \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} (%{NOTSPACE:request}|-)(?: HTTP/%{NUMBER:http_version})?|-)" %{NUMBER:status} (?:%{NUMBER:bytes}|-) "(?:%{URI:referrer}|-)" "%{GREEDYDATA:agent}"
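For reference, a line produced by the ab run above looks roughly like the following, and the pattern splits it into client_ip, timestamp, verb, request, http_version, status, bytes, referrer and agent:

192.168.27.137 - - [22/Sep/2020:21:10:00 +0800] "GET / HTTP/1.0" 200 4833 "-" "ApacheBench/2.3"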
Restart Logstash and watch its log
[root@kafka2 ~]# systemctl restart logstash
[root@kafka2 ~]# tailf /var/log/logstash/logstash-plain.log 
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,084][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,085][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,085][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,091][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Successfully joined group with generation 6
[2020-09-22T21:16:16,092][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [nginx-0]
[2020-09-22T21:16:16,119][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=logstash] Setting newly assigned partitions []
[2020-09-22T21:16:16,119][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=logstash] Setting newly assigned partitions [msg-0]
[2020-09-22T21:16:16,154][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-09-22T21:16:16,241][INFO ][org.apache.kafka.clients.consumer.internals.Fetcher] [Consumer clientId=logstash-0, groupId=logstash] Resetting offset for partition nginx-0 to offset 1000.
No errors.

7. Install and configure Kibana

Install it on one host (138 in my setup)
[root@kafka3 ~]# rpm -ivh kibana-6.6.2-x86_64.rpm
Configure Kibana
[root@kafka3 ~]# vim /etc/kibana/kibana.yml 

(screenshots of the kibana.yml edits)
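The screenshots are not reproduced here; a minimal sketch of the usual settings (the Elasticsearch address is the node installed on 136 above; Kibana 6.6 accepts elasticsearch.hosts, while older 6.x releases call it elasticsearch.url):

server.port: 5601
server.host: "192.168.27.138"
elasticsearch.hosts: ["http://192.168.27.136:9200"]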

Start Kibana
[root@kafka3 ~]# systemctl start kibana
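Kibana listens on port 5601 by default, so the UI will be at http://192.168.27.138:5601 once it is up:

[root@kafka3 ~]# netstat -nltpu | grep 5601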
Check the Elasticsearch log to see whether data is coming in
[root@kafka1 ~]# tailf /var/log/elasticsearch/wg007.log 
[2020-09-22T20:28:50,252][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2020-09-22T20:28:50,370][INFO ][o.e.l.LicenseService     ] [node-1] license [1c133ff2-d40d-4e30-9bd7-e4f937d362bc] mode [basic] - valid
[2020-09-22T20:29:52,112][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [logstash] for index patterns [logstash-*]
[2020-09-22T20:30:16,281][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [msg-2020.09.22] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2020-09-22T20:30:16,761][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [msg-2020.09.22/Zwzx43dCTVGHTHW5D7YpUg] create_mapping [doc]
[2020-09-22T20:30:28,648][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [doc]
[2020-09-22T20:30:28,651][INFO ][o.e.c.r.a.AllocationService] [node-1] updating number_of_replicas to [0] for indices [.kibana_1]
[2020-09-22T20:30:28,865][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2020-09-22T20:30:28,903][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2020-09-22T20:31:24,158][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]

Kibana's own index (.kibana_1) has shown up. If the nginx index is not there yet, run the load test again:

[root@kafka2 ~]# ab -n 1000 -c 1000 http://192.168.27.137/
Check again
[root@kafka1 ~]# tailf /var/log/elasticsearch/wg007.log 
[2020-09-22T21:25:20,673][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [nginx-2020.09.22/X_m8nXLrQb2y4-b5FLHMkA] create_mapping [doc]
Now the nginx index has been created.

8. Open the Kibana UI and create the nginx index pattern

(screenshots of creating the nginx-* index pattern)

9. Add nginx visualizations

nginx status codes

(screenshots)
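The screenshots walk through building this in the Kibana UI. For reference, the same breakdown can be pulled straight from Elasticsearch with a terms aggregation on the status field that the grok pattern extracts (a sketch; status.keyword assumes the default dynamic mapping):

[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search' -d '
{
  "size": 0,
  "aggs": { "status_codes": { "terms": { "field": "status.keyword" } } }
}'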

nginx PV (page views)

(screenshots)
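PV is simply the total number of access-log documents, and UV can be approximated with a cardinality aggregation on the client IP (a sketch, same mapping assumption as above; hits.total in the response is the PV, the uv aggregation is the distinct-client count):

[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search' -d '
{
  "size": 0,
  "aggs": { "uv": { "cardinality": { "field": "client_ip.keyword" } } }
}'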

nginx access trend

(screenshots)
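In Kibana this is a date-histogram style chart; the equivalent Elasticsearch aggregation over the ingestion timestamp looks roughly like this (a sketch):

[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search' -d '
{
  "size": 0,
  "aggs": { "trend": { "date_histogram": { "field": "@timestamp", "interval": "1m" } } }
}'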

nginx top-10 requests

(screenshots)
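A terms aggregation on the request path, limited to ten buckets, gives the same top-10 list (a sketch, same mapping assumption as above):

[root@kafka1 ~]# curl -s -H 'Content-Type: application/json' 'http://192.168.27.136:9200/nginx-*/_search' -d '
{
  "size": 0,
  "aggs": { "top10": { "terms": { "field": "request.keyword", "size": 10 } } }
}'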

Add the visualizations to a dashboard

(screenshots)


Reposted from blog.csdn.net/Q274948451/article/details/108703755