01 Introduction
ELK stands for Elasticsearch, Logstash, and Kibana, three products from Elastic, used mainly for log collection, analysis, and reporting. It is no exaggeration to say this is a very powerful big-data analytics platform.
Reposted from my personal WeChat public account: Tianmu Star; feel free to follow it.
First, the software versions and server layout used.
linux: CentOS 7.5.1804
ElasticSearch: elasticsearch-6.2.4
Logstash: logstash-6.2.4
Kibana: Kibana-6.2.4
filebeat: filebeat-6.2.4
Servers:
ELK server: 192.168.0.1
Shipper (log source): 192.168.0.2
Second, install the software
1, Install the ELK components; here I install from the RPM packages
Download the software
# Download from the official site: https://www.elastic.co/downloads/past-releases
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.4.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.4.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.4-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.4-x86_64.rpm
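Before installing, it is worth verifying that the downloads are intact. Elastic publishes checksum files alongside its artifacts; the sketch below computes a file's SHA-512 digest in Python (the `.sha512` filename in the commented usage is an assumption; check what the download page actually provides for your version):

```python
import hashlib

def sha512sum(path, chunk_size=1 << 20):
    """Compute the SHA-512 hex digest of a file, reading it in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage, comparing against a published checksum file:
# expected = open("elasticsearch-6.2.4.rpm.sha512").read().split()[0]
# assert sha512sum("elasticsearch-6.2.4.rpm") == expected
```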
Install ELK on the server
rpm -ivh elasticsearch-6.2.4.rpm
rpm -ivh logstash-6.2.4.rpm
rpm -ivh kibana-6.2.4-x86_64.rpm
Install the Beats component on the Shipper side.
Since we only collect logs, we just install filebeat.
rpm -ivh filebeat-6.2.4-x86_64.rpm
Third, configuration and startup
1, Elasticsearch configuration
# Edit the configuration
vim /etc/elasticsearch/elasticsearch.yml
### elasticsearch.yml ###
cluster.name: test-cluster # cluster name
node.name: node-1 # name of this node
node.master: true # make this node master-eligible
node.data: true # make this node a data node
path.data: /var/lib/elasticsearch # data directory
path.logs: /var/log/elasticsearch # log directory
network.host: 192.168.0.1 # IP address to listen on
http.port: 9200 # HTTP port
discovery.zen.ping.unicast.hosts: ["192.168.0.1"] # addresses of the cluster hosts; this example is a single machine, so there is only one
### Test whether elasticsearch starts up normally; if you installed it another way, check that Java is configured first
systemctl start elasticsearch.service
### Port 9300 is used for cluster communication, 9200 for data transfer ###
netstat -tnlp |grep -v grep |grep java
tcp6 0 0 192.168.0.1:9200 :::* LISTEN 1838/java
tcp6 0 0 192.168.0.1:9300 :::* LISTEN 1838/java
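Port 9200 serves the REST API and 9300 the internal transport protocol. To check reachability from another machine, a generic TCP probe (not specific to Elasticsearch) is enough; a minimal sketch:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("192.168.0.1", 9200) should return True once ES is up
```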
#### Check the status with curl
curl http://192.168.0.1:9200
################## output #####################
{
"name" : "node-1",
"cluster_name" : "test-cluster",
"cluster_uuid" : "6bjEJ5KRQoGXJsh_ne60Zg",
"version" : {
"number" : "6.2.4",
"build_hash" : "ccec39f",
"build_date" : "2018-04-12T20:37:28.497551Z",
"build_snapshot" : false,
"lucene_version" : "7.2.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
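The JSON that curl returns can also be checked programmatically, for example to assert the version you expect before proceeding. The body below is abridged from the output above; in practice you would fetch it from http://192.168.0.1:9200 with an HTTP client:

```python
import json

# Abridged response body from `curl http://192.168.0.1:9200`
body = '''
{
  "name" : "node-1",
  "cluster_name" : "test-cluster",
  "version" : { "number" : "6.2.4", "lucene_version" : "7.2.1" },
  "tagline" : "You Know, for Search"
}
'''

info = json.loads(body)
major = int(info["version"]["number"].split(".")[0])
print(info["cluster_name"], major)  # test-cluster 6
```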
2, Logstash configuration
Configure the logs that Logstash needs to collect.
# Open the logstash configuration file
vim /etc/logstash/logstash.yml
### logstash.yml ###
path.data: /var/lib/logstash # data directory
http.host: "192.168.0.1" # this node's IP address
http.port: 9600 # port to listen on
path.logs: /var/log/logstash # log directory
# First set up a configuration for collecting system logs
vim /etc/logstash/conf.d/syslog.conf
### syslog.conf ###
input {
syslog {
type => "system-syslog"
port => 10000
}
}
output {
elasticsearch {
hosts => ["192.168.0.1:9200"]
index => "system-syslog-%{+YYYY.MM}"
}
}
# Use this command to test whether the configuration is OK
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
#####################
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
#####################
# The test passed, so start the service and take a look
systemctl start logstash.service
### 9600 is the port Logstash itself listens on; 10000 is the input port for the logs to be collected ######
netstat -tnlp |grep -v grep|grep java
tcp6 0 0 :::10000 :::* LISTEN 2605/java
tcp6 0 0 192.168.0.1:9600 :::* LISTEN 2605/java
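With the syslog input listening on port 10000, you can feed Logstash a hand-built test message. The sketch below builds a minimal syslog-style line (PRI = facility*8 + severity); the host and tag values are made up for illustration, and the actual send to 192.168.0.1:10000 is left commented out:

```python
import socket
from datetime import datetime

def rfc3164_message(facility, severity, host, tag, text, now=None):
    """Build a minimal syslog-style line: <PRI>TIMESTAMP HOST TAG: TEXT."""
    pri = facility * 8 + severity  # e.g. facility 1 (user), severity 6 (info) -> 14
    ts = (now or datetime.now()).strftime("%b %d %H:%M:%S")
    return "<%d>%s %s %s: %s" % (pri, ts, host, tag, text)

msg = rfc3164_message(1, 6, "testhost", "demo", "hello from python")
# To actually deliver it to the Logstash syslog input above (UDP, port 10000):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(msg.encode(), ("192.168.0.1", 10000))
```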
# Check the indices of the collected logs
curl http://192.168.0.1:9200/_cat/indices
################## output #####################
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open system-syslog-2019.04 ENdIHf85QCu 5 1 25948 0 3.4mb 3.4mb
##############################################
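The `%{+YYYY.MM}` in the index setting is Logstash date math: events are bucketed into one index per month, which is why the listing shows system-syslog-2019.04. The equivalent naming logic, sketched in Python:

```python
from datetime import date

def monthly_index(prefix, day=None):
    """Mirror Logstash's %{+YYYY.MM} date math: one index per month."""
    d = day or date.today()
    return "%s-%04d.%02d" % (prefix, d.year, d.month)

print(monthly_index("system-syslog", date(2019, 4, 18)))  # system-syslog-2019.04
```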
# Get the details of this index
curl http://192.168.0.1:9200/system-syslog-2019.04?pretty
################## output #####################
{
"system-syslog-2019.04" : {
"aliases" : { },
"mappings" : {
"doc" : {
"properties" : {
"@timestamp" : {
"type" : "date"
},
....output omitted....
"settings" : {
"index" : {
"creation_date" : "1555555663263",
"number_of_shards" : "5",
"number_of_replicas" : "1",
"uuid" : "ENdIHf85QCuCHrIYajcd0w",
"version" : {
"created" : "6020499"
},
"provided_name" : "system-syslog-2019.04"
}
}
}
}
Getting this information back confirms that Logstash and Elasticsearch are communicating normally.
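The settings block also explains the yellow health in the index listing: 5 primary shards with 1 replica each means 10 shard copies in total, and on a single node the 5 replicas can never be assigned, so the index stays yellow rather than green. A quick check of the arithmetic:

```python
import json

# Abridged settings block from the index response above.
settings = json.loads('''
{
  "index": {
    "number_of_shards": "5",
    "number_of_replicas": "1"
  }
}
''')

pri = int(settings["index"]["number_of_shards"])
rep = int(settings["index"]["number_of_replicas"])
total = pri * (1 + rep)      # 5 primaries + 5 replica copies = 10 shards
unassigned = pri * rep       # replicas that cannot live on the single node
print(total, unassigned)     # 10 5
```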
3, Kibana configuration
Remember to create the /var/log/kibana directory for storing Kibana's logs, and give the kibana user permission to write to it:
mkdir -p /var/log/kibana
chown kibana:kibana /var/log/kibana
# Edit the following configuration
vim /etc/kibana/kibana.yml
### kibana.yml ###
server.port: 5601 # port to listen on
server.host: 192.168.0.1 # IP to listen on
elasticsearch.url: "http://192.168.0.1:9200" # URL of Elasticsearch
logging.dest: /var/log/kibana/kibana.log # path for Kibana's own log file
# Start the service
systemctl start kibana.service
netstat -tnlp |grep -v grep |grep 5601
tcp 0 0 192.168.0.1:5601 0.0.0.0:* LISTEN 4249/node
Open a browser to access:
http://192.168.0.1:5601
Fourth, create an index pattern in Kibana
1, We have just created a Logstash configuration that collects system logs; now let's create a matching index pattern in Kibana.
In Discover we can then query the system logs that have been collected.
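The index pattern you type into Kibana (e.g. system-syslog-*) is a simple wildcard match over index names. How such a pattern selects our monthly indices can be sketched with Python's fnmatch (the extra index names here are hypothetical):

```python
import fnmatch

# Hypothetical index names as they might appear in the cluster
indices = ["system-syslog-2019.04", "system-syslog-2019.05", ".kibana"]

pattern = "system-syslog-*"  # what you would type into Kibana's index-pattern field
matched = [name for name in indices if fnmatch.fnmatch(name, pattern)]
print(matched)  # ['system-syslog-2019.04', 'system-syslog-2019.05']
```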
02 Conclusion
At this point we have the basic ELK functionality in place. Next time we will try using this platform to collect the nginx logs of another server; stay tuned.