Building an ELK Stack enterprise log platform on Linux

Set the time zone and synchronize the clock:

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum install ntpdate -y
ntpdate time.windows.com
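
ntpdate only syncs the clock once; if ongoing synchronization is wanted, a simple approach is a cron entry (a sketch, assuming cron is available and the same NTP server is acceptable):

(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -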

Configure the YUM repository; all of the following installs use this repo:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
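
A quick way to confirm the repository is usable before installing anything from it:

yum clean all
yum makecache
yum repolist | grep elastic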

1. Install Elasticsearch

yum install elasticsearch -y
vim /etc/elasticsearch/elasticsearch.yml

# Node 1

cluster.name: elk-cluster
node.name: node-1
#node.master: true or false  # whether this node is eligible to be elected master
path.data: /home/es/es_data
network.host: 192.168.1.195
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2

# Node 2

cluster.name: elk-cluster
node.name: node-2
path.data: /home/es/es_data
network.host: 192.168.1.196
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2

# Node 3

cluster.name: elk-cluster
node.name: node-3
path.data: /home/es/es_data
network.host: 192.168.1.197
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2

cluster.name # cluster name
node.name # node name
path.data # data directory; multiple paths can be listed, in which case all of them will store data

Two parameters matter most for the cluster:
discovery.zen.ping.unicast.hosts # unicast list of cluster node IPs; provides automatic cluster formation by scanning ports 9300-9305 to reach the other nodes, no extra configuration needed
discovery.zen.minimum_master_nodes # minimum number of master-eligible nodes
This setting is important for preventing data loss: if it is left unset, a network problem can cause split brain and the cluster may break into two independent clusters. To avoid split brain, set it to a quorum of master-eligible nodes: (nodes / 2) + 1.
In other words, with three cluster nodes the minimum master nodes value should be (3 / 2) + 1, i.e. 2.
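
Before starting each node, the data directory referenced by path.data has to exist and be writable by the elasticsearch user; a minimal sketch, assuming the systemd service name installed by the RPM:

mkdir -p /home/es/es_data
chown -R elasticsearch:elasticsearch /home/es/es_data    # the RPM install creates this user
systemctl daemon-reload
systemctl start elasticsearch
systemctl enable elasticsearch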

List the cluster nodes:

curl -XGET 'http://127.0.0.1:9200/_cat/nodes?pretty'  

Check the cluster health:

curl -i -XGET http://127.0.0.1:9200/_cluster/health?pretty
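
With all three nodes joined, a healthy cluster looks roughly like this (illustrative, trimmed output of the _cluster/health API; exact values will differ):

{
  "cluster_name" : "elk-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_shards_percent_as_number" : 100.0
}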

Install the Elasticsearch head plugin

Install Node.js (which provides npm):

tar -zxvf node-v4.4.7-linux-x64.tar.gz
mv node-v4.4.7-linux-x64 /usr/local/node-v4.4   # move the extracted directory so it matches NODE_HOME below
vi /etc/profile
NODE_HOME=/usr/local/node-v4.4
PATH=$NODE_HOME/bin:$PATH
export NODE_HOME PATH
source /etc/profile
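
A quick check that Node.js and npm are now on the PATH:

node -v
npm -v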

Install elasticsearch-head:

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
vi Gruntfile.js
options: {
     port: 9100,
     base: '.',
     keepalive: true,
     hostname: '*'
}
npm install
npm run start
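
For the head UI (http://ip:9100) to query the cluster, cross-origin requests usually have to be allowed on the Elasticsearch side; the following two settings in elasticsearch.yml on each node (followed by a restart) are the common way to do that, though they are not part of the original walkthrough:

http.cors.enabled: true
http.cors.allow-origin: "*"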

2. Install Logstash

yum install logstash -y

Create a pipeline config, e.g. /etc/logstash/conf.d/test.conf, that reads /var/log/messages and ships it to Elasticsearch:

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.202:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

Then run it in the foreground:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
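
The pipeline syntax can also be validated first, and Logstash can be run as a service instead of in the foreground (--config.test_and_exit is a standard Logstash 6.x flag):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf --config.test_and_exit
systemctl start logstash
systemctl enable logstash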

Below is an example from my own configuration:

cat yuejiaxiao.conf 
input {
   file {
        path => ["/data/docker-yuejiaxiao/logs/certification/certification-provider.info.log"]
        type => "certification-info"
        start_position => "beginning"
   }
   file {
        path => ["/data/docker-yuejiaxiao/logs/certification/certification-provider.error.log"]
        type => "certification-error"
        start_position => "beginning"
   }
}
filter {
    date {
       match => ["timestamp","yyyy-MM-dd HH:mm:ss"]
       remove_field => "timestamp"
    }  
}
output {
    if [type] == "certification-info" {
         elasticsearch {
            hosts  => ["http://172.16.86.215:9200"]
            index  => "certification-info-%{+YYYY.MM.dd}"
         }
    }
    if [type] == "certification-error" {
         elasticsearch {
            hosts  => ["http://172.16.86.215:9200"]
            index  => "certification-error-%{+YYYY.MM.dd}"
         }
    }
}

The filter block in the middle is there to fix the mismatch between the timestamp recorded in the pulled log lines and the local system time: the date filter parses the log's own timestamp field into @timestamp. Note that the field it matches (timestamp) has to be extracted from the log line first, typically by a grok filter, as sketched below.
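
For reference, a minimal, hypothetical extraction step, assuming each log line starts with a "yyyy-MM-dd HH:mm:ss" timestamp:

filter {
    # pull the leading timestamp out of the raw line into a field named "timestamp" (hypothetical pattern)
    grok {
        match => { "message" => "^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})" }
    }
    # then parse it into @timestamp, as in the config above
    date {
        match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
        remove_field => "timestamp"
    }
}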

3. Install Kibana

yum install kibana -y
vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
systemctl start kibana
systemctl enable kibana

The ELK stack is now up: Elasticsearch (search engine), Logstash (collection), and Kibana (visualization platform). The Kibana web UI is at http://ip:5601.
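
If firewalld is running on these hosts, the relevant ports need to be opened for the UIs and the cluster to be reachable (an assumption; skip this if the firewall is disabled):

firewall-cmd --permanent --add-port=9200/tcp   # Elasticsearch HTTP
firewall-cmd --permanent --add-port=9300/tcp   # Elasticsearch transport
firewall-cmd --permanent --add-port=9100/tcp   # elasticsearch-head
firewall-cmd --permanent --add-port=5601/tcp   # Kibana
firewall-cmd --reload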

Localizing Kibana into Chinese

Upload Kibana_Hanization-master.zip to the Kibana server, then unpack it:

unzip Kibana_Hanization-master.zip

Copy the translations folder from this project into the src/legacy/core_plugins/kibana/ directory under your Kibana installation:

cd Kibana_Hanization-master/
cp -r translations/  /usr/share/kibana/src/legacy/core_plugins/kibana/

Set the i18n.locale option in the Kibana config file kibana.yml to "zh-CN":

vim /etc/kibana/kibana.yml
#i18n.locale: "en"
i18n.locale: "zh-CN" 

Restart Kibana and the localization takes effect:

systemctl restart kibana


Reposted from www.cnblogs.com/xinxing1994/p/11947017.html