Installing and Using ELK 6.5

I. Introduction to ELK

  • ELK stands for Elasticsearch (ES for short), Logstash, and Kibana. Elasticsearch is the component that stores and indexes the data; Logstash collects and ships it; Kibana works with ES to display the logs. Logstash gathers data on each server and writes it to ES, and Kibana then queries ES and presents the results in a web interface.
    (Figure: ELK architecture diagram)
    This post covers only basic setup and usage; readers who want to go deeper can study the official Elastic documentation. The versions of the ELK components must not differ too widely, or the stack will not work properly together.

II. Installing and Using ELK

1. Configuration details

Servers: ES 192.168.31.132/133    Logstash 192.168.31.134    Kibana 192.168.31.135
Hardware: 2 CPU cores, 4 GB RAM, 50 GB disk
OS: CentOS 7
ELK version: 6.5
JDK version: JDK 8

2. Installing the JDK

Every server needs the JDK installed and configured, and the version must be JDK 8 or later. The installation itself is not covered here; the main point is how to configure the environment variables.
[root@Logstash java]# cat ~/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs
# append :/usr/java/jdk/bin after $PATH:$HOME/bin
PATH=$PATH:$HOME/bin:/usr/java/jdk/bin

export PATH

Apply it immediately
[root@Logstash java]# source ~/.bash_profile 
[root@Logstash java]# java -version
java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
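The ES and Logstash startup scripts also consult JAVA_HOME when it is set, so it is worth exporting alongside PATH. A minimal sketch, assuming the JDK lives at /usr/java/jdk as above (add the same lines to ~/.bash_profile):

```shell
# Sketch: export JAVA_HOME next to the PATH entry (assumes JDK at /usr/java/jdk)
export JAVA_HOME=/usr/java/jdk
export PATH="$PATH:$JAVA_HOME/bin"

# sanity check: the JDK bin dir is now on PATH
case ":$PATH:" in
  *":/usr/java/jdk/bin:"*) echo "PATH ok" ;;   # → PATH ok
esac
```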

3. Installing ES


Create a directory
[root@Elasticsearch ~]# mkdir /elk

Create a user; ES must be started as a non-root user
[root@Elasticsearch ~]# useradd elk

Download the ES tarball and unpack it
[root@Elasticsearch ~]# cd /elk/
[root@Elasticsearch elk]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.tar.gz
[root@Elasticsearch elk]# tar -xf elasticsearch-6.5.4.tar.gz
[root@Elasticsearch elk]# cd elasticsearch-6.5.4
[root@Elasticsearch elasticsearch-6.5.4]# ls
bin  config  data  lib  LICENSE.txt  logs  modules  NOTICE.txt  plugins  README.textile
[root@Elasticsearch elasticsearch-6.5.4]# cd config/
[root@Elasticsearch config]# ls
elasticsearch.keystore  elasticsearch.yml  jvm.options  log4j2.properties  role_mapping.yml  roles.yml  users  users_roles

Edit the configuration on node-1
[root@Elasticsearch config]# grep '^[a-Z]' elasticsearch.yml 
cluster.name: escluster # cluster name, used for node discovery; must be identical on every node
node.name: node-1   # node name; must be unique per node
network.host: 192.168.31.132 # bind address (this host's IP)
discovery.zen.ping.unicast.hosts: ["192.168.31.132", "192.168.31.133"]    # cluster node IPs; list every node

Edit the configuration on node-2
[root@Elasticsearch config]# grep '^[a-Z]' elasticsearch.yml 
cluster.name: escluster # cluster name, used for node discovery; must be identical on every node
node.name: node-2   # node name; must be unique per node
network.host: 192.168.31.133 # bind address (this host's IP)
discovery.zen.ping.unicast.hosts: ["192.168.31.132", "192.168.31.133"]    # cluster node IPs; list every node
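The default heap in config/jvm.options is 1 GB; a common rule of thumb is to give ES about half the machine's RAM. A sketch of the relevant jvm.options lines (the 2g figure is an assumption based on the 4 GB nodes described above, not from the original post):

```text
# config/jvm.options fragment (sketch; 2g assumes the 4 GB nodes above)
# keep min and max heap equal to avoid resize pauses
-Xms2g
-Xmx2g
```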

Tune the kernel parameters
[root@Elasticsearch bin]# sysctl -w vm.max_map_count=262144
[root@Elasticsearch bin]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@Elasticsearch bin]# sysctl -p
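Besides vm.max_map_count, ES 6.x also runs a bootstrap check on the open-file limit (it must be at least 65536 for the user that starts ES). A sketch of the /etc/security/limits.conf lines for the elk user created above (log out and back in for them to take effect):

```text
# /etc/security/limits.conf fragment (sketch): raise the fd limit for the elk user
elk soft nofile 65536
elk hard nofile 65536
```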

Start ES (switch to the elk user first)
[root@Elasticsearch elasticsearch-6.5.4]#  chown -R elk. /elk/
[elk@Elasticsearch elasticsearch-6.5.4]$ bin/elasticsearch -d

Access any node
http://192.168.31.132:9200/_cluster/health?pretty
Note: a status of green means healthy, yellow is a warning, and red means failure
{
  "cluster_name" : "escluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
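The health endpoint above can also be polled from a script. A minimal sketch, assuming curl is installed; ES_HOST and parse_status are names made up here for illustration:

```shell
# Sketch: query cluster health and extract the status colour
ES_HOST=192.168.31.132   # any node in the cluster works

parse_status() {
  # isolate the "status" field of the health JSON, then the colour word
  grep -o '"status"[^,]*' | grep -oE 'green|yellow|red'
}

# against a live cluster (commented out here, needs the nodes running):
# curl -s "http://$ES_HOST:9200/_cluster/health" | parse_status

# offline demo on a canned response:
echo '{"cluster_name":"escluster","status":"green"}' | parse_status   # → green
```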

4. Installing and Configuring Logstash

Create a directory
[root@Logstash ~]# mkdir /elk
[root@Logstash ~]# cd /elk/

Download the Logstash tarball and unpack it
[root@Logstash elk]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
[root@Logstash elk]# tar -xf logstash-6.5.4.tar.gz
[root@Logstash elk]# cd logstash-6.5.4
[root@Logstash logstash-6.5.4]# ls
 bin  config  CONTRIBUTORS  data  Gemfile  Gemfile.lock  lib  LICENSE.txt  logs  logstash-core  logstash-core-plugin-api  modules  NOTICE.TXT  tools  vendor  x-pack

Test it
[root@Logstash logstash-6.5.4]# bin/logstash -e 'input { stdin { type => test } } output { stdout {  } }'
Sending Logstash logs to /elk/logstash-6.5.4/logs which is now configured via log4j2.properties
[2018-12-25T21:20:29,517][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-12-25T21:20:29,538][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.5.4"}
[2018-12-25T21:20:34,131][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-12-25T21:20:40,865][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x52a485ca run>"}
The stdin plugin is now waiting for input:
[2018-12-25T21:20:40,929][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-12-25T21:20:41,211][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}  # Logstash now waits for input; whatever you type is printed back as JSON
hello world    
{
          "type" => "test",
       "message" => "hello world",
    "@timestamp" => 2018-12-25T13:21:50.827Z,
      "@version" => "1",
          "host" => "Logstash"
}


Write a conf file to collect logs
Create a conf directory to hold your own conf files
[root@Logstash logstash-6.5.4]# mkdir conf
[root@Logstash logstash-6.5.4]# cat conf/test.conf 
input{
	file{					# use the file input plugin
	type =>"test"			
	path =>"/var/log/messages"     		# path of the log to read
	start_position => "beginning"  		# start collecting from the oldest entries
	}
}

output{						# use the elasticsearch output plugin
	elasticsearch{					
	hosts => ["192.168.31.132:9200"]	# ES host address
	action => "index"			# ES action
	index => "test-%{+YYYY-MM-dd}"		# index name

	}
}
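The index name uses Logstash's sprintf date pattern, so events are written to one index per day. The shell equivalent of what %{+YYYY-MM-dd} expands to for the current day:

```shell
# Sketch: today's index name under the test-%{+YYYY-MM-dd} pattern
idx="test-$(date +%Y-%m-%d)"
echo "$idx"   # e.g. test-2018-12-25 on the day this post was written
```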

Start it in the background
[root@Logstash logstash-6.5.4]# nohup bin/logstash -f conf/test.conf &

Once it is running, query ES to confirm the index was created
http://192.168.31.132:9200/_cat/indices?v
health status index           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   test-2018-12-25 8FGxJf2MTXaBi8x6JH5GxQ   5   1       3542            0    475.6kb           460b

5. Installing and Configuring Kibana

Create a directory
[root@Kibana ~]# mkdir /elk
[root@Kibana ~]# cd /elk

Download Kibana and unpack it
[root@Kibana elk]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.5.4-linux-x86_64.tar.gz
[root@Kibana elk]#  tar -xf kibana-6.5.4-linux-x86_64.tar.gz

Configure the Kibana config file
[root@Kibana elk]# cd kibana-6.5.4-linux-x86_64
[root@Kibana kibana-6.5.4-linux-x86_64]# ls
bin  config  data  LICENSE.txt  node  node_modules  NOTICE.txt  optimize  package.json  plugins  README.txt  src  webpackShims
[root@Kibana kibana-6.5.4-linux-x86_64]# grep '^[a-Z]' config/kibana.yml 
server.host: "192.168.31.135"
elasticsearch.url: "http://192.168.31.132:9200"

Start it in the background, then open it in a browser
[root@Kibana kibana-6.5.4-linux-x86_64]# nohup bin/kibana &
http://192.168.31.135:5601

Reposted from blog.csdn.net/Jack_Yangyj/article/details/85255806