Setting up ELK on CentOS7 (nginx, elasticsearch-5.1.1, logstash-5.1.1, kibana-5.1.1)

nginx:

# Install directly with yum:
[root@elk-node1 ~]# yum -y install nginx
Official documentation: http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
# Modify the log format in the configuration file:
vim /etc/nginx/nginx.conf
# Add to the http block:
          log_format json '{"@timestamp":"$time_iso8601",'
                          '"@version":"1",'
                          '"client":"$remote_addr",'
                          '"url":"$uri",'
                          '"status":"$status",'
                          '"domain":"$host",'
                          '"host":"$server_addr",'
                          '"size":$body_bytes_sent,'
                          '"responsetime":$request_time,'
                          '"referer": "$http_referer",'
                          '"ua": "$http_user_agent"'
                          '}';
# Add inside the server block:
access_log  /home/wwwlogs/access_json.log  json;
# The modified nginx.conf file:
[root@elk-node1 ~]# grep -Ev "#|^$" /etc/nginx/nginx.conf
user  www www;

worker_processes auto;

error_log  /home/wwwlogs/nginx_error.log  crit;

pid        /usr/local/nginx/logs/nginx.pid;

#Specifies the value for maximum file descriptors that can be opened by this process.
worker_rlimit_nofile 51200;

events
    {
        use epoll;
        worker_connections 51200;
        multi_accept on;
    }

http
    {
	log_format json '{"@timestamp":"$time_iso8601",'
                           '"@version":"1",'
                           '"client":"$remote_addr",'
                           '"url":"$uri",'
                           '"status":"$status",'
                           '"domain":"$host",'
                           '"host":"$server_addr",'
                           '"size":$body_bytes_sent,'
                           '"responsetime":$request_time,'
                           '"referer": "$http_referer",'
                           '"ua": "$http_user_agent"'
               '}';
        include       mime.types;
        default_type  application/octet-stream;

        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 50m;

        sendfile   on;
        tcp_nopush on;

        keepalive_timeout 60;

        tcp_nodelay on;

        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 256k;

        gzip on;
        gzip_min_length  1k;
        gzip_buffers     4 16k;
        gzip_http_version 1.1;
        gzip_comp_level 2;
        gzip_types     text/plain application/javascript application/x-javascript text/javascript text/css application/xml application/xml+rss;
        gzip_vary on;
        gzip_proxied   expired no-cache no-store private auth;
        gzip_disable   "MSIE [1-6]\.";

        #limit_conn_zone $binary_remote_addr zone=perip:10m;
        ##If enable limit_conn_zone,add "limit_conn perip 10;" to server section.

        server_tokens off;
        access_log off;

server
    {
        listen 80 default_server;
        #listen [::]:80 default_server ipv6only=on;
        server_name _;
        index index.html index.htm index.php;
        root  /home/wwwroot/default;

        #error_page   404   /404.html;

        # Deny access to PHP files in specific directory
        #location ~ /(wp-content|uploads|wp-includes|images)/.*\.php$ { deny all; }

        include enable-php.conf;

        location /nginx_status
        {
            stub_status on;
            access_log   off;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$
        {
            expires      30d;
        }

        location ~ .*\.(js|css)?$
        {
            expires      12h;
        }

        location ~ /.well-known {
            allow all;
        }

        location ~ /\.
        {
            deny all;
        }

       #access_log  /home/wwwlogs/access.log;
       access_log  /home/wwwlogs/access_json.log  json;
    }
include vhost/*.conf;
}
# Start nginx:
[root@elk-node1 ~]# systemctl start nginx
[root@elk-node1 ~]# ss -lntp | grep 80
LISTEN     0      511          *:80                       *:*                   users:(("nginx",pid=8045,fd=6),("nginx",pid=8044,fd=6),("nginx",pid=8043,fd=6))
LISTEN     0      511         :::80                      :::*                   users:(("nginx",pid=8045,fd=7),("nginx",pid=8044,fd=7),("nginx",pid=8043,fd=7))
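
With nginx up, you can verify the JSON logging end to end (a sketch, assuming the log path configured above exists and is writable):

nginx -t && systemctl reload nginx                              # validate the config and apply it
curl -s http://localhost/ > /dev/null                           # generate one request
tail -n 1 /home/wwwlogs/access_json.log | python -m json.tool   # the newest entry should parse as JSON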

  

0.5 Preparation before installation:

CentOS7 operating system
jdk1.8 (elasticsearch-5.1.1 and logstash-5.1.1 require a JDK 1.8 environment)
Download ELK (elasticsearch-5.1.1, logstash-5.1.1, kibana-5.1.1) from the official site: https://www.elastic.co/downloads. The .tar.gz packages are recommended; they are the most convenient to install. (Note: for jdk1.8 the .rpm package is recommended. CentOS7 often already ships with jdk1.8; run java -version in a terminal, and if the version is 1.8 you can skip step 1 and go straight to step 2.)
1. jdk1.8 installation:
Download jdk1.8 directly from the Oracle site: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Note: two package formats are available (.tar.gz and .rpm). A brief explanation:
1. *.rpm packages

Install:

rpm -ivh *.rpm
Uninstall:

rpm -e packagename
2. *.tar.gz / *.tgz / *.bz2 source packages

Install:

tar zxvf *.tar.gz    # extract first (use tar jxvf *.bz2 for .bz2 archives)
cd <unpacked-directory>    # then enter the unpacked directory
./configure    # configure
make           # compile
make install   # install

As you can see, .rpm packages are simpler to install: no configure or compile step is needed. So we download the .rpm package of the JDK here.
After downloading jdk-8u112-linux-x64.rpm, install it with the following command:

rpm -ivh jdk-8u112-linux-x64.rpm    # run this from the directory where you downloaded the JDK
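
A quick way to confirm that the package registered with rpm (the exact package name may vary with the JDK build):

rpm -qa | grep -i jdk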

 

Java environment configuration (append the following to /etc/profile, then reload it):

export NODE_HOME=/usr/local/node-v8.10.0
export JAVA_HOME=/usr/local/jdk    # note: the .rpm typically installs under /usr/java/; point JAVA_HOME there or symlink /usr/local/jdk to it
export PATH=$PATH:$JAVA_HOME/bin:$NODE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile

  

Use the following command to check whether the installation succeeded:

java -version

If the output looks like the following, jdk1.8 is installed on your CentOS7 system.
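
Sample output (the exact build numbers will vary with your JDK release):

java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)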

2.1.elasticsearch-5.1.1 Installation:
Once you have downloaded elasticsearch-5.1.1.tar.gz, use the following commands:

tar -xzf elasticsearch-5.1.1.tar.gz    # extract; run this from the directory you downloaded elasticsearch-5.1.1.tar.gz into

cd elasticsearch-5.1.1    # enter the directory you just extracted
[root@localhost elasticsearch-5.1.1]# ./bin/elasticsearch    # run it; the system may seem to hang during startup, just wait patiently

Note: running it as above produces this error: "can not run elasticsearch as root". The message is clear: elasticsearch cannot be run as the root user. So we create a new group and user, as follows:

groupadd elsearchgroup                   # add a group named elsearchgroup
useradd -g elsearchgroup elsearchuser    # add a user elsearchuser in the elsearchgroup group
passwd elsearchuser                      # set elsearchuser's password
su elsearchuser                          # switch from root to elsearchuser
[elsearchuser@localhost elasticsearch-5.1.1]$ ./bin/elasticsearch    # run elasticsearch as elsearchuser

  

At this point you may still get an error like: AccessDeniedException .... The cause is file permissions: you extracted these directories as root, so root owns them, and after switching users you no longer have write access. Changing the ownership and permissions fixes it:

chown -R elsearchuser:elsearchgroup elasticsearch-5.1.1    # change the owner and group of elasticsearch-5.1.1 and its subdirectories to elsearchuser:elsearchgroup
chmod -R 777 elasticsearch-5.1.1    # -R with 777 makes the directory and all subdirectories fully readable and writable for everyone (crude but simple; the chown alone is usually enough)


Start it again; this time elasticsearch should come up without errors.


Verification: open a browser and go to localhost:9200. A JSON response (showing the node name, cluster name, and version) means elasticsearch-5.1.1 installed successfully:
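
The same check from the command line (assuming the default port):

curl http://localhost:9200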

 

2.2.elasticsearch-5.1.1's most important settings:

elasticsearch-5.1.1 has two configuration files, elasticsearch.yml and log4j2.properties, located in the elasticsearch-5.1.1/config directory.
The most important settings in elasticsearch.yml are:

path.data 和 path.logs
cluster.name
node.name
bootstrap.memory_lock
network.host
discovery.zen.ping.unicast.hosts
discovery.zen.minimum_master_nodes
2.2.1.path.data and path.logs:
We installed elasticsearch-5.1.1 from the .tar.gz archive, so the extracted directory layout defaults to everything living under the elasticsearch-5.1.1 directory:

 

The data and logs directories sit inside the default elasticsearch-5.1.1 directory. If we keep our data and logs in this default layout, then when we upgrade elasticsearch-5.1.1 to a newer version there is a real risk that data and logs get overwritten by the new version. So we should change path.data and path.logs in elasticsearch.yml and avoid the default paths. Example:

path.data: /usr/workspaces/elsearch5.1.1/data    # remove the leading # and change the path
path.logs: /usr/workspaces/elsearch5.1.1/logs
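
Since elasticsearch runs as the non-root user created earlier, the new directories must exist and be writable by that user (a sketch, assuming the paths above and the elsearchuser account):

mkdir -p /usr/workspaces/elsearch5.1.1/{data,logs}
chown -R elsearchuser:elsearchgroup /usr/workspaces/elsearch5.1.1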

  

2.2.2.cluster.name:
A cluster is a collection of one or more nodes (servers) that together hold the entire data set and provide federated indexing and search across all nodes. The default cluster name is elasticsearch; cluster.name changes it to whatever we want. Cluster names must not collide: a node joins a cluster by its cluster name and can only belong to that one cluster. Example:

cluster.name: logging-prod
2.2.3.node.name:
A node is a single server (for example, when we start elasticsearch on our own machine, that machine acts as a node); it is part of a cluster, stores data, and participates in the cluster's indexing and search. The default node name is a random UUID (Universally Unique IDentifier). A node can only join one cluster. Use node.name to change the default name. Example:

node.name: prod-data-2
2.2.4.bootstrap.memory_lock
Whether elasticsearch should lock its heap in RAM (via mlockall) to keep it from being swapped out; see error 2 under network.host below for the value used in this install.
2.2.5.network.host
The default is 127.0.0.1 (localhost), i.e. local access only. To communicate with nodes on other servers and form a cluster, we need to change network.host, for example:

network.host: 192.168.233.134    # your machine's IP on the network; here, this machine's (this node's) LAN address

To find your machine's IP address: open the system settings -> Network, or run ip addr in a terminal.

 

Note: once you change network.host, elasticsearch assumes you have moved from a development environment to a production environment, and warnings that were ignored in development (on localhost:9200) are upgraded to errors. So elasticsearch may run fine in development yet refuse to start in production. elasticsearch does this for safety, to prevent data loss caused by warnings you ignored.
Changing network.host may surface the following errors:
Error 1: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536].
Fix for error 1:
Open /etc/security/limits.conf, add the following two lines, and save:

* soft nofile 65536    # * means any user; since elasticsearch reported the error, you can also name the user that runs elasticsearch instead
* hard nofile 131072

Error 2: memory locking requested for elasticsearch process but memory is not locked
Fix for error 2: edit elasticsearch.yml:

bootstrap.memory_lock: false
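
An alternative to disabling memory_lock is to allow the elasticsearch user to lock memory, again in /etc/security/limits.conf (a sketch using standard limits.conf syntax, assuming the elsearchuser account created earlier):

elsearchuser soft memlock unlimited
elsearchuser hard memlock unlimited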

Error 3: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Fix for error 3: run the following in a terminal:

sysctl -w vm.max_map_count=262144      # requires root privileges
sysctl -a | grep vm.max_map_count      # check the result (optional)
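
sysctl -w does not persist across reboots; to make the setting permanent, also add it to /etc/sysctl.conf (a sketch):

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p    # reload settings from /etc/sysctl.conf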

2.2.6.discovery.zen.ping.unicast.hosts
With no network configuration at all, elasticsearch scans ports 9300 to 9305 to connect to other nodes running on the same server; in other words, without any network configuration we get an automatic clustering (auto-clustering) experience for free.
When you want nodes on other servers to form a cluster, you must list those nodes one by one. Example:

discovery.zen.ping.unicast.hosts:
- 192.168.1.10:9300
- 192.168.1.11    # the port defaults to 9300


These seed hosts are used to discover new nodes joining the cluster.
2.2.7.discovery.zen.minimum_master_nodes
This setting tells each node how many master-eligible nodes it must see in the cluster before a master can be elected. To avoid split brain, set it to (number of master-eligible nodes / 2) + 1; with 5 master-eligible nodes, for example:

discovery.zen.minimum_master_nodes: 3

3.kibana-5.1.1 installation:
Extract the downloaded kibana-5.1.1.tar.gz:

tar -xzf kibana-5.1.1.tar.gz    # extract
cd kibana-5.1.1
./bin/kibana    # run
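
kibana listens on port 5601 by default; once it is up, check it and open http://localhost:5601 in a browser:

ss -lntp | grep 5601    # should show the kibana (node) process listening on 5601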

  

Connecting kibana to elasticsearch:
Edit kibana.yml in the kibana-5.1.1/config directory:

elasticsearch.url: "http://localhost:9200"    # elasticsearch's address

(For now I can only connect using localhost; after changing the IP in elasticsearch.yml it would never connect. So in the end the only elasticsearch.yml settings I changed were path.data, path.logs, and node.name; everything else was left at the default.) A successful connection looks like this:
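
Note that kibana itself also binds to localhost by default; if you need to reach it from another machine, server.host in the same kibana.yml controls the bind address (worth double-checking against the kibana docs for your version):

server.host: "0.0.0.0"    # listen on all interfaces instead of localhost only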

You may see this warning: Warning No default index pattern. You must select or create one to continue.
Solution:

Check whether communication between logstash and elasticsearch is working (see section 4, the logstash-5.1.1 installation); the problem is usually there.

4.logstash-5.1.1 installation:

tar -xzf logstash-5.1.1.tar.gz
cd logstash-5.1.1
./bin/logstash -f config/elktest.conf    # run with the elktest.conf file in the config directory
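
logstash can also syntax-check the pipeline file before running it (logstash 5.x supports this flag):

./bin/logstash -f config/elktest.conf --config.test_and_exit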

  

# Appendix: the contents of elktest.conf

# Set:
input {
   #stdin {}    # console input, handy when only testing ELK connectivity; commented out here, since the file input below collects the nginx JSON access log instead
  file {
       path => "/home/wwwlogs/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
}
filter {
  # Only matched data are sent to output.
}

output {
  elasticsearch {
    action => "index"
    hosts  => "localhost:9200"
    index  => "nginx-log"
  }
}
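
Once logstash has run for a moment, confirm that documents actually reached elasticsearch (a sketch, assuming elasticsearch on localhost:9200 and the nginx-log index configured above):

curl 'http://localhost:9200/_cat/indices?v'                     # nginx-log should appear in the list
curl 'http://localhost:9200/nginx-log/_search?pretty&size=1'    # fetch one indexed log entry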

  

5. Checking that ELK is wired up

In kibana you can now see the index we configured in the logstash step, along with the test content that was fed in.

At this point ELK is fully connected. For using logstash input, output, and other settings to collect logs produced by other applications (Apache, Log4j, and so on), see the official documentation; I am still learning it myself.

6. References:
ELK official documentation: https://www.elastic.co/learn
elasticsearch configuration file explained (in Chinese): http://www.cnblogs.com/sunxucool/p/3799190.html
