ELK - the basic use of Logstash

1. What is Logstash

ELK(Elasticsearch+Logstash+Kibana)

What exactly is Logstash?

Official description: Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash".

In plain terms: Logstash is a powerful data processing tool that is commonly used for log processing.

Logstash currently has more than 200 plugins available, along with the flexibility to create and contribute your own. The plugin ecosystem is mature, so it can be used with confidence.

2. Why use Logstash

Usually, when a system fails, engineers have to log in to each server and use Linux tools such as grep/sed/awk to search the logs for the cause of the failure. Without a log system, you first need to locate the server that handled the request; if multiple application instances are deployed on that server, you then have to go through the log directory of each instance to find the log files. On top of that, every instance usually has its own log rolling policy (for example, a new file every day) as well as log compression and archiving policies.

Going through all of that makes it troublesome to locate a fault and find its cause in time. If instead these logs are managed centrally and a centralized search function is provided, we not only improve troubleshooting efficiency but also gain a comprehensive view of the system, instead of passively firefighting after the fact.

Centralized log management can therefore be implemented with the ELK technology stack: Elasticsearch only provides data storage and analysis, and Kibana is the visual management platform. What is still missing is the role of collecting and organizing the data, and that is exactly what Logstash is responsible for.

3. Working principle of Logstash

  • Data Source

Logstash supports many data sources. For the logging use case, the only requirement is that the application can record logs and deliver them somewhere. Spring Boot uses Logback by default, which supports various output targets (for example, output to files or to a database).

  • Logstash Pipeline

This whole pipeline is what Logstash does.

A Logstash pipeline includes three very important stages:

  • Input

The input is generally configured with the host and port that Logstash listens on. The data source sends its logs to that IP and port, and the input collects the data once it is received.

  • Filter

The filter processes the collected information (additional parsing, enrichment, and so on); this configuration can also be omitted (no processing).

  • Output

The output defines where the collected information is sent. In the ELK technology stack it is all sent to Elasticsearch, and the subsequent data retrieval and analysis is handed over to Elasticsearch.

Final effect: through these steps, a raw line of log text is converted into a document (key-value pairs) that Elasticsearch can store.

4. Install Logstash with Docker

  • Pull the Logstash image

docker pull logstash:7.7.0

  • Start the container

docker run -it -p 4560:4560 --name logstash -d  logstash:7.7.0

  • Change settings

Enter the container:

docker exec -it logstash /bin/bash

Modify the configuration file

vi /usr/share/logstash/config/logstash.yml

Change the Elasticsearch address in this file to the IP of your own Elasticsearch server.
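After the change, logstash.yml might look roughly like this (192.168.8.128 is the Elasticsearch address assumed throughout this article; replace it with your own):

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.8.128:9200" ]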

  • Modify input and output configuration

Still on the container command line, open the pipeline configuration:

vi /usr/share/logstash/pipeline/logstash.conf

Configuration explanation (a complete example follows after this list):

input: configuration for receiving log input

        tcp: the protocol used

                mode: connection mode; server means Logstash listens as the server side

                port: the listening port, chosen by yourself (4560 in this article)

output: where the processed logs are sent

        elasticsearch: hand the data over to Elasticsearch for storage

                action: the Elasticsearch action to perform; index means indexing (creating) a new document

                hosts: the Elasticsearch host address(es)

                index: the index in which the logs are stored; it is created automatically if it does not exist. The default type name is _doc
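Putting that together, a pipeline configuration matching the rest of this article might look like the minimal sketch below (the Elasticsearch address 192.168.8.128:9200 and the index name test_log are assumptions taken from the later sections; adjust them to your environment):

input {
  tcp {
    mode => "server"
    port => 4560
  }
}
output {
  elasticsearch {
    action => "index"
    hosts => ["192.168.8.128:9200"]
    index => "test_log"
  }
}

Note that no codec or filter is configured here, so Logstash stores each incoming line as plain text in the message field of the document; the JSON produced by logstash-logback-encoder is therefore parsed later on the Java side (see MessagePojo in section 7) rather than by Logstash.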

  • Restart the container

Exit the container command line, enter the Linux terminal, and restart the logstash container.

docker restart logstash
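To confirm that Logstash restarted cleanly and is listening on port 4560, tailing the container output is usually enough (the exact log lines differ between versions):

docker logs -f logstash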

5. Use Logback to output logs to Logstash

Add the Logstash dependency:

        <!-- logstash dependency -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>6.3</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

Create logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- This configuration saves log messages of different levels to different files -->
<configuration>
	<include resource="org/springframework/boot/logging/logback/defaults.xml" />

	<springProperty scope="context" name="springAppName"
		source="spring.application.name" />

	<!-- Output location of the logs in the project -->
	<property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" />

	<!-- Console log output pattern -->
	<property name="CONSOLE_LOG_PATTERN"
		value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

	<!-- Console output -->
	<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
		<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
			<level>INFO</level>
		</filter>
		<!-- Log output encoding -->
		<encoder>
			<pattern>${CONSOLE_LOG_PATTERN}</pattern>
			<charset>utf8</charset>
		</encoder>
	</appender>
    <!-- Remote logstash logging configuration -->
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.8.128:4560</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
	<!-- Log output level -->
	<root level="INFO">
		<appender-ref ref="console" />
		<appender-ref ref="logstash" />
	</root>
</configuration>

After starting the project, check in Kibana whether the log information has been written to Elasticsearch.
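Starting the application already produces a few INFO startup logs that get shipped to Logstash. If you want something you can trigger repeatedly, a throwaway endpoint like the hypothetical one below (class name and path are illustrative, not part of the original project) works; every request writes one INFO line that the logstash appender ships to 192.168.8.128:4560:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    // Each call produces one log event for Logstash to pick up
    @GetMapping("/hello")
    public String hello() {
        log.info("hello logstash");
        return "ok";
    }
}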

6. View log information in Kibana

In the Kibana Dev Tools console, run GET test_log/_search to see all log documents.
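To preview the "last 15 minutes" requirement of the next section directly in Dev Tools, a range query on @timestamp can be used (Elasticsearch supports date math such as now-15m):

GET test_log/_search
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-15m" }
    }
  }
}

A returned hit's _source looks roughly like the illustrative sketch below (the values are made up; note that message holds the raw JSON string produced by logstash-logback-encoder):

{
  "@timestamp": "2022-05-27T02:15:30.123Z",
  "@version": "1",
  "host": "192.168.8.1",
  "port": 53456,
  "message": "{\"@timestamp\":\"2022-05-27T10:15:30.120+08:00\",\"@version\":\"1\",\"message\":\"Started Application\",\"logger_name\":\"com.example.Application\",\"thread_name\":\"main\",\"level\":\"INFO\",\"level_value\":20000}"
}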

7. Build a log system

Given a simple requirement:

Build a log system and provide an interface for querying log information in Elasticsearch (by default, the log information from the last 15 minutes is queried).

7.1 Modify the pom.xml file

    <dependencies>
        <!-- spring-data-elasticsearch dependency -->
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
		</dependency>
        <!-- lombok dependency -->
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>
        <!-- test dependency -->
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
        <!-- logstash dependency -->
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>6.3</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
	</dependencies>

7.2 Modify the application.yml configuration file

spring:
  elasticsearch:
    rest:
      uris: http://192.168.8.128:9200

7.3 Creating Entities

From the log information viewed in Kibana, it can be seen that all of the attributes are simple types except message, which is itself a structured object containing further attributes.

Note: in Spring Data Elasticsearch 3.x, the @version and @timestamp fields are received using @JsonProperty.

In Spring Data Elasticsearch 4.x, @version and @timestamp can be mapped directly using the name attribute of @Field.

Date fields should be converted using the format attribute; do not use custom conversion methods.

Create LogPojo entity class

@Data
@Document(indexName = "test_log")
public class LogPojo {
    @Id
    private String id;
    @Field(type = FieldType.Keyword)
    private String host;
    @Field(type = FieldType.Long)
    private Long port;
    @Field(type = FieldType.Text)
    private String message;
    @Field(type = FieldType.Date,name = "@timestamp",format = DateFormat.date_time)
    private Date timestamp;
    @Field(type = FieldType.Text,name = "@version")
    private String version;
    // Not mapped to any attribute in ES; used when our own business logic needs the parsed log content
    private MessagePojo messagePojo;
}

Create the MessagePojo entity class (used to hold the detailed log information parsed from the message field)

@Data
public class MessagePojo {
    @JsonProperty("@timestamp")
    private Date timestamp;
    @JsonProperty("@version")
    private String version;
    private String message;
    private String logger_name;
    private String thread_name;
    private String level;
    private String level_value;
}

7.4 New service and implementation class

public interface DemoService {
    List<LogPojo> demo();
}
@Service
@Slf4j
public class DemoServiceImpl implements DemoService {

    @Autowired
    private ElasticsearchRestTemplate elasticsearchRestTemplate;

    @Override
    public List<LogPojo> demo() {
        // Get the current time and move it back 15 minutes
        Calendar instance = Calendar.getInstance();
        instance.add(Calendar.MINUTE, -15);
        // Query the log entries of the last 15 minutes
        Query query = new NativeSearchQuery(QueryBuilders.rangeQuery("@timestamp").gte(instance));
        // Pagination (page numbers are 0-based: first page, 30 hits per page)
        query.setPageable(PageRequest.of(0, 30));
        SearchHits<LogPojo> search = elasticsearchRestTemplate.search(query, LogPojo.class);
        log.error("Total number of hits: " + search.getTotalHits());
        List<SearchHit<LogPojo>> searchHits = search.getSearchHits();
        List<LogPojo> list = new ArrayList<>();
        searchHits.forEach(hits->{
            LogPojo logPojo = hits.getContent();
            String message = logPojo.getMessage();
            ObjectMapper objectMapper = new ObjectMapper();
            MessagePojo mp = null;
            try {
                mp = objectMapper.readValue(message, MessagePojo.class);
            } catch (JsonProcessingException e) {
                e.printStackTrace();
            }
            logPojo.setMessagePojo(mp);
            list.add(logPojo);
        });
        return list;
    }
}

7.5 Create a new controller

@RestController
public class DemoController {

    @Autowired
    private DemoService demoService;

    /**
     * Get the logs of the last 15 minutes from ES
     * @return the list of log entries
     */
    @RequestMapping("/")
    public List<LogPojo> demo(){
       return demoService.demo();
    }
}

7.6 Test results

Enter in the browser: http://127.0.0.1:8080/ 

You will see the log entries from the last 15 minutes returned as a JSON array.


Origin blog.csdn.net/xiaozhang_man/article/details/125034422