Manage Spring Boot logs with ELK

When we develop a new project, a common problem is log management. The ELK stack (Elasticsearch, Logstash, Kibana) is a powerful and free log management solution. In this article, I will show you how to install and set up ELK, and how to use it to manage logs in the default format of a Spring Boot application.

We will set up a demo Spring Boot application with logging enabled, and use a Logstash configuration to ship its log entries to Elasticsearch.

The application will write its log to a file. Logstash will read and parse the log file and ship log entries to an Elasticsearch instance. Finally, we will use Kibana 4 (an Elasticsearch web frontend) to search and analyze the logs.

  Step 1 Install Elasticsearch

Download Elasticsearch from https://www.elastic.co/downloads/elasticsearch and unzip it to a directory.

Run Elasticsearch (bin/elasticsearch, or bin/elasticsearch.bat on Windows). Check that it is running using curl -XGET http://localhost:9200. Here is how to do it (the steps below are written for OS X, but they are much the same on other operating systems):

wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.zip
unzip elasticsearch-1.7.1.zip
cd elasticsearch-1.7.1
bin/elasticsearch

Now Elasticsearch should be running. You can verify it with curl. Execute a GET request against the Elasticsearch status page in a separate terminal window: curl -XGET http://localhost:9200. If everything went well, you should get a result like the following:

{  "status":200,  "name":"Tartarus",  "cluster_name":"elasticsearch",  "version":{  "number":"1.7.1",  "build_hash":"b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",  "build_timestamp":"2015-07-29T09:54:16Z",  "build_snapshot":false,  "lucene_version":"4.10.4"  },  "tagline":"YouKnow,forSearch"  }

Step 2 Install Kibana 4

Download Kibana from https://www.elastic.co/downloads/kibana (pay attention to your platform and download the version that matches your system; the link and commands below are for OS X).

Unzip the file and run Kibana (bin/kibana).

Check that Kibana is running by pointing your browser at it.

Here is how to do it:

wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-darwin-x64.tar.gz
tar xvzf kibana-4.1.1-darwin-x64.tar.gz
cd kibana-4.1.1-darwin-x64
bin/kibana

Point your browser to http://localhost:5601 (if a page shows up, you're doing fine; we will configure it later).

Step 3 Install Logstash

Download Logstash from https://www.elastic.co/downloads/logstash

Extract the file (unzip):

wget https://download.elastic.co/logstash/logstash/logstash-1.5.3.zip
unzip logstash-1.5.3.zip

Step 4 Configure Spring Boot's log file

In order for Logstash to ship log entries to Elasticsearch, we must first configure Spring Boot to store log entries in a file. We will establish the following delivery path: Spring Boot app - log file - Logstash - Elasticsearch. There are other ways to accomplish the same thing, such as configuring logback to use a TCP appender to send logs directly to a remote Logstash instance (a rough sketch of this alternative is shown below), among other configurations. But I prefer the file approach because it is simpler, more natural (you can easily add it to an existing system), and it does not lose or corrupt any log entries when Logstash stops working or Elasticsearch goes down.
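
For completeness, here is a rough sketch of what the TCP-appender alternative could look like. It assumes the logstash-logback-encoder library is added as a dependency and that Logstash is listening with a tcp input on port 5000; these are assumptions for illustration only and are not part of the setup used in the rest of this article.

<!-- logback.xml: hypothetical TCP appender setup, not used in this article -->
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- host:port of the remote Logstash tcp input (assumed) -->
    <destination>localhost:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>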

Anyway, let's configure Spring Boot's log file. The easiest way is to configure it in application.properties. It is enough to add the following line:

logging.file=application.log

Now Spring Boot will log ERROR, WARN and INFO level messages in application.log, and will rotate the file when it reaches 10 MB.
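
For clarity, the relevant part of application.properties then looks like this (the log level line is only an optional illustration with an example package name, not required by this setup):

# Write logs to a file so Logstash can tail it; keep this path in sync
# with the "path" setting in the Logstash configuration below
logging.file=application.log
# Optional: adjust verbosity for your own packages (example package name)
logging.level.com.example=INFO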

Step 5 Configure Logstash

Now we need to configure Logstash so that it understands Spring Boot's log file format. This part is trickier. We need to create a Logstash configuration file. A typical Logstash configuration file contains three sections: input, filter, and output. Each section contains plugins that perform the processing for that part of the pipeline, for example: a file input plugin that reads log events from a file, or an elasticsearch output plugin that sends log events to Elasticsearch.

The input section defines where Logstash reads the input data - in our case it is a file, so we will use the file plugin with a multiline codec. This basically means that our input file may have multiple lines per log entry.

Input section

The following is the configuration of the input section:

input {
  file {
    type => "java"
    path => "/path/to/application.log"
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}

We are using the file plugin. type is set to java - this is just additional metadata in case you handle multiple types of log files in the future. path is the absolute path to the log file. It has to be absolute - Logstash is strict about this.

We are using a multiline codec, which means that multiple lines may correspond to a single log event. In order to detect lines that logically belong with the previous line, we use the detection pattern: pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*" → each new log event needs to start with a date. negate => "true" → if the line does not start with a date... what => "previous" → ...then it should be grouped with the previous line. The file input plugin, as configured, will tail the log file (i.e. it only reads new entries appended to the end of the file). So, when testing, in order for Logstash to read something, you need to generate new log entries; a small sketch for doing that follows below.
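
When testing, a quick way to generate fresh log entries is simply to log something from the application. As a minimal sketch (a hypothetical controller added only for testing; it is not part of the article's sample application):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    // Hitting http://localhost:8080/log-something appends new entries to application.log
    @RequestMapping("/log-something")
    public String logSomething() {
        log.info("A fresh INFO entry for Logstash to pick up");
        try {
            throw new IllegalStateException("Demonstration exception");
        } catch (IllegalStateException e) {
            // Logging an exception produces a stack trace, exercising the multiline grouping above
            log.error("Something went wrong", e);
        }
        return "logged";
    }
}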

Filter section

The Filter section contains plugins that perform intermediate processing on log events. In our case, an event can be a single log line or a multi-line log event grouped according to the rules above. In the Filter section we will do several things:

If a log event contains a stack trace, we tag it. This will help us find exception information later on. Parse out the timestamp, log level, pid, thread, class name (the actual logger) and log message. Specify the format of the timestamp - Kibana will use it for time-based searches.

The filter section for the Spring Boot log format described above looks like this:

filter {
  # If log line contains tab character followed by 'at' then we will tag that entry as stacktrace
  if [message] =~ "\tat" {
    grok {
      match => ["message", "^(\tat)"]
      add_tag => ["stacktrace"]
    }
  }

  # Grokking Spring Boot's default log format
  grok {
    match => [ "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})  %{LOGLEVEL:level} %{NUMBER:pid} --- \[(?<thread>[A-Za-z0-9-]+)\] [A-Za-z0-9.]*\.(?<class>[A-Za-z0-9#_]+)\s*:\s+(?<logmessage>.*)",
               "message",
               "(?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME})  %{LOGLEVEL:level} %{NUMBER:pid} --- .+? :\s+(?<logmessage>.*)"
             ]
  }

  # Parsing out timestamps which are in timestamp field thanks to previous grok section
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}

Explanation:

if [message] =~ "\tat" → if the message contains a tab character followed by at (this is Ruby syntax), then...

...use the grok plugin to tag stack traces: match => ["message", "^(\tat)"] → when the message matches the beginning of a line, followed by a tab, followed by at, then... add_tag => ["stacktrace"] → ...tag the event with a stacktrace tag.

Parsing regular Spring Boot log messages with the grok plugin: the first pattern extracts the timestamp, level, pid, thread, class name (actually the logger name) and log message. Unfortunately, some log messages do not have a logger name that resembles a class name (e.g. Tomcat logs), so the second pattern skips the logger/class field and parses out the timestamp, level, pid, thread and log message.

Use the date plugin to parse and set the event date: match => ["timestamp", "yyyy-MM-dd HH:mm:ss.SSS"] → the timestamp field (grokked earlier) contains the timestamp in the specified format.
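
To make this concrete, here is a made-up log line in the general shape the first pattern expects (it is not taken from the sample application):

2015-08-10 12:34:56.123  INFO 4567 --- [main] com.example.demo.DemoApplication : Started DemoApplication in 2.5 seconds

The first pattern would extract timestamp = 2015-08-10 12:34:56.123, level = INFO, pid = 4567, thread = main, class = DemoApplication and logmessage = Started DemoApplication in 2.5 seconds.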

Output section

The Output section contains output plugins that send event data to a particular destination. Outputs are the final stage of the event pipeline. We will send our log events to stdout (console output, for debugging) and to Elasticsearch. Compared with the Filter section, the Output section is very simple:

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "127.0.0.1"
  }
}

Explanation:

  We are using multiple outputs: stdout and elasticsearch.

stdout {...} → the stdout plugin prints log events to standard output (the console).

codec => rubydebug → pretty-prints events in a JSON-like format.

elasticsearch {...} → the elasticsearch plugin sends log events to an Elasticsearch server.

host => "127.0.0.1" → the hostname where Elasticsearch is running; localhost in our case.

Update 5/9/2016: At the time of writing this update, the latest version of Logstash's elasticsearch output plugin uses the hosts configuration parameter instead of host as shown in the example above. The new parameter takes an array of hosts (e.g. an Elasticsearch cluster) as its value. In other words, if you are using the latest Logstash version, configure the elasticsearch output plugin as follows:

elasticsearch {
  hosts => ["127.0.0.1"]
}

Putting it all together

Finally, the three parts - input, filter and output - need to be copied and pasted together and saved as a logstash.conf configuration file. Once the configuration file is in place and Elasticsearch is running, we can run Logstash:

bin/logstash -f logstash.conf

If all goes well, Logstash is shipping log events to Elasticsearch.
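
As an optional sanity check (not part of the original walkthrough), you can query the Logstash indices directly from another terminal, for example:

curl -XGET 'http://localhost:9200/logstash-*/_search?q=level:INFO&pretty'

If log events are arriving, the response should contain hits with the fields parsed out by the grok filter (level, pid, thread, logmessage and so on).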

  Step 6 Configure Kibana

Now it is time to visit the Kibana web UI again. We started it in step 2, and it should be running at http://localhost:5601.

First, you need to point Kibana to the Elasticsearch index (or indices) of your choice. Logstash creates indices with the name pattern logstash-YYYY.MM.DD. Configure the indices in Kibana under Settings - Indices:

  Index contains time-based events (select this)

Use event times to create index names (select this)

Index pattern interval: Daily

  Index name or pattern: [logstash-]YYYY.MM.DD

  Click "Create Index"

  Now, click on the "Discover" tab. It seems to me that the "Discover" tab is indeed incorrectly named in Kibana - it should be labeled "Search" instead of "Discover" as it allows you to perform new searches and save/manage them. Log events should now be displayed in the main window. If not, check the time period filter in the upper right corner of the screen again. The default table has 2 columns by default: Time and _source. To make the list more useful, we can configure the displayed columns. Select the level, category and log message from the selection menu on the left.

OK! You are now ready to take control of your logs with the ELK stack and start customizing and tuning your log management configuration.

You can download the sample application used in this article from here: https://github.com/knes1/todo. It is already configured to write logs to a file and includes the Logstash configuration described above (although the absolute path in logstash.conf needs to be adjusted).
