Setting Up a Basic ELK Environment: Configuring a Distributed Logging System with Kibana



The Shakespeare data set is organized in the following schema:

{

    "line_id": INT,

    "play_name": "String",

    "speech_number": INT,

    "line_number": "String",

    "speaker": "String",

    "text_entry": "String"

}

The accounts data set is organized in the following schema:

{

    "account_number": INT,

    "balance": INT,

    "firstname": "String",

    "lastname": "String",

    "age": INT,


    "gender": "M or F",

    "address": "String",

    "employer": "String",

    "email": "String",

    "city": "String",

    "state": "String"

}

The schema for the logs data set has dozens of different fields, but the notable ones used in this tutorial are:

{

    "memory": INT,

    "geo.coordinates": "geo_point",

    "@timestamp": "date"

}

2. Mapping fields

A mapping tells Elasticsearch the format of the log data and how to process it. For example, the speaker field is a string that should not be analyzed: even though the value could be broken into smaller tokens, it is indexed and handled as a single whole.

curl -XPUT http://192.168.2.11:9200/shakespeare -d '

{

 "mappings" : {

  "_default_" : {

   "properties" : {

    "speaker": { "type": "string", "index": "not_analyzed" },

    "play_name": { "type": "string", "index": "not_analyzed" },

    "line_id": { "type": "integer" },

    "speech_number": { "type": "integer" }

   }

  }

 }

}

';

{"acknowledged":true}
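The effect of marking a string field as not_analyzed can be illustrated with a small sketch. This is only a rough illustration of the idea, not Elasticsearch's actual analyzer (the standard analyzer does more than lowercase and split):

```python
# Sketch: analyzed vs. not_analyzed string fields.
# (Illustration only -- Elasticsearch's standard analyzer does more than this.)

def analyzed_terms(value):
    """Roughly what an analyzed field is indexed as: lowercase tokens."""
    return value.lower().split()

def not_analyzed_terms(value):
    """A not_analyzed field is indexed as one exact term."""
    return [value]

speaker = "HENRY IV"
print(analyzed_terms(speaker))      # ['henry', 'iv'] -- two separate terms
print(not_analyzed_terms(speaker))  # ['HENRY IV'] -- one whole term
```

With not_analyzed, a terms aggregation on speaker buckets by the full name "HENRY IV" instead of by the fragments "henry" and "iv".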

Mappings for the log data:

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.18 -d '

{

  "mappings": {

    "log": {

      "properties": {

        "geo": {

          "properties": {

            "coordinates": {

              "type": "geo_point"

            }

          }

        }

      }

    }

  }

}

';

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.19 -d '

{

  "mappings": {

    "log": {

      "properties": {

        "geo": {

          "properties": {

            "coordinates": {

              "type":"geo_point"

            }

          }

        }

      }

    }

  }

}

';

curl -XPUT http://192.168.2.11:9200/logstash-2015.05.20 -d '

{

  "mappings": {

    "log": {

      "properties": {

        "geo": {

          "properties": {

            "coordinates": {

              "type":"geo_point"

            }

          }

        }

      }

    }

  }

}

';

The accounts data does not need a mapping.

3. Loading the data with the Elasticsearch bulk API

The data files can be placed on either the Logstash server or the Elasticsearch server.

curl -XPOST '192.168.2.11:9200/bank/account/_bulk?pretty' --data-binary @accounts.json

curl -XPOST '192.168.2.11:9200/shakespeare/_bulk?pretty' --data-binary @shakespeare.json

curl -XPOST '192.168.2.11:9200/_bulk?pretty' --data-binary @logs.jsonl
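The _bulk API expects newline-delimited JSON: an action line followed by a source line for each document, and the body must end with a newline. A minimal Python sketch of how such a payload could be built (the sample documents here are illustrative):

```python
import json

def build_bulk_payload(index, doc_type, docs):
    """Build an NDJSON body for the Elasticsearch _bulk API:
    one action line, then one source line, per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type}}))
        lines.append(json.dumps(doc))
    # the bulk body must terminate with a trailing newline
    return "\n".join(lines) + "\n"

docs = [
    {"account_number": 1, "balance": 39225, "firstname": "Amber"},
    {"account_number": 6, "balance": 5686, "firstname": "Hattie"},
]
payload = build_bulk_payload("bank", "account", docs)
print(payload)
```

The curl commands above send exactly this kind of body with --data-binary, which preserves the newlines that the bulk API relies on.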

You can check the load status with:

curl '192.168.2.11:9200/_cat/indices?v'

The output looks roughly like this:

[cendish@es1 logs]$ curl '192.168.2.11:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank                  5   1       1000            0    475.7kb        475.7kb
yellow open   .kibana               1   1          2            0     11.6kb         11.6kb
yellow open   shakespeare           5   1     111396            0     18.4mb         18.4mb
yellow open   logstash-2016.10.09   5   1        100            0    241.8kb        241.8kb
yellow open   logstash-2015.05.20   5   1          0            0       795b           795b
yellow open   logstash-2015.05.18   5   1          0            0       795b           795b
yellow open   logstash-2015.05.19   5   1          0            0       795b           795b
[cendish@es1 logs]$
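The _cat/indices output is a whitespace-delimited table, so a quick sanity check after loading could parse it and verify the document counts. A sketch, assuming the header row shown above (the sample rows are abbreviated):

```python
def parse_cat_indices(text):
    """Parse the whitespace-delimited output of GET _cat/indices?v
    into a list of dicts keyed by the header row."""
    lines = [line for line in text.splitlines() if line.strip()]
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

sample = """health status index       pri rep docs.count docs.deleted store.size pri.store.size
yellow open   bank          5   1       1000            0    475.7kb        475.7kb
yellow open   shakespeare   5   1     111396            0     18.4mb         18.4mb"""

rows = parse_cat_indices(sample)
assert rows[0]["index"] == "bank" and rows[0]["docs.count"] == "1000"
```

This relies on none of the column values containing spaces, which holds for _cat/indices.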

1.2 Defining Index Patterns

Each set of data loaded to Elasticsearch has an index pattern. In the previous section, the Shakespeare data set has an index named shakespeare, and the accounts data set has an index named bank. An index pattern is a string with optional wildcards that can match multiple indices. For example, in the common logging use case, a typical index name contains the date in YYYY.MM.DD format, and an index pattern for May would look something like logstash-2015.05*.
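The wildcard matching an index pattern performs is ordinary glob matching, which can be sketched with Python's fnmatch against the indices loaded earlier:

```python
from fnmatch import fnmatch

indices = ["logstash-2015.05.18", "logstash-2015.05.19",
           "logstash-2015.05.20", "logstash-2016.10.09", "bank"]

# The May 2015 index pattern from the example above
pattern = "logstash-2015.05*"
matched = [name for name in indices if fnmatch(name, pattern)]
print(matched)  # the three May 2015 indices
```

Queries issued through the pattern fan out to every matched index, which is why dated index names make time-bounded searches cheap.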

Open http://192.168.2.31:5601 in a browser.

Settings -> Indices -> Add New -> Create [make sure 'Index contains time-based events' is unchecked]



1.3 Discovering the Data

Discover -> Choose a pattern -> Input search expression


You can construct searches by using the field names and the values you're interested in. With numeric fields you can use comparison operators such as greater than (>), less than (<), or equals (=). You can link elements with the logical operators AND, OR, and NOT, all in uppercase.

For example: account_number:<100 AND balance:>47500
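The condition that example expression applies can be checked against a few sample documents in plain Python (the account values here are illustrative, not from the loaded data set):

```python
accounts = [
    {"account_number": 25, "balance": 40540},
    {"account_number": 87, "balance": 48150},
    {"account_number": 259, "balance": 56460},
]

# The same condition as the search expression:
# account_number:<100 AND balance:>47500
hits = [a for a in accounts
        if a["account_number"] < 100 and a["balance"] > 47500]
print(hits)  # only account 87 satisfies both conditions
```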


If you only want to display specific columns, add those columns from the field list.


1.4 Data Visualization

Visualize -> Create a New Visualization -> Pie Chart -> From a New Search -> Choose a pattern [ban*]


Visualizations depend on Elasticsearch aggregations in two different types: bucket aggregations and metric aggregations. A bucket aggregation sorts your data according to criteria you specify. For example, in our accounts data set, we can establish a range of account balances, then display what proportions of the total fall into which range of balances.
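The idea behind a range bucket aggregation can be sketched in a few lines of Python: count how many values fall into each half-open range, as the pie chart will do for account balances (the balances and ranges below are illustrative):

```python
def bucket_by_ranges(balances, ranges):
    """Count how many balances fall into each [lo, hi) range --
    the same idea as an Elasticsearch range bucket aggregation."""
    counts = {}
    for lo, hi in ranges:
        counts[f"{lo}-{hi}"] = sum(1 for b in balances if lo <= b < hi)
    return counts

balances = [1200, 15000, 2500, 31000, 47000, 8000]
ranges = [(0, 10000), (10000, 30000), (30000, 50000)]
print(bucket_by_ranges(balances, ranges))
# {'0-10000': 3, '10000-30000': 1, '30000-50000': 2}
```

A metric aggregation would instead compute a value per bucket (count, average, sum); the pie chart uses the per-bucket counts as slice sizes.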




After setting the ranges, click the "Apply changes" button to generate the pie chart.



Reposted from blog.csdn.net/bianchengninhao/article/details/80690581