Elasticsearch Study Notes: The JDBC Plugin

When a relational database grows past a certain volume of data, queries can become very slow. In that case we can import the data from MySQL into Elasticsearch and query it there: Elasticsearch maintains a full-text index and supports near real-time search, so queries are usually much faster than running them against the database directly. Below we walk through such an import; the experiment's architecture is shown in the diagram below:
  (Architecture diagram: MySQL → Logstash → Elasticsearch)
  Since we are syncing data from MySQL into Elasticsearch, we first need Logstash together with its JDBC input plugin and Elasticsearch output plugin: logstash-input-jdbc and logstash-output-elasticsearch.

[root@node1 ~]# cd /usr/local/logstash/bin/
[root@node1 bin]# ./logstash-plugin install logstash-input-jdbc 
	Validating logstash-input-jdbc
   	Installing logstash-input-jdbc
Installation successful
[root@node1 bin]# ./logstash-plugin install logstash-output-elasticsearch
	Validating logstash-output-elasticsearch
	Installing logstash-output-elasticsearch
	Installation successful
[root@node1 bin]#
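
As an optional sanity check, the installed plugins can be listed (a hedged example; the grep filter is only for convenience and the output is omitted here):

./logstash-plugin list | grep -E 'jdbc|elasticsearch'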

Logstash is built on JRuby, so we also need MySQL's JDBC connector jar. Download it from the official MySQL site; here I used version 5.1.48 (the download is packaged as mysql-connector-java-5.1.48.zip) and placed it under Logstash's config directory.

[root@node1 config]# ls mysql-connector-java-5.1.48.zip -l
-rw-r--r-- 1 root root 4814134 Dec 28 20:33 mysql-connector-java-5.1.48.zip
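
The download is a zip archive that contains the actual driver jar. A minimal sketch of unpacking it so the jar ends up at the path referenced later by jdbc_driver_library (the directory layout inside the archive and the target directory are assumptions and may differ slightly by version):

unzip mysql-connector-java-5.1.48.zip
mkdir -p /usr/local/logstash/mysql
cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48.jar /usr/local/logstash/mysql/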

## Create the configuration file ##
[root@node1 config]# mkdir mysql
[root@node1 config]# mv mysql-connector-java-5.1.48.zip mysql
[root@node1 config]# cd mysql/
[root@node1 mysql]# vim logstash-mysql-es.conf
input {
  jdbc {
    # MySQL connection parameters: host, port, database name, user and password
    jdbc_connection_string => "jdbc:mysql://192.168.146.130:3306/herosdb?character_set_server=utf8mb4"
    jdbc_user => "root"
    jdbc_password => "redhat"

    # Path to the MySQL JDBC driver jar and the driver class to load
    jdbc_driver_library => "/usr/local/logstash/mysql/mysql-connector-java-5.1.48.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "1"

    # Time zone used when converting date/time values
    jdbc_default_timezone => "Asia/Shanghai"

    # The result set of this SQL statement is the data set that gets indexed
    statement => "select * from heros"

    # crontab-like schedule; here the sync runs once per minute
    # (minute hour day-of-month month day-of-week)
    schedule => "* * * * *"

    # Whether to track the value of a specific column. When true, the column
    # named by tracking_column is tracked; when false, the timestamp of the
    # last run is tracked instead.
    use_column_value => true

    # Required when use_column_value is true: the column to track. It must be
    # monotonically increasing, typically the MySQL primary key.
    tracking_column => "id"
    tracking_column_type => "numeric"

    # File in which the value from the last run is persisted
    last_run_metadata_path => "/usr/local/logstash/config/mysql/last_finder_patent_id"

    # Whether to discard the last_run_metadata_path record; if true, every run
    # effectively starts from scratch and re-reads all database rows
    clean_run => false

    # Whether to lowercase column names
    lowercase_column_names => false
  }
}

# Output destination: the Elasticsearch host, port, index name and document id
output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
    index => "jike"
    document_id => "%{id}"
  }
}
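
Note that with statement => "select * from heros", every scheduled run re-selects and re-indexes the whole table (harmless here, since document_id => "%{id}" makes the writes idempotent, but it is why the document version keeps growing in the query result further below). The jdbc input also exposes the last tracked value as the :sql_last_value parameter, so an incremental variant would only change the statement, roughly like this sketch:

        statement => "select * from heros where id > :sql_last_value"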

[root@node2 mysql]# ../bin/logstash -f logstash-mysql-es.conf
Sending Logstash logs to /usr/local/logstash-6.6.2/logs which is now configured via log4j2.properties
[2020-01-02T15:25:31,054][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/local/logstash-6.6.2/data/queue"}
[2020-01-02T15:25:31,067][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/local/logstash-6.6.2/data/dead_letter_queue"}
[2020-01-02T15:25:31,766][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-01-02T15:25:31,784][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.2"}
[2020-01-02T15:25:31,834][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"b6818bca-2c1c-43da-8df7-bcfe70e378c8", :path=>"/usr/local/logstash-6.6.2/data/uuid"}
[2020-01-02T15:25:43,155][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-01-02T15:25:43,997][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
[2020-01-02T15:25:44,675][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
[2020-01-02T15:25:44,852][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-01-02T15:25:44,865][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-01-02T15:25:44,919][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1:9200"]}
[2020-01-02T15:25:44,971][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-01-02T15:25:45,027][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-01-02T15:25:45,415][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x32085ae5 run>"}
[2020-01-02T15:25:45,551][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-01-02T15:25:46,743][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-02T15:26:02,497][INFO ][logstash.inputs.jdbc     ] (0.016101s) SELECT version()
[2020-01-02T15:26:02,575][INFO ][logstash.inputs.jdbc     ] (0.000696s) SELECT version()
[2020-01-02T15:26:02,876][INFO ][logstash.inputs.jdbc     ] (0.000707s) SELECT count(*) AS `count` FROM (select * from heros) AS `t1` LIMIT 1
[2020-01-02T15:26:02,933][INFO ][logstash.inputs.jdbc     ] (0.000702s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 0
[2020-01-02T15:26:03,085][INFO ][logstash.inputs.jdbc     ] (0.001508s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 1
[2020-01-02T15:26:03,102][INFO ][logstash.inputs.jdbc     ] (0.001944s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 2
[2020-01-02T15:26:03,121][INFO ][logstash.inputs.jdbc     ] (0.005882s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 3
[2020-01-02T15:26:03,148][INFO ][logstash.inputs.jdbc     ] (0.001022s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 4
[2020-01-02T15:26:03,169][INFO ][logstash.inputs.jdbc     ] (0.003125s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 5
[2020-01-02T15:26:03,183][INFO ][logstash.inputs.jdbc     ] (0.001005s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 6
[2020-01-02T15:26:03,196][INFO ][logstash.inputs.jdbc     ] (0.005507s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 7
[2020-01-02T15:26:03,204][INFO ][logstash.inputs.jdbc     ] (0.002375s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 8
[2020-01-02T15:26:03,212][INFO ][logstash.inputs.jdbc     ] (0.003843s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 9
[2020-01-02T15:26:03,224][INFO ][logstash.inputs.jdbc     ] (0.006120s) SELECT * FROM (select * from heros) AS `t1` LIMIT 1 OFFSET 10
...................................................


## Check that the corresponding index was created ##
[root@node2 ~]# curl -XGET -H "Content-Type:application/json"   'http://localhost:9200/_cat/indices?v' 
health status index     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   jike      lC8W40rFRoaqkx7Cs8VqPw   2   1         69            0    107.1kb        107.1kb
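
The yellow health status means the replica shard (rep 1) could not be allocated, which is expected when Elasticsearch runs on a single node. If a green status is desired, the replica count can be dropped with the standard index settings API (a hedged example using the index name from above):

curl -XPUT -H "Content-Type: application/json" 'localhost:9200/jike/_settings' -d '{"index": {"number_of_replicas": 0}}'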
[root@node2 mysql]# cat last_finder_patent_id 
--- 10068
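
This file is simply the YAML-serialized value of the tracking column from the most recent run (here the highest id seen, 10068). Deleting it, or setting clean_run => true, makes the next run start again from the beginning.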
[root@node2 config]# curl -XGET -H Content-Type:application/json 'localhost:9200/jike/doc/10001?pretty'
{
  "_index" : "jike",
  "_type" : "doc",
  "_id" : "10001",
  "_version" : 81,
  "_seq_no" : 2611,
  "_primary_term" : 11,
  "found" : true,
  "_source" : {
    "hp_growth" : 275.0,
    "hp_5s_max" : 92.0,
    "attack_range" : "近战",
    "@version" : "1",
    "hp_5s_start" : 48.0,
    "@timestamp" : "2020-01-02T07:45:00.042Z",
    "defense_max" : 409.0,
    "hp_start" : 3150.0,
    "hp_max" : 7000.0,
    "defense_growth" : 22.06999969482422,
    "role_assist" : "坦克",
    "mp_growth" : 95.0,
    "defense_start" : 100.0,
    "name" : "钟无艳",
    "mp_5s_max" : 37.0,
    "mp_start" : 430.0,
    "attack_speed_max" : 0.0,
    "role_main" : "战士",
    "birthdate" : null,
    "hp_5s_growth" : 3.1429998874664307,
    "id" : 10001,
    "attack_start" : 164.0,
    "mp_max" : 1760.0,
    "attack_growth" : 11.0,
    "mp_5s_start" : 15.0,
    "attack_max" : 318.0,
    "mp_5s_growth" : 1.5709999799728394
  }
}
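
With the rows indexed, full-text queries can be issued directly against Elasticsearch. A minimal sketch of a search on the jike index (the field name comes from the document above; the query term is only an example):

curl -XGET -H "Content-Type: application/json" 'localhost:9200/jike/_search?pretty' -d '
{
  "query": { "match": { "name": "钟无艳" } }
}'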

Data visualization with Kibana behind an Nginx reverse proxy
  Kibana is the open-source analytics and visualization platform designed for Elasticsearch. You can use Kibana to search, view and interact with data stored in Elasticsearch indices, and to build advanced analyses and visualizations presented as charts.
  Kibana's use cases fall mainly into two areas:
  1) Real-time monitoring: with the histogram panel and different query conditions you can slice an event along many dimensions into different time series, and time-series data is the most common basis for monitoring and alerting.
  2) Problem analysis: for the ELK use case, compare the commercial product Splunk, whose value lies in making information collection and processing intelligent. That shows up in search (drilling down through the data to rule out causes and fixing problems by root-cause analysis), in real-time visibility (combining system checks with alerting to track SLAs and performance issues), and in historical analysis (finding trends, historical patterns, behavioral baselines and thresholds, and producing compliance reports).

[root@node2 conf]# cat /usr/local/kibana-6.6.2-linux-x86_64/config/kibana.yml | grep -v "^#"  | grep -v   "^$"
server.port: 5601
server.host: "192.168.146.130"
server.name: "node2"
elasticsearch.hosts: ["http://localhost:9200"]
[root@node2 config]# ../bin/kibana
[root@node2 ~]# htpasswd -c /usr/local/kibana-6.6.2-linux-x86_64/config/.htpasswd   jyy
New password: 
Re-type new password: 
Adding password for user jyy
[root@node2 conf]# vim nginx.conf
  location /kibana {
            proxy_pass  http://localhost:5601;
            auth_basic "Restricted Content";
            auth_basic_user_file /usr/local/kibana-6.6.2-linux-x86_64/config/.htpasswd;
        }
[root@node2 conf]# ../sbin/nginx -t 
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@node2 conf]# 
[root@node2 conf]# ../sbin/nginx 
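
One caveat: when Kibana is served under a sub-path such as /kibana, Kibana 6.x usually also has to be told about that prefix, otherwise the assets and API calls it generates against / may return 404 through the proxy. A hedged kibana.yml sketch using the documented server.basePath / server.rewriteBasePath settings (server.rewriteBasePath exists from 6.3 onwards; adjust to the version in use):

server.basePath: "/kibana"
server.rewriteBasePath: true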

(Screenshots: Kibana accessed through the Nginx reverse proxy)

Reposted from blog.csdn.net/Micky_Yang/article/details/103747916