Syncing with go-mysql-elasticsearch


git clone https://github.com/siddontang/go-mysql-elasticsearch.git
Then update the module dependencies:
go mod tidy

Then build with make.
After editing the etc/river.toml config file, start it directly:
./bin/go-mysql-elasticsearch ./etc/river.toml
If you need it to run in the background, use: nohup ./bin/go-mysql-elasticsearch ./etc/river.toml &
This way, even after you exit the container, inserts, updates, and deletes are synced automatically. The official README says only Elasticsearch versions below 6.0 are supported; ignore that, as I have personally verified that 7.8.1 syncs fine as well.
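Before starting the river, it is worth confirming that MySQL has row-based binlog replication enabled, which go-mysql-elasticsearch requires. A hedged sketch, assuming the mysql client is on PATH and using the host/credentials from the river.toml below; adjust for your environment:

```shell
# Query the binlog format (-N suppresses the column header row).
fmt=$(mysql -h IP -P 3306 -uroot -p123456 -N \
      -e "SHOW VARIABLES LIKE 'binlog_format'" | awk '{print $2}')
echo "binlog_format is $fmt"   # must be ROW for the sync to work

# If it is not ROW, set it under [mysqld] in my.cnf and restart MySQL:
#   log-bin       = mysql-bin
#   binlog_format = ROW
#   server-id     = 1
```

The replication privilege mentioned in the config comments below is also required for the same user.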

Use my own config file below rather than the official documentation, which is poor:

# MySQL address, user and password
# user must have replication privilege in MySQL.
my_addr = "IP:3306"
my_user = "root"
my_pass = "123456"
#my_charset = "utf8"

# Set true when Elasticsearch uses https
es_https = false
# Elasticsearch address
es_addr = "IP:9200"
# Elasticsearch user and password, maybe set by shield, nginx, or x-pack
es_user = ""
es_pass = ""

# Path to store data, like master.info. If not set or empty,
# breakpoint-resume syncing is not supported.
# TODO: support other storage, like etcd.
data_dir = "./var"

# Inner HTTP status address
stat_addr = "127.0.0.1:12800"
stat_path = "/metrics"

# pseudo server id like a slave
server_id = 1001

# mysql or mariadb
flavor = "mysql"

# mysqldump execution path
# if not set or empty, ignore mysqldump.
mysqldump = "mysqldump"

# if we have no privilege to use mysqldump with --master-data,
# we must skip it.
#skip_master_data = false

# minimal items to be inserted in one bulk
bulk_size = 128

# force flush the pending requests if we don't have enough items >= bulk_size
flush_bulk_time = "200ms"

# Ignore tables without a primary key
skip_no_pk_table = false

# MySQL data source
[[source]]
schema = "official-website"
tables = ["xc_cases"]

[[rule]]
schema = "official-website"
table = "xc_cases"
index = "xc_cases"
type = "buildingtype"
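Once the river is running, a quick way to confirm the sync is to check the Elasticsearch document count, change a row in MySQL, and check again. A hedged sketch, assuming the addresses and the xc_cases index from the config above:

```shell
# Document count in the target index before the change.
curl -s "http://IP:9200/xc_cases/_count" \
  | grep -o '"count":[0-9]*' | cut -d: -f2

# Insert, update, or delete a row in official-website.xc_cases via the
# mysql client, then re-run the count; it should change within
# roughly flush_bulk_time (200ms in the config above).

# The river also exposes internal status on the stat address configured above:
curl -s "http://127.0.0.1:12800/metrics" | head
```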


Reposted from blog.csdn.net/XiaoAnGeGe/article/details/107958250