Keepalived+Haproxy+Mycat+MySQL

I. Architecture Overview

Software environment: MySQL 5.7.17, mha4mysql 0.56, Mycat-server 1.6, HAProxy 1.5.18, Keepalived 1.3.5-1

               +-------------+      +-----------+       +--------------------------+
               | keepalived  |      |  +-----+  |       | +--------+    +--------+ |
               |-------------|      |  |mycat|  |  ==>  | |mysql(M)|<==>|mysql(M)| |
               |  +-------+  |      |  +-----+  |       | +--------+    +--------+ |
               |  |haproxy|=>| ==>  |           |       | MHA (or another multi-   |
               |  +-------+  |      |           |       | master HA scheme)        |
client --> vip |    (HA)     |      |           |  vip  | +--------+    +--------+ |
               |             |      |           |       | |mysql(S)|    |mysql(S)| |
               |  +-------+  |      |  +-----+  |       | +--------+    +--------+ |
               |  |haproxy|=>| ==>  |  |mycat|  |  ==>  |        slave pool        |
               |  +-------+  |      |  +-----+  |       +--------------------------+
               +-------------+      +-----------+

II. Building the MHA Database Cluster

Role                               IP address        Hostname
Master database server             192.168.1.51      master51
Standby master database server     192.168.1.52      master52
Slave server 1                     192.168.1.53      slave53
Slave server 2                     192.168.1.54      slave54
MHA manager server                 192.168.1.65      mham
VIP                                192.168.1.100     -

1. Install the MHA packages

• Install the mha4mysql-node package on all database servers

]# yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm

• Install both the mha4mysql-node and mha4mysql-manager packages on the manager host (mham)

[root@mham ~]# yum -y install mha4mysql-node-0.56-0.el6.noarch.rpm

[root@mham ~]# yum -y install perl-ExtUtils-* perl-CPAN-*

[root@mham ~]# tar -zxf mha4mysql-manager-0.56.tar.gz

[root@mham ~]# cd mha4mysql-manager-0.56

[root@mham mha4mysql-manager-0.56]# perl Makefile.PL

[root@mham mha4mysql-manager-0.56]# make

[root@mham mha4mysql-manager-0.56]# make install

• Commands installed by the manager package

masterha_check_ssh         // check the MHA SSH configuration

masterha_check_repl        // check MySQL master/slave replication

masterha_manager           // start MHA

masterha_check_status      // check MHA's running status

masterha_master_monitor    // detect whether the master is down

2. Configure SSH key-pair authentication

• Every database server must be able to log in to every other database server as root over SSH with key-pair authentication

• The manager host must be able to log in to every data node as root over SSH with key-pair authentication
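
One common way to set this up is with ssh-keygen and ssh-copy-id; a minimal sketch (run on mham and then repeat on each database server, adjusting the target IPs):

[root@mham ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
[root@mham ~]# ssh-copy-id root@192.168.1.51     // repeat for 192.168.1.52, .53 and .54
[root@mham ~]# ssh root@192.168.1.51 hostname    // verify password-less login works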

3. Manually configure the VIP in advance

[root@master51 ~]# ifconfig eth0:1 192.168.1.100/24
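
To confirm the VIP is active, you can check the interface alias, for example:

[root@master51 ~]# ifconfig eth0:1     // the output should show inet 192.168.1.100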

4. Database configuration on master51

[root@master51 ~]# vim /etc/my.cnf

[mysqld]

plugin-load ="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"

relay_log_purge=0

rpl-semi-sync-master-enabled = 1

rpl-semi-sync-slave-enabled = 1

server_id=51

log-bin=master51

binlog-format="mixed"

[root@master51 ~]# systemctl restart mysqld

• Create the replication user

mysql> grant replication slave on *.* to repluser@"%" identified by "123456";
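
Before configuring the other servers, note the current binary log file and position on master51; these are the values (master51.000001 / 520 in this article) used in the change master to statements below:

mysql> show master status;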

5. Database configuration on the standby master, master52

[root@master52 ~]# vim /etc/my.cnf

[mysqld]

plugin-load ="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"

rpl-semi-sync-master-enabled = 1

rpl-semi-sync-slave-enabled = 1

server_id=52

log-bin=master52

binlog-format="mixed"

• Configure replication from master51

mysql> change master to

-> master_host="192.168.1.51",

-> master_user="repluser",

-> master_password="1234546",

-> master_log_file="master51.000001",

-> master_log_pos=520;

mysql> start slave;

6. Database configuration on slave53

[root@slave53 ~]# vim /etc/my.cnf

[mysqld]

server_id=53

• Configure replication from master51

mysql> change master to

-> master_host="192.168.1.51",

-> master_user="repluser",

-> master_password="123456",

-> master_log_file="master51.000001",

-> master_log_pos=520;

mysql> start slave;

7. Database configuration on slave54

[root@slave54 ~]# vim /etc/my.cnf

[mysqld]

server_id=54

• Configure replication from master51

mysql> change master to

-> master_host="192.168.1.51",

-> master_user="repluser",

-> master_password="123456",

-> master_log_file="master51.000001",

-> master_log_pos=520;

mysql> start slave;
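
On master52, slave53 and slave54, confirm that replication is healthy before continuing; both Slave_IO_Running and Slave_SQL_Running should report Yes:

mysql> show slave status\G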

8. Configure the manager host

• Manager node configuration file

[root@mham ~]# cp mha4mysql-manager-0.56/bin/* /usr/local/bin/

[root@mham ~]# mkdir /etc/mha/

[root@mham mha4mysql-manager-0.56]# cp samples/conf/app1.cnf     /etc/mha/

[root@mham ~]# vim /etc/mha/app1.cnf

[server default]

manager_workdir=/etc/mha

manager_log=/etc/mha/manager.log

master_ip_failover_script=/usr/local/bin/master_ip_failover     // switchover script used during automatic failover; its content is shown next, after which the app1.cnf listing continues

#!/usr/bin/env perl

#  Copyright (C) 2011 DeNA Co.,Ltd.
#
#  This program is free software; you can redistribute it and/or modify
#  it under the terms of the GNU General Public License as published by:
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.
#
#  This program is distributed in the hope that it will be useful,
#  but WITHOUT ANY WARRANTY; without even the implied warranty of
#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#  GNU General Public License for more details.
#
#  You should have received a copy of the GNU General Public License
#   along with this program; if not, write to the Free Software
#  Foundation, Inc.,
#  51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

## Note: This is a sample script and is not complete. Modify the script based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;
use MHA::DBHelper;

my (
  $command,        $ssh_user,         $orig_master_host,
  $orig_master_ip, $orig_master_port, $new_master_host,
  $new_master_ip,  $new_master_port,  $new_master_user,
  $new_master_password
);

my $vip = '192.168.1.100/24';  # Virtual IP
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";

GetOptions(
  'command=s'             => \$command,
  'ssh_user=s'            => \$ssh_user,
  'orig_master_host=s'    => \$orig_master_host,
  'orig_master_ip=s'      => \$orig_master_ip,
  'orig_master_port=i'    => \$orig_master_port,
  'new_master_host=s'     => \$new_master_host,
  'new_master_ip=s'       => \$new_master_ip,
  'new_master_port=i'     => \$new_master_port,
  'new_master_user=s'     => \$new_master_user,
  'new_master_password=s' => \$new_master_password,
);

exit &main();

sub main {
  if ( $command eq "stop" || $command eq "stopssh" ) {

    # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
    # If you manage master ip address at global catalog database,
    # invalidate orig_master_ip here.
    my $exit_code = 1;
    eval {

      # updating global catalog, etc
      &stop_vip();
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {

    # all arguments are passed.
    # If you manage master ip address at global catalog database,
    # activate new_master_ip here.
    # You can also grant write access (create user, set read_only=0, etc) here.
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );

      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print "Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();

      ## Creating an app user on the new master
      print "Creating app user on the new master..\n";
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();

      ## Update master ip on the catalog database, etc
      &start_vip();
      $exit_code = 0;
    };
    if ($@) {
      warn $@;

      # If you want to continue failover, exit 10.
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {

    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
  print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

The remaining [server default] parameters in /etc/mha/app1.cnf:

ssh_user=root

ssh_port=22

repl_user=repluser

repl_password=123456

user=root     // user name for connecting to the databases

password=123456     // its password

• Configure the manager host (continued)

[server1]

hostname=192.168.1.51

candidate_master=1

port=3306

[server2]

hostname=192.168.1.52

port=3306

candidate_master=1     // eligible to be promoted to master

[server3]

hostname=192.168.1.53

port=3306

no_master=1     // never promoted to master

[server4]

hostname=192.168.1.54

port=3306

no_master=1     // never promoted to master
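
The failover script listed earlier has to exist at the path referenced by master_ip_failover_script and must be executable, roughly:

[root@mham ~]# vim /usr/local/bin/master_ip_failover     // paste the script shown above
[root@mham ~]# chmod +x /usr/local/bin/master_ip_failover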

9. Test the cluster configuration

• On the manager node, run masterha_check_ssh to verify SSH connectivity

[root@mham ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf

• On the manager node, check the replication environment: masterha_check_repl reports the state of the whole cluster

[root@mham ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
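
If everything is configured correctly, masterha_check_ssh should finish by reporting that all SSH connection tests passed, and masterha_check_repl should end with "MySQL Replication Health is OK."; resolve any reported errors before starting the manager.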

10. Start MHA

– --remove_dead_master_conf     // remove the failed master's entry from app1.cnf after failover

– --ignore_last_failover     // ignore the marker file left behind by the previous failover

[root@mham ~ ]# masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover

• Check the MHA service status with masterha_check_status:

[root@mham bin]# masterha_check_status --conf=/etc/mha/app1.cnf
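
masterha_manager runs in the foreground and exits once a failover has been performed, so it is usually started in the background; one common pattern (the redirect target here is just an example):

[root@mham ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /etc/mha/manager_start.log 2>&1 &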

III. Read/Write Splitting and Load Balancing for the MHA Cluster with Mycat

1. On the master database (master51), create a user for read queries

mysql> create user 'read'@'%' identified by '123456';     // this password must match the readHost password in schema.xml below

mysql> grant select on *.* to 'read'@'%';

2. Install java-1.8.0-openjdk-devel on both Mycat servers

3. tar -zxf Mycat-server-1.6-beta-20150604171601-linux.tar.gz     // binary tarball, no installation required

[root@mycat01 ~]# mv mycat/ /usr/local/

4. Edit the configuration file /usr/local/mycat/conf/server.xml

        <user name="root">
                <property name="password">123456</property>
                <property name="schemas">mydb</property>
        </user>

        <user name="read">
                <property name="password">123456</property>
                <property name="schemas">mydb</property>
                <property name="readOnly">true</property>
        </user>

5. Edit the configuration file /usr/local/mycat/conf/schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
        <schema name="mydb" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1">
        </schema>
        <dataNode name="dn1" dataHost="localhost1" database="mydb" />
        <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3" writeType="0" dbType="mysql" dbDriver="native">
                <heartbeat>select user()</heartbeat>
                <writeHost host="hostMaster" url="192.168.1.100:3306" user="root" password="123456">
                        <readHost host="hostS2" url="192.168.1.52:3306" user="read" password="123456" />
                        <readHost host="hostS3" url="192.168.1.53:3306" user="read" password="123456" />
                        <readHost host="hostS4" url="192.168.1.54:3306" user="read" password="123456" />
                </writeHost>

        </dataHost>
</mycat:schema>

6. Notes on the configuration files: conf/server.xml can be left unchanged, but note that the virtual schema name in <property name="schemas">TESTDB</property> must match the schema defined later. The schemas property lists the logical databases visible to this user; several logical databases can be given, separated by commas. The user name and password here are the credentials for connecting to Mycat and have nothing to do with the MySQL instances' credentials. Mycat's default client port is 8066 and its management port is 9066.

schema: logical database

dataNode: data node

dataHost: the addresses and connection settings of the read and write hosts behind a node

balance selects the load-balancing mode; there are currently four values:
balance="0": read/write splitting is disabled; all reads go to the currently available writeHost.
balance="1": all readHosts and the standby writeHost take part in load balancing of select statements.
balance="2": all reads are distributed randomly across the writeHosts and readHosts.
balance="3": all reads are distributed randomly across the readHosts of the writeHost, and the writeHost carries no read load.

switchType selects the failover mode; there are also four values:
switchType="-1": no automatic switching.
switchType="1": the default; automatic switching.
switchType="2": switch based on the MySQL replication status; the heartbeat statement is show slave status.
switchType="3": switching for MySQL Galera Cluster (suitable for such clusters, since 1.4.1); the heartbeat statement is show status like 'wsrep%'.

writeType controls how write operations are distributed:
writeType="0": all writes are sent to the currently available writeHost.
writeType="1": all writes are sent randomly to the configured writeHosts.
writeType="2": all writes are distributed randomly across the writeHosts and readHosts.

7. Once the configuration is complete, connect to Mycat and run a query, for example mysql -uroot -p123456 -h192.168.1.13 -P8066 -e 'select @@hostname;' (use the address of one of your Mycat servers). Run it several times and you can see the round-robin effect across the read hosts.

8. For the second Mycat server, copy /usr/local/mycat from the first server to the same path and start the service, as shown below.
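
Mycat is started with the script shipped in its bin directory; assuming the default layout of the tarball, starting and checking it looks roughly like this:

[root@mycat01 ~]# /usr/local/mycat/bin/mycat start
[root@mycat01 ~]# tail /usr/local/mycat/logs/wrapper.log     // confirm the server started without errors
[root@mycat01 ~]# ss -tlnp | grep 8066                       // 8066 is Mycat's client port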

IV. Configuring the Two HAProxy Schedulers

1. Install haproxy with yum

2. Edit /etc/haproxy/haproxy.cfg and add a listener that forwards to the Mycat servers:

listen mycat_01 *:3306
    mode    tcp        # MySQL traffic must be proxied at the TCP level
    option  tcpka      # enable TCP keepalive (long-lived connections)
    balance leastconn  # least-connections scheduling
    server  mycat_01 192.168.1.13:8066 check inter 3000 rise 1 maxconn 1000 fall 3
    server  mycat_02 192.168.1.14:8066 check inter 3000 rise 1 maxconn 1000 fall 3

3. Configure the second HAProxy server the same way
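
After editing the configuration on both machines, validate it and start the service (standard HAProxy/systemd commands):

[root@haproxy01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg     // configuration syntax check
[root@haproxy01 ~]# systemctl enable --now haproxy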

V. Keepalived High Availability on the Two HAProxy Servers

1. Install keepalived with yum

2. Edit keepalived.conf on haproxy01 (the two servers act as master and backup for each other, each being master for one VIP)

! Configuration File for keepalived
global_defs {
    router_id haproxy01
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"     # cheaper than pidof
    interval 2                      # check every 2 seconds
}

vrrp_instance Mycat {
    state BACKUP
    interface eth0
    track_interface {
        eth0
    }
    virtual_router_id 150
    priority 200
    ! nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass abcdef
    }
    virtual_ipaddress {
        192.168.1.8/24 brd 192.168.1.255 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy    # with no weight set, the instance enters FAULT state when haproxy is not running
    }
}

vrrp_instance Mycat1 {
    state BACKUP
    interface eth0
    track_interface {
        eth0
    }
    virtual_router_id 151
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass fedcba
    }
    virtual_ipaddress {
        192.168.1.9/24 brd 192.168.1.255 dev eth0 label eth0:2
    }
    track_script {
        chk_haproxy    # with no weight set, the instance enters FAULT state when haproxy is not running
    }
}

3. Configuration on haproxy02

! Configuration File for keepalived
global_defs {
    router_id haproxy02
}
vrrp_script chk_haproxy {
    script "killall -0 haproxy"     # cheaper than pidof
    interval 2                      # check every 2 seconds
}

vrrp_instance Mycat {
    state BACKUP
    interface eth0
    track_interface {
        eth0
    }
    virtual_router_id 150
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass abcdef
    }
    virtual_ipaddress {
        192.168.1.8/24 brd 192.168.1.255 dev eth0 label eth0:1
    }
    track_script {
        chk_haproxy    # with no weight set, the instance enters FAULT state when haproxy is not running
    }
}

vrrp_instance Mycat1 {
    state BACKUP
    interface eth0
    track_interface {
        eth0
    }
    virtual_router_id 151
    priority 200
    !nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass fedcba
    }
    virtual_ipaddress {
        192.168.1.9/24 brd 192.168.1.255 dev eth0 label eth0:2
    }
    track_script {
        chk_haproxy    # with no weight set, the instance enters FAULT state when haproxy is not running
    }
}

4. When the haproxy service or the keepalived service on one node fails, the VIPs are switched over to the other node automatically.
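
A simple way to verify the switchover (interface and addresses as configured above): stop haproxy on the node currently holding a VIP and watch the address move to the other node, for example:

[root@haproxy01 ~]# systemctl stop haproxy      // chk_haproxy now fails and keepalived gives up the VIP
[root@haproxy02 ~]# ip addr show eth0           // 192.168.1.8 should appear here within a few seconds
[root@haproxy01 ~]# systemctl start haproxy     // restore the service afterwards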

Reposted from blog.csdn.net/qq_36586867/article/details/81508032