MySQL High Availability with MHA on CentOS 7 (Theory and Practice)

MySQL High Availability with MHA on CentOS 7 (Theory)

Introduction to MHA

MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton (Yoshinori Matsunobu) of the Japanese company DeNA (now at Facebook) and is an excellent piece of high-availability software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the master-slave switchover automatically within 0 to 30 seconds, and throughout the switchover it preserves data consistency as far as possible, achieving high availability in the true sense.

MHA consists of two components: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can be deployed on a slave node. MHA Node runs on every MySQL server as well as on the Manager server. MHA Manager probes the master in the cluster at regular intervals; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and then repoints all the other slaves to the newly promoted master. The entire failover process is completely transparent to the application layer.

During an automatic failover, MHA tries to save the binary logs from the crashed master to minimize data loss, but this is not always possible. For example, if the master fails at the hardware level or cannot be reached over SSH, MHA cannot save the binary logs and fails over anyway, losing the most recent data. Using MySQL 5.5 semi-synchronous replication reduces this risk, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all the other slaves, keeping the data consistent across all nodes.

Currently MHA mainly supports a one-master, multi-slave architecture. Building an MHA cluster requires at least three database servers in one replication group: one acting as master, one as standby master, and one as a slave. Because at least three machines are needed, Taobao modified MHA for cost reasons, and Taobao's TMHA now supports one master with a single slave. For a quick setup you can also refer to the MHA quick-setup guide. In practice a one-master, one-slave setup can be used as well, but if the master host itself goes down no switchover is possible and the missing binlog cannot be recovered; if only the mysqld process on the master crashes, the switchover can still succeed and the binlog can still be recovered.

Official project page: https://code.google.com/p/mysql-master-ha/

Failover Workflow

(1) Try to save the binary log events (binlog events) from the crashed master;

(2) Identify the slave with the most recent updates;

(3) Apply the differential relay log to the other slaves;

(4) Apply the binary log events saved from the master;

(5) Promote one slave to be the new master;

(6) Point the other slaves at the new master and resume replication.

MHA Tools

The MHA software consists of two parts, the Manager toolkit and the Node toolkit, described below.

The Manager toolkit mainly includes the following tools:

➢ masterha_check_ssh — checks MHA's SSH configuration

➢ masterha_check_repl — checks the MySQL replication status

➢ masterha_manager — starts MHA Manager

➢ masterha_check_status — checks the current MHA running status

➢ masterha_master_monitor — checks whether the master is down

➢ masterha_master_switch — controls failover (automatic or manual)

➢ masterha_conf_host — adds or removes configured server entries

The Node toolkit (these tools are normally invoked by the MHA Manager scripts and require no manual operation) mainly includes the following tools:

➢ save_binary_logs — saves and copies the master's binary logs

➢ apply_diff_relay_logs — identifies differential relay log events and applies the differences to the other slaves

➢ filter_mysqlbinlog — strips unnecessary ROLLBACK events (no longer used by MHA)

➢ purge_relay_logs — purges relay logs (without blocking the SQL thread)

Note: to minimize data loss when the master goes down because of a hardware failure, it is recommended to configure MySQL 5.5 semi-synchronous replication alongside MHA. The principles of semi-synchronous replication are left for the reader to look up. (It is not mandatory.)
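If you do enable semi-synchronous replication, a minimal sketch for MySQL/MariaDB 5.5 could look like the following (the plugin file names assume the stock semisync plugins shipped with 5.5; adjust credentials and hosts to your environment):

# on the current master
mysql -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"
mysql -e "SET GLOBAL rpl_semi_sync_master_enabled = 1; SET GLOBAL rpl_semi_sync_master_timeout = 1000;"

# on each slave
mysql -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"
mysql -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1;"
mysql -e "STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;"    # restart the IO thread so the setting takes effect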

MHA Environment

All servers run CentOS 7.x 64-bit. The master-slave replication setup is briefly demonstrated below, but replication hardening is not covered in detail. The MySQL Replication environment is as follows:

Role               IP Address         Hostname   Server ID   Notes
Primary Master     192.168.200.111    server01   1
Secondary Master   192.168.200.112    server02   2
Slave1             192.168.200.113    server03   3
Slave2             192.168.200.114    server04   4
Manager            192.168.200.115    server05   -           monitors the replication group



The Primary Master provides write service to clients. The standby Secondary Master (actually a slave) provides read service, and slave1 and slave2 also provide read service. If the Primary Master goes down, the standby Secondary Master is promoted to the new Primary Master, and slave1 and slave2 are repointed to the new master.

MySQL High Availability with MHA on CentOS 7 (Practice)

Environment Deployment

Role               IP Address         Hostname   Server ID   Notes
Primary Master     192.168.200.111    server01   1
Secondary Master   192.168.200.112    server02   2
Slave1             192.168.200.113    server03   3
Slave2             192.168.200.114    server04   4
Manager            192.168.200.115    server05   -           monitors the replication group

Configure hostnames on all hosts

master1 host:

hostname server01

bash

master2 host:

hostname server02

bash

slave1 host:

hostname server03

bash

slave2 host:

hostname server04

bash

manager host:

hostname server05

bash
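Note that the hostname command only renames the host for the current boot. To make the change survive a reboot on CentOS 7, hostnamectl can optionally be used as well, for example on the manager host (use the matching name on each of the other hosts):

hostnamectl set-hostname server05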


Configure hostname mappings on all hosts

[root@server05 ~]# vim /etc/hosts

192.168.200.111 server01

192.168.200.112 server02

192.168.200.113 server03

192.168.200.114 server04

192.168.200.115 server05

scp /etc/hosts 192.168.200.111:/etc/

scp /etc/hosts 192.168.200.112:/etc/

scp /etc/hosts 192.168.200.113:/etc/

scp /etc/hosts 192.168.200.114:/etc/

Disable the firewall and SELinux on all hosts

iptables -F

systemctl stop firewalld

setenforce 0
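These commands only take effect until the next reboot. If you also want the firewall and SELinux to stay off permanently, something like the following should work on CentOS 7:

systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # fully applied after a reboot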


Download mha-manager and mha-node

http://downloads.mariadb.com/MHA/


Install MHA Node

Install MHA Node and its Perl dependencies on all hosts

Install the EPEL repository:

rpm -ivh epel-release-latest-7.noarch.rpm

yum -y install perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

rpm -q perl-DBD-MySQL.x86_64 perl-DBI.x86_64 perl-CPAN perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

perl-DBD-MySQL-4.023-6.el7.x86_64

perl-DBI-1.627-4.el7.x86_64 

perl-CPAN-1.9800-292.el7.noarch

perl-ExtUtils-CBuilder-0.28.2.6-292.el7.noarch

perl-ExtUtils-MakeMaker-6.68-3.el7.noarch

Note: after installation it is recommended to check that all the required packages were installed.


Install MHA Node on all hosts

tar xf mha4mysql-node-0.56.tar.gz

cd mha4mysql-node-0.56/

perl Makefile.PL

make && make install


After MHA Node is installed, the following scripts appear in /usr/local/bin

ls -l /usr/local/bin/

total 40

-r-xr-xr-x 1 root root 16346 Dec 29 15:07 apply_diff_relay_logs

-r-xr-xr-x 1 root root 4807 Dec 29 15:07 filter_mysqlbinlog

-r-xr-xr-x 1 root root 7401 Dec 29 15:07 purge_relay_logs

-r-xr-xr-x 1 root root 7395 Dec 29 15:07 save_binary_logs

Install MHA Manager

Note: MHA Node must also be installed before installing MHA Manager.

First install the Perl modules that MHA Manager depends on (installed with yum here)

yum -y install perl perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes

yum -y install perl-Config-Tiny-2.14-7.el7.noarch.rpm

rpm -q perl cpan perl-Log-Dispatch perl-Parallel-ForkManager perl-DBD-MySQL perl-DBI perl-Time-HiRes perl-Config-Tiny

perl-5.16.3-292.el7.x86_64

perl-Log-Dispatch-2.41-1.el7.1.noarch

perl-Parallel-ForkManager-1.18-2.el7.noarch

perl-DBD-MySQL-4.023-6.el7.x86_64

perl-DBI-1.627-4.el7.x86_64

perl-Time-HiRes-1.9725-3.el7.x86_64

perl-Config-Tiny-2.14-7.el7.noarch

Note: in an earlier attempt perl-Config-Tiny.noarch failed to install this way and was installed with cpan instead (cpan install Config::Tiny).

Install the MHA Manager package

tar xf mha4mysql-manager-0.56.tar.gz

cd mha4mysql-manager-0.56/

perl Makefile.PL

make && make install


After installation the following script files are present

ls -l /usr/local/bin/

total 76

-r-xr-xr-x 1 root root 16346 Dec 29 15:08 apply_diff_relay_logs

-r-xr-xr-x 1 root root 4807 Dec 29 15:08 filter_mysqlbinlog

-r-xr-xr-x 1 root root 1995 Dec 29 15:37 masterha_check_repl

-r-xr-xr-x 1 root root 1779 Dec 29 15:37 masterha_check_ssh

-r-xr-xr-x 1 root root 1865 Dec 29 15:37 masterha_check_status

-r-xr-xr-x 1 root root 3201 Dec 29 15:37 masterha_conf_host

-r-xr-xr-x 1 root root 2517 Dec 29 15:37 masterha_manager

-r-xr-xr-x 1 root root 2165 Dec 29 15:37 masterha_master_monitor

-r-xr-xr-x 1 root root 2373 Dec 29 15:37 masterha_master_switch

-r-xr-xr-x 1 root root 3879 Dec 29 15:37 masterha_secondary_check

-r-xr-xr-x 1 root root 1739 Dec 29 15:37 masterha_stop

-r-xr-xr-x 1 root root 7401 Dec 29 15:08 purge_relay_logs

-r-xr-xr-x 1 root root 7395 Dec 29 15:08 save_binary_logs


Configure SSH key-pair authentication

The servers must authenticate to each other with SSH key pairs; the steps are shown below. One thing to note: password login must not be disabled, or errors will occur.

1. Each server first generates a key pair

2. Each server copies its public key to the other hosts

On Server05 (192.168.200.115):

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

Note: Server05 must make a test connection to every host, because the first connection prompts for a yes confirmation; if that is not done now, it would interfere with SSH control of each host during a later failover.

ssh server01

                     yes
ssh server02

                     yes

ssh server03

                     yes

ssh server04

                     yes
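The same first-connection test can also be scripted from server05; a small loop such as the one below (which auto-accepts the host keys) is equivalent to answering yes for each host:

for host in server01 server02 server03 server04; do
    ssh -o StrictHostKeyChecking=no root@$host hostname
done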


Primary Master(192.168.200.111):

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

Secondary Master(192.168.200.112):

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

slave1(192.168.200.113):

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

slave2(192.168.200.114):

ssh-keygen -t rsa

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

Install MySQL (MariaDB)

On hosts 111-114:

yum -y install mariadb mariadb-server mariadb-devel

systemctl start mariadb

netstat -lnpt | grep :3306

Set the initial database password (used in later steps)

mysqladmin -u root password 123456


Set up the master-slave replication environment

Note: the binlog-do-db and replicate-ignore-db settings must be identical on all servers. MHA checks the filtering rules at startup; if they differ, MHA will not start monitoring or failover.
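This walkthrough uses no replication filters at all. If you did use them, a hypothetical fragment such as the one below (appdb is a made-up database name) would have to appear identically in every server's my.cnf, otherwise masterha_check_repl will refuse to proceed:

[mysqld]
binlog-do-db = appdb
replicate-ignore-db = mysql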


Edit the MySQL configuration files

Primary Master(192.168.200.111):

vim /etc/my.cnf

[mysqld]

server-id = 1

log-bin=master-bin

log-slave-updates=true

relay_log_purge=0

systemctl restart mariadb

Secondary Master(192.168.200.112):

vim /etc/my.cnf

[mysqld]

server-id=2

log-bin=master-bin

log-slave-updates=true

relay_log_purge=0

systemctl restart mariadb

slave1(192.168.200.113):

vim /etc/my.cnf

[mysqld]

server-id=3

log-bin=mysql-bin

relay-log=slave-relay-bin

log-slave-updates=true

relay_log_purge=0

systemctl restart mariadb

slave2(192.168.200.114):

vim /etc/my.cnf

[mysqld]

server-id=4

log-bin=mysql-bin

relay-log=slave-relay-bin

log-slave-updates=true

relay_log_purge=0

systemctl restart mariadb


Back up the existing data on the Primary Master (192.168.200.111)

mysqldump --master-data=2 --single-transaction -R --triggers -A > all.sql

Option descriptions:

--master-data=2                       record the master's binlog file name and position at backup time (written as a comment)

--single-transaction                  take a consistent snapshot

-R                                    back up stored procedures and functions

--triggers                            back up triggers

-A                                    back up all databases
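Because --master-data=2 writes the coordinates into the dump as a commented-out CHANGE MASTER TO statement, they can also be read back out of all.sql directly; this should print a line matching the show master status output below:

grep -m 1 "CHANGE MASTER TO" all.sql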


Create the replication user on the MySQL servers

grant replication slave on *.* to 'repl'@'192.168.200.%' identified by '123456';

flush privileges;


Check the master's binlog file name and position at backup time

MariaDB [(none)]> show master status;

+-------------------+----------+--------------+------------------+

| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |

+-------------------+----------+--------------+------------------+

| master-bin.000001 |      474 |              |                  |

+-------------------+----------+--------------+------------------+

1 row in set (0.00 sec)


Copy the backup to the other hosts

scp all.sql 192.168.200.112:/tmp/

scp all.sql 192.168.200.113:/tmp/

scp all.sql 192.168.200.114:/tmp/


Import the backup on hosts 112-114 and run the replication setup commands

mysql -uroot -p123456 < /tmp/all.sql

stop slave;

CHANGE MASTER TO

MASTER_HOST='192.168.200.111',

MASTER_USER='repl',

MASTER_PASSWORD='123456',

MASTER_LOG_FILE='master-bin.000001',

MASTER_LOG_POS=474;

start slave;

show slave status\G

# Check that both the IO and SQL threads show Yes

                          Slave_IO_Running: Yes

                        Slave_SQL_Running: Yes


Handling replication errors


                        Slave_IO_Running: No

                       Slave_SQL_Running: Yes

----------------------------------- output truncated -----------------------------------

                             Last_IO_Errno: 1236

                             Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

----------------------------------- output truncated -----------------------------------

Fix:

stop slave;

reset slave;

set global sql_slave_skip_counter =1;

start slave;


Set read_only on the three slave servers

The slaves provide only read service to clients. The reason read_only is not written into the MySQL configuration file is that server02 could be promoted to master at any time.

[root@server02 ~]# mysql -uroot -p123456 -e 'set global read_only=1'

[root@server03 ~]# mysql -uroot -p123456 -e 'set global read_only=1'

[root@server04 ~]# mysql -uroot -p123456 -e 'set global read_only=1'


Create the monitoring user (on hosts 111-114):

grant all privileges on *.* to 'root'@'192.168.200.%' identified by '123456';

flush privileges;

Grant privileges for the host's own hostname (shown here for server04; run the equivalent with the local hostname on each server):

grant all privileges on *.* to 'root'@'server04' identified by '123456';

flush privileges;

At this point the entire MySQL master-slave replication cluster is set up.
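As an optional sanity check (mha_test is a throwaway database name used only for this test), write on the master and confirm the change shows up on a slave:

# on server01, the current master
mysql -uroot -p123456 -e "create database mha_test;"
# on any slave, for example server03
mysql -uroot -p123456 -e "show databases like 'mha_test';"
# clean up on the master afterwards
mysql -uroot -p123456 -e "drop database mha_test;"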


Configure the MHA environment

Create the MHA working directory and configuration file

Server05 (192.168.200.115): a sample configuration file ships in the unpacked source directory

mkdir /etc/masterha

cp mha4mysql-manager-0.56/samples/conf/app1.cnf /etc/masterha

Edit the app1.cnf configuration file

The /usr/local/bin/master_ip_failover script must have its IP address, network interface name, and so on adapted to your own environment.

vim /etc/masterha/app1.cnf

[server default]

# Manager working directory

manager_workdir=/var/log/masterha/app1

# Manager log file (both of these entries exist in the sample file by default)

manager_log=/var/log/masterha/app1/manager.log

# Location where the master stores its binlogs, so that MHA can find them

master_binlog_dir=/var/lib/mysql

# Switchover script used during automatic failover

master_ip_failover_script= /usr/local/bin/master_ip_failover

# Password of the MySQL root (monitoring) user

password=123456

user=root

# Ping interval in seconds

ping_interval=1

# Directory on the remote MySQL servers where binlogs are saved during a switchover

remote_workdir=/tmp

# Replication user name and password

repl_password=123456

repl_user=repl

[server1]

hostname=server01

port=3306

[server2]

hostname=server02

candidate_master=1

port=3306

check_repl_delay=0

[server3]

hostname=server03

port=3306

[server4]

hostname=server04

port=3306
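The manager log shown later warns that secondary_check_script is not defined. Optionally, the bundled masterha_secondary_check tool can be referenced from the [server default] section so that the manager confirms the master is really unreachable via other hosts before failing over; a possible line (not used in this walkthrough) would be:

secondary_check_script= masterha_secondary_check -s server03 -s server04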

Configure the failover script

[root@server05 ~]# vim /usr/local/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port,
);

my $vip = '192.168.200.100';                            # the VIP
my $key = "1";                                          # interface alias number, used by this command-based (non-keepalived) switching script
my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens32:$key down";    # if keepalived is used instead, put the service start/stop commands here
$ssh_user = "root";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {

        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage master ip address at global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        #eval {
        #    print "Disabling the VIP on old master: $orig_master_host \n";
        #    &stop_vip();
        #    $exit_code = 0;
        #};
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            #my $ping=`ping -c 1 10.0.0.13 | grep "packet loss" | awk -F',' '{print $3}' | awk '{print $1}'`;
            #if ( $ping le "90.0%" && $ping gt "0.0%" ) {
            #    $exit_code = 0;
            #}
            #else {
            &stop_vip();
            # updating global catalog, etc
            $exit_code = 0;
            #}
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {

        # all arguments are passed.
        # If you manage master ip address at global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        `ssh $ssh_user\@$orig_master_ip \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

[root@server05 ~]# chmod +x /usr/local/bin/master_ip_failover

Set the relay log purge mode on the slaves (112-114):

mysql -uroot -p123456 -e 'set global relay_log_purge=0;'

Note:

During an MHA failover, recovering the other slaves depends on information in their relay logs, so automatic relay log purging must be turned OFF and the relay logs purged manually instead. By default a slave's relay logs are deleted automatically once the SQL thread has executed them, but in an MHA environment those relay logs may be needed when recovering other slaves, which is why automatic purging is disabled. Periodic purging of relay logs also has to take replication delay into account: on an ext3 filesystem, deleting a large file takes considerable time and can cause serious replication delay. To avoid this, a hard link to the relay log is created first, because removing a large file through a hard link is very fast on Linux. (The same hard-link trick is commonly used when dropping large tables in MySQL.)

Add a relay log purge script to cron on the slaves (112-114)

MHA Node includes the purge_relay_logs tool. It creates hard links for the relay logs, executes SET GLOBAL relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log, and then executes SET GLOBAL relay_log_purge=0.

vim purge_relay_log.sh

#!/bin/bash
user=root
passwd=123456        # the database password set earlier
port=3306
log_dir='/tmp'
work_dir='/tmp'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1

chmod +x purge_relay_log.sh

crontab -e

0 4 * * * /bin/bash /root/purge_relay_log.sh

The purge_relay_logs options are as follows:

--user                            MySQL user name

--password                        MySQL password

--port                            MySQL port

--workdir                         where to create the hard links to the relay logs; the default is /var/tmp. Creating a hard link fails if the target directory is on a different filesystem, so the location must be given explicitly. After the script finishes successfully, the hard-linked relay log files are removed.

--disable_relay_log_purge         by default, if relay_log_purge=1 the script cleans nothing and exits immediately. With this option, when relay_log_purge=1 the script sets relay_log_purge to 0, purges the relay logs, and finally leaves the parameter set to OFF.

Purge the relay logs manually

purge_relay_logs --user=root --password=123456 --disable_relay_log_purge --port=3306 --workdir=/tmp

2017-08-31 21:33:52: purge_relay_logs script started.
 Found relay_log.info: /usr/local/mysql/data/relay-log.info
 Removing hard linked relay log files slave-relay-bin* under /tmp.. done.
 Current relay log file: /usr/local/mysql/data/slave-relay-bin.000002
 Archiving unused relay log files (up to /usr/local/mysql/data/slave-relay-bin.000001) ...
 Creating hard link for /usr/local/mysql/data/slave-relay-bin.000001 under /tmp/slave-relay-bin.000001 .. ok.
 Creating hard links for unused relay log files completed.
 Executing SET GLOBAL relay_log_purge=1; FLUSH LOGS; sleeping a few seconds so that SQL thread can delete older relay log files (if it keeps up); SET GLOBAL relay_log_purge=0; .. ok.
 Removing hard linked relay log files slave-relay-bin* under /tmp.. done.
2017-08-31 21:33:56: All relay log purging operations succeeded.

Check MHA SSH connectivity

[root@server05 ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf

Sat Dec 29 16:03:57 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Sat Dec 29 16:03:57 2018 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:03:57 2018 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:03:57 2018 - [info] Starting SSH connection tests..

Sat Dec 29 16:04:02 2018 - [debug]

Sat Dec 29 16:03:58 2018 - [debug] Connecting via SSH from root@server02(192.168.200.112:22) to root@server01(192.168.200.111:22)..

Sat Dec 29 16:03:59 2018 - [debug] ok.

Sat Dec 29 16:03:59 2018 - [debug] Connecting via SSH from root@server02(192.168.200.112:22) to root@server03(192.168.200.113:22)..

Sat Dec 29 16:04:00 2018 - [debug] ok.

Sat Dec 29 16:04:00 2018 - [debug] Connecting via SSH from root@server02(192.168.200.112:22) to root@server04(192.168.200.114:22)..

Sat Dec 29 16:04:02 2018 - [debug] ok.

Sat Dec 29 16:04:02 2018 - [debug]

Sat Dec 29 16:03:58 2018 - [debug] Connecting via SSH from root@server03(192.168.200.113:22) to root@server01(192.168.200.111:22)..

Sat Dec 29 16:04:00 2018 - [debug] ok.

Sat Dec 29 16:04:00 2018 - [debug] Connecting via SSH from root@server03(192.168.200.113:22) to root@server02(192.168.200.112:22)..

Sat Dec 29 16:04:01 2018 - [debug] ok.

Sat Dec 29 16:04:01 2018 - [debug] Connecting via SSH from root@server03(192.168.200.113:22) to root@server04(192.168.200.114:22)..

Sat Dec 29 16:04:02 2018 - [debug] ok.

Sat Dec 29 16:04:02 2018 - [debug]

Sat Dec 29 16:03:57 2018 - [debug] Connecting via SSH from root@server01(192.168.200.111:22) to root@server02(192.168.200.112:22)..

Sat Dec 29 16:03:59 2018 - [debug] ok.

Sat Dec 29 16:03:59 2018 - [debug] Connecting via SSH from root@server01(192.168.200.111:22) to root@server03(192.168.200.113:22)..

Sat Dec 29 16:04:00 2018 - [debug] ok.

Sat Dec 29 16:04:00 2018 - [debug] Connecting via SSH from root@server01(192.168.200.111:22) to root@server04(192.168.200.114:22)..

Sat Dec 29 16:04:01 2018 - [debug] ok.

Sat Dec 29 16:04:02 2018 - [debug]

Sat Dec 29 16:03:59 2018 - [debug] Connecting via SSH from root@server04(192.168.200.114:22) to root@server01(192.168.200.111:22)..

Sat Dec 29 16:04:00 2018 - [debug] ok.

Sat Dec 29 16:04:00 2018 - [debug] Connecting via SSH from root@server04(192.168.200.114:22) to root@server02(192.168.200.112:22)..

Sat Dec 29 16:04:01 2018 - [debug] ok.

Sat Dec 29 16:04:01 2018 - [debug] Connecting via SSH from root@server04(192.168.200.114:22) to root@server03(192.168.200.113:22)..

Sat Dec 29 16:04:02 2018 - [debug] ok.

Sat Dec 29 16:04:02 2018 - [info] All SSH connection tests passed successfully.

The final successfully message indicates there are no problems.

Check the status of the whole cluster

[root@server05 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

Sat Dec 29 16:04:53 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Sat Dec 29 16:04:53 2018 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:04:53 2018 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:04:53 2018 - [info] MHA::MasterMonitor version 0.56. Creating directory /var/log/masterha/app1.. done.

Sat Dec 29 16:04:55 2018 - [info] Dead Servers:

Sat Dec 29 16:04:55 2018 - [info] Alive Servers:

Sat Dec 29 16:04:55 2018 - [info] server01(192.168.200.111:3306)

Sat Dec 29 16:04:55 2018 - [info] server02(192.168.200.112:3306)

Sat Dec 29 16:04:55 2018 - [info] server03(192.168.200.113:3306)

Sat Dec 29 16:04:55 2018 - [info] server04(192.168.200.114:3306)

Sat Dec 29 16:04:55 2018 - [info] Alive Slaves:

Sat Dec 29 16:04:55 2018 - [info] server02(192.168.200.112:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:04:55 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:04:55 2018 - [info] Primary candidate for the new Master (candidate_master is set)

Sat Dec 29 16:04:55 2018 - [info] server03(192.168.200.113:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:04:55 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:04:55 2018 - [info] server04(192.168.200.114:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:04:55 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:04:55 2018 - [info] Current Alive Master: server01(192.168.200.111:3306)

Sat Dec 29 16:04:55 2018 - [info] Checking slave configurations..

Sat Dec 29 16:04:55 2018 - [info] Checking replication filtering settings..

Sat Dec 29 16:04:55 2018 - [info] binlog_do_db= , binlog_ignore_db=

Sat Dec 29 16:04:55 2018 - [info] Replication filtering check ok.

Sat Dec 29 16:04:55 2018 - [info] Starting SSH connection tests..

Sat Dec 29 16:05:00 2018 - [info] All SSH connection tests passed successfully.

Sat Dec 29 16:05:00 2018 - [info] Checking MHA Node version..

Sat Dec 29 16:05:02 2018 - [info] Version check ok.

Sat Dec 29 16:05:02 2018 - [info] Checking SSH publickey authentication settings on the current master..

Sat Dec 29 16:05:03 2018 - [info] HealthCheck: SSH to server01 is reachable.

Sat Dec 29 16:05:04 2018 - [info] Master MHA Node version is 0.56.

Sat Dec 29 16:05:04 2018 - [info] Checking recovery script configurations on the current master..

Sat Dec 29 16:05:04 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=master-bin.000001

Sat Dec 29 16:05:04 2018 - [info] Connecting to root@server01(server01).. Creating /tmp if not exists.. ok. Checking output directory is accessible or not.. ok. Binlog found at /var/lib/mysql, up to master-bin.000001

Sat Dec 29 16:05:04 2018 - [info] Master setting check done.

Sat Dec 29 16:05:04 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..

Sat Dec 29 16:05:04 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server02 --slave_ip=192.168.200.112 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:05:04 2018 - [info] Connecting to [email protected](server02:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to mariadb-relay-bin.000002 Temporary relay log file is /var/lib/mysql/mariadb-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:05:05 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server03 --slave_ip=192.168.200.113 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:05:05 2018 - [info] Connecting to [email protected](server03:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to slave-relay-bin.000002 Temporary relay log file is /var/lib/mysql/slave-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:05:07 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server04 --slave_ip=192.168.200.114 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:05:07 2018 - [info] Connecting to [email protected](server04:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to slave-relay-bin.000002 Temporary relay log file is /var/lib/mysql/slave-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:05:08 2018 - [info] Slaves settings check done.

Sat Dec 29 16:05:09 2018 - [info]

server01 (current master)

+--server02

+--server03

+--server04

Sat Dec 29 16:05:09 2018 - [info] Checking replication health on server02.. Sat Dec 29 16:05:09 2018 - [info] ok.

Sat Dec 29 16:05:09 2018 - [info] Checking replication health on server03.. Sat Dec 29 16:05:09 2018 - [info] ok.

Sat Dec 29 16:05:09 2018 - [info] Checking replication health on server04.. Sat Dec 29 16:05:09 2018 - [info] ok.

Sat Dec 29 16:05:09 2018 - [info] Checking master_ip_failover_script status:

Sat Dec 29 16:05:09 2018 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=server01 --orig_master_ip=192.168.200.111 --orig_master_port=3306

IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK

bash: /etc/init.d/keepalived: No such file or directory

Sat Dec 29 16:05:09 2018 - [info] OK.

Sat Dec 29 16:05:09 2018 - [warning] shutdown_script is not defined.

Sat Dec 29 16:05:09 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

The final OK likewise indicates there are no problems.

VIP Configuration and Management

There are two ways to manage the master VIP: one is to let software such as keepalived or heartbeat manage the floating of the VIP; the other is to manage it with commands (a script).

Managing the VIP address with commands:

Open the previously edited /etc/masterha/app1.cnf, check that the following line is correct, and then check the cluster status.

[root@server05 ~]# grep -n 'master_ip_failover_script' /etc/masterha/app1.cnf

9:master_ip_failover_script= /usr/local/bin/master_ip_failover

Primary Master(192.168.200.111)

[root@server01 ~]# ip a | grep ens32

2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.200.111/24 brd 192.168.200.255 scope global ens32
    inet 192.168.200.100/24 brd 192.168.200.255 scope global secondary ens32:1

Server05 (192.168.200.115): check the VIP settings in the failover script

[root@server05 ~]# head -13 /usr/local/bin/master_ip_failover

#!/usr/bin/env perl

use strict;

use warnings FATAL => 'all';

use Getopt::Long;

my (

$command, $ssh_user, $orig_master_host, $orig_master_ip, $orig_master_port, $new_master_host, $new_master_ip, $new_master_port,

);

my $vip = '192.168.200.100';                                                 # the VIP

my $key = "1";                                                               # interface alias number (command-based VIP switching)

my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";                        # if keepalived is used instead,

my $ssh_stop_vip = "/sbin/ifconfig ens32:$key down";                         # put the keepalived start/stop commands here

What /usr/local/bin/master_ip_failover does: when the master fails and MHA triggers a switchover, the MHA Manager takes down the ens32:1 interface on the old master, making the virtual IP float to the candidate slave and completing the switchover.

Server05 (192.168.200.115): check the manager status

masterha_check_status --conf=/etc/masterha/app1.cnf

app1 is stopped(2:NOT_RUNNING).

Note: if everything is normal this shows "PING_OK"; otherwise it shows "NOT_RUNNING", which means MHA monitoring is not started.

Server05 (192.168.200.115): start manager monitoring

[root@server05 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

[1] 65837

Startup option descriptions:

--remove_dead_master_conf                     after a master switchover, the old master's entry is removed from the configuration file.

--manager_log                                 log file location.

--ignore_last_failover                        by default, if MHA detects consecutive failures with less than 8 hours between two failovers, it refuses to fail over again; this restriction avoids ping-pong switching. After a failover MHA writes an app1.failover.complete file into the manager working directory (set above to /var/log/masterha/app1), and the next failover is refused while that file exists unless it is deleted first. --ignore_last_failover makes MHA ignore that marker file, which is convenient here.
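In other words, without --ignore_last_failover a second failover within 8 hours would first require deleting the marker file by hand, for example:

rm -f /var/log/masterha/app1/app1.failover.complete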

Server05 (192.168.200.115): check whether monitoring is working:

[root@monitor ~]# masterha_check_status --conf=/etc/masterha/app1.cnf

app1 (pid:65837) is running(0:PING_OK), master:server01

You can see that monitoring is now active.

Server05 (192.168.200.115): view the startup log

[root@server05 ~]# cat /var/log/masterha/app1/manager.log

Sat Dec 29 16:09:50 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.

Sat Dec 29 16:09:50 2018 - [info] Reading application default configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:09:50 2018 - [info] Reading server configurations from /etc/masterha/app1.cnf..

Sat Dec 29 16:09:50 2018 - [info] MHA::MasterMonitor version 0.56.

Sat Dec 29 16:09:51 2018 - [info] Dead Servers:

Sat Dec 29 16:09:51 2018 - [info] Alive Servers:

Sat Dec 29 16:09:51 2018 - [info] server01(192.168.200.111:3306)

Sat Dec 29 16:09:51 2018 - [info] server02(192.168.200.112:3306)

Sat Dec 29 16:09:51 2018 - [info] server03(192.168.200.113:3306)

Sat Dec 29 16:09:51 2018 - [info] server04(192.168.200.114:3306)

Sat Dec 29 16:09:51 2018 - [info] Alive Slaves:

Sat Dec 29 16:09:51 2018 - [info] server02(192.168.200.112:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:09:51 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:09:51 2018 - [info] Primary candidate for the new Master (candidate_master is set)

Sat Dec 29 16:09:51 2018 - [info] server03(192.168.200.113:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:09:51 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:09:51 2018 - [info] server04(192.168.200.114:3306) Version=5.5.56-MariaDB (oldest major version between slaves) log-bin:enabled

Sat Dec 29 16:09:51 2018 - [info] Replicating from 192.168.200.111(192.168.200.111:3306)

Sat Dec 29 16:09:51 2018 - [info] Current Alive Master: server01(192.168.200.111:3306)

Sat Dec 29 16:09:51 2018 - [info] Checking slave configurations..

Sat Dec 29 16:09:51 2018 - [info] Checking replication filtering settings..

Sat Dec 29 16:09:51 2018 - [info] binlog_do_db= , binlog_ignore_db=

Sat Dec 29 16:09:51 2018 - [info] Replication filtering check ok.

Sat Dec 29 16:09:51 2018 - [info] Starting SSH connection tests..

Sat Dec 29 16:09:58 2018 - [info] All SSH connection tests passed successfully.

Sat Dec 29 16:09:58 2018 - [info] Checking MHA Node version..

Sat Dec 29 16:10:01 2018 - [info] Version check ok.

Sat Dec 29 16:10:01 2018 - [info] Checking SSH publickey authentication settings on the current master..

Sat Dec 29 16:10:02 2018 - [info] HealthCheck: SSH to server01 is reachable.

Sat Dec 29 16:10:03 2018 - [info] Master MHA Node version is 0.56.

Sat Dec 29 16:10:03 2018 - [info] Checking recovery script configurations on the current master..

Sat Dec 29 16:10:03 2018 - [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/var/lib/mysql --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=master-bin.000001

Sat Dec 29 16:10:03 2018 - [info] Connecting to root@server01(server01).. Creating /tmp if not exists.. ok. Checking output directory is accessible or not.. ok. Binlog found at /var/lib/mysql, up to master-bin.000001

Sat Dec 29 16:10:03 2018 - [info] Master setting check done.

Sat Dec 29 16:10:03 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..

Sat Dec 29 16:10:03 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server02 --slave_ip=192.168.200.112 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:10:03 2018 - [info] Connecting to [email protected](server02:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to mariadb-relay-bin.000002  Temporary relay log file is /var/lib/mysql/mariadb-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:10:04 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server03 --slave_ip=192.168.200.113 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:10:04 2018 - [info] Connecting to [email protected](server03:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to slave-relay-bin.000002 Temporary relay log file is /var/lib/mysql/slave-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:10:05 2018 - [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=server04 --slave_ip=192.168.200.114 --slave_port=3306 --workdir=/tmp --target_version=5.5.56-MariaDB --manager_version=0.56 --relay_log_info=/var/lib/mysql/relay-log.info --relay_dir=/var/lib/mysql/ --slave_pass=xxx

Sat Dec 29 16:10:05 2018 - [info] Connecting to [email protected](server04:22).. Checking slave recovery environment settings.. Opening /var/lib/mysql/relay-log.info ... ok. Relay log found at /var/lib/mysql, up to slave-relay-bin.000002 Temporary relay log file is /var/lib/mysql/slave-relay-bin.000002 Testing mysql connection and privileges.. done. Testing mysqlbinlog output.. done. Cleaning up test file(s).. done.

Sat Dec 29 16:10:06 2018 - [info] Slaves settings check done.

Sat Dec 29 16:10:06 2018 - [info]

server01 (current master)

+--server02

+--server03

+--server04

Sat Dec 29 16:10:06 2018 - [info] Checking master_ip_failover_script status:

Sat Dec 29 16:10:06 2018 - [info] /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=server01 --orig_master_ip=192.168.200.111 --orig_master_port=3306

IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK

bash: /etc/init.d/keepalived: No such file or directory

Sat Dec 29 16:10:07 2018 - [info] OK.

Sat Dec 29 16:10:07 2018 - [warning] shutdown_script is not defined.

Sat Dec 29 16:10:07 2018 - [info] Set master ping interval 1 seconds.

Sat Dec 29 16:10:07 2018 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.

Sat Dec 29 16:10:07 2018 - [info] Starting ping health check on server01(192.168.200.111:3306)..

Thu Aug 31 21:55:23 2017 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..

Note: the line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." indicates that the whole system is now being monitored.

Stopping MHA Manager monitoring would be done as follows (shown for reference only; skip it here)

masterha_stop --conf=/etc/masterha/app1.cnf

You can see that the VIP 192.168.200.100 is now bound to interface ens32.

[root@server01 ~]# ip a | grep ens32

2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.200.111/24 brd 192.168.200.255 scope global ens32
    inet 192.168.200.100/24 brd 192.168.200.255 scope global secondary ens32:1

Primary Master (192.168.200.111): simulate a master failure

[root@server01 ~]# systemctl stop mariadb

[root@server01 ~]# netstat -lnpt | grep :3306

[root@server01 ~]# ip a | grep ens32

2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.200.111/24 brd 192.168.200.255 scope global ens32

slave1 (192.168.200.113) status:

MariaDB [(none)]> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.200.112

                  Master_User: repl

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: master-bin.000001

          Read_Master_Log_Pos: 1372

               Relay_Log_File: slave-relay-bin.000002

                Relay_Log_Pos: 530

        Relay_Master_Log_File: master-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

slave2 (192.168.200.114) status:

MariaDB [(none)]> show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.200.112

                  Master_User: repl

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: master-bin.000001

          Read_Master_Log_Pos: 1372

               Relay_Log_File: slave-relay-bin.000002

                Relay_Log_Pos: 530

        Relay_Master_Log_File: master-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

Server05 (192.168.200.115): monitoring has shut itself down automatically:

[root@server05 ~]#                                                   (press Enter)

[1]+    Done    nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1

Server05 (192.168.200.115): the monitoring configuration file has changed (the server01 entry has been removed):

[root@server05 ~]# cat /etc/masterha/app1.cnf

[server default]

manager_log=/var/log/masterha/app1/manager.log

manager_workdir=/var/log/masterha/app1

master_binlog_dir=/var/lib/mysql

master_ip_failover_script=/usr/local/bin/master_ip_failover

password=123456

ping_interval=1

remote_workdir=/tmp

repl_password=123456

repl_user=repl

user=root

[server2]

candidate_master=1

check_repl_delay=0

hostname=server02

port=3306

[server3]

hostname=server03

port=3306

[server4]

hostname=server04

port=3306

Server05 (192.168.200.115): the log file contents during the failover are as follows:

[root@server05 ~]# tail -f /var/log/masterha/app1/manager.log

Selected server02 as a new master.

server02: OK: Applying all logs succeeded.

server02: OK: Activated master IP address.

server04: This host has the latest relay log events.

server03: This host has the latest relay log events.

Generating relay diff files from the latest slave succeeded.

server04: OK: Applying all logs succeeded. Slave started, replicating from server02.

server03: OK: Applying all logs succeeded. Slave started, replicating from server02.

server02: Resetting slave info succeeded.

Master failover to server02(192.168.200.112:3306) completed successfully.
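At this point it is worth confirming by hand that the VIP really followed the new master; on server02 the address 192.168.200.100 should now be bound to ens32:1:

ip a | grep ens32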

Repairing the failed master and testing the VIP switch-back

Primary Master(192.168.200.111):

[root@server01 ~]# systemctl start mariadb

[root@server01 ~]# netstat -lnpt | grep :3306

tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 6131/mysqld

Primary Master (192.168.200.111): point it at the new master

[root@server01 ~]# mysql -u root -p123456

stop slave;

CHANGE MASTER TO

MASTER_HOST='192.168.200.112',

MASTER_USER='repl',

MASTER_PASSWORD='123456';

start slave;

show slave status\G

*************************** 1. row ***************************

               Slave_IO_State: Waiting for master to send event

                  Master_Host: 192.168.200.112

                  Master_User: repl

                  Master_Port: 3306

                Connect_Retry: 60

              Master_Log_File: master-bin.000001

          Read_Master_Log_Pos: 1372

               Relay_Log_File: mariadb-relay-bin.000002

                Relay_Log_Pos: 1208

        Relay_Master_Log_File: master-bin.000001

             Slave_IO_Running: Yes

            Slave_SQL_Running: Yes

Server05 (192.168.200.115): edit the monitoring configuration file and add the server01 entry back:

[root@server05 ~]# vim /etc/masterha/app1.cnf

[server01]

hostname=server01

port=3306

Server05 (192.168.200.115): check the cluster status:

[root@server05 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf

----------------------------------- output truncated -----------------------------------

Thu Aug 31 22:20:30 2017 - [info] Alive Servers:

Thu Aug 31 22:20:30 2017 - [info] server01(192.168.200.111:3306)

Thu Aug 31 22:20:30 2017 - [info] server02(192.168.200.112:3306)

Thu Aug 31 22:20:30 2017 - [info] server03(192.168.200.113:3306)

Thu Aug 31 22:20:30 2017 - [info] server04(192.168.200.114:3306)

----------------------------------- output truncated -----------------------------------

server02 (current master)

+--server01

+--server03

+--server04

----------------------------------- output truncated -----------------------------------

MySQL Replication Health is OK.

Server05 (192.168.200.115): start monitoring again

[root@server05 ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1 &

[1] 68199
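To confirm the manager is watching the cluster again, and optionally to move the master role and the VIP back to server01 (a masterha_master_switch online switchover must be run with the manager stopped), something along these lines could be used:

masterha_check_status --conf=/etc/masterha/app1.cnf
# optional switch-back, only with the manager stopped:
# masterha_stop --conf=/etc/masterha/app1.cnf
# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=server01 --orig_master_is_new_slave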

 

 


Reposted from blog.csdn.net/yimenglin/article/details/104813171