122. MySQL MHA + Atlas High-Availability Architecture Deployment

1. Evolution of Replication Architectures

1.1 Basic Topologies

1) One master, one slave
2) One master, multiple slaves
3) Cascading (multi-tier) replication
4) Dual master
5) Circular replication

1.2 Evolution Toward Advanced Architectures

1.2.1 Architecture Evolution

Read/write splitting: for read-heavy, write-light workloads
    Atlas, ProxySQL, MaxScale...
High availability: keeps the service running when the primary database goes down
    Architectures: MHA, PXC, MGC, MGR, MIC (InnoDB Cluster, 8.0)
Distributed architecture: splits logical units across different nodes to spread storage and workload pressure
    Mycat, DBLE, Sharding-JDBC
NewSQL architecture: "what stays merged will split, and what stays split will merge"
    TiDB, PolarDB, TDSQL

1.2.2 Evaluating High-Availability Architectures

(1) Single-active: MMM (mysql-mmm, Google)
(2) Single-active: MHA (mysql-master-ha, DeNA of Japan), T-MHA
(3) Multi-active: MGR, the MySQL Group Replication feature new in 5.7 (5.7.17) ---> InnoDB Cluster
(4) Multi-active: MariaDB Galera Cluster, Percona XtraDB Cluster (PXC), MySQL Cluster (an Oracle RAC-style architecture)

Availability targets and the downtime they allow per year:
99.9%      0.1%       365*24*60*0.001    = 525.6  min
99.99%     0.01%      365*24*60*0.0001   = 52.56  min
99.999%    0.001%     365*24*60*0.00001  = 5.256  min
99.9999%   0.0001%    365*24*60*0.000001 = 0.5256 min

Load-balancing clusters:  three nines (99.9%)
Active-standby clusters:  four nines (99.99%)   MHA
Multi-active clusters:    five nines (99.999%)  MySQL Cluster, MIC, PXC, MGC
Real Application Cluster: Oracle RAC, Sybase cluster
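The downtime column can be reproduced with a one-liner (a sketch; any POSIX awk works):

```shell
# Downtime per year for each availability target: minutes_in_year * (1 - availability).
for a in 0.999 0.9999 0.99999 0.999999; do
  awk -v a="$a" 'BEGIN { printf "%.4f%% -> %.4f min/year\n", a*100, 365*24*60*(1-a) }'
done
```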

2. Building the MHA High-Availability Architecture

2.1 How the Architecture Works

Handling a master crash:

1. Monitoring (node information comes from the configuration file)
   system, network, SSH connectivity
   replication status, with the master as the focus

2. Master election
(1) If the slaves' data differs (by position or GTID), the slave closest to the master becomes the candidate master.
(2) If the slaves' data is identical, the candidate is picked in configuration-file order.
(3) If a weight is set (candidate_master=1), that node is forced to be the candidate, with two caveats:
    1. By default, if a slave is more than 100MB of relay logs behind the master, the weight is ignored.
    2. If check_repl_delay=0, the weighted node is forced to be the candidate even when it lags far behind.
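A toy sketch of the precedence above (hosts and transaction counts are made up; real MHA compares binlog positions or GTID sets, not simple counters):

```shell
# Pick the slave with the most applied transactions; on a tie, the earlier
# host in configuration-file order wins.
declare -A applied=([db02]=120 [db03]=118)   # hypothetical per-slave counts
candidate=""; best=-1
for host in db02 db03; do                    # configuration-file order
  if (( applied[$host] > best )); then candidate=$host; best=${applied[$host]}; fi
done
echo "candidate master: $candidate"
```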

3. Data compensation
(1) If the dead master is still reachable over SSH, each slave compares its GTID or position with the master's and immediately saves and applies the missing binary-log events (save_binary_logs).
(2) If SSH is unreachable, the relay-log differences between the slaves are compared and applied (apply_diff_relay_logs).

4. Failover
The candidate master is promoted and begins serving traffic.
The remaining slaves confirm their new replication relationship with the new master.

5. Application transparency (VIP)
6. Failover notification (send_report)
7. Secondary data compensation (binlog server)
8. Self-healing (not yet implemented...)

2.2 Architecture Overview

1 master, 2 slaves with GTID replication (master: db01; slaves: db02, db03):
MHA software components:
Manager: install on one slave node; in production, use a dedicated machine (this is a test setup)
Node: install on all nodes


2.3 MHA Software Components

The Manager package provides the following tools:
masterha_manager            starts MHA
masterha_check_ssh          checks the MHA SSH configuration
masterha_check_repl         checks MySQL replication status
masterha_master_monitor     detects whether the master is down
masterha_check_status       reports the current MHA running state
masterha_master_switch      controls failover (automatic or manual)
masterha_conf_host          adds or removes configured server entries

The Node package provides the following tools,
normally triggered by the Manager's scripts with no manual operation needed:
save_binary_logs            saves and copies the master's binary logs
apply_diff_relay_logs       identifies differential relay-log events and applies them to the other slaves
purge_relay_logs            purges relay logs (without blocking the SQL thread)
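The MHA documentation recommends disabling MySQL's automatic relay-log purge on the slaves and letting purge_relay_logs clean up from cron instead, so relay logs stay available for apply_diff_relay_logs. A crontab sketch using this lab's credentials (the path, schedule, and log file are assumptions):

```
0 4 * * * /usr/bin/purge_relay_logs --user=mha --password=mha --disable_relay_log_purge --workdir=/tmp >> /var/log/mha/purge_relay_logs.log 2>&1
```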

2.4 MHA Environment Setup

2.4.1 Environment plan:

Master: 51    node
Slaves:
52      node
53      node    manager

2.4.2 Prepare the base environment (1 master, 2 slaves, GTID)
Already built earlier.
2.4.3 Create Symlinks for the Key Programs [all nodes]

ln -s /usr/local/mysql57/bin/mysqlbinlog    /usr/bin/mysqlbinlog
ln -s /usr/local/mysql57/bin/mysql          /usr/bin/mysql

2.4.4 Configure Mutual SSH Trust [all 3 nodes need passwordless login]

db01:
rm -rf /root/.ssh 
ssh-keygen
cd /root/.ssh 
mv id_rsa.pub authorized_keys
scp  -r  /root/.ssh  10.0.0.52:/root 
scp  -r  /root/.ssh  10.0.0.53:/root 

Verify from every node:
db01:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
db02:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
db03:
ssh 10.0.0.51 date
ssh 10.0.0.52 date
ssh 10.0.0.53 date
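The nine manual checks above collapse into one loop; run it on each of the three nodes (a sketch; BatchMode makes ssh fail immediately instead of prompting for a password):

```shell
# Report SSH trust to every node instead of aborting on the first failure.
for h in 10.0.0.51 10.0.0.52 10.0.0.53; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 "$h" date 2>/dev/null; then
    echo "$h: trust ok"
  else
    echo "$h: trust NOT working"
  fi
done
```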

2.4.5 Install the Software

Download MHA:
Project page: https://code.google.com/archive/p/mysql-master-ha/
GitHub downloads: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads

1. Install the Node package and its dependency on all nodes
yum install perl-DBD-MySQL -y
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

2. On the master (db01), create the user MHA needs
 grant all privileges on *.* to mha@'10.0.0.%' identified by 'mha';
 
3. Install the Manager package (db03)
yum install -y perl-Config-Tiny epel-release perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

2.4.6 Prepare the Configuration File (db03)

1. Create the configuration directory
 mkdir -p /etc/mha
 
2. Create the log directory
 mkdir -p /var/log/mha/app1
 
3. Edit the MHA configuration file
vim /etc/mha/app1.cnf	#in production, name the file after the application
[server default]
manager_log=/var/log/mha/app1/manager   #log file
manager_workdir=/var/log/mha/app1  #working directory
master_binlog_dir=/data/binlog     #location of the master's binary logs
user=mha                #management user
password=mha
ping_interval=2		#probe the master every 2 seconds; 4 failed probes (8 seconds) declare it dead
repl_password=123	#replication password
repl_user=repl		#replication user
ssh_user=root       #user for transferring data and logs
[server1]           #per-node sections follow
hostname=10.0.0.51
port=3306
[server2]
hostname=10.0.0.52
port=3306
[server3]
hostname=10.0.0.53
port=3306

2.4.7 Status Checks

SSH trust check:
[root@db03 ~]# masterha_check_ssh  --conf=/etc/mha/app1.cnf 
Fri Jan  3 11:23:58 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jan  3 11:23:58 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Jan  3 11:23:58 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Jan  3 11:23:58 2020 - [info] Starting SSH connection tests..
Fri Jan  3 11:23:59 2020 - [debug] 
Fri Jan  3 11:23:58 2020 - [debug]  Connecting via SSH from root@10.0.0.51(10.0.0.51:22) to root@10.0.0.52(10.0.0.52:22)..
Fri Jan  3 11:23:59 2020 - [debug]   ok.
Fri Jan  3 11:23:59 2020 - [debug]  Connecting via SSH from root@10.0.0.51(10.0.0.51:22) to root@10.0.0.53(10.0.0.53:22)..
Fri Jan  3 11:23:59 2020 - [debug]   ok.
Fri Jan  3 11:24:00 2020 - [debug] 
Fri Jan  3 11:23:59 2020 - [debug]  Connecting via SSH from root@10.0.0.52(10.0.0.52:22) to root@10.0.0.51(10.0.0.51:22)..
Fri Jan  3 11:23:59 2020 - [debug]   ok.
Fri Jan  3 11:23:59 2020 - [debug]  Connecting via SSH from root@10.0.0.52(10.0.0.52:22) to root@10.0.0.53(10.0.0.53:22)..
Fri Jan  3 11:23:59 2020 - [debug]   ok.
Fri Jan  3 11:24:01 2020 - [debug] 
Fri Jan  3 11:23:59 2020 - [debug]  Connecting via SSH from root@10.0.0.53(10.0.0.53:22) to root@10.0.0.51(10.0.0.51:22)..
Fri Jan  3 11:24:00 2020 - [debug]   ok.
Fri Jan  3 11:24:00 2020 - [debug]  Connecting via SSH from root@10.0.0.53(10.0.0.53:22) to root@10.0.0.52(10.0.0.52:22)..
Fri Jan  3 11:24:00 2020 - [debug]   ok.
Fri Jan  3 11:24:01 2020 - [info] All SSH connection tests passed successfully.

Replication check:

[root@db03 ~]# masterha_check_repl  --conf=/etc/mha/app1.cnf
Fri Jan  3 11:24:21 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jan  3 11:24:21 2020 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Jan  3 11:24:21 2020 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Jan  3 11:24:21 2020 - [info] MHA::MasterMonitor version 0.56.
Fri Jan  3 11:24:22 2020 - [info] GTID failover mode = 1
Fri Jan  3 11:24:22 2020 - [info] Dead Servers:
Fri Jan  3 11:24:22 2020 - [info] Alive Servers:
Fri Jan  3 11:24:22 2020 - [info]   10.0.0.51(10.0.0.51:3306)
Fri Jan  3 11:24:22 2020 - [info]   10.0.0.52(10.0.0.52:3306)
Fri Jan  3 11:24:22 2020 - [info]   10.0.0.53(10.0.0.53:3306)
Fri Jan  3 11:24:22 2020 - [info] Alive Slaves:
Fri Jan  3 11:24:22 2020 - [info]   10.0.0.52(10.0.0.52:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Fri Jan  3 11:24:22 2020 - [info]     GTID ON
Fri Jan  3 11:24:22 2020 - [info]     Replicating from 10.0.0.51(10.0.0.51:3306)
Fri Jan  3 11:24:22 2020 - [info]   10.0.0.53(10.0.0.53:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Fri Jan  3 11:24:22 2020 - [info]     GTID ON
Fri Jan  3 11:24:22 2020 - [info]     Replicating from 10.0.0.51(10.0.0.51:3306)
Fri Jan  3 11:24:22 2020 - [info] Current Alive Master: 10.0.0.51(10.0.0.51:3306)
Fri Jan  3 11:24:22 2020 - [info] Checking slave configurations..
Fri Jan  3 11:24:22 2020 - [info]  read_only=1 is not set on slave 10.0.0.52(10.0.0.52:3306).
Fri Jan  3 11:24:22 2020 - [info]  read_only=1 is not set on slave 10.0.0.53(10.0.0.53:3306).
Fri Jan  3 11:24:22 2020 - [info] Checking replication filtering settings..
Fri Jan  3 11:24:22 2020 - [info]  binlog_do_db= , binlog_ignore_db= 
Fri Jan  3 11:24:22 2020 - [info]  Replication filtering check ok.
Fri Jan  3 11:24:22 2020 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Jan  3 11:24:22 2020 - [info] Checking SSH publickey authentication settings on the current master..
Fri Jan  3 11:24:22 2020 - [info] HealthCheck: SSH to 10.0.0.51 is reachable.
Fri Jan  3 11:24:22 2020 - [info] 
10.0.0.51(10.0.0.51:3306) (current master)
 +--10.0.0.52(10.0.0.52:3306)
 +--10.0.0.53(10.0.0.53:3306)

Fri Jan  3 11:24:22 2020 - [info] Checking replication health on 10.0.0.52..
Fri Jan  3 11:24:22 2020 - [info]  ok.
Fri Jan  3 11:24:22 2020 - [info] Checking replication health on 10.0.0.53..
Fri Jan  3 11:24:22 2020 - [info]  ok.
Fri Jan  3 11:24:22 2020 - [warning] master_ip_failover_script is not defined.
Fri Jan  3 11:24:22 2020 - [warning] shutdown_script is not defined.
Fri Jan  3 11:24:22 2020 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

2.4.8 Start MHA (db03):

[root@db03 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[1] 2659

2.4.9 Check MHA Status

If the status reports OK, MHA is healthy:
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:2659) is running(0:PING_OK), master:10.0.0.51

[root@db03 ~]# mysql -umha -pmha -h 10.0.0.51 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 51    |
+---------------+-------+

[root@db03 ~]#  mysql -umha -pmha -h 10.0.0.52 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 52    |
+---------------+-------+

[root@db03 ~]# mysql -umha -pmha -h 10.0.0.53 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 53    |
+---------------+-------+
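The three per-host checks can be looped (a sketch; it assumes the mha user created earlier and reports hosts it cannot reach instead of aborting):

```shell
# Print each node's server_id through the mha account; -N suppresses the header.
for h in 10.0.0.51 10.0.0.52 10.0.0.53; do
  id=$(mysql -umha -pmha -h "$h" --connect-timeout=3 -N -e "select @@server_id" 2>/dev/null) \
    && echo "$h server_id=$id" \
    || echo "$h unreachable"
done
```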

3. How MHA Works

1. The masterha_manager script starts the MHA high-availability service.
2. masterha_master_monitor watches the master, probing its heartbeat every ping_interval seconds; after 4 consecutive failed probes the master is considered dead.
3. Master election:
    Method 1: weight (candidate_master=1)
    Method 2: compare how much log each slave has applied
    Method 3: configuration-file order
4. Data compensation:
    If SSH is reachable: each slave immediately saves its missing binlog segment to /var/tmp/xxxx via save_binary_logs and applies it.
    If SSH is unreachable: the slaves that are behind compute the relay-log difference via apply_diff_relay_logs and catch up.
5. Switchover via the masterha_master_switch script:
    on all slaves: stop slave; reset slave all;
    on each remaining slave: change master to the new master, then start slave.
6. The masterha_conf_host script removes the failed node from the cluster configuration.
7. The manager process exits.
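The detection logic in step 2 amounts to a counter loop; a toy sketch with the values from this lab's config (the probe is stubbed out, and a real monitor resets the counter on success):

```shell
# Declare the master dead after 4 consecutive failed probes, ping_interval apart.
ping_interval=2; failures=0
check_master() { return 1; }            # stub: a real probe would connect to the master
while (( failures < 4 )); do
  check_master || failures=$((failures + 1))
  # sleep "$ping_interval"              # omitted so the sketch runs instantly
done
echo "master declared dead after $((failures * ping_interval))s"
```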

Gaps and how to fill them:
1. binlog server: binary-log redundancy
2. Application transparency: VIP failover (keepalived, or MHA's built-in VIP script)
3. Failure notification
4. Self-healing (not yet available in MHA; RDS and TDSQL have it)


4. MHA Extended Features

4.1 The MHA VIP Feature
(1) Prepare the script
[root@db03 ~]# cp master_ip_failover.txt /usr/local/bin/master_ip_failover
[root@db03 ~]# cd /usr/local/bin/
[root@db03 bin]# chmod +x master_ip_failover
[root@db03 bin]# dos2unix master_ip_failover 

Edit the script's variables:
[root@db03 bin]# vim master_ip_failover
...
my $vip = '10.0.0.55/24';     	#VIP address [an IP not already in use on the network]
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";	#interface name
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
...
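Stripped of the Perl wrapper, the script's VIP handling is just two ifconfig calls. A standalone shell sketch (interface name and address are this lab's assumptions; actually moving a VIP requires root):

```shell
# Helper functions mirroring what master_ip_failover does with the VIP.
vip="10.0.0.55/24"; key=1; dev="eth0"
start_vip() { /sbin/ifconfig "$dev:$key" "$vip"; }   # bring the VIP up
stop_vip()  { /sbin/ifconfig "$dev:$key" down; }     # take the VIP down
type start_vip stop_vip >/dev/null && echo "helpers defined"
```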

(2) Modify the manager configuration
Add the parameter:
[root@db03 bin]# vim /etc/mha/app1.cnf
master_ip_failover_script=/usr/local/bin/master_ip_failover

(3) Manually bring up the VIP (on the master)
[root@db01 ~]# ifconfig eth0:1 10.0.0.55/24

(4) Restart MHA on db03 [stop first, then start]
masterha_stop --conf=/etc/mha/app1.cnf

nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

(5) Check the status
masterha_check_status --conf=/etc/mha/app1.cnf
4.2 binlog server (db03)
4.2.1 Parameters:
vim /etc/mha/app1.cnf 
[binlog1]
no_master=1		#never choose this node as a candidate master
hostname=10.0.0.53
master_binlog_dir=/data/mysql/binlog

4.2.2 Create the dedicated binlog-server directory and set ownership
mkdir -p /data/mysql/binlog
chown -R mysql.mysql /data/*

4.2.3 Pull the master's binary logs
cd /data/mysql/binlog    
mysqlbinlog  -R --host=10.0.0.51 --user=mha --password=mha --raw  --stop-never mysql-bin.000001 &

4.2.4 Restart MHA [stop first, then start]
masterha_stop --conf=/etc/mha/app1.cnf
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
4.3 Email Alerts
1. Parameter:
report_script=/usr/local/bin/send

2. Prepare the mail script
send_report
(1) Upload the mail script (from email_2019-最新.zip) to /usr/local/bin/ and grant it execute permission.
(2) Add the script to the MHA configuration file so MHA can call it.

3. Modify the manager configuration to call the mail script
vi /etc/mha/app1.cnf
report_script=/usr/local/bin/send

4. Stop MHA
masterha_stop --conf=/etc/mha/app1.cnf

5. Start MHA
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

6. Shut down the master and watch for the warning email
6.1 Verify that the VIP exists
 inet 10.0.0.55/24 brd 10.0.0.255 scope global secondary eth0:1
 
6.2 Stop MySQL on the master
pkill mysqld

6.3 The mail script:
[root@db03 bin]# cat /usr/local/bin/testpl 
#!/bin/bash
/usr/local/bin/sendEmail -o tls=no -f SENDER_ADDRESS -t 991540698@qq.com -s smtp.126.com:25 -xu SMTP_USER -xp SMTP_PASSWORD -u "MHA Warning" -m "YOUR MHA MAY BE FAILOVER" &>/tmp/sendmail.log
7. MHA Failure Recovery:
After the failover, the VIP has floated over to db02:
 inet 10.0.0.55/24 brd 10.0.0.255 scope global secondary eth0:1
 
db03 [(none)]>show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.52
                  Master_User: repl
                  Master_Port: 3306

[root@db03 bin]# masterha_check_status --conf=/etc/mha/app1.cnf
                  NOT RUNNING

Recovery procedure:
7.1. Recover the failed node db01
/etc/init.d/mysqld start 

7.2. Repair replication [on db01]
Rejoin db01 to the topology as a slave:
change master to 
master_host='10.0.0.52',
master_user='repl',
master_password='123' ,
MASTER_AUTO_POSITION=1;

start slave;

7.3. Repair the binlog server (db03)
[root@db03 binlog]# cd /data/mysql/binlog/
[root@db03 binlog]# rm -rf *

Pull logs from the current master:
[root@db03 binlog]#  mysqlbinlog  -R --host=10.0.0.52 --user=mha --password=mha --raw  --stop-never mysql-bin.000001 &
 
7.4. Check the VIP on the new master
[root@db02 ~]# ifconfig -a
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.55  netmask 255.255.255.0  broadcast 10.0.0.255
        ether 00:0c:29:18:0c:cf  txqueuelen 1000  (Ethernet)

7.5. Check the node entries in the configuration file (db03)
[root@db03 bin]# vim /etc/mha/app1.cnf 
.....
[server1]
hostname=10.0.0.51
port=3306

[server2]
hostname=10.0.0.52
port=3306

[server3]
hostname=10.0.0.53
port=3306
.....

7.6. Status checks
SSH trust check
masterha_check_ssh  --conf=/etc/mha/app1.cnf 

Replication check
masterha_check_repl  --conf=/etc/mha/app1.cnf 

7.7. Start MHA
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

7.8 Health check
[root@db03 bin]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:11716) is running(0:PING_OK), master:10.0.0.52

7.9 Recovery is complete.

5. Read/Write Splitting with Atlas + MHA

5.1 Introduction

Atlas is a MySQL-protocol data-middleware project developed and maintained by the infrastructure team of Qihoo 360's Web Platform Department.
It is based on mysql-proxy 0.8.2, which it optimizes and extends with new features.
Inside 360, MySQL services running behind Atlas handle tens of billions of read/write requests per day.
Download:
https://github.com/Qihoo360/Atlas/releases
Notes:
1. Atlas only runs on 64-bit systems.
2. On CentOS 5.x install Atlas-XX.el5.x86_64.rpm; on CentOS 6.x install Atlas-XX.el6.x86_64.rpm.
3. The backend MySQL version should be newer than 5.1; MySQL 5.6 or later is recommended.

5.2 Installation and Configuration [db03]

1. Install
yum install -y Atlas*

cd /usr/local/mysql-proxy/conf
mv test.cnf test.cnf.bak

2. Write the configuration file
vi test.cnf

[mysql-proxy]
admin-username = user  #admin user
admin-password = pwd	#admin password
proxy-backend-addresses = 10.0.0.55:3306   #write node: the MHA VIP
proxy-read-only-backend-addresses = 10.0.0.51:3306,10.0.0.53:3306     #read-only slaves
pwds = repl:3yb5jEku5h4=,mha:O2jBXONX098=   #real backend users, passwords encrypted with Atlas's bin/encrypt tool
daemon = true     #run in the background
keepalive = true    #keep the process alive (restart it if it dies)
event-threads = 8   #number of event-handling threads
log-level = message
log-path = /usr/local/mysql-proxy/log
sql-log=ON     #log the SQL statements Atlas forwards
proxy-address = 0.0.0.0:33060		#client-facing service port
admin-address = 0.0.0.0:2345         #admin interface port
charset=utf8     #character set

3. Start Atlas
/usr/local/mysql-proxy/bin/mysql-proxyd test start

[root@db03 bin]# netstat -lntup |grep proxy
tcp        0      0 0.0.0.0:33060           0.0.0.0:*               LISTEN      12930/mysql-proxy   
tcp        0      0 0.0.0.0:2345            0.0.0.0:*               LISTEN      12930/mysql-proxy   

PS: in a MySQL 8.0 environment, do not use port 33060; it conflicts with MySQL 8.0's X Protocol port.
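Before starting, it may be worth confirming that neither port is already taken (a sketch; it falls back to netstat where ss is unavailable):

```shell
# Report whether the Atlas service and admin ports are free on this host.
for p in 33060 2345; do
  if (ss -lnt 2>/dev/null || netstat -lnt 2>/dev/null) | grep -q ":$p "; then
    echo "port $p in use"
  else
    echo "port $p free"
  fi
done
```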

5.3 Atlas Function Tests [db03]

1. Test reads:
mysql -umha -pmha  -h 10.0.0.53 -P 33060 
db03 [(none)]>select @@server_id;
The results alternate between 51 and 53, the read-only nodes.

2. Test writes:
mysql> begin;select @@server_id;commit;		#routed to the write node (the master behind the VIP)
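A quick probe loop makes the alternation visible (a sketch; it assumes the proxy configured in 5.2 is listening, and just reports if it is not):

```shell
# Four reads through the Atlas service port; server_id should alternate 51/53.
for i in 1 2 3 4; do
  mysql -umha -pmha -h 10.0.0.53 -P 33060 --connect-timeout=3 -N -e "select @@server_id" 2>/dev/null \
    || echo "probe $i: Atlas not reachable from here"
done
```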

5.4 Atlas Administration

1. Log in to the admin interface
[root@db03 ~]# mysql -uuser -ppwd -h10.0.0.53 -P2345
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.0.99-agent-admin

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

2. View the help information
db03 [(none)]>select * from help;
+----------------------------+---------------------------------------------------------+
| SELECT * FROM help         | shows this help                                         |
| SELECT * FROM backends     | lists the backends and their state                      |
| SET OFFLINE $backend_id    | offline backend server, $backend_id is backend_ndx's id |
| SET ONLINE $backend_id     | online backend server, ...                              |
| ADD MASTER $backend        | example: "add master 127.0.0.1:3306", ...               |
| ADD SLAVE $backend         | example: "add slave 127.0.0.1:3306", ...                |
| REMOVE BACKEND $backend_id | example: "remove backend 1", ...                        |
| SELECT * FROM clients      | lists the clients                                       |
| ADD CLIENT $client         | example: "add client 192.168.1.2", ...                  |
| REMOVE CLIENT $client      | example: "remove client 192.168.1.2", ...               |
| SELECT * FROM pwds         | lists the pwds                                          |
| ADD PWD $pwd               | example: "add pwd user:raw_password", ...               |
| ADD ENPWD $pwd             | example: "add enpwd user:encrypted_password", ...       |
| REMOVE PWD $pwd            | example: "remove pwd user", ...                         |
| SAVE CONFIG                | save the backends to config file                        |
| SELECT VERSION             | display the version of Atlas                            |
+----------------------------+-----------------------------------

3. List all backend nodes
db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.51:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
3 rows in set (0.00 sec)

4. Take a node offline
db03 [(none)]>SET OFFLINE 3;
+-------------+----------------+---------+------+
| backend_ndx | address        | state   | type |
+-------------+----------------+---------+------+
|           3 | 10.0.0.53:3306 | offline | ro   |
+-------------+----------------+---------+------+
1 row in set (0.00 sec)

5. Bring a node back online
db03 [(none)]>SET ONLINE 3;
+-------------+----------------+---------+------+
| backend_ndx | address        | state   | type |
+-------------+----------------+---------+------+
|           3 | 10.0.0.53:3306 | unknown | ro   |
+-------------+----------------+---------+------+
1 row in set (0.00 sec)

6. Remove a node
db03 [(none)]>REMOVE BACKEND 3;
Empty set (0.00 sec)

db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.51:3306 | up    | ro   |
+-------------+----------------+-------+------+
2 rows in set (0.00 sec)
PS: the node itself is not down; it has only been removed from Atlas's view.

7. Add a node
db03 [(none)]>ADD SLAVE 10.0.0.53:3306;
Empty set (0.00 sec)

db03 [(none)]>SELECT * FROM backends;
+-------------+----------------+-------+------+
| backend_ndx | address        | state | type |
+-------------+----------------+-------+------+
|           1 | 10.0.0.55:3306 | up    | rw   |
|           2 | 10.0.0.51:3306 | up    | ro   |
|           3 | 10.0.0.53:3306 | up    | ro   |
+-------------+----------------+-------+------+
3 rows in set (0.00 sec)

8. List the backend users registered with Atlas
db03 [(none)]>SELECT * FROM pwds;
+----------+--------------+
| username | password     |
+----------+--------------+
| repl     | 3yb5jEku5h4= |
| mha      | O2jBXONX098= |
+----------+--------------+
2 rows in set (0.00 sec)

9. Add a user
db02 [(none)]>grant all on *.* to test@'10.0.0.%' identified by '123';
Query OK, 0 rows affected, 1 warning (0.00 sec)

db03 [(none)]>ADD PWD test:123;
Empty set (0.00 sec)

db03 [(none)]>SELECT * FROM pwds;
+----------+--------------+
| username | password     |
+----------+--------------+
| repl     | 3yb5jEku5h4= |
| mha      | O2jBXONX098= |
| test     | 3yb5jEku5h4= |
+----------+--------------+
3 rows in set (0.00 sec)

PS: adding a user takes two steps:
(1) create it in the backend database;
(2) add it in Atlas (note that the password is encrypted automatically).
Atlas can use the account only after both steps are done.
These changes are in memory only; run "SAVE CONFIG" to flush them to the configuration file so they persist across sessions.
Reposted from blog.csdn.net/chengyinwu/article/details/103817608