ansible-playbook in practice: deploying redis + sentinel + twemproxy

Introduction

Twemproxy, also known as nutcracker, is a fast, lightweight proxy for Redis and Memcached open-sourced by Twitter. It is a single-threaded proxy that supports the Memcached ASCII protocol and the newer Redis protocol.
By introducing a proxy layer, Twemproxy manages and distributes its backend Redis or Memcached instances as a single unit, so the application only talks to Twemproxy and does not need to care how many real Redis or Memcached stores sit behind it.
In this article, redis + sentinel + twemproxy are combined to provide the following:
1. Twemproxy at the front (active/standby nodes) acts as the proxy, sharding requests across the backend Redis instances under unified management;
2. In every shard, the Redis slave is a read-only replica of its Redis master;
3. Redis sentinel continuously monitors the master of every shard; when a master fails and becomes unavailable, sentinel notifies and starts an automatic failover;
4. After a failover, sentinel triggers a script (configured via the client-reconfig-script parameter); the script obtains the new master, updates the twemproxy configuration accordingly, and restarts twemproxy.

Architecture

Redis cluster layout

Cluster   Role     IP            Port    Components
redis1    master   10.20.10.43   6379    redis+sentinel
redis1    slave    10.20.10.44   6380    redis+sentinel
redis2    master   10.20.10.44   6379    redis+sentinel
redis2    slave    10.20.10.45   6380    redis+sentinel
redis3    master   10.20.10.45   6379    redis+sentinel
redis3    slave    10.20.10.43   6380    redis+sentinel
proxy     proxy1   10.20.10.46   22121   twemproxy+sentinel
proxy     proxy2   10.20.10.47   22121   twemproxy+sentinel
proxy     proxy3   10.20.10.48   22121   twemproxy+sentinel

As shown in the table above:
1. redis + sentinel is installed on 43/44/45, and the three machines replicate each other in master/slave pairs, so that losing any single machine does not take down the whole cluster.
2. twemproxy + sentinel is installed on 46/47/48. For each of the three Redis master/slave pairs, sentinel monitors the master; when a master fails, sentinel performs the failover and then uses client-reconfig-script to trigger client-reconfig.sh, which points the twemproxy pool at the new master and restarts twemproxy so that it keeps serving traffic. An illustrative pool configuration is sketched below.
Because each server hosts both a master and a slave, and the proxies are kept separate from the Redis nodes, we standardize on port 6379 for masters and 6380 for slaves to simplify the automated install; the sentinels on the proxy nodes are installed separately.
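
For orientation, this is roughly what one twemproxy pool looks like once it has been pointed at the three masters from the table above. It is only an illustration; the actual nutcracker.yml is generated later by twemproxy_install.sh.

redis_proxy:
  listen: 10.20.10.46:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - 10.20.10.43:6379:1 redis1
   - 10.20.10.44:6379:1 redis2
   - 10.20.10.45:6379:1 redis3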

Installation approach

1. Variable definitions
The target servers are split into three roles: master, slave and proxy. redis + sentinel is installed on the master and slave nodes, and sentinel + twemproxy on the proxy nodes. Since these services may be started by an application user rather than root, the install paths, run users and other settings are defined up front as variables.
2. Installation scripts
redis_install.sh installs redis and sentinel on the master and slave nodes and starts the services; on the proxy nodes it installs and starts sentinel only.
twemproxy_install.sh installs and starts twemproxy on the proxy nodes; it also registers client-reconfig.sh as sentinel's client-reconfig-script switchover script.

Playbook directory structure

├── hosts
├── redis_cluster.yml
└── roles
    └── redis_cluster_install
        ├── files
        │   └── redis
        │       ├── autoconf-2.69.tar.gz
        │       ├── automake-1.15.tar.gz
        │       ├── libtool-2.4.6.tar.gz
        │       ├── redis-3.2.9.tar.gz
        │       ├── redis.conf
        │       ├── redis.conf.bak
        │       ├── sentinel.conf
        │       ├── sentinel.conf.bak
        │       └── twemproxy-master.zip
        ├── handlers
        ├── meta
        ├── tasks
        │   └── main.yml
        ├── templates
        │   ├── client-reconfig.sh
        │   ├── redis_install.sh
        │   └── twemproxy_install.sh
        └── vars
            └── main.yml

Notes:
1. autoconf-2.69.tar.gz, automake-1.15.tar.gz, libtool-2.4.6.tar.gz and twemproxy-master.zip under files/ are the source archives needed to build twemproxy; redis.conf and sentinel.conf are template configuration files prepared in advance with placeholders for variable substitution;
2. Under templates/, redis_install.sh is the script that installs the redis master/slave and sentinel on the master, slave and proxy nodes; client-reconfig.sh is the switchover script referenced by sentinel's client-reconfig-script; twemproxy_install.sh is the twemproxy installation script.
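
Tip: if you rebuild this role from scratch, the skeleton can be generated with ansible-galaxy and then filled in with the files listed above; only the files/redis subdirectory has to be created by hand (ansible-galaxy may also create extra directories such as defaults/ that this role does not use):

cd roles
ansible-galaxy init redis_cluster_install
mkdir -p redis_cluster_install/files/redis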

Step-by-step

1. Create the inventory file (hosts)

vim hosts
[redis_master]
10.20.10.43 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.44 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.45 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave1]
10.20.10.43 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave2]
10.20.10.44 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_slave3]
10.20.10.45 ansible_ssh_user=root ansible_ssh_pass=root1234
[redis_proxy]
10.20.10.46 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.47 ansible_ssh_user=root ansible_ssh_pass=root1234
10.20.10.48 ansible_ssh_user=root ansible_ssh_pass=root1234
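
Before going any further, it is worth confirming that Ansible can reach every host in this inventory, for example with the ping module:

ansible -i hosts all -m ping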

Note: the group names slave1, slave2 and slave3 carry no special meaning and can be chosen freely; the actual master/slave mapping is defined in the playbook below.
2. Create the playbook (role assignments)

vim redis_cluster.yml
#role: master
- hosts: redis_master
  remote_user: root
  gather_facts: False
  roles:
    - {role: redis_cluster_install,redis_role: master}
#role: slave, belongs to cluster 3
- hosts: redis_slave1
  remote_user: root
  gather_facts: False
  roles:
    - {role: redis_cluster_install,redis_role: slave,cluster_no: 3}
#role: slave, belongs to cluster 1
- hosts: redis_slave2
  remote_user: root
  gather_facts: False
  roles:
    - {role: redis_cluster_install,redis_role: slave,cluster_no: 1}
#role: slave, belongs to cluster 2
- hosts: redis_slave3
  remote_user: root
  gather_facts: False
  roles:
    - {role: redis_cluster_install,redis_role: slave,cluster_no: 2}
#role: proxy
- hosts: redis_proxy
  remote_user: root
  gather_facts: False
  roles:
    - {role: redis_cluster_install,redis_role: proxy}

Where:
redis_slave1 is 10.20.10.43, which per the table above belongs to data cluster redis3, so cluster_no is 3;
redis_slave2 is 10.20.10.44, which belongs to data cluster redis1, so cluster_no is 1;
redis_slave3 is 10.20.10.45, which belongs to data cluster redis2, so cluster_no is 2.
3. Create the variables file

vim roles/redis_cluster_install/vars/main.yml

#run user, install dir and source dir on the master/slave nodes; app is the application user's home
#redis server dir
redis_user: app
install_dir: /home/ap/app
source_dir: /home/ap/app/src/

#run user, install dir and source dir on the proxy nodes; appwb is the application user's home
#proxy server dir
proxy_user: appwb
proxy_install_dir: /home/ap/appwb
proxy_source_dir: /home/ap/appwb/src/

#redis master/slave ports
redis_master_port: 6379
redis_slave_port: 6380
maxmemory: 2gb

#sentinel port and quorum
sen_port: 26379
sen_quorum: 3

#cluster list
cluster1:
- masterip: 10.20.10.43
  sen_mastername: redis1
cluster2:
- masterip: 10.20.10.44
  sen_mastername: redis2
cluster3:
- masterip: 10.20.10.45
  sen_mastername: redis3

#twemproxy port
tw_port: 22121

Above, keyed by the cluster_no values set in the playbook, we list the name and master IP of each of the three clusters; twemproxy uses the same data. A minimal example of how these lists are rendered follows.
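
As an illustration, the templates below iterate these lists with Jinja2 for loops; for a slave with cluster_no 1, the fragment from redis_install.sh

{% for i in cluster1 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}

is rendered at template time with the master IP filled in (10.20.10.43), while $redis_master_port and $redis_slave_port remain shell variables that the script resolves at run time.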

4. Create the task file

vim roles/redis_cluster_install/tasks/main.yml

#copy the redis source bundle for the master and slave nodes; since every master host is also a slave host, keying on master is enough
- name: copy redis dir to client
  copy: src=redis dest={{source_dir}} owner=root group=root
  when: redis_role == "master"

#copy the redis source bundle to the proxy nodes
- name: copy redis dir to client
  copy: src=redis dest={{proxy_source_dir}} owner=root group=root
  when: redis_role == "proxy"

#copy redis_install.sh to the master/slave nodes
- name: copy redis_install script to client
  template: src=redis_install.sh dest={{source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "master" or redis_role == "slave"

#copy redis_install.sh to the proxy nodes
- name: copy redis_install script to client
  template: src=redis_install.sh dest={{proxy_source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "proxy"

#run redis_install.sh on the master/slave nodes
- name: install redis and sentinel
  shell: bash {{source_dir}}/redis/redis_install.sh
  when: redis_role == "master" or redis_role == "slave"

#run redis_install.sh on the proxy nodes
- name: install redis and sentinel
  shell: bash {{proxy_source_dir}}/redis/redis_install.sh
  when: redis_role == "proxy"

#copy client-reconfig.sh to the proxy nodes
- name: copy client-reconfig script to client
  template: src=client-reconfig.sh dest={{proxy_install_dir}}/redis/ owner={{proxy_user}} group={{proxy_user}} mode=0775
  when: redis_role == "proxy"

#copy twemproxy_install.sh to the proxy nodes
- name: copy twemproxy_install script to client
  template: src=twemproxy_install.sh dest={{proxy_source_dir}}/redis/ owner=root group=root mode=0775
  when: redis_role == "proxy"

#run twemproxy_install.sh on the proxy nodes
- name: install twemproxy
  shell: bash {{proxy_source_dir}}/redis/twemproxy_install.sh
  when: redis_role == "proxy"

Note: why does every task specify the node role?
Because redis_install.sh branches on the master, slave and proxy roles. Although the script name is the same everywhere, the content pushed from the Ansible control node to each client differs per role once the template is rendered. Pushing it per role therefore lets a single script handle the installation and configuration of all three roles.
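
Before installing anything, it is also easy to double-check which hosts (and therefore which redis_role values) each play will touch; ansible-playbook can list the play/host and task matrix without executing any task:

ansible-playbook -i hosts redis_cluster.yml --list-hosts
ansible-playbook -i hosts redis_cluster.yml --list-tasks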

5. Create the template scripts

(1)vim templates/redis_install.sh

#!/bin/bash
#author: yanggd
#content: install redis and sentinel
#redis directories
source_dir={{source_dir}}
install_dir={{install_dir}}
redis_user={{redis_user}}
#proxy directories
proxy_source_dir={{proxy_source_dir}}
proxy_install_dir={{proxy_install_dir}}
proxy_user={{proxy_user}}

#redis max memory
maxmemory={{maxmemory}}
#redis master/slave ports
redis_master_port={{redis_master_port}}
redis_slave_port={{redis_slave_port}}
#sentinel port and quorum
sen_port={{sen_port}}
sen_quorum={{sen_quorum}}

#install redis on the master nodes
{% if redis_role == "master" %}
yum install make gcc gcc-c++ -y
id $redis_user &> /dev/null
if [ $? -ne 0 ];then
        useradd -d $install_dir $redis_user
fi
#install redis
cd $source_dir/redis
tar -zxf redis-3.2.9.tar.gz
cd redis-3.2.9
make MALLOC=libc
make PREFIX=$install_dir/redis install
#init redis dir
mkdir $install_dir/redis/{data,conf,logs}
{% endif %}

#install redis on the proxy nodes
{% if redis_role == "proxy" %}
yum install make gcc gcc-c++ -y
id $proxy_user &> /dev/null
if [ $? -ne 0 ];then
        useradd -d $proxy_install_dir $proxy_user
fi
#install redis
cd $proxy_source_dir/redis
tar -zxf redis-3.2.9.tar.gz
cd redis-3.2.9
make MALLOC=libc
make PREFIX=$proxy_install_dir/redis install
#init redis dir
mkdir $proxy_install_dir/redis/{data,conf,logs}
#change install_dir owner
chown -R $proxy_user.$proxy_user $proxy_install_dir
{% endif %}

#get the IP address to listen on (this assumes the interface is eth1)
ip=`ifconfig eth1|grep "inet addr"|awk '{print $2}'|awk -F: '{print $2}'`

#substitute variables into the template configuration files
#modify redis conf
cd $install_dir/redis/conf

#generate the master configuration file
{% if redis_role == "master" %}
cp $source_dir/redis/redis.conf redis_$redis_master_port.conf
sed -i "s:bind:bind $ip:g" redis_$redis_master_port.conf
sed -i "s:port:port $redis_master_port:g" redis_$redis_master_port.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/redis_$redis_master_port.pid':g" redis_$redis_master_port.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/redis_$redis_master_port.log':g" redis_$redis_master_port.conf
sed -i "s:dump:redis_$redis_master_port:g" redis_$redis_master_port.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" redis_$redis_master_port.conf
sed -i "s:memsize:$maxmemory:g" redis_$redis_master_port.conf

#change install_dir owner
chown -R $redis_user.$redis_user $install_dir

#modify kernel parameters (append so existing /etc/sysctl.conf settings are kept)
cat >> /etc/sysctl.conf << EOF
net.core.somaxconn = 10240
vm.overcommit_memory = 1
EOF

sysctl -p

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
#start the redis master
su $redis_user -c "$install_dir/redis/bin/redis-server $install_dir/redis/conf/redis_$redis_master_port.conf"
{% endif %}

#generate the slave configuration file
{% if redis_role == "slave" %}
cp $source_dir/redis/redis.conf redis_$redis_slave_port.conf
sed -i "s:bind:bind $ip:g" redis_$redis_slave_port.conf
sed -i "s:port:port $redis_slave_port:g" redis_$redis_slave_port.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/redis_$redis_slave_port.pid':g" redis_$redis_slave_port.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/redis_$redis_slave_port.log':g" redis_$redis_slave_port.conf
sed -i "s:dump:redis_$redis_slave_port:g" redis_$redis_slave_port.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" redis_$redis_slave_port.conf
sed -i "s:memsize:$maxmemory:g" redis_$redis_slave_port.conf

#pick the corresponding master according to the cluster_no set in the play
{% if cluster_no == 1 %}
{% for i in cluster1 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}

{% if cluster_no == 2 %}
{% for i in cluster2 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}

{% if cluster_no == 3 %}
{% for i in cluster3 %}
echo "slaveof {{i.masterip}} $redis_master_port" >> redis_$redis_slave_port.conf
{% endfor %}
{% endif %}

#change install_dir owner
chown -R $redis_user.$redis_user $install_dir

#start the redis slave
su $redis_user -c "$install_dir/redis/bin/redis-server $install_dir/redis/conf/redis_$redis_slave_port.conf"
{% endif %}

#configure sentinel to monitor the three master/slave pairs
{% if redis_role == "master" %}
cd $install_dir/redis/conf
cp $source_dir/redis/sentinel.conf sentinel.conf
{%for i in cluster1 %}
sed -i "s:sen_port:$sen_port:g" sentinel.conf
sed -i "s:pidfile:pidfile '$install_dir/redis/sentinel.pid':g" sentinel.conf
sed -i "s:dir:dir '$install_dir/redis/data/':g" sentinel.conf
sed -i "s:logfile:logfile '$install_dir/redis/logs/sentinel.log':g" sentinel.conf
sed -i "s:master1_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master1_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster2 %}
sed -i "s:master2_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master2_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster3 %}
sed -i "s:master3_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master3_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}

#change install_dir owner
chown -R $redis_user.$redis_user $install_dir

#start sentinel
su $redis_user -c "$install_dir/redis/bin/redis-sentinel $install_dir/redis/conf/sentinel.conf"
{% endif %}

#configure sentinel on the proxy nodes
{% if redis_role == "proxy" %}
cd $proxy_install_dir/redis/conf
cp $proxy_source_dir/redis/sentinel.conf sentinel.conf
{%for i in cluster1 %}
sed -i "s:sen_port:$sen_port:g" sentinel.conf
sed -i "s:pidfile:pidfile '$proxy_install_dir/redis/sentinel.pid':g" sentinel.conf
sed -i "s:dir:dir '$proxy_install_dir/redis/data/':g" sentinel.conf
sed -i "s:logfile:logfile '$proxy_install_dir/redis/logs/sentinel.log':g" sentinel.conf
sed -i "s:master1_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master1_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster2 %}
sed -i "s:master2_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master2_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}
{%for i in cluster3 %}
sed -i "s:master3_name:{{i.sen_mastername}}:g" sentinel.conf
sed -i "s:master3_host:{{i.masterip}}:g" sentinel.conf
sed -i "s:master_port:$redis_master_port:g" sentinel.conf
sed -i "s:quorum:$sen_quorum:g" sentinel.conf
{% endfor %}

#change install_dir owner
chown -R $proxy_user.$proxy_user $proxy_install_dir

#start sentinel on the proxy nodes
su $proxy_user -c "$proxy_install_dir/redis/bin/redis-sentinel $proxy_install_dir/redis/conf/sentinel.conf"

{% endif %}
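
Once this script has run, the result can be sanity-checked on any Redis node with plain redis-cli (a sketch assuming the default variables above, i.e. install_dir /home/ap/app, and assuming eth1 carries the IPs from the table):

/home/ap/app/redis/bin/redis-cli -h 10.20.10.43 -p 6379 info replication
/home/ap/app/redis/bin/redis-cli -h 10.20.10.44 -p 6380 info replication
/home/ap/app/redis/bin/redis-cli -h 10.20.10.43 -p 26379 sentinel masters

The first command should report role:master with one connected slave, the second role:slave, and the third should list the three monitored masters redis1/redis2/redis3.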

(2)vim templates/client-reconfig.sh

#!/bin/bash
#content: rewrite the twemproxy configuration when sentinel performs a failover
#arguments passed by sentinel: <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>

proxy_install_dir={{proxy_install_dir}}

monitor_name="$1"
master_old_ip="$4"
master_old_port="$5"
master_new_ip="$6"
master_new_port="$7"

tw_bin=$proxy_install_dir/twemproxy/sbin/nutcracker
tw_conf=$proxy_install_dir/twemproxy/conf/nutcracker.yml
tw_log=$proxy_install_dir/twemproxy/logs/twemproxy.log
tw_cmd="$tw_bin -c $tw_conf -o $tw_log -v 11 -d"

#modify twemproxy conf
sed -i "s/${master_old_ip}:${master_old_port}/${master_new_ip}:${master_new_port}/g" $tw_conf

#kill the running twemproxy (match on the nutcracker binary path so this script does not kill itself)
ps -ef|grep "$tw_bin"|grep -v grep|awk '{print $2}'|xargs -r kill

#start twemproxy
$tw_cmd

sleep 1
ps -ef|grep twemproxy|grep -v grep
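
Sentinel invokes this script with the argument order given in the header comment, so once the installation below has completed it can also be exercised by hand on a proxy node before relying on a real failover. A hypothetical test run (the role/state values are examples only) that rewrites 10.20.10.43:6379 to 10.20.10.44:6380 in nutcracker.yml and restarts twemproxy:

bash /home/ap/appwb/twemproxy/client-reconfig.sh redis1 leader start 10.20.10.43 6379 10.20.10.44 6380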

(3)vim templates/twemproxy_install.sh

#!/bin/bash
#author: yanggd
#content: install twemproxy and add sentinel client-reconfig.sh
#install twemproxy on the proxy nodes
proxy_source_dir={{proxy_source_dir}}
proxy_install_dir={{proxy_install_dir}}

sen_port={{sen_port}}
redis_master_port={{redis_master_port}}

#proxy run user
proxy_user={{proxy_user}}

#twemproxy
tw_port={{tw_port}}

#install twemproxy
cd $proxy_source_dir/redis
tar -zxf autoconf-2.69.tar.gz
cd autoconf-2.69
./configure
make && make install

cd ..
tar -zxf automake-1.15.tar.gz
cd automake-1.15
./configure
make && make install

cd ..
tar -zxf libtool-2.4.6.tar.gz
cd libtool-2.4.6
./configure
make && make install

cd ..
unzip twemproxy-master.zip
cd twemproxy-master
aclocal
autoreconf -f -i -Wall,no-obsolete
./configure --prefix=$proxy_install_dir/twemproxy
make && make install

#init twemproxy
mkdir -p $proxy_install_dir/twemproxy/{conf,logs}

ip=`ifconfig eth1|grep "inet addr"|awk '{print $2}'|awk -F: '{print $2}'`

#generate the twemproxy configuration file
cat >> $proxy_install_dir/twemproxy/conf/nutcracker.yml <<EOF
redis_proxy:
  listen: $ip:$tw_port
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - masterip1:redis_master_port:1 sen_mastername1
   - masterip2:redis_master_port:1 sen_mastername2
   - masterip3:redis_master_port:1 sen_mastername3
EOF

#fill in the master IP, port and pool name of the three proxied clusters
{% for i in cluster1 %}
sed -i "s#masterip1#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#redis_master_port#$redis_master_port#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername1#${{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}
{% for i in cluster2 %}
sed -i "s#masterip2#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername2#${{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}
{% for i in cluster3 %}
sed -i "s#masterip3#{{i.masterip}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
sed -i "s#sen_mastername3#${{i.sen_mastername}}#g" $proxy_install_dir/twemproxy/conf/nutcracker.yml
{% endfor %}

#change twemproxy owner
chown -R $proxy_user.$proxy_user $proxy_install_dir

#start twemproxy
su $proxy_user -c "$proxy_install_dir/twemproxy/sbin/nutcracker -c $proxy_install_dir/twemproxy/conf/nutcracker.yml -o $proxy_install_dir/twemproxy/logs/twemproxy.log -v 11 -d"

#mv client-reconfig.sh
mv $proxy_install_dir/redis/client-reconfig.sh $proxy_install_dir/twemproxy/client-reconfig.sh

#register client-reconfig.sh as sentinel's client-reconfig-script for each monitored master
#config sentinel
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis1 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis2 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
$proxy_install_dir/redis/bin/redis-cli -h $ip -p $sen_port sentinel set redis3 client-reconfig-script $proxy_install_dir/twemproxy/client-reconfig.sh
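
After this script finishes, the generated configuration can be checked for syntax errors with nutcracker's built-in test mode, and the listener verified (paths assume the default proxy_install_dir above):

/home/ap/appwb/twemproxy/sbin/nutcracker -t -c /home/ap/appwb/twemproxy/conf/nutcracker.yml
ss -lnt | grep 22121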

The sentinel client-reconfig-script setting at the end can also be configured directly in sentinel.conf instead, for example:

sentinel client-reconfig-script  redis1 $proxy_install_dir/twemproxy/client-reconfig.sh
sentinel client-reconfig-script  redis2 $proxy_install_dir/twemproxy/client-reconfig.sh
sentinel client-reconfig-script  redis3 $proxy_install_dir/twemproxy/client-reconfig.sh
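
Either way, it is easy to confirm from a proxy node that its sentinel sees the three masters and has recorded the reconfig script (SENTINEL SET persists the setting back into sentinel.conf), for example:

/home/ap/appwb/redis/bin/redis-cli -h 10.20.10.46 -p 26379 sentinel get-master-addr-by-name redis1
grep client-reconfig-script /home/ap/appwb/redis/conf/sentinel.conf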

6. Run the playbook

#check the playbook first (--syntax-check for syntax, -C for a dry run in check mode)
ansible-playbook -i hosts --syntax-check redis_cluster.yml
ansible-playbook -i hosts -C redis_cluster.yml
#run the installation
ansible-playbook -i hosts redis_cluster.yml

Once the playbook has finished, the whole cluster is up and running. The configuration can then be checked on each node; a few sample checks are sketched below.
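
A minimal end-to-end check, assuming redis-cli is available on the client machine: write a key through one proxy and read it back through another, then stop one of the masters and watch sentinel promote the slave while client-reconfig.sh rewrites nutcracker.yml.

redis-cli -h 10.20.10.46 -p 22121 set foo bar
redis-cli -h 10.20.10.47 -p 22121 get foo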

7. Troubleshooting
ansible-playbook may abort with an error part-way through the install. If that happens, inspect the rendered script in the source directory on the affected node; it shows exactly what that node tried to run, for example:

cd /home/ap/app/src/
vim redis_install.sh

This redis_install.sh is exactly the installation script that was rendered for, and executed on, the current node.

Summary

This automated deployment wires three Redis master/slave pairs together with twemproxy. Because everything is driven by variables, it can be adapted to different environments and greatly reduces the error rate compared with a manual rollout. The same approach can be extended to other combinations, such as one master and two slaves with haproxy and sentinel, and more importantly it can be reused for automated deployments in other environments.

Finally, the redis and sentinel template configuration files:
(1)redis.conf

bind 127.0.0.1
protected-mode yes
port
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
supervised no
pidfile
loglevel warning
logfile
databases 16
#save 900 1
#save 300 10
#save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxclients 4096
maxmemory memsize
maxmemory-policy allkeys-lru

(2)sentinel.conf

bind 0.0.0.0
daemonize yes
port sen_port
loglevel notice
pidfile
dir
logfile
sentinel monitor master1_name master1_host master_port quorum
sentinel down-after-milliseconds master1_name 6000
sentinel failover-timeout master1_name 18000
sentinel parallel-syncs master1_name 1

sentinel monitor master2_name master2_host master_port quorum
sentinel down-after-milliseconds master2_name 6000
sentinel failover-timeout master2_name 18000
sentinel parallel-syncs master2_name 1

sentinel monitor master3_name master3_host master_port quorum
sentinel down-after-milliseconds master3_name 6000
sentinel failover-timeout master3_name 18000
sentinel parallel-syncs master3_name 1


Source: blog.csdn.net/yanggd1987/article/details/78864818