One-click delivery of a wordpress + zrlog + phpmyadmin stack with role-based ansible-playbook

I. Overview

1. System initialization: the initial configuration every host needs once the OS is installed, such as installing base software, tuning kernel parameters, and setting up yum repositories.
2. Function modules: the application services used in production — Nginx, PHP, Haproxy, Keepalived and the like. Each function gets its own role, and we call this collection of role directories the "function modules".
3. Business modules: the function modules define a large amount of base state that the business layer references directly, which is why a function module should be as complete and as independent as possible. Different business types then pull in different roles, so each business gets its own configuration files; finally we simply declare the state of each business in site.yml.

Address plan

Hostname                        eth0 (public, NAT)            eth1 (private, LAN)   Gateway        Role
route                           10.0.0.200 (Forward/SNAT on)  172.16.1.200          10.0.0.2       routes
dns-master                      10.0.0.91                     172.16.1.91           10.0.0.2       dnsservers
dns-slave                       10.0.0.92                     172.16.1.92           10.0.0.2       dnsservers
LVS-Master                      -                             172.16.1.3            172.16.1.200   lbservers
LVS-Backup                      -                             172.16.1.4            172.16.1.200   lbservers
haproxy-node1 (or nginx-node1)  -                             172.16.1.5            172.16.1.200   proxyservers
haproxy-node2 (or nginx-node2)  -                             172.16.1.6            172.16.1.200   proxyservers
web-node1                       -                             172.16.1.7            172.16.1.200   webservers
web-node2                       -                             172.16.1.8            172.16.1.200   webservers
web-node3                       -                             172.16.1.9            172.16.1.200   webservers
mysql-master                    -                             172.16.1.51           172.16.1.200   mysqlservers
mysql-slave                     -                             172.16.1.52           172.16.1.200   mysqlservers
redis-cluster                   -                             172.16.1.41           172.16.1.200   redisservers
redis-cluster                   -                             172.16.1.42           172.16.1.200   redisservers
redis-cluster                   -                             172.16.1.43           172.16.1.200   redisservers
nfs-server                      -                             172.16.1.32           172.16.1.200   nfsservers
rsync                           -                             172.16.1.31           172.16.1.200   rsyncservers
jumpserver                      -                             172.16.1.61           172.16.1.200   -
openvpn                         10.0.0.60                     172.16.1.60           172.16.1.200   -

Preparing the ansible directory

[root@manager /]# mkdir /ansible/roles -p
[root@manager /]# cd ansible/roles/
[root@manager roles]# cp /etc/ansible/hosts ./
[root@manager roles]# cp /etc/ansible/ansible.cfg ./

The ansible.cfg file — cache facts in redis:

[root@manager roles]# cat ansible.cfg 
[defaults]
inventory = ./hosts
host_key_checking = False
gathering = smart
fact_caching_timeout = 86400
fact_caching = redis
fact_caching_connection = 172.16.1.62:6379
forks = 50
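
Note: the redis fact-cache plugin needs the redis Python library on the control node; if it is missing, install it first:

[root@manager roles]# pip install redis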

The hosts inventory file:

[root@manager roles]# cat hosts
[routes]
172.16.1.200

[dnsservers]
172.16.1.91
172.16.1.92

[lbservers]
172.16.1.3
172.16.1.4

[proxyservers]
172.16.1.5
172.16.1.6

[webservers]
172.16.1.7
172.16.1.8
172.16.1.9

[mysqlservers]
172.16.1.51

[redisservers]
172.16.1.41

[nfsservers]
172.16.1.32

[rsyncservers]
172.16.1.31

Test connectivity. Hosts that fail have not received the public key yet; push the ansible control node's key to every node first.

[root@manager roles]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.1.51
[root@manager roles]# ansible all -m ping
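
If many hosts still need the key, a loop with sshpass saves the typing — a sketch, assuming every node shares the same root password (PASS is a placeholder):

[root@manager roles]# for ip in $(awk '/^172/{print $1}' hosts); do sshpass -p 'PASS' ssh-copy-id -o StrictHostKeyChecking=no root@$ip; done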

[root@manager roles]# ansible all --list-hosts # list every host in the current inventory
  hosts (14):
    172.16.1.7
    172.16.1.8
    172.16.1.9
    172.16.1.31
    172.16.1.3
    172.16.1.4
    172.16.1.5
    172.16.1.6
    172.16.1.41
    172.16.1.200
    172.16.1.32
    172.16.1.91
    172.16.1.92
    172.16.1.51



[root@manager roles]# mkdir group_vars # create the global variables directory
[root@manager roles]# touch group_vars/all

Network initialization

[root@manager roles]# cat network_init.yml 
- hosts: all:!dnsservers:!routes
  tasks:
  
    - name: delete default gateway
      lineinfile:
        path:  /etc/sysconfig/network-scripts/ifcfg-eth1
        regexp: '^GATEWAY='
        state: absent

    - name: add new gateway
      lineinfile:
        path:  /etc/sysconfig/network-scripts/ifcfg-eth1
        line: GATEWAY=172.16.1.200

    - name: delete default dns
      lineinfile:
        path:  /etc/sysconfig/network-scripts/ifcfg-eth1
        regexp: '^DNS'
        state: absent

    - name: add new dns
      lineinfile:
        path:  /etc/sysconfig/network-scripts/ifcfg-eth1
        line: DNS=223.5.5.5

    - name: restart network
      systemd:
        name: network
        state: restarted

Run it and check that the result matches expectations:

[root@manager roles]# ansible-playbook network_init.yml
[root@manager roles]# ansible all -m shell -a "cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep DNS"
[root@manager roles]# ansible all -m shell -a "cat /etc/sysconfig/network-scripts/ifcfg-eth1 | grep GATEWAY"

II. Installing the ansible base module

Once servers are racked and the OS is installed, a set of baseline configuration applies to every machine. It is best to keep everything that every server needs in a role named base; we call it the "initialization module":
1. Disable firewalld and SELinux
2. Create the uniform user www with uid 666 and gid 666
3. Add the base and epel repositories
4. Add service-specific repositories on particular hosts: nginx, php, mysql, zabbix, elk …
5. Install the base packages: rsync, nfs-utils, net-tools, lrzsz, wget, unzip, vim, tree …
6. Kernel upgrade / kernel parameter tuning / file descriptor limits

2.1 Disabling the firewall

[root@manager roles]# mkdir base/{tasks,templates,files,handlers} -p

[root@manager roles]# cat  base/tasks/firewalld.yml
- name: disable selinux 
  selinux:
    state: disabled

- name: disable firewall
  systemd:
    name: firewalld
    state: stopped
    enabled: no

2.2 Creating the uniform user

[root@manager roles]# cat base/tasks/user.yml
- name: create uniform user group www
  group:
    name: "{{ all_group }}"
    gid: "{{ gid }}"
    system: yes
    state: present

- name: create uniform user www
  user:
    name: "{{ all_user }}"
    group: "{{ all_group }}"
    uid: "{{ uid }}"
    system: yes
    state: present

2.3 Adding the uniform repositories

[root@manager roles]# cat base/tasks/yum_repository.yml 
- name: Add Base Yum Repository
  yum_repository:
    name: base
    description: Base Aliyun Repository
    baseurl: http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
    gpgcheck: yes
    gpgkey: http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

- name: Add Epel Yum Repository
  yum_repository:
    name: epel
    description: Epel Aliyun Repository
    baseurl: http://mirrors.aliyun.com/epel/7/$basearch
    gpgcheck: no

- name: Add Nginx Yum Repository
  yum_repository:
    name: nginx
    description: Nginx Repository
    baseurl: http://nginx.org/packages/centos/7/$basearch/
    gpgcheck: no
  #when: (ansible_hostname is match('web*')) or (ansible_hostname is match('lb*'))
- name: Add PHP Yum Repository
  yum_repository:
    name: php71w
    description: php Repository
    baseurl: http://us-east.repo.webtatic.com/yum/el7/x86_64/
    gpgcheck: no


- name: Add Haproxy Yum Repository
  yum_repository:
    name: haproxy
    description: haproxy repository
    baseurl: https://repo.ius.io/archive/7/$basearch/
    gpgcheck: yes
    gpgkey: https://repo.ius.io/RPM-GPG-KEY-IUS-7

2.4 Installing the base packages on all hosts

[root@manager roles]# cat base/tasks/yum_pkg.yml 
- name: Installed Packages All
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - rsync
    - nfs-utils
    - net-tools
    - wget
    - tree
    - lrzsz
    - vim
    - unzip
    - httpd-tools
    - bash-completion
    - iftop
    - iotop
    - glances
    - gzip
    - psmisc
    - MySQL-python
    - bind-utils
    - python-setuptools
    - python-pip
    - gcc
    - gcc-c++
    - autoconf
    - sudo
    - iptables

2.5 Raising the file descriptor limit with pam_limits

[root@manager roles]# cat base/tasks/limits.yml 
- name: Change Limit /etc/security/limits.conf
  pam_limits:
    domain: "*"
    limit_type: "{{ item.limit_type }}"
    limit_item: "{{ item.limit_item }}"
    value: "{{ item.value }}"
  loop:
    - { limit_type: 'soft', limit_item: 'nofile', value: '100000' }
    - { limit_type: 'hard', limit_item: 'nofile', value: '100000' }

2.6 Tuning kernel parameters with sysctl

[root@manager roles]# cat base/tasks/kernel.yml 
- name: Change Port Range
  sysctl:
    name: net.ipv4.ip_local_port_range
    value: '1024 65000'
    sysctl_set: yes

- name: Enabled Forward
  sysctl:
    name: net.ipv4.ip_forward
    value: '1'
    sysctl_set: yes

- name: Enabled tcp_reuse
  sysctl:
    name: net.ipv4.tcp_tw_reuse
    value: '1'
    sysctl_set: yes

- name: Change tcp tw_buckets
  sysctl:
    name: net.ipv4.tcp_max_tw_buckets
    value: '5000'
    sysctl_set: yes

- name: Change tcp_syncookies
  sysctl:
    name: net.ipv4.tcp_syncookies
    value: '1'
    sysctl_set: yes

- name: Change tcp max_syn_backlog
  sysctl:
    name: net.ipv4.tcp_max_syn_backlog
    value: '8192'
    sysctl_set: yes

- name: Change tcp Established Maxconn
  sysctl:
    name: net.core.somaxconn
    value: '32768'
    sysctl_set: yes
    state: present

- name: Change tcp_syn_retries
  sysctl:
    name: net.ipv4.tcp_syn_retries
    value: '2'
    sysctl_set: yes
    state: present

- name: Change net.ipv4.tcp_synack_retries
  sysctl:
    name: net.ipv4.tcp_synack_retries
    value: '2'
    sysctl_set: yes
    state: present
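
A spot check that the values actually landed (assuming the base role has already run):

[root@manager roles]# ansible all -m shell -a 'sysctl net.core.somaxconn net.ipv4.ip_forward'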

2.7 The single entry file main.yml

[root@manager roles]# cat base/tasks/main.yml 
- name: firewalld
  include: firewalld.yml
 

- name: kernel
  include: kernel.yml

- name: limits
  include: limits.yml

- name: user
  include: user.yml
  tags: create_user

- name: yum_repository
  include: yum_repository.yml

- name: yum_package
  include: yum_pkg.yml
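
Newer Ansible releases deprecate the bare include keyword in favour of include_tasks/import_tasks; the same entry file could be written this way — a sketch of the first two entries only:

- name: firewalld
  import_tasks: firewalld.yml

- name: kernel
  import_tasks: kernel.yml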


2.8 The top file

[root@manager roles]# cat top.yml 
- hosts: all
  roles:
    - role: base

2.9 Applying the base role to all hosts

[root@manager roles]# tree /ansible/roles/
/ansible/roles/
├── ansible.cfg
├── base
│   ├── files
│   ├── handlers
│   ├── tasks
│   │   ├── firewalld.yml
│   │   ├── kernel.yml
│   │   ├── limits.yml
│   │   ├── main.yml
│   │   ├── user.yml
│   │   ├── yum_pkg.yml
│   │   └── yum_repository.yml
│   └── templates
├── group_vars
│   └── all
├── hosts
├── network_init.yml
└── top.yml


[root@manager roles]# ansible-playbook top.yml

III. Installing the individual services

3.1 Installing NFS

[root@manager roles]# mkdir nfs-server/{tasks,templates,handlers,files} -p
[root@manager roles]# cat nfs-server/tasks/main.yml 
- name: create nfs share directory # create the two shared directories
  file:
    path: "{{ item }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    state: directory
    mode: "0755"
    recurse: yes
  loop:
    - "{{ nfs_share_blog }}"
    - "{{ nfs_share_zrlog }}"

- name: configure nfs server
  template:
    src: exports.j2
    dest: /etc/exports
  notify: restart nfs server

- name: start nfs server
  systemd:
    name: nfs
    state: started
    enabled: yes


[root@manager roles]# cat nfs-server/templates/exports.j2 
{{ nfs_share_blog }} {{ nfs_allow_ip_range }}(rw,sync,all_squash,anonuid={{ uid }},anongid={{ gid }})
{{ nfs_share_zrlog }} {{ nfs_allow_ip_range }}(rw,sync,all_squash,anonuid={{ uid }},anongid={{ gid }})



[root@manager roles]# cat nfs-server/handlers/main.yml 
- name: restart nfs server
  systemd:
    name: nfs
    state: restarted


[root@manager roles]# cat group_vars/all 
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog

Run it:

[root@manager roles]# cat top.yml 
#- hosts: all
#  roles:
#    - role: base


- hosts: nfsservers
  roles:
    - role: nfs-server



[root@manager roles]# ansible-playbook top.yml
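
A quick check that the exports are live, queried from the manager:

[root@manager roles]# showmount -e 172.16.1.32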

3.2 Installing rsync

rsyncd runs as the www user here by default, and rsync itself was already installed during initialization.

[root@manager roles]# mkdir rsync-server/{tasks,templates,handlers,files} -p

[root@manager roles]# cat rsync-server/tasks/main.yml 
- name: copy virtual user passwd file
  copy:
    content: "{{ rsync_virtual_user }}:{{ rsync_virtual_passwd }}"
    dest: "{{ rsync_virtual_path }}"
    mode: "0600"

- name: create rsync module dir
  file:
    path: "{{ item }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    state: directory
    recurse: yes
  loop:
    - "{{ rsync_module_path1 }}" # two directories, one per project
    - "{{ rsync_module_path2 }}"

- name: configure rsync
  template:
    src: rsyncd.conf.j2
    dest: /etc/rsyncd.conf
  notify: restart rsyncd

- name: start rsyncd
  systemd:
    name: rsyncd
    state: started
    enabled: yes


[root@manager roles]# cat rsync-server/templates/rsyncd.conf.j2 
uid = {{ all_user }}
gid = {{ all_group }}
port = {{ rsync_port }}
fake super = yes
use chroot = no
max connections = {{ rsync_max_conn }}
timeout = 600
ignore errors
read only = false
list = true
log file = /var/log/rsyncd.log
auth users = {{ rsync_virtual_user }}
secrets file = {{ rsync_virtual_path }}
# two modules, one per project
[{{ rsync_module_name1 }}]
path = {{ rsync_module_path1 }}

[{{ rsync_module_name2 }}]
path = {{ rsync_module_path2 }}


[root@manager roles]# cat rsync-server/handlers/main.yml 
- name: restart rsyncd
  systemd:
    name: rsyncd
    state: restarted

Run and test:


[root@manager roles]# cat top.yml 
- hosts: rsyncservers
  roles:
    - role: rsync-server

[root@manager roles]# ansible-playbook top.yml
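
To verify the daemon from any client, a test push against the blog module — a sketch using the virtual user's password (123) defined in group_vars:

[root@manager roles]# echo 123 > /tmp/rsync.pass && chmod 600 /tmp/rsync.pass
[root@manager roles]# rsync -avz /etc/hosts rsync_backup@172.16.1.31::blog --password-file=/tmp/rsync.pass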

3.3 Real-time synchronization with sersync

Watch the shared directories on the NFS node and push any change to the rsync backup server in real time.
Install sersync, rsync and inotify-tools.

[root@manager roles]# cat sersync-server/tasks/main.yml 
- name: install inotify-tools
  yum:
    name: inotify-tools
    state: present

- name: unarchive sersync.tar.gz to remote
  unarchive:
    src: sersync2.5.4_64bit_binary_stable_final.tar.gz
    dest: "{{ rsync_path }}"

- name: configure confxml1.xml.j2 to remote
  template:
    src: confxml1.xml.j2
    dest: "{{ rsync_path }}/GNU-Linux-x86/confxml1.xml"

- name: configure confxml2.xml.j2 to remote
  template:
    src: confxml2.xml.j2
    dest: "{{ rsync_path }}/GNU-Linux-x86/confxml2.xml"

- name: create rsync_client passwd file
  copy:
    content: "{{ rsync_virtual_passwd }}"
    dest: "{{ rsync_virtual_path }}"
    mode: 0600

- name: start sersync1
  shell: "{{ rsync_path }}/GNU-Linux-x86/sersync2 -dro {{ rsync_path }}/GNU-Linux-x86/confxml1.xml"

- name: start sersync2
  shell: "{{ rsync_path }}/GNU-Linux-x86/sersync2 -dro {{ rsync_path }}/GNU-Linux-x86/confxml2.xml"



[root@manager roles]# cat sersync-server/templates/confxml1.xml.j2 
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="true"/>
    <filter start="false">
	<exclude expression="(.*)\.svn"></exclude>
	<exclude expression="(.*)\.gz"></exclude>
	<exclude expression="^info/*"></exclude>
	<exclude expression="^static/*"></exclude>
    </filter>
    <inotify>
	<delete start="true"/>
	<createFolder start="true"/>
	<createFile start="false"/>
	<closeWrite start="true"/>
	<moveFrom start="true"/>
	<moveTo start="true"/>
	<attrib start="true"/>
	<modify start="true"/>
    </inotify>

    <sersync>
	<localpath watch="{{ nfs_share_blog }}">
	    <remote ip="{{ rsync_ip }}" name="{{ rsync_module_name1 }}"/>
	</localpath>
	<rsync>
	    <commonParams params="-avz"/>
	    <auth start="true" users="{{ rsync_virtual_user }}" passwordfile="{{ rsync_virtual_path }}"/>
	    <userDefinedPort start="false" port="874"/><!-- port=874 -->
	    <timeout start="true" time="100"/>
	    <ssh start="false"/>
	</rsync>
	<failLog path="/tmp/rsync_fail_log.sh" timeToExecute="{{ timeToExecute }}"/>
	<crontab start="false" schedule="600"><!--600mins-->
	    <crontabfilter start="false">
		<exclude expression="*.php"></exclude>
		<exclude expression="info/*"></exclude>
	    </crontabfilter>
	</crontab>
	<plugin start="false" name="command"/>
    </sersync>

    <plugin name="command">
	<param prefix="/bin/sh" suffix="" ignoreError="true"/>	<!--prefix /opt/tongbu/mmm.sh suffix-->
	<filter start="false">
	    <include expression="(.*)\.php"/>
	    <include expression="(.*)\.sh"/>
	</filter>
    </plugin>

    <plugin name="socket">
	<localpath watch="/opt/tongbu">
	    <deshost ip="192.168.138.20" port="8009"/>
	</localpath>
    </plugin>
    <plugin name="refreshCDN">
	<localpath watch="/data0/htdocs/cms.xoyo.com/site/">
	    <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
	    <sendurl base="http://pic.xoyo.com/cms"/>
	    <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
	</localpath>
    </plugin>
</head>

[root@manager roles]# cat sersync-server/templates/confxml2.xml.j2 
<?xml version="1.0" encoding="ISO-8859-1"?>
<head version="2.5">
    <host hostip="localhost" port="8008"></host>
    <debug start="false"/>
    <fileSystem xfs="true"/>
    <filter start="false">
	<exclude expression="(.*)\.svn"></exclude>
	<exclude expression="(.*)\.gz"></exclude>
	<exclude expression="^info/*"></exclude>
	<exclude expression="^static/*"></exclude>
    </filter>
    <inotify>
	<delete start="true"/>
	<createFolder start="true"/>
	<createFile start="false"/>
	<closeWrite start="true"/>
	<moveFrom start="true"/>
	<moveTo start="true"/>
	<attrib start="true"/>
	<modify start="true"/>
    </inotify>

    <sersync>
	<localpath watch="{{ nfs_share_zrlog }}">
	    <remote ip="{{ rsync_ip }}" name="{{ rsync_module_name2 }}"/><!-- rsync server IP and module -->
	</localpath>
	<rsync>
	    <commonParams params="-avz"/>
	    <auth start="true" users="{{ rsync_virtual_user }}" passwordfile="{{ rsync_virtual_path }}"/>
	    <userDefinedPort start="false" port="874"/><!-- port=874 -->
	    <timeout start="true" time="100"/><!-- timeout 100s -->
	    <ssh start="false"/>
	</rsync>
	<failLog path="/tmp/rsync_fail_log.sh" timeToExecute="{{ timeToExecute }}"/>
	<crontab start="false" schedule="600"><!--600mins-->
	    <crontabfilter start="false">
		<exclude expression="*.php"></exclude>
		<exclude expression="info/*"></exclude>
	    </crontabfilter>
	</crontab>
	<plugin start="false" name="command"/>
    </sersync>

    <plugin name="command">
	<param prefix="/bin/sh" suffix="" ignoreError="true"/>	<!--prefix /opt/tongbu/mmm.sh suffix-->
	<filter start="false">
	    <include expression="(.*)\.php"/>
	    <include expression="(.*)\.sh"/>
	</filter>
    </plugin>

    <plugin name="socket">
	<localpath watch="/opt/tongbu">
	    <deshost ip="192.168.138.20" port="8009"/>
	</localpath>
    </plugin>
    <plugin name="refreshCDN">
	<localpath watch="/data0/htdocs/cms.xoyo.com/site/">
	    <cdninfo domainname="ccms.chinacache.com" port="80" username="xxxx" passwd="xxxx"/>
	    <sendurl base="http://pic.xoyo.com/cms"/>
	    <regexurl regex="false" match="cms.xoyo.com/site([/a-zA-Z0-9]*).xoyo.com/images"/>
	</localpath>
    </plugin>
</head>

[root@manager roles]# ls sersync-server/files/sersync2.5.4_64bit_binary_stable_final.tar.gz 
sersync-server/files/sersync2.5.4_64bit_binary_stable_final.tar.gz

[root@manager roles]# cat group_vars/all 
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog 


# rsync
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.pass
rsync_module_name1: blog
rsync_module_name2: zrlog

rsync_module_path1: /data/blog
rsync_module_path2: /data/zrlog
rsync_virtual_passwd: 123
rsync_port: 873
rsync_max_conn: 200


# sersync
rsync_ip: 172.16.1.31
timeToExecute: 60
rsync_path: /usr/local
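
Because the two sersync starts are plain shell tasks they are not idempotent, and re-running the play will launch duplicate watchers; a quick check that exactly two are running:

[root@manager roles]# ansible nfsservers -m shell -a 'ps -ef | grep [s]ersync2'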

3.4 Installing MySQL

[root@manager roles]# cat mysql-server/tasks/main.yml 
- name: install mariadb mariadb-server
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - mariadb
    - mariadb-server
    - MySQL-python

- name: start mariadb
  systemd:
    name: mariadb
    state: started
    enabled: yes

- name: Removes all anonymous user accounts
  mysql_user:
    name: ''
    host_all: yes
    state: absent

- name: Create database user '{{ mysql_super_user }}' with password '{{ mysql_super_user_passwd }}' and all database privileges
  mysql_user:
    name: "{{ mysql_super_user }}"
    password: "{{ mysql_super_user_passwd }}"
    priv: '{{ mysql_super_user_priv }}'
    host: "{{ allow_ip }}"
    state: present

# create the two databases in a loop, then import their data
- name: Create new databases named 'wordpress' and 'zrlog'
  mysql_db:
    login_host: "{{ mysql_ip }}"
    login_user: "{{ mysql_super_user }}"
    login_password: "{{ mysql_super_user_passwd }}"
    name: "{{ item }}"
    state: present
  loop:
    - wordpress
    - zrlog

- name: Import file.sql similar to mysql -u <username> -p <password> < hostname.sql
  mysql_db:
    login_host: "{{ mysql_ip }}"
    login_user: "{{ mysql_super_user }}"
    login_password: "{{ mysql_super_user_passwd }}"
    state: import
    name: "{{ item.name }}"
    target: "{{ item.target }}"
  loop:
    - { name: 'wordpress', target: '/tmp/wordpress.sql' }
    - { name: 'zrlog', target: '/tmp/zrlog.sql' }




[root@manager roles]# cat group_vars/all
#mysql
mysql_ip: 172.16.1.51
mysql_super_user: app
mysql_super_user_passwd: "123456"
mysql_super_user_priv: '*.*:ALL'
allow_ip: '172.16.1.%'
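
A connectivity test from the manager (needs the mariadb client locally), using the app account defined above:

[root@manager roles]# mysql -h172.16.1.51 -uapp -p123456 -e 'show databases;'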

3.5 Installing Redis

[root@manager roles]# mkdir redis-server/{tasks,templates,handlers} -p
[root@manager roles]# cat redis-server/tasks/main.yml 
- name: install redis server
  yum:
    name: redis
    state: present


- name: configure redis server
  template:
    src: redis.conf.j2 
    dest: /etc/redis.conf
    owner: redis
    group: root
    mode: 0640

  notify: restart redis server



- name: start redis server
  systemd:
    name: redis
    state: started
    enabled: yes


[root@manager roles]# cat redis-server/handlers/main.yml 
- name: restart redis server
  systemd:
    name: redis
    state: restarted
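
The redis.conf.j2 template itself is not reproduced here; the line that matters for this setup — a minimal sketch, assuming the stock CentOS 7 /etc/redis.conf as the base and the bind_ip variable from group_vars — is the bind address, so the PHP nodes can reach redis over the LAN:

bind 127.0.0.1 {{ bind_ip }}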

3.6 Installing Nginx

[root@manager roles]# mkdir nginx-server/{tasks,templates,handlers,files} -p
[root@manager roles]# cat nginx-server/tasks/main.yml 
- name: install nginx server
  yum:
    name: nginx
    state: present
    enablerepo: nginx

- name: configure nginx server
  template:
    src: nginx.conf.j2
    dest: "{{ nginx_conf_path }}"
  notify: restart nginx server

- name: start nginx server
  systemd:
    name: nginx
    state: started
    enabled: yes



[root@manager roles]# cat nginx-server/templates/nginx.conf.j2 
user {{ all_user }};
worker_processes  {{ ansible_processor_vcpus }};

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  {{ ansible_processor_vcpus * 1024 }};
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$http_x_via"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  {{ keepalive_timeout }};

    #gzip  on;

    include {{ nginx_include_path }}
}


[root@manager roles]# cat nginx-server/handlers/main.yml 
- name: restart nginx server
  systemd:
    name: nginx
    state: restarted
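
A configuration syntax check across the web nodes before relying on the handler:

[root@manager roles]# ansible webservers -m shell -a 'nginx -t'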

3.7 Installing PHP

[root@manager roles]# mkdir php-fpm/{tasks,templates,handlers,files} -p
[root@manager roles]# cat php-fpm/tasks/main.yml 
- name: install php-fpm
  yum:
    name: "{{ item }}"
    enablerepo: php71w
    state: present
  loop:
    - php71w
    - php71w-cli
    - php71w-common
    - php71w-devel
    - php71w-embedded
    - php71w-gd
    - php71w-mcrypt
    - php71w-mbstring
    - php71w-pdo
    - php71w-xml
    - php71w-fpm
    - php71w-mysqlnd
    - php71w-opcache
    - php71w-pecl-memcached
    - php71w-pecl-redis
    - php71w-pecl-mongodb

- name: configure php.ini php-fpm file
  template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
  loop:
    - { src: php.ini.j2, dest: "{{ php_ini_path }}" }
    - { src: www.conf.j2, dest: "{{ php_fpm_path }}" }
  notify: restart php-fpm

- name: start php-fpm
  systemd:
    name: php-fpm
    state: started
    enabled: yes

Config 1:
[root@manager roles]# cat php-fpm/templates/php.ini.j2 | egrep -v "^;|^$"
# only these two settings change; everything else keeps the defaults
session.save_handler = {{ session_method }}
session.save_path = "tcp://{{ bind_ip }}:6379"


Config 2:
[root@manager roles]# cat php-fpm/templates/www.conf.j2 
[{{ all_user }}]
user = {{ all_user }}
group = {{ all_group }}
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = {{ pm_max_children }}
pm.start_servers = {{ pm_start_servers }}
pm.min_spare_servers = {{ pm_min_spare_servers }}
pm.max_spare_servers = {{ pm_max_spare_servers }}
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[soap.wsdl_cache_dir]  = /var/lib/php/wsdlcache

[root@manager roles]# cat php-fpm/handlers/main.yml 
- name: restart php-fpm
  systemd:
    name: php-fpm
    state: restarted


The variables:
[root@manager roles]# cat group_vars/all
#php-fpm
php_ini_path: /etc/php.ini
php_fpm_path: /etc/php-fpm.d/www.conf
session_method: redis

pm_max_children: 50
pm_start_servers: 5
pm_min_spare_servers: 5                                                                                                                   
pm_max_spare_servers: 35
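
To confirm sessions point at redis once the role has run:

[root@manager roles]# ansible webservers -m shell -a 'php -i | grep session.save_handler'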

3.8 Installing HAProxy

[root@manager roles]# mkdir haproxy/{tasks,templates,handlers,files} -p
[root@manager roles]# cat haproxy/tasks/main.yml 
- name: install haproxy
  yum:
    name: haproxy22
    enablerepo: haproxy
    state: present



- name: configure haproxy
  template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg

  notify: restarted haproxy server



- name: configure systemd unit to load conf.d
  template: 
    src: haproxy.service.j2
    dest: /usr/lib/systemd/system/haproxy.service

- name: create conf.d dir    
  file:
    path: /etc/haproxy/conf.d
    state: directory

- name: start haproxy
  systemd:
    name: haproxy
    daemon_reload: yes
    state: started
    enabled: yes


# set the variables to suit your needs
[root@manager roles]# cat haproxy/templates/haproxy.cfg.j2 
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     {{ maxconn }}
    user        {{ all_user }}
    group       {{ all_group }}
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin
    #nbproc 4
    #cpu-map 1 0
    #cpu-map 2 1
    #cpu-map 3 2
    #cpu-map 4 3
    nbthread 8
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    #option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------

listen haproxy-stats
        bind *:{{ stat_port }}
        stats enable
        stats refresh 1s
        stats hide-version
        stats uri /haproxy?stats
        stats realm "HAProxy statistics"
        stats auth admin:123456
        stats admin if TRUE


[root@manager roles]# cat haproxy/templates/haproxy.service.j2 
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=-/etc/sysconfig/haproxy
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
Environment="CONFIG_D=/etc/haproxy/conf.d" 
ExecStartPre=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -f $CONFIG_D -p $PIDFILE $OPTIONS
ExecReload=/usr/sbin/haproxy -f $CONFIG -f $CONFIG_D -c -q $OPTIONS
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
SuccessExitStatus=143
Type=notify

[Install]
WantedBy=multi-user.target

[root@manager roles]# cat haproxy/handlers/main.yml 
- name: restarted haproxy server
  systemd:
    name: haproxy
    state: restarted

[root@manager roles]# tree haproxy/
haproxy/
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    ├── haproxy.cfg.j2
    └── haproxy.service.j2

[root@manager roles]# cat group_vars/all
# haproxy
stat_port: 8888
maxconn: 4000
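
With the role applied, the stats page should answer on any proxy node — a check assuming the admin:123456 credentials from the template:

[root@manager roles]# curl -su admin:123456 'http://172.16.1.5:8888/haproxy?stats' | head -5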

3.9 Installing Keepalived

[root@manager roles]# mkdir keepalived/{tasks,templates,handlers,files} -p



[root@manager roles]# cat keepalived/tasks/main.yml 
- name: install keepalived
  yum:
    name: keepalived
    state: present

- name: configure keepalived
  template:
    src: keepalived.conf.j2
    dest: "{{ keepalived_conf_path }}"
  notify: restart keepalived

- name: start keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes



[root@manager roles]# cat keepalived/templates/keepalived.conf.j2 
global_defs {
    router_id {{ ansible_hostname }}
}

vrrp_instance VI_1 {
{% if ansible_hostname == "proxy01" %}
    state MASTER
    priority 200
{% elif ansible_hostname == "proxy02" %}
    state BACKUP
    priority 100
{% endif %}

    interface eth0                  # physical interface this virtual router is bound to
    virtual_router_id 49            # VRID of this virtual router
    advert_int 3                    # VRRP advertisement interval, default 1s
    #nopreempt
    authentication {
        auth_type PASS              # simple password authentication
        auth_pass 1111              # password of at most 8 characters
    }

    virtual_ipaddress {
        {{ proxy_vip }}
    }
}


[root@manager roles]# cat keepalived/handlers/main.yml 
- name: restart keepalived
  systemd:
    name: keepalived
    state: restarted

# variables
[root@manager roles]# cat group_vars/all
#keepalived
keepalived_conf_path: /etc/keepalived/keepalived.conf
proxy_vip: 10.0.0.100


# directory tree
[root@manager roles]# tree keepalived/ 
keepalived/
├── files
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── keepalived.conf.j2
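
To see which proxy node currently holds the VIP after the role runs:

[root@manager roles]# ansible proxyservers -m shell -a 'ip addr show eth0'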

3.10 Installing LVS

The layer-4 LVS role depends on keepalived.

[root@manager roles]# mkdir lvs/{tasks,templates,handlers,meta} -p



[root@manager roles]# cat lvs/tasks/main.yml 
- name: install ipvsadm packages
  yum:
    name: ipvsadm
    state: present

- name: configure LVS keepalived
  template:
    src: keepalived.conf.j2
    dest: /etc/keepalived/keepalived.conf
  notify: restart keepalived

- name: start LVS Keepalived
  systemd:
    name: keepalived
    state: started
    enabled: yes

[root@manager roles]# cat lvs/templates/keepalived.conf.j2 
global_defs {
    router_id {{ ansible_hostname }}
}

vrrp_instance VI_1 {
{% if ansible_hostname == "lvs-master" %}
    state MASTER
    priority 200
{% elif ansible_hostname == "lvs-slave" %}
    state BACKUP
    priority 100
{% endif %}

    interface eth1
    virtual_router_id 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ lvs_vip }}
    }
}

# the cluster address (IP + port) that clients access
virtual_server {{ lvs_vip }} {{ lvs_http_port }} {
    # health check interval, in seconds
    delay_loop 6
    # load-balancing algorithm
    lb_algo rr
    # LVS mode: NAT|TUN|DR
    lb_kind DR

    # protocol
    protocol TCP

    # real server (RS) nodes behind the load balancer
{% for host in groups["proxyservers"] %}
    real_server {{ host }} {{ lvs_http_port }} {
        # weight set to 1
        weight 1
        # health check
        TCP_CHECK {
            # probe backend port 80
            connect_port 80
            # timeout
            connect_timeout 3
            # retry twice
            nb_get_retry 2
            # 3s between retries
            delay_before_retry 3
        }
    }
{% endfor %}
}


# the cluster address (IP + port) for HTTPS
virtual_server {{ lvs_vip }} {{ lvs_https_port }} {
    # health check interval, in seconds
    delay_loop 6
    # load-balancing algorithm
    lb_algo rr
    # LVS mode: NAT|TUN|DR
    lb_kind DR

    # protocol
    protocol TCP
{% for host in groups["proxyservers"] %}
    # real server (RS) nodes behind the load balancer
    real_server {{ host }} {{ lvs_https_port }} {
        # weight set to 1
        weight 1
        # health check
        TCP_CHECK {
            # probe backend port 443
            connect_port 443
            # timeout
            connect_timeout 3
            # retry twice
            nb_get_retry 2
            # 3s between retries
            delay_before_retry 3
        }
    }
{% endfor %}
}

# depends on keepalived
[root@manager roles]# cat lvs/meta/main.yml 
dependencies:
  - { role: keepalived }


[root@manager roles]# cat lvs/handlers/main.yml 
- name: restart keepalived
  systemd:
    name: keepalived
    state: restarted


[root@manager roles]# cat group_vars/all 
#lvs
lvs_vip: 172.16.1.100
lvs_http_port: 80
lvs_https_port: 443
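
Once keepalived is up on the lbservers, the virtual server table can be inspected:

[root@manager roles]# ansible lbservers -m shell -a 'ipvsadm -Ln'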

3.11 Configuring the VIP and ARP suppression on the RS nodes (the layer-7 proxy cluster behind LVS)

The idea: add a virtual interface on each RS node, bind the VIP to lo:0, restart that interface, and enable ARP suppression.

[root@manager roles]# mkdir lvs-rs/{tasks,templates,handlers} -p



[root@manager roles]# cat lvs-rs/tasks/main.yml 
- name: configure VIP for lo:0
  template:
    src: ifcfg-lo:0.j2
    dest: /etc/sysconfig/network-scripts/ifcfg-lo:0
  notify: restart network

- name: configure arp_ignore
  sysctl:
    name: "{{ item }}"
    value: "1"
    sysctl_set: yes
  loop:
    - net.ipv4.conf.default.arp_ignore
    - net.ipv4.conf.all.arp_ignore
    - net.ipv4.conf.lo.arp_ignore

- name: configure arp_announce
  sysctl:
    name: "{{ item }}"
    value: "2"
    sysctl_set: yes
  loop:
    - net.ipv4.conf.default.arp_announce
    - net.ipv4.conf.all.arp_announce
    - net.ipv4.conf.lo.arp_announce


[root@manager roles]# cat lvs-rs/templates/ifcfg-lo\:0.j2 
DEVICE={{ lvs_rs_network }}
IPADDR={{ lvs_vip }}
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback


[root@manager roles]# cat lvs-rs/handlers/main.yml 
- name: restart network
  service: # CentOS 6 uses service, CentOS 7 systemd; service is used here so extra args can be passed
    name: network
    state: restarted
    args: lo:0
    

# variables
[root@manager roles]# cat group_vars/all 
#lvs
lvs_vip: 172.16.1.100
lvs_http_port: 80
lvs_https_port: 443
lvs_rs_network: lo:0
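
Verifying the ARP tuning and the VIP on the RS (proxy) nodes:

[root@manager roles]# ansible proxyservers -m shell -a 'sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce; ip addr show lo'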

3.12 Configuring the router

  1. Enable DNAT
  2. Enable SNAT
  3. Enable forwarding (already done in the base stage)
[root@manager roles]# cat route/tasks/main.yml
- name: iptables SNAT to share network
  iptables:
    table: nat
    chain: POSTROUTING
    source: 172.16.1.0/24
    jump: SNAT
    to_source: "{{ ansible_eth0.ipv4.address }}"

- name: iptables DNAT http 80 port
  iptables:
    table: nat
    chain: PREROUTING
    protocol: tcp
    destination: "{{ ansible_eth0.ipv4.address }}"
    destination_port: "{{ lvs_http_port|int }}"
    jump: DNAT
    to_destination: "{{ lvs_vip }}:{{ lvs_http_port }}"

- name: iptables DNAT https 443 port
  iptables:
    table: nat
    chain: PREROUTING
    protocol: tcp
    destination: "{{ ansible_eth0.ipv4.address }}"
    destination_port: "{{ lvs_https_port|int }}"
    jump: DNAT
    to_destination: "{{ lvs_vip }}:{{ lvs_https_port }}"


3.13 Configuring DNS

[root@manager roles]# mkdir dns/{tasks,templates,handlers} -p

[root@manager roles]# cat dns/tasks/main.yml 
- name: install bind
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - bind
    - bind-utils

- name: configure dns
  template:
    src: named.conf.j2
    dest: /etc/named.conf
    group: named
    owner: root
    mode: "0640"
  notify: restart named

- name: configure zone file
  template:
    src: "{{ bertwu_online_zone_name }}.j2"
    dest: "{{ dns_zone_file_path }}/{{ bertwu_online_zone_name }}"
  when: ( ansible_hostname == "dns-master" ) # the zone file only goes to the master; the slave pulls a copy via zone transfer
  notify: restart named

- name: start named
  systemd:
    name: named
    state: started
    enabled: yes


# the master and slave need different config, so branch on hostname
[root@manager roles]# cat dns/templates/named.conf.j2 
options {
	listen-on port 53 { any; };
	directory 	"/var/named";
	dump-file 	"/var/named/data/cache_dump.db";
	statistics-file "/var/named/data/named_stats.txt";
	memstatistics-file "/var/named/data/named_mem_stats.txt";
	recursing-file  "/var/named/data/named.recursing";
	secroots-file   "/var/named/data/named.secroots";
	allow-query     { any; };

{% if ansible_hostname == "dns-master" %}
	allow-transfer { 172.16.1.92; };   // which IPs may sync the zone from the master
	also-notify { 172.16.1.92; };      // the master actively notifies the slave when a zone changes
{% elif ansible_hostname == "dns-slave" %}
	masterfile-format text;
{% endif %}

	recursion yes;
	allow-recursion { 172.16.1.0/24; };
	dnssec-enable yes;
	dnssec-validation yes;
	bindkeys-file "/etc/named.root.key";
	managed-keys-directory "/var/named/dynamic";
	pid-file "/run/named/named.pid";
	session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
    type hint;
    file "named.ca";
};

zone "bertwu.online" IN {
{% if ansible_hostname == "dns-master" %}
	type master;
	file "{{ bertwu_online_zone_name }}";
{% elif ansible_hostname == "dns-slave" %}
	type slave;
	file "slaves/{{ bertwu_online_zone_name }}";
	masters { {{ dns_master_ip }}; };
{% endif %}
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";


# the zone file — note that comments in it start with ';'; a '#' here cost half a day of debugging
[root@manager roles]# cat dns/templates/bertwu.online.zone.j2 
$TTL 600;

bertwu.online. IN SOA  ns.bertwu.online. qq.bertwu.online. (
	2021090999
	10800
	900
	604800
	86400
)

bertwu.online. IN NS ns1.bertwu.online.
bertwu.online. IN NS ns2.bertwu.online.

ns1.bertwu.online. IN A {{ dns_master_ip }}
ns2.bertwu.online. IN A {{ dns_slave_ip }}

blog.bertwu.online. IN A 10.0.0.200
zrlog.bertwu.online. IN A 10.0.0.200


[root@manager roles]# cat dns/handlers/main.yml 
- name: restart named
  systemd:
    name: named
    state: restarted


# variables
[root@manager roles]# cat group_vars/all
# dns
bertwu_online_zone_name: bertwu.online.zone

dns_master_ip: 172.16.1.91
dns_slave_ip: 172.16.1.92

dns_zone_file_path: /var/named
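
Resolution can then be tested against the master and the slave from any LAN host:

[root@manager roles]# dig @172.16.1.91 blog.bertwu.online +short
[root@manager roles]# dig @172.16.1.92 zrlog.bertwu.online +short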

IV. Deploying the WordPress blog system

4.1 The web tier

0. Back up the existing database:
   [root@localhost ~]# mysqldump -B wordpress > /tmp/wordpress.sql
1. Create the directory that holds the project code (/webcode).
2. Unarchive the locally packaged wordpress project into the remote code directory (it already contains the database connection config, which skips the setup wizard).
3. Push the nginx config file (the address and port must match what was used when the database connection first succeeded, because that is already written into the database; anything else causes odd errors).
4. Create the wordpress database.
5. Restore the SQL file dumped earlier.
(Steps 4 and 5 are best done during the database install/configure stage: in a real production environment the database and its data already exist and ops only needs to connect. Otherwise wordpress.sql has to be pushed to a fixed directory on each web node, e.g. /tmp/wordpress.sql, and the mariadb client installed on the web nodes before the import can succeed.)
[root@manager roles]# mkdir wordpress-web/{tasks,meta,templates,handlers,files} -p


[root@manager roles]# cat wordpress-web/tasks/main.yml 
- name: create code directory
  file:
    path: "{{ wordpress_code_path }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    state: directory
    recurse: yes

- name: unarchive wordpress_web_code to remote
  unarchive:
    src: wordpress.tar.gz
    dest: "{{ wordpress_code_path }}"
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    creates: "{{ wordpress_code_path }}/wordpress/wp-config.php" # skip if already unpacked

- name: create wordpress nginx config
  template:
    src: wordpress.conf.j2
    dest: "{{ nginx_include_dir }}/wordpress.conf"
  notify: restart nginx server # this handler is already defined by the nginx role, so no handlers are needed here

# set up the mount point
- name: configure mount nfs
  mount:
    src: "172.16.1.32:{{ nfs_share_blog }}"
    path: "{{ wordpress_code_path }}/wordpress/wp-content/uploads/"
    fstype: nfs
    opts: defaults
    state: mounted

# The four steps below were already done while installing mariadb; they are
# kept only to show the idea and do not need to run.

#- name: push wordpress.sql to the target directory on the web nodes
#  (omitted)

#- name: install the mariadb client on the web nodes
#  (omitted)

#- name: Create a new database with name 'wordpress'
#  mysql_db:
#      login_host: "{{ mysql_ip }}"
#      login_user: "{{ mysql_super_user }}"
#      login_password: "{{ mysql_super_user_passwd }}"
#      name: wordpress
#      state: present
#
#- name: Import file.sql similar to mysql -u <username> -p <password> < hostname.sql
#  mysql_db:
#      login_host: "{{ mysql_ip }}"
#      login_user: "{{ mysql_super_user }}"
#      login_password: "{{ mysql_super_user_passwd }}"
#      state: import
#      name: wordpress
#      target: /tmp/wordpress.sql



# dependencies
[root@manager roles]# cat wordpress-web/meta/main.yml 
dependencies:
  - { role: nginx-server }
  - { role: php-fpm }



[root@manager roles]# cat wordpress-web/templates/wordpress.conf.j2 
server {
	listen {{ nginx_http_listen_port }};
	server_name {{ wordpress_domain }};
	client_max_body_size 100m;
	root {{ wordpress_code_path }}/wordpress;
	charset utf-8;
	location / {
		index index.php index.html;
	}
	location ~* .*\.php$ {
		fastcgi_pass 127.0.0.1:9000;
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		include /etc/nginx/fastcgi_params;
		fastcgi_param HTTPS {{ https_state }};
	}
}


[root@manager roles]# ls wordpress-web/files/
wordpress.tar.gz # the packaged project


# variables
[root@manager roles]# cat group_vars/all 
# wordpress
wordpress_domain: blog.bertwu.online
nginx_http_listen_port: 888 
wordpress_code_path: /webcode
https_state: "off"

4.2 Layer-7 load balancing with haproxy (nginx as the alternative)

If the architecture stops at layer-7 load balancing, this role needs keepalived; if a layer-4 tier sits in front, this tier is just a cluster and keepalived attaches at layer 4 instead.

[root@manager roles]# mkdir wordpress-proxy/{tasks,handlers,templates,meta} -p


[root@manager roles]# cat wordpress-proxy/tasks/main.yml 
- name: create https key dir
  file:
    path: "{{ https_key_dir }}"
    state: directory

- name: copy https key_pem
  copy:
    src: blog_key.pem.j2
    dest: "{{ https_key_dir }}/blog_key.pem"

- name: wordpress haproxy configure file
  template:
    src: haproxy.cfg.j2
    dest: "{{ haproxy_include_path }}/wordpress_haproxy.cfg"
  notify: restarted haproxy server


[root@manager roles]# cat wordpress-proxy/templates/haproxy.cfg.j2 
frontend web
	bind *:{{ haproxy_port }}
	bind *:443 ssl crt {{ https_key_dir }}/blog_key.pem
	mode http

	# ACL that routes blog traffic to blog_cluster
	acl blog_domain hdr(host) -i {{ wordpress_domain }}
	redirect scheme https code 301 if !{ ssl_fc } blog_domain
	use_backend blog_cluster if blog_domain


backend blog_cluster
	balance roundrobin
	option httpchk HEAD / HTTP/1.1\r\nHost:\ {{ wordpress_domain }}
	{% for host in groups["webservers"] %}
	server {{ host }} {{ host }}:{{ nginx_http_listen_port }} check port {{ nginx_http_listen_port }} inter 3s rise 2 fall 3
	{% endfor %}

# dependencies
[root@manager roles]# cat wordpress-proxy/meta/main.yml 
dependencies:
  - { role: haproxy }


# variables
#wordpress haproxy
haproxy_port: 80
https_key_dir: /ssl

# certificate
[root@manager roles]# ls wordpress-proxy/files/blog_key.pem.j2 
wordpress-proxy/files/blog_key.pem.j2


# how the certificate bundle is built:
1. Prepare the certificate — the cert and key are concatenated into a single file:
[root@proxy01 ~]# cat /etc/nginx/ssl_key/6152893_blog.bertwu.online.pem > /ssl/blog_key.pem
[root@proxy01 ~]# cat /etc/nginx/ssl_key/6152893_blog.bertwu.online.key >> /ssl/blog_key.pem


V. Deploying the Java project zrlog

5.1 Installing Tomcat

1. Copy the rpm-format JDK to the remote hosts
2. Install the JDK locally with yum
3. Create the /soft directory on the remote hosts
4. Unarchive tomcat into /soft
5. Create the symlink
6. Push the tomcat start/stop unit file
7. Push the config file (mode 0600)
8. Start tomcat

[root@manager roles]# cat tomcat/tasks/main.yml 
- name: copy jdk_rpm to webservers
  copy:
    src: "{{ jdk_version }}.rpm"
    dest: "/tmp/{{ jdk_version }}.rpm"

- name: install oraclejdk rpm packages
  yum:
    name: "/tmp/{{ jdk_version }}.rpm"
    state: present

- name: create tomcat dir
  file:
    path: "{{ tomcat_dir }}"
    state: directory

- name: unarchive tomcat package to remote
  unarchive:
    src: "{{ tomcat_version }}.tar.gz"
    dest: "{{ tomcat_dir }}"
    #creates: "{{ tomcat_dir }}/{{ tomcat_version }}/conf/server.xml" # skip if already unpacked

- name: make tomcat link
  file:
    src: "{{ tomcat_dir }}/{{ tomcat_version }}"
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}"
    state: link

- name: copy systemctl manager file
  template:
    src: tomcat.service.j2
    dest: /usr/lib/systemd/system/tomcat.service

- name: start tomcat
  systemd:
    name: tomcat
    state: started

- name: copy tomcat configure file
  template:
    src: server.xml.j2
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}/conf/server.xml"
  notify: restart tomcat



# the systemd unit that manages start/stop
[root@manager roles]# cat tomcat/templates/tomcat.service.j2 
[Unit]
Description=tomcat - high performance web server
Documentation=https://tomcat.apache.org/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
#Environment=JAVA_HOME=/usr/local/jdk
Environment=CATALINA_HOME={{ tomcat_dir }}/{{ tomcat_name }}
Environment=CATALINA_BASE={{ tomcat_dir }}/{{ tomcat_name }}

ExecStart={{ tomcat_dir }}/{{ tomcat_name }}/bin/startup.sh
ExecStop={{ tomcat_dir }}/{{ tomcat_name }}/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

# the default tomcat config file
[root@manager roles]# cat tomcat/templates/server.xml.j2 
<?xml version="1.0" encoding="UTF-8"?>

<!-- tomcat shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">

  <!-- listeners -->
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- global naming resources -->
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- connector -->
  <Service name="Catalina">
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <!-- engine -->
    <Engine name="Catalina" defaultHost="localhost">

      <!-- realm -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <!-- virtual host -->
      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>
    </Engine>
  </Service>
</Server>


# handler: restart tomcat
[root@manager roles]# cat tomcat/handlers/main.yml 
- name: restart tomcat
  systemd:
    name: tomcat
    state: restarted

# both files must be downloaded in advance; tomcat is installed from the binary tarball
[root@manager roles]# ls tomcat/files/
apache-tomcat-8.5.71.tar.gz  jdk-8u281-linux-x64.rpm


# variables
# tomcat
jdk_version: jdk-8u281-linux-x64
tomcat_dir: /soft
tomcat_version: apache-tomcat-8.5.71
tomcat_name: tomcat
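
A local probe of tomcat on each web node once the role has run:

[root@manager roles]# ansible webservers -m shell -a 'curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:8080/'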


(Screenshot of the running project omitted.)

5.2 Deploying the zrlog project

[root@manager roles]# mkdir zrlog-web/{tasks,templates,handlers,meta,files} -p

1. The main task file:
[root@manager roles]# cat zrlog-web/tasks/main.yml 
- name: create code path
  file:
    path: "{{ zrlog_code_path }}"
    state: directory
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    recurse: yes

- name: unarchive zrlog package to remote
  unarchive:
    src: zrlog.tar.gz
    dest: "{{ zrlog_code_path }}"
    creates: "{{ zrlog_code_path }}/zrlog/ROOT/favicon.ico" # skip if already unpacked

- name: configure zrlog server.xml
  template:
    src: server.xml.j2
    dest: "{{ tomcat_dir }}/{{ tomcat_name }}/conf/server.xml"
  notify: restart tomcat

- name: configure mount nfs
  mount:
    src: "172.16.1.32:{{ nfs_share_zrlog }}"
    path: "{{ zrlog_code_path }}/zrlog/ROOT/attached/"
    fstype: nfs
    opts: defaults
    state: mounted

2. The config file:
[root@manager roles]# cat zrlog-web/templates/server.xml.j2 
<?xml version="1.0" encoding="UTF-8"?>

<!-- tomcat shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">

  <!-- listeners -->
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- global naming resources -->
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- connector -->
  <Service name="Catalina">
    <Connector port="{{ tomcat_port }}" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />

    <!-- engine -->
    <Engine name="Catalina" defaultHost="localhost">

      <!-- realm -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <!-- virtual hosts -->
      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>

      <Host name="{{ zrlog_domain }}"  appBase="{{ zrlog_code_path }}/zrlog"
            unpackWARs="true" autoDeploy="true">
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="zrlog_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />
      </Host>

    </Engine>
  </Service>
</Server>


3. The code directory was packaged in advance:
[root@manager roles]# ls zrlog-web/files/zrlog.tar.gz 
zrlog-web/files/zrlog.tar.gz

4. Dependencies:
[root@manager roles]# cat zrlog-web/meta/main.yml 
dependencies:
  - { role: tomcat }

5. Variables:
[root@manager roles]# cat group_vars/all 
# zrlog
zrlog_code_path: /webcode
tomcat_port: 8080
zrlog_domain: zrlog.bertwu.online

5.3 The layer-7 zrlog proxy with haproxy

If the architecture stops at layer-7 load balancing, this role needs keepalived; if a layer-4 tier sits in front, this tier is just a cluster and keepalived attaches at layer 4 instead.

1. Create the directories:
[root@manager roles]# mkdir zrlog-proxy/{tasks,templates,handlers,meta,files} -p

2. The main task file:
[root@manager roles]# cat zrlog-proxy/tasks/main.yml 
- name: create https key dir
  file:
    path: "{{ https_key_dir }}"
    state: directory

- name: copy https key_pem to remote
  copy:
    src: zrlog_key.pem.j2
    dest: "{{ https_key_dir }}/zrlog_key.pem"

- name: configure zrlog haproxy file
  template:
    src: haproxy.cfg.j2
    dest: "{{ haproxy_include_path }}/zrlog_haproxy.cfg"
  notify: restarted haproxy server

3. The haproxy config file:
[root@manager roles]# cat zrlog-proxy/templates/haproxy.cfg.j2 
frontend webs # with multiple sites the frontend name apparently must not repeat
	bind *:80
	bind *:443 ssl crt {{ https_key_dir }}/zrlog_key.pem
	mode http

	# ACL that routes zrlog traffic to zrlog_cluster
	acl zrlog_domain hdr(host) -i {{ zrlog_domain }}
	redirect scheme https code 301 if !{ ssl_fc } zrlog_domain
	use_backend zrlog_cluster if zrlog_domain


backend zrlog_cluster
	balance roundrobin
	option httpchk HEAD / HTTP/1.1\r\nHost:\ {{ zrlog_domain }}
	{% for host in groups['webservers'] %}
	server {{ host }} {{ host }}:{{ tomcat_port }} check port {{ tomcat_port }} inter 3s rise 2 fall 3
	{% endfor %}

4. Dependencies:
[root@manager roles]# cat zrlog-proxy/meta/main.yml 
dependencies:
  - { role: haproxy }

5. The certificate directory:
[root@manager roles]# ls zrlog-proxy/files/zrlog_key.pem.j2 
zrlog-proxy/files/zrlog_key.pem.j2

6. How the certificate bundle is built — append the cert and key into a single file under the certificate directory:
[root@proxy01 ssl_key]# cat ./6181156_zrlog.bertwu.online.pem >/tmp/zrlog_key.pem
[root@proxy01 ssl_key]# cat ./6181156_zrlog.bertwu.online.key >>/tmp/zrlog_key.pem

(Screenshot of the running project omitted.)

VI. Deploying the phpMyAdmin project

6.1 Deploying phpmyadmin on the web tier

1. Layer-7 load balancing uses nginx; layer 4 uses LVS.
2. The phpmyadmin code is packaged in advance and unarchived onto the web nodes, since it already contains the database connection file.
3. It relies on php-fpm; the redis connection was configured when PHP was installed — /etc/php.ini carries the redis connection info, and /etc/php-fpm.d/www.conf sets how sessions are stored (database or file).
4. It therefore depends on both php-fpm and nginx-server.

[root@manager roles]# mkdir phpmyadmin/{tasks,templates,handlers,files,meta} -p

1. The main task file:
[root@manager roles]# cat phpmyadmin/tasks/main.yml 
- name: create code path
  file:
    path: "{{ phpmyadmin_code_path }}"
    state: directory
    owner: "{{ all_user }}"
    group: "{{ all_group }}"
    mode: "0755"
    recurse: yes

- name: unarchive code package to remote
  unarchive:
    src: phpmyadmin.tar.gz
    dest: "{{ phpmyadmin_code_path }}"
    creates: "{{ phpmyadmin_code_path }}/phpmyadmin/config.inc.php"

- name: configure nginx proxy file
  template:
    src: phpmyadmin.conf.j2
    dest: "{{ nginx_include_dir }}/phpmyadmin.conf"
  notify: restart nginx server


2. Dependencies:
[root@manager roles]# cat phpmyadmin/meta/main.yml 
dependencies:
  - { role: php-fpm }
  - { role: nginx-server }

3. The nginx config file:
[root@manager roles]# cat phpmyadmin/templates/phpmyadmin.conf.j2 
server {
	listen {{ phpmyadmin_nginx_listen_port }};
	server_name {{ phpmyadmin_domain }};
	client_max_body_size 100m;
	root {{ phpmyadmin_code_path }}/phpmyadmin;
	charset utf-8;
	location / {
		index index.php index.html;
	}
	location ~* .*\.php$ {
		fastcgi_pass 127.0.0.1:9000;
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		include /etc/nginx/fastcgi_params;
		fastcgi_param HTTPS {{ php_https_state }};
	}
}

4. The code package, built in advance:
[root@manager roles]# ls phpmyadmin/files/
phpmyadmin.tar.gz

5. Variables:
# phpmyadmin
phpmyadmin_code_path: /webcode
phpmyadmin_domain: phpmyadmin.bertwu.online
phpmyadmin_nginx_listen_port: 80
php_https_state: "off"    # switch to on once the front end serves https

6.2 Nginx on the layer-7 load balancers (haproxy as the alternative)

Note: stop the haproxy started earlier by hand first, otherwise nginx fails to start because ports 80 and 443 conflict. Alternatively nginx and haproxy could listen on different ports, with the LVS tier in front defining another port cluster, which is cumbersome — and in production nobody runs haproxy and nginx side by side on the layer-7 tier anyway. This is purely an experiment.

[root@manager roles]# mkdir phpmyadmin-proxy/{tasks,templates,meta,files} -p

1. The main task file:
root@manager roles]# cat phpmyadmin-proxy/tasks/main.yml 
- name: create certificate dir
  file:
    path: "{
    
    { ssl_key_dir }}"
    state: directory

- name: unarchive ssl_key to ssl_key_dir
  unarchive:
    src: 6386765_phpmyadmin.bertwu.online_nginx.zip
    dest: "{
    
    { ssl_key_dir }}"
    creates: "{
    
    { ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.key"


- name: copy proxy_params file
  template:
    src: proxy_params.j2
    dest: /etc/nginx



- name: configure nginx file
  template:
    src: phpmyadmin.conf.j2
    dest: "{
    
    { nginx_include_dir }}/phpmyadmin.conf"
  notify: restart nginx server
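
A quick check that the certificate was unpacked where nginx expects it (an ad-hoc sketch; ssl_key_dir is /etc/nginx/ssl_key per group_vars):

[root@manager roles]# ansible proxyservers -m shell -a "ls /etc/nginx/ssl_key"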

2. Dependencies
[root@manager roles]# cat phpmyadmin-proxy/meta/main.yml 
dependencies:
  - { role: nginx-server }


3. Layer-7 load balancer proxy parameters
[root@manager roles]# cat phpmyadmin-proxy/templates/proxy_params.j2 
# ip
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# http version
proxy_http_version 1.1;
proxy_set_header Connection "";

# timeout
proxy_connect_timeout 120s;
proxy_read_timeout 120s;
proxy_send_timeout 120s;

# buffer
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;

4. Layer-7 load balancer proxy configuration
[root@manager roles]# cat phpmyadmin-proxy/templates/phpmyadmin.conf.j2 
upstream {{ web_cluster_name }} {
	{% for host in groups["webservers"] %}
	server {{ host }}:{{ phpmyadmin_nginx_listen_port }};
	{% endfor %}
}


server {
	listen 443 ssl;
	server_name {{ phpmyadmin_domain }};
	ssl_certificate {{ ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.pem;
	ssl_certificate_key {{ ssl_key_dir }}/6386765_phpmyadmin.bertwu.online.key;

	location / {
		proxy_pass http://{{ web_cluster_name }};
		include proxy_params;
	}
}

server {
	listen {{ nginx_proxy_port }};
	server_name {{ phpmyadmin_domain }};
	return 302 https://$server_name$request_uri;
}
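
With the group_vars below (web_cluster_name: phpmyadmin, phpmyadmin_nginx_listen_port: 80), the upstream block should render roughly as (sketch):

upstream phpmyadmin {
	server 172.16.1.7:80;
	server 172.16.1.8:80;
	server 172.16.1.9:80;
}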

5. Aliyun certificate
[root@manager roles]# ls phpmyadmin-proxy/files/
6386765_phpmyadmin.bertwu.online_nginx.zip

Screenshot (image not included)
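
A quick end-to-end check, run from a client that resolves phpmyadmin.bertwu.online to the proxy (a sketch; -k skips certificate verification):

[root@manager roles]# curl -I http://phpmyadmin.bertwu.online     # expect a 302 redirect to https
[root@manager roles]# curl -kIL https://phpmyadmin.bertwu.online  # expect a 200 from one of the web nodes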

5.3 Consolidated variables for all projects

[root@manager roles]# cat group_vars/all 
# base
all_user: www
all_group: www
uid: 666
gid: 666

# nfs
nfs_share_zrlog: /data/zrlog
nfs_allow_ip_range: 172.16.1.0/24
nfs_share_blog: /data/blog 


# rsync
rsync_virtual_user: rsync_backup
rsync_virtual_path: /etc/rsync.pass
rsync_module_name1: blog
rsync_module_name2: zrlog
rsync_module_path1: /data/blog
rsync_module_path2: /data/zrlog
rsync_virtual_passwd: 123
rsync_port: 873
rsync_max_conn: 200


# sersync
rsync_ip: 172.16.1.31
timeToExecute: 60
rsync_path: /usr/local

#mysql
mysql_ip: 172.16.1.51
mysql_super_user: app
mysql_super_user_passwd: "123456"
mysql_super_user_priv: '*.*:ALL'
allow_ip: '172.16.1.%'

# redis
bind_ip: 172.16.1.41


# nginx
nginx_include_path: /etc/nginx/conf.d/*.conf;
nginx_include_dir: /etc/nginx/conf.d
keepalive_timeout: 65
nginx_conf_path: /etc/nginx/nginx.conf


#php-fpm
php_ini_path: /etc/php.ini
php_fpm_path: /etc/php-fpm.d/www.conf
session_method: redis
pm_max_children: 50
pm_start_servers: 5
pm_min_spare_servers: 5                                                                                                                   
pm_max_spare_servers: 35


# haproxy
stat_port: 8888
maxconn: 4000
haproxy_include_path: /etc/haproxy/conf.d


#keepalived
keepalived_conf_path: /etc/keepalived/keepalived.conf 
proxy_vip: 10.0.0.100


#lvs
lvs_vip: 172.16.1.100
lvs_http_port: 80
lvs_https_port: 443
lvs_rs_network: lo:0


# dns
bertwu_online_zone_name: bertwu.online.zone
dns_master_ip: 172.16.1.91
dns_slave_ip: 172.16.1.92
dns_zone_file_path: /var/named

# wordpress
wordpress_domain: blog.bertwu.online
nginx_http_listen_port: 80
wordpress_code_path: /webcode
https_state: "on"


#wordpress haproxy 
haproxy_port: 80
https_key_dir: /ssl



# tomcat
jdk_version: jdk-8u281-linux-x64
tomcat_dir: /soft
tomcat_version: apache-tomcat-8.5.71
tomcat_name: tomcat


# zrlog
zrlog_code_path: /webcode
tomcat_port: 8080
zrlog_domain: zrlog.bertwu.online

# zrlog-proxy

# phpmyadmin
phpmyadmin_code_path: /webcode
phpmyadmin_domain: phpmyadmin.bertwu.online
phpmyadmin_nginx_listen_port: 80
php_https_state: "on"

#phpmyadmin-proxy
ssl_key_dir: /etc/nginx/ssl_key
web_cluster_name: phpmyadmin
nginx_proxy_port: 80
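
A quick way to confirm a host actually picks these values up (sketch):

[root@manager roles]# ansible 172.16.1.7 -m debug -a "var=phpmyadmin_domain"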

5.4 top.yml summary

[root@manager roles]# cat top.yml 
#- hosts: all
#  roles:
#    - role: base


#- hosts: nfsservers
#  roles:
#    - role: nfs-server


#- hosts: rsyncservers
#  roles:
#    - role: rsync-server

#- hosts: nfsservers
#  roles:
#    - role: sersync-server
#

#- hosts: mysqlservers
#  roles:
#    - role: mysql-server

#- hosts: redisservers
#  roles:
#    - role: redis-server


#- hosts: webservers
#  roles:
#   # - role: php-fpm
#   # - role: nginx-server
#    - role: wordpress-web

#- hosts: proxyservers
#  roles:
#    - role: wordpress-proxy

#- hosts: proxyservers
#  roles:
#    - role: keepalived

#- hosts: lbservers
#  roles:
#    - role: lvs

#- hosts: proxyservers
#  roles:
#    - role: lvs-rs

#- hosts: routes
#  roles:
#    - role: route

#- hosts: dnsservers
#  roles:
#    - role: dns


#- hosts: webservers
#  roles: 
#    - role: zrlog-web  

#- hosts: proxyservers
#  roles:
#    - role: zrlog-proxy

#- hosts: webservers
#  roles:
#    - role: phpmyadmin

#- hosts: proxyservers
#  roles:
#    - role: phpmyadmin-proxy
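
To deploy a given business line, uncomment the plays it needs and run top.yml; checking the syntax and target hosts first is cheap (sketch):

[root@manager roles]# ansible-playbook top.yml --syntax-check
[root@manager roles]# ansible-playbook top.yml --list-hosts
[root@manager roles]# ansible-playbook top.yml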

5.5 Overall project directory tree

[root@manager /]# tree ansible
ansible
└── roles
    ├── ansible.cfg
    ├── base
    │   ├── files
    │   ├── handlers
    │   ├── tasks
    │   │   ├── firewalld.yml
    │   │   ├── kernel.yml
    │   │   ├── limits.yml
    │   │   ├── main.yml
    │   │   ├── user.yml
    │   │   ├── yum_pkg.yml
    │   │   └── yum_repository.yml
    │   └── templates
    ├── dns
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── bertwu.online.zone.j2
    │       └── named.conf.j2
    ├── group_vars
    │   └── all
    ├── haproxy
    │   ├── 1score.awk
    │   ├── count1.awk
    │   ├── handlers
    │   │   └── main.yml
    │   ├── sca
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── haproxy.cfg.j2
    │       └── haproxy.service.j2
    ├── hosts
    ├── keepalived
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── keepalived.conf.j2
    ├── lvs
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   │   └── mail.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── keepalived.conf.j2
    ├── lvs-rs
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── ifcfg-lo:0.j2
    ├── mysql-server
    │   ├── handlers
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    ├── network_init.yml
    ├── nfs-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── exports.j2
    ├── nginx-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── nginx.conf.j2
    ├── php-fpm
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── php.ini.j2
    │       └── www.conf.j2
    ├── phpmyadmin
    │   ├── files
    │   │   └── phpmyadmin.tar.gz
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── phpmyadmin.conf.j2
    ├── phpmyadmin-proxy
    │   ├── files
    │   │   └── 6386765_phpmyadmin.bertwu.online_nginx.zip
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── phpmyadmin.conf.j2
    │       └── proxy_params.j2
    ├── redis-server
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── redis.conf.j2
    ├── route
    │   └── tasks
    │       ├── main.yml
    │       └── main.yml1
    ├── rsync-server
    │   ├── files
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── rsyncd.conf.j2
    ├── sersync-server
    │   ├── files
    │   │   └── sersync2.5.4_64bit_binary_stable_final.tar.gz
    │   ├── handlers
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── confxml1.xml.j2
    │       └── confxml2.xml.j2
    ├── tomcat
    │   ├── files
    │   │   ├── apache-tomcat-8.5.71.tar.gz
    │   │   └── jdk-8u281-linux-x64.rpm
    │   ├── handlers
    │   │   └── main.yml
    │   ├── meta
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── server.xml.j2
    │       └── tomcat.service.j2
    ├── top.yml
    ├── wordpress-proxy
    │   ├── files
    │   │   └── blog_key.pem.j2
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── haproxy.cfg.j2
    ├── wordpress-web
    │   ├── files
    │   │   └── wordpress.tar.gz
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── wordpress.conf.j2
    ├── zrlog-proxy
    │   ├── files
    │   │   └── zrlog_key.pem.j2
    │   ├── handlers
    │   ├── meta
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── haproxy.cfg.j2
    └── zrlog-web
        ├── files
        │   └── zrlog.tar.gz
        ├── handlers
        ├── meta
        │   └── main.yml
        ├── tasks
        │   └── main.yml
        └── templates
            └── server.xml.j2

Appendix:

1. nginx configuration for the wordpress layer-7 load balancer:

[root@proxy01 conf.d]# cat wordpress.conf 
upstream wdblog {
	server 172.16.1.7:80;
	server 172.16.1.8:80;
	server 172.16.1.9:80;
}

server {
	listen 443 default ssl;
	server_name blog.bertwu.online;
	ssl_certificate ssl_key/6188098_blog.bertwu.online.pem;
	ssl_certificate_key ssl_key/6188098_blog.bertwu.online.key;

	location / {
		proxy_pass http://wdblog;
		include proxy_params;
	}
}


server {
	listen 80;
	server_name blog.bertwu.online;
	return 302 https://$server_name$request_uri;
}

2. nginx configuration for the zrlog layer-7 load balancer:

[root@proxy01 conf.d]# cat zrblog.conf 
upstream zrlog {
	server 172.16.1.7:8080;
	server 172.16.1.8:8080;
	#server 172.16.1.9:8080;
}
server {
	listen 443 ssl;

	server_name zrlog.bertwu.online;
	ssl_certificate ssl_key/6181156_zrlog.bertwu.online.pem;
	ssl_certificate_key ssl_key/6181156_zrlog.bertwu.online.key;
	ssl_session_timeout 100m;
	ssl_session_cache  shared:cache:10m;

	location / {
		proxy_pass http://zrlog;
		include proxy_params;
	}
}


server {
	listen 80;
	server_name zrlog.bertwu.online;
	return 302 https://$server_name$request_uri;
}

Reprinted from blog.csdn.net/m0_46090675/article/details/120389294