Source code download: https://gitee.com/raymond9/kubernetes-ansible
1.High-Availability Kubernetes Cluster Planning
Role | Hostname | Specs | IP Address | Installed Software |
---|---|---|---|---|
ansible | ansible-server.example.local | 2C2G | 172.31.3.100 | ansible |
master1 | k8s-master01.example.local | 2C4G | 172.31.3.101 | chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl |
master2 | k8s-master02.example.local | 2C4G | 172.31.3.102 | chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl |
master3 | k8s-master03.example.local | 2C4G | 172.31.3.103 | chrony-client, docker, kube-controller-manager, kube-scheduler, kube-apiserver, kubelet, kube-proxy, kubectl |
ha1 | k8s-ha01.example.local | 2C2G | 172.31.3.104, 172.31.3.188 (VIP) | chrony-server, haproxy, keepalived |
ha2 | k8s-ha02.example.local | 2C2G | 172.31.3.105 | chrony-server, haproxy, keepalived |
harbor1 | k8s-harbor01.example.local | 2C2G | 172.31.3.106 | chrony-client, docker, docker-compose, harbor |
harbor2 | k8s-harbor02.example.local | 2C2G | 172.31.3.107 | chrony-client, docker, docker-compose, harbor |
etcd1 | k8s-etcd01.example.local | 2C2G | 172.31.3.108 | chrony-client, docker, etcd |
etcd2 | k8s-etcd02.example.local | 2C2G | 172.31.3.109 | chrony-client, docker, etcd |
etcd3 | k8s-etcd03.example.local | 2C2G | 172.31.3.110 | chrony-client, docker, etcd |
node1 | k8s-node01.example.local | 2C4G | 172.31.3.111 | chrony-client, docker, kubelet, kube-proxy |
node2 | k8s-node02.example.local | 2C4G | 172.31.3.112 | chrony-client, docker, kubelet, kube-proxy |
node3 | k8s-node03.example.local | 2C4G | 172.31.3.113 | chrony-client, docker, kubelet, kube-proxy |
Software versions and Pod/Service CIDR planning:
Configuration | Notes |
---|---|
Supported OS versions | CentOS 7.9/Stream 8, Rocky 8, Ubuntu 18.04/20.04 |
Docker version | 20.10.14 |
Kubernetes version | 1.22.8 |
Pod CIDR | 192.168.0.0/12 |
Service CIDR | 10.96.0.0/12 |
2.Install and Configure Ansible
2.1 Install Ansible
#CentOS
[root@ansible-server ~]# yum -y install ansible
[root@ansible-server ~]# ansible --version
ansible 2.9.25
config file = /data/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
#Install the latest Ansible on Ubuntu 18.04
root@ubuntu1804:~# apt update
root@ubuntu1804:~# apt -y install software-properties-common
root@ubuntu1804:~# apt-add-repository --yes --update ppa:ansible/ansible
root@ubuntu1804:~# apt -y install ansible
root@ubuntu1804:~# ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.17 (default, Feb 27 2021, 15:10:58) [GCC 7.5.0]
#Install Ansible on Ubuntu 20.04
[root@ubuntu ~]# apt -y install ansible
2.2 Configure Ansible
[root@ansible-server ~]# mkdir /data/ansible
[root@ansible-server ~]# cd /data/ansible
[root@ansible-server ansible]# vim ansible.cfg
[defaults]
inventory = ./inventory
forks = 10
roles_path = ./roles
remote_user = root
#Set the IPs below according to your own k8s cluster host plan
[root@ansible-server ansible]# vim inventory
[master]
172.31.3.101 hname=k8s-master01
172.31.3.102 hname=k8s-master02
172.31.3.103 hname=k8s-master03
[ha]
172.31.3.104 hname=k8s-ha01
172.31.3.105 hname=k8s-ha02
[harbor]
172.31.3.106 hname=k8s-harbor01
172.31.3.107 hname=k8s-harbor02
[etcd]
172.31.3.108 hname=k8s-etcd01
172.31.3.109 hname=k8s-etcd02
172.31.3.110 hname=k8s-etcd03
[node]
172.31.3.111 hname=k8s-node01
172.31.3.112 hname=k8s-node02
172.31.3.113 hname=k8s-node03
[all:vars]
domain=example.local
[k8s_cluster:children]
master
node
[chrony_server:children]
ha
[chrony_client:children]
master
node
harbor
etcd
[keepalives_master]
172.31.3.104
[keepalives_backup]
172.31.3.105
[haproxy:children]
ha
[master01]
172.31.3.101
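Before continuing, it is worth confirming that the inventory parses and that every group resolves to the hosts you expect. A suggested check, run from the /data/ansible directory (these commands only read the inventory and do not need SSH access yet):
[root@ansible-server ansible]# ansible-inventory --graph
[root@ansible-server ansible]# ansible k8s_cluster --list-hosts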
3.Set the Client NIC Name and IP
#Rocky 8 and CentOS setup
[root@172 ~]# bash reset.sh
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 11
Rocky 8.5 NIC name changed successfully. Reboot the system for the change to take effect!
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 12
Enter IP address: 172.31.0.101
IP 172.31.0.101 available!
Enter subnet prefix length: 21
Enter gateway address: 172.31.0.2
IP 172.31.0.2 available!
Rocky 8.5 IP address and gateway changed successfully. Reboot the system for the changes to take effect!
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 21
#Ubuntu setup
[C:\~]$ ssh [email protected]
Connecting to 172.31.7.3:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-156-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Dec 27 13:56:42 CST 2021
System load: 0.17 Processes: 193
Usage of /: 2.1% of 91.17GB Users logged in: 1
Memory usage: 10% IP address for ens33: 172.31.7.3
Swap usage: 0%
* Super-optimized for small spaces - read how we shrank the memory
footprint of MicroK8s to make it the smallest full K8s around.
https://ubuntu.com/blog/microk8s-memory-optimisation
19 updates can be applied immediately.
18 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable
New release '20.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Mon Dec 27 13:56:31 2021
/usr/bin/xauth: file /home/raymond/.Xauthority does not exist
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
raymond@ubuntu1804:~$ bash reset.sh
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 18
Enter password: 123456
[sudo] password for raymond: Enter new UNIX password: Retype new UNIX password: passwd: password updated successfully
Ubuntu 18.04 root login has been configured. Log in again for it to take effect!
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 21
raymond@ubuntu1804:~$ exit
logout
Connection closed.
Disconnected from remote host(172.31.7.3:22) at 13:57:16.
Type `help' to learn how to use Xshell prompt.
[C:\~]$ ssh [email protected]
Connecting to 172.31.7.3:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-156-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Mon Dec 27 13:57:47 CST 2021
System load: 0.06 Processes: 199
Usage of /: 2.1% of 91.17GB Users logged in: 1
Memory usage: 11% IP address for ens33: 172.31.7.3
Swap usage: 0%
* Super-optimized for small spaces - read how we shrank the memory
footprint of MicroK8s to make it the smallest full K8s around.
https://ubuntu.com/blog/microk8s-memory-optimisation
19 updates can be applied immediately.
18 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable
New release '20.04.3 LTS' available.
Run 'do-release-upgrade' to upgrade to it.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
/usr/bin/xauth: file /root/.Xauthority does not exist
root@ubuntu1804:~# mv /home/raymond/reset.sh .
root@ubuntu1804:~# bash reset.sh
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 11
Ubuntu 18.04 NIC name changed successfully. Reboot the system for the change to take effect!
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 12
Enter IP address: 172.31.0.103
IP 172.31.0.103 available!
Enter subnet prefix length: 21
Enter gateway address: 172.31.0.2
IP 172.31.0.2 available!
Ubuntu 18.04 IP address and gateway changed successfully. Reboot the system for the changes to take effect!
************************************************************
*                Initialization Script Menu                *
* 1.Disable SELinux             12.Change IP and gateway   *
* 2.Disable firewall            13.Set hostname            *
* 3.Optimize SSH                14.Set PS1 and env vars    *
* 4.Set system aliases          15.Disable SWAP            *
* 5.Apply options 1-4           16.Tune kernel parameters  *
* 6.Set up vimrc                17.Tune resource limits    *
* 7.Configure package repos     18.Enable Ubuntu root login*
* 8.Minimal-install extra pkgs  19.Remove unused Ubuntu pkgs*
* 9.Install and configure mail  20.Reboot system           *
* 10.Change SSH port            21.Exit                    *
* 11.Rename network interface                              *
************************************************************
Please select an option (1-21): 21
4.Key-Based SSH Authentication Script
#Set the IPs below according to your own k8s cluster host plan
[root@ansible-server ansible]# cat ssh_key.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2021-12-20
#FileName: ssh_key.sh
#URL: raymond.blog.csdn.net
#Description: ssh_key for CentOS 7/8 & Ubuntu 18.04/20.04 & Rocky 8
#Copyright (C): 2021 All rights reserved
#*********************************************************************************************
COLOR="echo -e \\033[01;31m"
END='\033[0m'
NET_NAME=`ip addr |awk -F"[: ]" '/^2: e.*/{print $3}'`
IP=`ip addr show ${NET_NAME}| awk -F" +|/" '/global/{print $3}'`
export SSHPASS=123456
HOSTS="
172.31.3.101
172.31.3.102
172.31.3.103
172.31.3.104
172.31.3.105
172.31.3.106
172.31.3.107
172.31.3.108
172.31.3.109
172.31.3.110
172.31.3.111
172.31.3.112
172.31.3.113"
os(){
OS_ID=`sed -rn '/^NAME=/s@.*="([[:alpha:]]+).*"$@\1@p' /etc/os-release`
}
ssh_key_push(){
rm -f ~/.ssh/id_rsa*
ssh-keygen -f /root/.ssh/id_rsa -P '' &> /dev/null
if [ ${OS_ID} == "CentOS" -o ${OS_ID} == "Rocky" ] &> /dev/null;then
rpm -q sshpass &> /dev/null || { ${COLOR}"Installing the sshpass package"${END};yum -y install sshpass &> /dev/null; }
else
dpkg -S sshpass &> /dev/null || { ${COLOR}"Installing the sshpass package"${END};apt -y install sshpass &> /dev/null; }
fi
sshpass -e ssh-copy-id -o StrictHostKeyChecking=no ${IP} &> /dev/null
[ $? -eq 0 ] && echo ${IP} is finished || echo ${IP} is false
for i in ${HOSTS};do
sshpass -e scp -o StrictHostKeyChecking=no -r /root/.ssh root@${i}: &> /dev/null
[ $? -eq 0 ] && echo ${i} is finished || echo ${i} is false
done
for i in ${HOSTS};do
scp /root/.ssh/known_hosts ${i}:.ssh/ &> /dev/null
[ $? -eq 0 ] && echo ${i} is finished || echo ${i} is false
done
}
main(){
os
ssh_key_push
}
main
[root@ansible-server ansible]# bash ssh_key.sh
172.31.3.100 is finished
172.31.3.101 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished
172.31.3.101 is finished
172.31.3.102 is finished
172.31.3.103 is finished
172.31.3.104 is finished
172.31.3.105 is finished
172.31.3.106 is finished
172.31.3.107 is finished
172.31.3.108 is finished
172.31.3.109 is finished
172.31.3.110 is finished
172.31.3.111 is finished
172.31.3.112 is finished
172.31.3.113 is finished
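As a quick sanity check that key-based login now works everywhere, an ad-hoc ping should return SUCCESS/"pong" for every host (output omitted here; the exact formatting depends on the Ansible version):
[root@ansible-server ansible]# ansible all -m ping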
5.System Initialization and Package Installation
5.1 System Initialization
[root@ansible-server ansible]# mkdir -p roles/reset/{tasks,templates,vars}
[root@ansible-server ansible]# cd roles/reset/
[root@ansible-server reset]# ls
tasks templates vars
[root@ansible-server reset]# vim templates/yum8.repo.j2
[BaseOS]
name=BaseOS
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/BaseOS/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/BaseOS/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[AppStream]
name=AppStream
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/AppStream/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/AppStream/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[extras]
name=extras
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/extras/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/extras/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
{% if ansible_distribution =="Rocky" %}
[plus]
{% elif ansible_distribution=="CentOS" %}
[centosplus]
{% endif %}
{% if ansible_distribution =="Rocky" %}
name=plus
{% elif ansible_distribution=="CentOS" %}
name=centosplus
{% endif %}
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/plus/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/centosplus/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever-stream/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[epel]
name=epel
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/fedora/epel/$releasever/Everything/$basearch/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/epel/$releasever/Everything/$basearch/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=https://{{ ROCKY_URL }}/fedora/epel/RPM-GPG-KEY-EPEL-$releasever
{% elif ansible_distribution=="CentOS" %}
gpgkey=https://{{ URL }}/epel/RPM-GPG-KEY-EPEL-$releasever
{% endif %}
[root@ansible-server reset]# vim templates/yum7.repo.j2
[base]
name=base
baseurl=https://{{ URL }}/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[extras]
name=extras
baseurl=https://{{ URL }}/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[updates]
name=updates
baseurl=https://{{ URL }}/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[centosplus]
name=centosplus
baseurl=https://{{ URL }}/centos/$releasever/centosplus/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-$releasever
[epel]
name=epel
baseurl=https://{{ URL }}/epel/$releasever/$basearch/
gpgcheck=1
gpgkey=https://{{ URL }}/epel/RPM-GPG-KEY-EPEL-$releasever
[root@ansible-server reset]# vim templates/apt.list.j2
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }} main restricted universe multiverse
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-security main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-security main restricted universe multiverse
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-updates main restricted universe multiverse
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-proposed main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-proposed main restricted universe multiverse
deb http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-backports main restricted universe multiverse
deb-src http://{{ URL }}/ubuntu/ {{ ansible_distribution_release }}-backports main restricted universe multiverse
#Set VIP below to your own keepalived VIP (virtual IP) address, and set HARBOR_DOMAIN to your own harbor domain name
[root@ansible-server reset]# vim vars/main.yml
VIP: 172.31.3.188
HARBOR_DOMAIN: harbor.raymonds.cc
ROCKY_URL: mirrors.ustc.edu.cn
URL: mirrors.cloud.tencent.com
[root@ansible-server reset]# vim tasks/set_hostname.yml
- name: set hostname
hostname:
name: "{{ hname }}.{{ domain }}"
[root@ansible-server reset]# vim tasks/set_hosts.yml
- name: set hosts file
lineinfile:
path: "/etc/hosts"
line: "{{ item }} {{ hostvars[item].ansible_hostname }}.{{ domain }} {{ hostvars[item].ansible_hostname }}"
loop:
"{{ play_hosts }}"
- name: set hosts file2
lineinfile:
path: "/etc/hosts"
line: "{{ item }}"
loop:
- "{{ VIP }} k8s-lb"
- "{{ VIP }} {{ HARBOR_DOMAIN }}"
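With the inventory and the vars above, these two tasks are expected to append entries to /etc/hosts on every host roughly like the following (one line per play host, plus the two VIP aliases); the excerpt is only an illustration:
172.31.3.101 k8s-master01.example.local k8s-master01
172.31.3.111 k8s-node01.example.local k8s-node01
172.31.3.188 k8s-lb
172.31.3.188 harbor.raymonds.cc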
[root@ansible-server reset]# vim tasks/disable_selinux.yml
- name: disable selinux
replace:
path: /etc/sysconfig/selinux
regexp: '^(SELINUX=).*'
replace: '\1disabled'
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
[root@ansible-server reset]# vim tasks/disable_firewall.yml
- name: disable firewall
systemd:
name: firewalld
state: stopped
enabled: no
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: disable ufw
systemd:
name: ufw
state: stopped
enabled: no
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server reset]# vim tasks/disable_networkmanager.yml
- name: disable NetworkManager
systemd:
name: NetworkManager
state: stopped
enabled: no
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
[root@ansible-server reset]# vim tasks/disable_swap.yml
- name: disable swap
replace:
path: /etc/fstab
regexp: '^(.*swap.*)'
replace: '#\1'
- name: get sd number
shell:
cmd: lsblk|awk -F"[ └─]" '/SWAP/{printf $3}'
register: SD_NAME
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="20"
- name: disable swap for ubuntu20
shell:
cmd: systemctl mask dev-{{ SD_NAME.stdout }}.swap
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="20"
[root@ansible-server reset]# vim tasks/set_limits.yml
- name: set limit
shell:
cmd: ulimit -SHn 65535
- name: set limits.conf file
lineinfile:
path: "/etc/security/limits.conf"
line: "{{ item }}"
loop:
- "* soft nofile 655360"
- "* hard nofile 131072"
- "* soft nproc 655350"
- "* hard nproc 655350"
- "* soft memlock unlimited"
- "* hard memlock unlimited"
[root@ansible-server reset]# vim tasks/optimization_sshd.yml
- name: optimization sshd disable UseDNS
replace:
path: /etc/ssh/sshd_config
regexp: '^#(UseDNS).*'
replace: '\1 no'
- name: optimization sshd disable CentOS or Rocky GSSAPIAuthentication
replace:
path: /etc/ssh/sshd_config
regexp: '^(GSSAPIAuthentication).*'
replace: '\1 no'
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: optimization sshd disable Ubuntu GSSAPIAuthentication
replace:
path: /etc/ssh/sshd_config
regexp: '^#(GSSAPIAuthentication).*'
replace: '\1 no'
notify:
- restart sshd
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server reset]# vim tasks/set_alias.yml
- name: set CentOS or Rocky alias
lineinfile:
path: ~/.bashrc
line: "{{ item }}"
loop:
- "alias cdnet=\"cd /etc/sysconfig/network-scripts\""
- "alias vie0=\"vim /etc/sysconfig/network-scripts/ifcfg-eth0\""
- "alias vie1=\"vim /etc/sysconfig/network-scripts/ifcfg-eth1\""
- "alias scandisk=\"echo '- - -' > /sys/class/scsi_host/host0/scan;echo '- - -' > /sys/class/scsi_host/host1/scan;echo '- - -' > /sys/class/scsi_host/host2/scan\""
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: set Ubuntu alias
lineinfile:
path: ~/.bashrc
line: "{{ item }}"
loop:
- "alias cdnet=\"cd /etc/netplan\""
- "alias scandisk=\"echo '- - -' > /sys/class/scsi_host/host0/scan;echo '- - -' > /sys/class/scsi_host/host1/scan;echo '- - -' > /sys/class/scsi_host/host2/scan\""
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server reset]# vim tasks/set_mirror.yml
- name: find CentOS or Rocky repo files
find:
paths: /etc/yum.repos.d/
patterns: "*.repo"
register: FILENAME
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky repo files
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ FILENAME.files }}"
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: set CentOS8 or Rocky8 Mirror warehouse
template:
src: yum8.repo.j2
dest: /etc/yum.repos.d/base.repo
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: set CentOS7 Mirror warehouse
template:
src: yum7.repo.j2
dest: /etc/yum.repos.d/base.repo
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: set Ubuntu Mirror warehouse
template:
src: apt.list.j2
dest: /etc/apt/sources.list
when:
- ansible_distribution=="Ubuntu"
- name: delete lock files
file:
path: "{{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server reset]# vim tasks/main.yml
- include: set_hostname.yml
- include: set_hosts.yml
- include: disable_selinux.yml
- include: disable_firewall.yml
- include: disable_networkmanager.yml
- include: disable_swap.yml
- include: set_limits.yml
- include: optimization_sshd.yml
- include: set_alias.yml
- include: set_mirror.yml
[root@ansible-server reset]# cd ../../
[root@ansible-server ansible]# tree roles/reset/
roles/reset/
├── tasks
│ ├── disable_firewall.yml
│ ├── disable_networkmanager.yml
│ ├── disable_selinux.yml
│ ├── disable_swap.yml
│ ├── main.yml
│ ├── optimization_sshd.yml
│ ├── set_alias.yml
│ ├── set_hostname.yml
│ ├── set_hosts.yml
│ ├── set_limits.yml
│ └── set_mirror.yml
├── templates
│ ├── apt.list.j2
│ ├── yum7.repo.j2
│ └── yum8.repo.j2
└── vars
└── main.yml
3 directories, 15 files
[root@ansible-server ansible]# vim reset_role.yml
---
- hosts: all
roles:
- role: reset
[root@ansible-server ansible]# ansible-playbook reset_role.yml
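After the play finishes, a short ad-hoc spot check is a reasonable way to confirm that hostnames and the /etc/hosts entries were applied (purely optional):
[root@ansible-server ansible]# ansible k8s_cluster -m shell -a 'hostname; tail -n 2 /etc/hosts'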
5.2 Install Packages
[root@ansible-server ansible]# mkdir -p roles/reset-installpackage/{files,tasks}
[root@ansible-server ansible]# cd roles/reset-installpackage/
[root@ansible-server reset-installpackage]# ls
files tasks
[root@ansible-server reset-installpackage]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -P files/
[root@ansible-server reset-installpackage]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -P files/
[root@ansible-server reset-installpackage]# vim files/ge4.18_ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@ansible-server reset-installpackage]# vim files/lt4.18_ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@ansible-server reset-installpackage]# vim files/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
[root@ansible-server reset-installpackage]# vim tasks/install_package.yml
- name: install Centos or Rocky package
yum:
name: vim,tree,lrzsz,wget,jq,psmisc,net-tools,telnet,git
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: install Centos8 or Rocky8 package
yum:
name: rsync
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: install Ubuntu package
apt:
name: tree,lrzsz,jq
force: yes
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server reset-installpackage]# vim tasks/set_centos7_kernel.yml
- name: update CentOS7
yum:
name: '*'
state: latest
exclude: kernel*
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: copy CentOS7 kernel files
copy:
src: "{{ item }}"
dest: /tmp
loop:
- kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
- kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: Finding RPM files
find:
paths: "/tmp"
patterns: "*.rpm"
register: RPM_RESULT
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: Install RPM
yum:
name: "{{ item.path }}"
with_items: "{{ RPM_RESULT.files }}"
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: delete kernel files
file:
path: "{{ item.path }}"
state: absent
with_items: "{{ RPM_RESULT.files }}"
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: set grub
shell:
cmd: grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg; grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
[root@ansible-server reset-installpackage]# vim tasks/install_ipvsadm.yml
- name: install CentOS or Rocky ipvsadm
yum:
name: ipvsadm,ipset,sysstat,conntrack,libseccomp
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- inventory_hostname in groups.k8s_cluster
- name: install Ubuntu ipvsadm
apt:
name: ipvsadm,ipset,sysstat,conntrack,libseccomp-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- inventory_hostname in groups.k8s_cluster
[root@ansible-server reset-installpackage]# vim tasks/set_ipvs.yml
- name: configuration load_mod
shell:
cmd: |
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
when:
- inventory_hostname in groups.k8s_cluster
- name: configuration load_mod kernel ge4.18
shell:
cmd: modprobe -- nf_conntrack
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") or (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="20")
- inventory_hostname in groups.k8s_cluster
- name: configuration load_mod kernel lt4.18
shell:
cmd: modprobe -- nf_conntrack_ipv4
when:
- (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="18")
- inventory_hostname in groups.k8s_cluster
- name: Copy ge4.18_ipvs.conf file
copy:
src: ge4.18_ipvs.conf
dest: /etc/modules-load.d/ipvs.conf
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") or (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="20")
- inventory_hostname in groups.k8s_cluster
- name: Copy lt4.18_ipvs.conf file
copy:
src: lt4.18_ipvs.conf
dest: /etc/modules-load.d/ipvs.conf
when:
- (ansible_distribution=="Ubuntu" and ansible_distribution_major_version=="18")
- inventory_hostname in groups.k8s_cluster
- name: start systemd-modules-load service
systemd:
name: systemd-modules-load
state: started
enabled: yes
when:
- inventory_hostname in groups.k8s_cluster
[root@ansible-server reset-installpackage]# vim tasks/set_k8s_kernel.yml
- name: copy k8s.conf file
copy:
src: k8s.conf
dest: /etc/sysctl.d/
- name: Load kernel config
shell:
cmd: "sysctl --system"
[root@ansible-server reset-installpackage]# vim tasks/reboot_system.yml
- name: reboot system
reboot:
[root@ansible-server reset-installpackage]# vim tasks/main.yml
- include: install_package.yml
- include: set_centos7_kernel.yml
- include: install_ipvsadm.yml
- include: set_ipvs.yml
- include: set_k8s_kernel.yml
- include: reboot_system.yml
[root@ansible-server reset-installpackage]# cd ../../
[root@ansible-server ansible]# tree roles/reset-installpackage/
roles/reset-installpackage/
├── files
│ ├── ge4.18_ipvs.conf
│ ├── k8s.conf
│ ├── kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
│ ├── kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
│ └── lt4.18_ipvs.conf
└── tasks
├── install_ipvsadm.yml
├── install_package.yml
├── main.yml
├── reboot_system.yml
├── set_centos7_kernel.yml
├── set_ipvs.yml
└── set_k8s_kernel.yml
2 directories, 12 files
[root@ansible-server ansible]# vim reset_installpackage_role.yml
---
- hosts: all
serial: 3
roles:
- role: reset-installpackage
[root@ansible-server ansible]# ansible-playbook reset_installpackage_role.yml
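The last task reboots every host, so give them a minute to come back and then verify the kernel and the ipvs modules. A suggested spot check (the 4.19 kernel line only applies to the CentOS 7 hosts; the others keep their stock kernel):
[root@ansible-server ansible]# ansible k8s_cluster -m shell -a 'uname -r'
[root@ansible-server ansible]# ansible k8s_cluster -m shell -a 'lsmod | grep ip_vs | head -n 5'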
6.chrony
6.1 chrony-server
[root@ansible-server ansible]# mkdir -p roles/chrony-server/{tasks,handlers}
[root@ansible-server ansible]# cd roles/chrony-server/
[root@ansible-server chrony-server]# ls
handlers tasks
[root@ansible-server chrony-server]# vim tasks/install_chrony_yum.yml
- name: install CentOS or Rocky chrony
yum:
name: chrony
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^pool.*' string line
lineinfile:
path: /etc/chrony.conf
regexp: '^pool.*'
state: absent
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^server.*' string line
lineinfile:
path: /etc/chrony.conf
regexp: '^server.*'
state: absent
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: add Time server for CentOS or Rocky /etc/chrony.conf file
lineinfile:
path: /etc/chrony.conf
insertafter: '^# Please consider .*'
line: "server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst"
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: Substitution '^#(allow).*' string for CentOS or Rocky /etc/chrony.conf file
replace:
path: /etc/chrony.conf
regexp: '^#(allow).*'
replace: '\1 0.0.0.0/0'
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: Substitution '^#(local).*' string for CentOS or Rocky /etc/chrony.conf file
replace:
path: /etc/chrony.conf
regexp: '^#(local).*'
replace: '\1 stratum 10'
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
[root@ansible-server chrony-server]# vim tasks/install_chrony_apt.yml
- name: delete lock files
file:
path: "{{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: install Ubuntu chrony
apt:
name: chrony
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: delete Ubuntu /etc/chrony/chrony.conf file contains '^pool.*' string line
lineinfile:
path: /etc/chrony/chrony.conf
regexp: '^pool.*'
state: absent
when:
- ansible_distribution=="Ubuntu"
notify:
- restart chronyd
- name: add Time server for Ubuntu /etc/chrony/chrony.conf file
lineinfile:
path: /etc/chrony/chrony.conf
insertafter: '^# See http:.*'
line: "server ntp.aliyun.com iburst\nserver time1.cloud.tencent.com iburst\nserver ntp.tuna.tsinghua.edu.cn iburst"
when:
- ansible_distribution=="Ubuntu"
- name: add 'allow 0.0.0.0/0' string and 'local stratum 10' string for Ubuntu /etc/chrony/chrony.conf file
lineinfile:
path: /etc/chrony/chrony.conf
line: "{{ item }}"
loop:
- "allow 0.0.0.0/0"
- "local stratum 10"
when:
- ansible_distribution=="Ubuntu"
notify:
- restart chronyd
[root@ansible-server chrony-server]# vim tasks/service.yml
- name: start chronyd
systemd:
name: chronyd
state: started
enabled: yes
[root@ansible-server chrony-server]# vim tasks/main.yml
- include: install_chrony_yum.yml
- include: install_chrony_apt.yml
- include: service.yml
[root@ansible-server chrony-server]# vim handlers/main.yml
- name: restart chronyd
systemd:
name: chronyd
state: restarted
[root@ansible-server chrony-server]# cd ../../
[root@ansible-server ansible]# tree roles/chrony-server/
roles/chrony-server/
├── handlers
│ └── main.yml
└── tasks
├── install_chrony_apt.yml
├── install_chrony_yum.yml
├── main.yml
└── service.yml
2 directories, 5 files
[root@ansible-server ansible]# vim chrony_server_role.yml
---
- hosts: chrony_server
roles:
- role: chrony-server
[root@ansible-server ansible]# ansible-playbook chrony_server_role.yml
[root@k8s-ha01 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- 203.107.6.88 2 6 37 62 -15ms[ -15ms] +/- 35ms
^* 139.199.215.251 2 6 37 62 -10us[+1488us] +/- 37ms
^? 101.6.6.172 0 7 0 - +0ns[ +0ns] +/- 0ns
[root@k8s-ha02 ~]# chronyc sources -nv
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88 2 6 77 3 -4058us[+2582us] +/- 31ms
^+ 139.199.215.251 2 6 77 2 +6881us[+6881us] +/- 33ms
^? 101.6.6.172 0 7 0 - +0ns[ +0ns] +/- 0ns
6.2 chrony-client
[root@ansible-server ansible]# mkdir -p roles/chrony-client/{tasks,handlers,vars}
[root@ansible-server ansible]# cd roles/chrony-client/
[root@ansible-server chrony-client]# ls
handlers tasks vars
#Set the IPs below to the chrony-server addresses: SERVER1 is ha1's IP and SERVER2 is ha2's IP
[root@ansible-server chrony-client]# vim vars/main.yml
SERVER1: 172.31.3.104
SERVER2: 172.31.3.105
[root@ansible-server chrony-client]# vim tasks/install_chrony_yum.yml
- name: install CentOS or Rocky chrony
yum:
name: chrony
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^pool.*' string line
lineinfile:
path: /etc/chrony.conf
regexp: '^pool.*'
state: absent
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: delete CentOS or Rocky /etc/chrony.conf file contains '^server.*' string line
lineinfile:
path: /etc/chrony.conf
regexp: '^server.*'
state: absent
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
- name: add Time server for CentOS or Rocky /etc/chrony.conf file
lineinfile:
path: /etc/chrony.conf
insertafter: '^# Please consider .*'
line: "server {{ SERVER1 }} iburst\nserver {{ SERVER2 }} iburst"
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
notify:
- restart chronyd
[root@ansible-server chrony-client]# vim tasks/install_chrony_apt.yml
- name: delete lock files
file:
path: "{{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: install Ubuntu chrony
apt:
name: chrony
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: delete Ubuntu /etc/chrony/chrony.conf file contains '^pool.*' string line
lineinfile:
path: /etc/chrony/chrony.conf
regexp: '^pool.*'
state: absent
when:
- ansible_distribution=="Ubuntu"
notify:
- restart chronyd
- name: add Time server for Ubuntu /etc/chrony/chrony.conf file
lineinfile:
path: /etc/chrony/chrony.conf
insertafter: '^# See http:.*'
line: "server {{ SERVER1 }} iburst\nserver {{ SERVER2 }} iburst"
when:
- ansible_distribution=="Ubuntu"
notify:
- restart chronyd
[root@ansible-server chrony-client]# vim tasks/service.yml
- name: start chronyd
systemd:
name: chronyd
state: started
enabled: yes
[root@ansible-server chrony-client]# vim tasks/main.yml
- include: install_chrony_yum.yml
- include: install_chrony_apt.yml
- include: service.yml
[root@ansible-server chrony-client]# vim handlers/main.yml
- name: restart chronyd
systemd:
name: chronyd
state: restarted
[root@ansible-server chrony-client]# cd ../../
[root@ansible-server ansible]# tree roles/chrony-client/
roles/chrony-client/
├── handlers
│ └── main.yml
├── tasks
│ ├── install_chrony_apt.yml
│ ├── install_chrony_yum.yml
│ ├── main.yml
│ └── service.yml
└── vars
└── main.yml
3 directories, 6 files
[root@ansible-server ansible]# vim chrony_client_role.yml
---
- hosts: chrony_client
roles:
- role: chrony-client
[root@ansible-server ansible]# ansible-playbook chrony_client_role.yml
[root@k8s-master01 ~]# chronyc sources -nv
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* k8s-ha01 3 6 17 28 -57us[ -29us] +/- 31ms
^+ k8s-ha02 3 6 17 29 +204us[ +231us] +/- 34ms
7.haproxy
[root@ansible-server ansible]# mkdir -p roles/haproxy/{tasks,vars,files,templates}
[root@ansible-server ansible]# cd roles/haproxy/
[root@ansible-server haproxy]# ls
files tasks templates vars
[root@ansible-server haproxy]# wget http://www.lua.org/ftp/lua-5.4.3.tar.gz -P files/
[root@ansible-server haproxy]# wget https://www.haproxy.org/download/2.4/src/haproxy-2.4.10.tar.gz -P files/
[root@ansible-server haproxy]# vim files/haproxy.service
[Unit]
Description=HAProxy Load Balancer
After=syslog.target network.target
[Service]
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /var/lib/haproxy/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
[Install]
WantedBy=multi-user.target
#Set VIP below to your own keepalived VIP (virtual IP) address
[root@ansible-server haproxy]# vim vars/main.yml
SRC_DIR: /usr/local/src
LUA_FILE: lua-5.4.3.tar.gz
HAPROXY_FILE: haproxy-2.4.10.tar.gz
HAPROXY_INSTALL_DIR: /apps/haproxy
STATS_AUTH_USER: admin
STATS_AUTH_PASSWORD: 123456
VIP: 172.31.3.188
[root@ansible-server haproxy]# vim templates/haproxy.cfg.j2
global
    maxconn 100000
    chroot {{ HAPROXY_INSTALL_DIR }}
    stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
    uid 99
    gid 99
    daemon
    pidfile /var/lib/haproxy/haproxy.pid
    log 127.0.0.1 local3 info

defaults
    option http-keep-alive
    option forwardfor
    maxconn 100000
    mode http
    timeout connect 300000ms
    timeout client 300000ms
    timeout server 300000ms

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth {{ STATS_AUTH_USER }}:{{ STATS_AUTH_PASSWORD }}

listen kubernetes-6443
    bind {{ VIP }}:6443
    mode tcp
    log global
{% for i in groups.master %}
    server {{ i }} {{ i }}:6443 check inter 3s fall 2 rise 5
{% endfor %}

listen harbor-80
    bind {{ VIP }}:80
    mode http
    log global
    balance source
{% for i in groups.harbor %}
    server {{ i }} {{ i }}:80 check inter 3s fall 2 rise 5
{% endfor %}
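For reference, with the master group defined in the inventory the kubernetes-6443 stanza should render on the ha nodes roughly as follows (the harbor-80 stanza expands the same way for the harbor group):
listen kubernetes-6443
    bind 172.31.3.188:6443
    mode tcp
    log global
    server 172.31.3.101 172.31.3.101:6443 check inter 3s fall 2 rise 5
    server 172.31.3.102 172.31.3.102:6443 check inter 3s fall 2 rise 5
    server 172.31.3.103 172.31.3.103:6443 check inter 3s fall 2 rise 5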
[root@ansible-server haproxy]# vim tasks/install_package.yml
- name: install CentOS or Rocky depend on the package
yum:
name: gcc,make,gcc-c++,glibc,glibc-devel,pcre,pcre-devel,openssl,openssl-devel,systemd-devel,libtermcap-devel,ncurses-devel,libevent-devel,readline-devel
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- inventory_hostname in groups.haproxy
- name: delete lock files
file:
path: "{{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- inventory_hostname in groups.haproxy
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- inventory_hostname in groups.haproxy
- name: install Ubuntu depend on the package
apt:
name: gcc,make,openssl,libssl-dev,libpcre3,libpcre3-dev,zlib1g-dev,libreadline-dev,libsystemd-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- inventory_hostname in groups.haproxy
[root@ansible-server haproxy]# vim tasks/build_lua.yml
- name: unarchive lua package
unarchive:
src: "{{ LUA_FILE }}"
dest: "{{ SRC_DIR }}"
when:
- inventory_hostname in groups.haproxy
- name: get LUA_DIR directory
shell:
cmd: echo {{ LUA_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'
register: LUA_DIR
when:
- inventory_hostname in groups.haproxy
- name: Build and install lua
shell:
chdir: "{{ SRC_DIR }}/{{ LUA_DIR.stdout }}"
cmd: make all test
when:
- inventory_hostname in groups.haproxy
[root@ansible-server haproxy]# vim tasks/build_haproxy.yml
- name: unarchive haproxy package
unarchive:
src: "{{ HAPROXY_FILE }}"
dest: "{{ SRC_DIR }}"
when:
- inventory_hostname in groups.haproxy
- name: get HAPROXY_DIR directory
shell:
cmd: echo {{ HAPROXY_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'
register: HAPROXY_DIR
when:
- inventory_hostname in groups.haproxy
- name: make Haproxy
shell:
chdir: "{{ SRC_DIR }}/{{ HAPROXY_DIR.stdout }}"
cmd: make -j {{ ansible_processor_vcpus }} ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 USE_LUA=1 LUA_INC={{ SRC_DIR }}/{{ LUA_DIR.stdout }}/src/ LUA_LIB={{ SRC_DIR }}/{{ LUA_DIR.stdout }}/src/ PREFIX={{ HAPROXY_INSTALL_DIR }}
when:
- inventory_hostname in groups.haproxy
- name: make install Haproxy
shell:
chdir: "{{ SRC_DIR }}/{{ HAPROXY_DIR.stdout }}"
cmd: make install PREFIX={{ HAPROXY_INSTALL_DIR }}
when:
- inventory_hostname in groups.haproxy
[root@ansible-server haproxy]# vim tasks/config.yml
- name: copy haproxy.service file
copy:
src: haproxy.service
dest: /lib/systemd/system
when:
- inventory_hostname in groups.haproxy
- name: create haproxy link
file:
src: "../..{{ HAPROXY_INSTALL_DIR }}/sbin/{{ item.src }}"
dest: "/usr/sbin/{{ item.src }}"
state: link
owner: root
group: root
mode: 755
force: yes
with_items:
- src: haproxy
when:
- inventory_hostname in groups.haproxy
- name: create /etc/haproxy directory
file:
path: /etc/haproxy
state: directory
when:
- inventory_hostname in groups.haproxy
- name: create /var/lib/haproxy/ directory
file:
path: /var/lib/haproxy/
state: directory
when:
- inventory_hostname in groups.haproxy
- name: copy haproxy.cfg file
template:
src: haproxy.cfg.j2
dest: /etc/haproxy/haproxy.cfg
when:
- inventory_hostname in groups.haproxy
- name: Add the kernel
sysctl:
name: net.ipv4.ip_nonlocal_bind
value: "1"
when:
- inventory_hostname in groups.haproxy
- name: PATH variable
copy:
content: 'PATH={{ HAPROXY_INSTALL_DIR }}/sbin:$PATH'
dest: /etc/profile.d/haproxy.sh
when:
- inventory_hostname in groups.haproxy
- name: PATH variable entry
shell:
cmd: . /etc/profile.d/haproxy.sh
when:
- inventory_hostname in groups.haproxy
[root@ansible-server haproxy]# vim tasks/service.yml
- name: start haproxy
systemd:
name: haproxy
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.haproxy
[root@ansible-server haproxy]# vim tasks/main.yml
- include: install_package.yml
- include: build_lua.yml
- include: build_haproxy.yml
- include: config.yml
- include: service.yml
[root@ansible-server haproxy]# cd ../../
[root@ansible-server ansible]# tree roles/haproxy/
roles/haproxy/
├── files
│ ├── haproxy-2.4.10.tar.gz
│ ├── haproxy.service
│ └── lua-5.4.3.tar.gz
├── tasks
│ ├── build_haproxy.yml
│ ├── build_lua.yml
│ ├── config.yml
│ ├── install_package.yml
│ ├── main.yml
│ └── service.yml
├── templates
│ └── haproxy.cfg.j2
└── vars
└── main.yml
4 directories, 11 files
[root@ansible-server ansible]# vim haproxy_role.yml
---
- hosts: haproxy:master:harbor
roles:
- role: haproxy
[root@ansible-server ansible]# ansible-playbook haproxy_role.yml
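A suggested check after the play: both ha nodes should already be listening on the VIP ports even though keepalived has not assigned the VIP yet, because the role sets net.ipv4.ip_nonlocal_bind=1; the stats page should also answer on port 9999 with the credentials from vars/main.yml.
[root@ansible-server ansible]# ansible haproxy -m shell -a 'ss -ntl | grep -E "6443|9999"'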
8.keepalived
8.1 keepalived-master
[root@ansible-server ansible]# mkdir -p roles/keepalived-master/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/keepalived-master/
[root@ansible-server keepalived-master]# ls
files tasks templates vars
[root@ansible-server keepalived-master]# wget https://keepalived.org/software/keepalived-2.2.4.tar.gz -P files/
[root@ansible-server keepalived-master]# vim files/check_haproxy.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-09
#FileName: check_haproxy.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
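The script stops keepalived when no haproxy process is found, so that the VIP can fail over to the other node. Once keepalived is installed it can also be exercised by hand on an ha node; a suggested manual test (expect exit code 0 while haproxy is running):
[root@k8s-ha01 ~]# bash /etc/keepalived/check_haproxy.sh; echo $?
0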
#Set VIP below to your own keepalived VIP (virtual IP) address
[root@ansible-server keepalived-master]# vim vars/main.yml
URL: mirrors.cloud.tencent.com
ROCKY_URL: mirrors.sjtug.sjtu.edu.cn
KEEPALIVED_FILE: keepalived-2.2.4.tar.gz
SRC_DIR: /usr/local/src
KEEPALIVED_INSTALL_DIR: /apps/keepalived
STATE: MASTER
PRIORITY: 100
VIP: 172.31.3.188
[root@ansible-server keepalived-master]# vim templates/PowerTools.repo.j2
[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[root@ansible-server keepalived-master]# vim templates/keepalived.conf.j2
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state {{ STATE }}
    interface {{ ansible_default_ipv4.interface }}
    virtual_router_id 51
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ VIP }} dev {{ ansible_default_ipv4.interface }} label {{ ansible_default_ipv4.interface }}:1
    }
    track_script {
        check_haproxy
    }
}
[root@ansible-server keepalived-master]# vim tasks/install_package.yml
- name: find "[PowerTools]" mirror warehouse
find:
path: /etc/yum.repos.d/
contains: '\[PowerTools\]'
register: RETURN
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: copy repo file
template:
src: PowerTools.repo.j2
dest: /etc/yum.repos.d/PowerTools.repo
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") and (ansible_distribution_major_version=="8")
- RETURN.matched == 0
- name: install CentOS8 or Rocky8 depend on the package
yum:
name: make,gcc,ipvsadm,autoconf,automake,openssl-devel,libnl3-devel,iptables-devel,ipset-devel,file-devel,net-snmp-devel,glib2-devel,pcre2-devel,libnftnl-devel,libmnl-devel,systemd-devel
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: install CentOS7 depend on the package
yum:
name: make,gcc,libnfnetlink-devel,libnfnetlink,ipvsadm,libnl,libnl-devel,libnl3,libnl3-devel,lm_sensors-libs,net-snmp-agent-libs,net-snmp-libs,openssh-server,openssh-clients,openssl,openssl-devel,automake,iproute
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: delete lock files
file:
path: "{{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: install Ubuntu 20.04 depend on the package
apt:
name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="20"
- name: install Ubuntu 18.04 depend on the package
apt:
name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,iptables-dev,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="18"
[root@ansible-server keepalived-master]# vim tasks/keepalived_file.yml
- name: unarchive keepalived package
unarchive:
src: "{{ KEEPALIVED_FILE }}"
dest: "{{ SRC_DIR }}"
[root@ansible-server keepalived-master]# vim tasks/build.yml
- name: get KEEPALIVED_DIR directory
shell:
cmd: echo {{ KEEPALIVED_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'
register: KEEPALIVED_DIR
- name: Build and install Keepalived
shell:
chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"
cmd: ./configure --prefix={{ KEEPALIVED_INSTALL_DIR }} --disable-fwmark
- name: make && make install
shell:
chdir: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}"
cmd: make -j {{ ansible_processor_vcpus }} && make install
[root@ansible-server keepalived-master]# vim tasks/config.yml
- name: create /etc/keepalived directory
file:
path: /etc/keepalived
state: directory
- name: copy keepalived.conf file
template:
src: keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
- name: copy check_haproxy.sh file
copy:
src: check_haproxy.sh
dest: /etc/keepalived/
mode: 0755
- name: copy keepalived.service file
copy:
remote_src: True
src: "{{ SRC_DIR }}/{{ KEEPALIVED_DIR.stdout }}/keepalived/keepalived.service"
dest: /lib/systemd/system/
- name: PATH variable
copy:
content: 'PATH={{ KEEPALIVED_INSTALL_DIR }}/sbin:$PATH'
dest: /etc/profile.d/keepalived.sh
- name: PATH variable entry
shell:
cmd: . /etc/profile.d/keepalived.sh
[root@ansible-server keepalived-master]# vim tasks/service.yml
- name: start keepalived
systemd:
name: keepalived
state: started
enabled: yes
daemon_reload: yes
[root@ansible-server keepalived-master]# vim tasks/main.yml
- include: install_package.yml
- include: keepalived_file.yml
- include: build.yml
- include: config.yml
- include: service.yml
[root@ansible-server keepalived-master]# cd ../../
[root@ansible-server ansible]# tree roles/keepalived-master/
roles/keepalived-master/
├── files
│ ├── check_haproxy.sh
│ └── keepalived-2.2.4.tar.gz
├── tasks
│ ├── build.yml
│ ├── config.yml
│ ├── install_package.yml
│ ├── keepalived_file.yml
│ ├── main.yml
│ └── service.yml
├── templates
│ ├── keepalived.conf.j2
│ └── PowerTools.repo.j2
└── vars
└── main.yml
4 directories, 11 files
[root@ansible-server ansible]# vim keepalived_master_role.yml
---
- hosts: keepalives_master
roles:
- role: keepalived-master
[root@ansible-server ansible]# ansible-playbook keepalived_master_role.yml
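When the play completes, the VIP should be bound on ha1 as a label on its default interface; a suggested check (eth0 is assumed here, the interface name depends on your environment):
[root@k8s-ha01 ~]# ip addr show eth0 | grep 172.31.3.188
    inet 172.31.3.188/32 scope global eth0:1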
8.2 keepalived-backup
[root@ansible-server ansible]# mkdir -p roles/keepalived-backup/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/keepalived-backup/
[root@ansible-server keepalived-backup]# ls
files tasks templates vars
[root@ansible-server keepalived-backup]# wget https://keepalived.org/software/keepalived-2.2.4.tar.gz -P files/
[root@ansible-server keepalived-backup]# vim files/check_haproxy.sh
#!/bin/bash
#
#**********************************************************************************************
#Author: Raymond
#QQ: 88563128
#Date: 2022-01-09
#FileName: check_haproxy.sh
#URL: raymond.blog.csdn.net
#Description: The test script
#Copyright (C): 2022 All rights reserved
#*********************************************************************************************
err=0
for k in $(seq 1 3);do
check_code=$(pgrep haproxy)
if [[ $check_code == "" ]]; then
err=$(expr $err + 1)
sleep 1
continue
else
err=0
break
fi
done
if [[ $err != "0" ]]; then
echo "systemctl stop keepalived"
/usr/bin/systemctl stop keepalived
exit 1
else
exit 0
fi
#Set VIP below to your own keepalived VIP (virtual IP) address
[root@ansible-server keepalived-backup]# vim vars/main.yml
URL: mirrors.cloud.tencent.com
ROCKY_URL: mirrors.sjtug.sjtu.edu.cn
KEEPALIVED_FILE: keepalived-2.2.4.tar.gz
SRC_DIR: /usr/local/src
KEEPALIVED_INSTALL_DIR: /apps/keepalived
STATE: BACKUP
PRIORITY: 90
VIP: 172.31.3.188
[root@ansible-server keepalived-backup]# vim templates/PowerTools.repo.j2
[PowerTools]
name=PowerTools
{% if ansible_distribution =="Rocky" %}
baseurl=https://{{ ROCKY_URL }}/rocky/$releasever/PowerTools/$basearch/os/
{% elif ansible_distribution=="CentOS" %}
baseurl=https://{{ URL }}/centos/$releasever/PowerTools/$basearch/os/
{% endif %}
gpgcheck=1
{% if ansible_distribution =="Rocky" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rockyofficial
{% elif ansible_distribution=="CentOS" %}
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
{% endif %}
[root@ansible-server keepalived-backup]# vim templates/keepalived.conf.j2
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state {{ STATE }}
    interface {{ ansible_default_ipv4.interface }}
    virtual_router_id 51
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        {{ VIP }} dev {{ ansible_default_ipv4.interface }} label {{ ansible_default_ipv4.interface }}:1
    }
    track_script {
        check_haproxy
    }
}
[root@ansible-server keepalived-backup]# vim tasks/install_package.yml
- name: find "[PowerTools]" mirror warehouse
find:
path: /etc/yum.repos.d/
contains: '\[PowerTools\]'
register: RETURN
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: copy repo file
template:
src: PowerTools.repo.j2
dest: /etc/yum.repos.d/PowerTools.repo
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky") and (ansible_distribution_major_version=="8")
- RETURN.matched == 0
- name: install CentOS8 or Rocky8 depend on the package
yum:
name: make,gcc,ipvsadm,autoconf,automake,openssl-devel,libnl3-devel,iptables-devel,ipset-devel,file-devel,net-snmp-devel,glib2-devel,pcre2-devel,libnftnl-devel,libmnl-devel,systemd-devel
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_distribution_major_version=="8"
- name: install CentOS7 depend on the package
yum:
name: make,gcc,libnfnetlink-devel,libnfnetlink,ipvsadm,libnl,libnl-devel,libnl3,libnl3-devel,lm_sensors-libs,net-snmp-agent-libs,net-snmp-libs,openssh-server,openssh-clients,openssl,openssl-devel,automake,iproute
when:
- ansible_distribution=="CentOS"
- ansible_distribution_major_version=="7"
- name: delete lock files
file:
path: "{
{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: install Ubuntu 20.04 depend on the package
apt:
name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="20"
- name: install Ubuntu 18.04 depend on the package
apt:
name: make,gcc,ipvsadm,build-essential,pkg-config,automake,autoconf,iptables-dev,libipset-dev,libnl-3-dev,libnl-genl-3-dev,libssl-dev,libxtables-dev,libip4tc-dev,libip6tc-dev,libipset-dev,libmagic-dev,libsnmp-dev,libglib2.0-dev,libpcre2-dev,libnftnl-dev,libmnl-dev,libsystemd-dev
force: yes
when:
- ansible_distribution=="Ubuntu"
- ansible_distribution_major_version=="18"
[root@ansible-server keepalived-backup]# vim tasks/keepalived_file.yml
- name: unarchive keepalived package
unarchive:
src: "{
{ KEEPALIVED_FILE }}"
dest: "{
{ SRC_DIR }}"
[root@ansible-server keepalived-backup]# vim tasks/build.yml
- name: get KEEPALIVED_DIR directory
shell:
cmd: echo {{ KEEPALIVED_FILE }} | sed -nr 's/^(.*[0-9]).([[:lower:]]).*/\1/p'
register: KEEPALIVED_DIR
- name: Build and install Keepalived
shell:
chdir: "{
{ SRC_DIR }}/{
{ KEEPALIVED_DIR.stdout }}"
cmd: ./configure --prefix={
{
KEEPALIVED_INSTALL_DIR }} --disable-fwmark
- name: make && make install
shell:
chdir: "{
{ SRC_DIR }}/{
{ KEEPALIVED_DIR.stdout }}"
cmd: make -j {
{
ansible_processor_vcpus }} && make install
[root@ansible-server keepalived-backup]# vim tasks/config.yml
- name: create /etc/keepalived directory
file:
path: /etc/keepalived
state: directory
- name: copy keepalived.conf file
template:
src: keepalived.conf.j2
dest: /etc/keepalived/keepalived.conf
- name: copy check_haproxy.sh file
copy:
src: check_haproxy.sh
dest: /etc/keepalived/
mode: 0755
- name: copy keepalived.service file
copy:
remote_src: True
src: "{
{ SRC_DIR }}/{
{ KEEPALIVED_DIR.stdout }}/keepalived/keepalived.service"
dest: /lib/systemd/system/
- name: PATH variable
copy:
content: 'PATH={{ KEEPALIVED_INSTALL_DIR }}/sbin:$PATH'
dest: /etc/profile.d/keepalived.sh
- name: PATH variable entry
shell:
cmd: . /etc/profile.d/keepalived.sh
[root@ansible-server keepalived-backup]# vim tasks/service.yml
- name: start keepalived
systemd:
name: keepalived
state: started
enabled: yes
daemon_reload: yes
[root@ansible-server keepalived-backup]# vim tasks/main.yml
- include: install_package.yml
- include: keepalived_file.yml
- include: build.yml
- include: config.yml
- include: service.yml
[root@ansible-server keepalived-backup]# cd ../../
[root@ansible-server ansible]# tree roles/keepalived-backup/
roles/keepalived-backup/
├── files
│ ├── check_haproxy.sh
│ └── keepalived-2.2.4.tar.gz
├── tasks
│ ├── build.yml
│ ├── config.yml
│ ├── install_package.yml
│ ├── keepalived_file.yml
│ ├── main.yml
│ └── service.yml
├── templates
│ ├── keepalived.conf.j2
│ └── PowerTools.repo.j2
└── vars
└── main.yml
4 directories, 11 files
[root@ansible-server ansible]# vim keepalived_backup_role.yml
---
- hosts: keepalives_backup
roles:
- role: keepalived-backup
[root@ansible-server ansible]# ansible-playbook keepalived_backup_role.yml
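As with the MASTER node, a quick check on ha02 confirms the BACKUP instance came up and is not holding the VIP while ha01 is healthy (the VIP should only appear here after a failover):
[root@k8s-ha02 ~]# systemctl is-active keepalived
[root@k8s-ha02 ~]# ip address show | grep 172.31.3.188
The first command should print active; the second should return nothing under normal conditions.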
9.harbor
9.1 Install Docker from the binary package
[root@ansible-server ansible]# mkdir -p roles/docker-binary/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/docker-binary/
[root@ansible-server docker-binary]# ls
files tasks vars templates
[root@ansible-server docker-binary]# wget https://download.docker.com/linux/static/stable/x86_64/docker-20.10.14.tgz -P files/
#Set HARBOR_DOMAIN below to your own Harbor domain name
[root@ansible-server docker-binary]# vim vars/main.yml
DOCKER_VERSION: 20.10.14
HARBOR_DOMAIN: harbor.raymonds.cc
[root@ansible-server docker-binary]# vim files/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
[root@ansible-server docker-binary]# vim templates/daemon.json.j2
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"insecure-registries": ["{
{ HARBOR_DOMAIN }}"],
"exec-opts": ["native.cgroupdriver=systemd"],
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true
}
[root@ansible-server docker-binary]# vim tasks/docker_files.yml
- name: unarchive docker package
unarchive:
src: "docker-{
{ DOCKER_VERSION }}.tgz"
dest: /usr/local/src
- name: move docker files
shell:
cmd: mv /usr/local/src/docker/* /usr/bin/
[root@ansible-server docker-binary]# vim tasks/service_file.yml
- name: copy docker.service file
copy:
src: docker.service
dest: /lib/systemd/system/docker.service
[root@ansible-server docker-binary]# vim tasks/set_mirror_accelerator.yml
- name: mkdir /etc/docker
file:
path: /etc/docker
state: directory
- name: set mirror_accelerator
template:
src: daemon.json.j2
dest: /etc/docker/daemon.json
[root@ansible-server docker-binary]# vim tasks/set_alias.yml
- name: set docker alias
lineinfile:
path: ~/.bashrc
line: "{
{ item }}"
loop:
- "alias rmi=\"docker images -qa|xargs docker rmi -f\""
- "alias rmc=\"docker ps -qa|xargs docker rm -f\""
[root@ansible-server docker-binary]# vim tasks/service.yml
- name: start docker
systemd:
name: docker
state: started
enabled: yes
daemon_reload: yes
[root@ansible-server docker-binary]# vim tasks/set_swap.yml
- name: set WARNING No swap limit support
replace:
path: /etc/default/grub
regexp: '^(GRUB_CMDLINE_LINUX=.*)\"$'
replace: '\1 swapaccount=1"'
when:
- ansible_distribution=="Ubuntu"
- name: update-grub
shell:
cmd: update-grub
when:
- ansible_distribution=="Ubuntu"
- name: reboot Ubuntu system
reboot:
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server docker-binary]# vim tasks/main.yml
- include: docker_files.yml
- include: service_file.yml
- include: set_mirror_accelerator.yml
- include: set_alias.yml
- include: service.yml
- include: set_swap.yml
[root@ansible-server docker-binary]# cd ../../
[root@ansible-server ansible]# tree roles/docker-binary/
roles/docker-binary/
├── files
│ ├── docker-20.10.14.tgz
│ └── docker.service
├── tasks
│ ├── docker_files.yml
│ ├── main.yml
│ ├── service_file.yml
│ ├── service.yml
│ ├── set_alias.yml
│ ├── set_mirror_accelerator.yml
│ └── set_swap.yml
├── templates
│ └── daemon.json.j2
└── vars
└── main.yml
4 directories, 11 files
9.2 docker-compose
[root@ansible-server ansible]# mkdir -p roles/docker-compose/{tasks,files}
[root@ansible-server ansible]# cd roles/docker-compose/
[root@ansible-server docker-compose]# ls
files tasks
[root@ansible-server docker-compose]# wget https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Linux-x86_64 -O files/docker-compose-linux-x86_64
[root@ansible-server docker-compose]# vim tasks/install_docker_compose.yml
- name: copy docker compose file
copy:
src: docker-compose-linux-x86_64
dest: /usr/bin/docker-compose
mode: 0755
[root@ansible-server docker-compose]# vim tasks/main.yml
- include: install_docker_compose.yml
[root@ansible-server ansible]# tree roles/docker-compose/
roles/docker-compose/
├── files
│ └── docker-compose-linux-x86_64
└── tasks
├── install_docker_compose.yml
└── main.yml
2 directories, 3 files
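Note that this role is never run on its own; it is pulled in below as a dependency of the harbor role. Once that role has run, a quick check on a harbor node confirms the copied binary works (1.29.2 is the release downloaded above):
[root@k8s-harbor01 ~]# docker-compose version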
9.3 harbor
[root@ansible-server ansible]# mkdir -p roles/harbor/{tasks,files,templates,vars,meta}
[root@ansible-server ansible]# cd roles/harbor/
[root@ansible-server harbor]# ls
files meta tasks templates vars
[root@ansible-server harbor]# wget https://github.com/goharbor/harbor/releases/download/v2.4.1/harbor-offline-installer-v2.4.1.tgz -P files/
[root@ansible-server harbor]# vim templates/harbor.service.j2
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor
[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f {{ HARBOR_INSTALL_DIR }}/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f {{ HARBOR_INSTALL_DIR }}/harbor/docker-compose.yml down
[Install]
WantedBy=multi-user.target
[root@ansible-server harbor]# vim vars/main.yml
HARBOR_INSTALL_DIR: /apps
HARBOR_VERSION: 2.4.1
HARBOR_ADMIN_PASSWORD: 123456
[root@ansible-server harbor]# vim tasks/harbor_files.yml
- name: create HARBOR_INSTALL_DIR directory
file:
path: "{
{ HARBOR_INSTALL_DIR }}"
state: directory
- name: unarchive harbor package
unarchive:
src: "harbor-offline-installer-v{
{ HARBOR_VERSION }}.tgz"
dest: "{
{ HARBOR_INSTALL_DIR }}/"
creates: "{
{ HARBOR_INSTALL_DIR }}/harbor"
[root@ansible-server harbor]# vim tasks/config.yml
- name: mv harbor.yml
shell:
cmd: mv {{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml.tmpl {{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml
creates: "{{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
- name: set harbor.yml file 'hostname' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: '^(hostname:) .*'
replace: '\1 {
{ ansible_default_ipv4.address }}'
- name: set harbor.yml file 'harbor_admin_password' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: '^(harbor_admin_password:) .*'
replace: '\1 {
{ HARBOR_ADMIN_PASSWORD }}'
- name: set harbor.yml file 'https' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: '^(https:)'
replace: '#\1'
- name: set harbor.yml file 'port' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: ' (port: 443)'
replace: '# \1'
- name: set harbor.yml file 'certificate' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: ' (certificate: .*)'
replace: '# \1'
- name: set harbor.yml file 'private_key' string line
replace:
path: "{
{ HARBOR_INSTALL_DIR }}/harbor/harbor.yml"
regexp: ' (private_key: .*)'
replace: '# \1'
[root@ansible-server harbor]# vim tasks/install_python.yml
- name: install CentOS or Rocky python
yum:
name: python3
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- name: delete lock files
file:
path: "{
{ item }}"
state: absent
loop:
- /var/lib/dpkg/lock
- /var/lib/apt/lists/lock
- /var/cache/apt/archives/lock
when:
- ansible_distribution=="Ubuntu"
- name: apt update
apt:
update_cache: yes
force: yes
when:
- ansible_distribution=="Ubuntu"
- name: install Ubuntu python
apt:
name: python3
when:
- ansible_distribution=="Ubuntu"
[root@ansible-server harbor]# vim tasks/install_harbor.yml
- name: install harbor
shell:
cmd: "{
{ HARBOR_INSTALL_DIR }}/harbor/install.sh"
[root@ansible-server harbor]# vim tasks/service_file.yml
- name: copy harbor.service
template:
src: harbor.service.j2
dest: /lib/systemd/system/harbor.service
[root@ansible-server harbor]# vim tasks/service.yml
- name: service enable
systemd:
name: harbor
state: started
enabled: yes
daemon_reload: yes
[root@ansible-server harbor]# vim tasks/main.yml
- include: harbor_files.yml
- include: config.yml
- include: install_python.yml
- include: install_harbor.yml
- include: service_file.yml
- include: service.yml
#These are the roles harbor depends on; docker-binary means Docker installed from the binary package. Adjust to your situation.
[root@ansible-server harbor]# vim meta/main.yml
dependencies:
- role: docker-binary
- role: docker-compose
[root@ansible-server harbor]# cd ../../
[root@ansible-server ansible]# tree roles/harbor/
roles/harbor/
├── files
│ └── harbor-offline-installer-v2.4.1.tgz
├── meta
│ └── main.yml
├── tasks
│ ├── config.yml
│ ├── harbor_files.yml
│ ├── install_harbor.yml
│ ├── install_python.yml
│ ├── main.yml
│ ├── service_file.yml
│ └── service.yml
├── templates
│ └── harbor.service.j2
└── vars
└── main.yml
5 directories, 11 files
[root@ansible-server ansible]# vim harbor_role.yml
---
- hosts: harbor
roles:
- role: harbor
[root@ansible-server ansible]# ansible-playbook harbor_role.yml
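Before creating the projects in the next step, it is worth confirming that both Harbor instances answer on port 80 (adjust the IPs to your own harbor hosts). A minimal check from the ansible host:
[root@ansible-server ansible]# curl -s -o /dev/null -w "%{http_code}\n" http://172.31.3.106
[root@ansible-server ansible]# curl -s -o /dev/null -w "%{http_code}\n" http://172.31.3.107
Both should return 200.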
9.4 Create the Harbor projects
This step must not be skipped; otherwise the images pulled later cannot be pushed to Harbor and the Ansible playbooks will fail.
Create a project named google_containers on harbor01
Create a project named google_containers on harbor02
Create a replication registry endpoint on harbor02
Create a replication rule on harbor02
Create a replication registry endpoint on harbor01
Create a replication rule on harbor01
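The projects and replication rules above are created in the Harbor web UI. If you prefer to script the project creation, Harbor also exposes a REST API; the sketch below assumes Harbor v2.x's /api/v2.0/projects endpoint and the admin password configured in vars/main.yml, so verify it against your own Harbor version. The replication endpoints and rules still need to be set up in the UI as described above.
[root@ansible-server ansible]# curl -u admin:123456 -H "Content-Type: application/json" -X POST http://172.31.3.106/api/v2.0/projects -d '{"project_name": "google_containers", "metadata": {"public": "true"}}'
[root@ansible-server ansible]# curl -u admin:123456 -H "Content-Type: application/json" -X POST http://172.31.3.107/api/v2.0/projects -d '{"project_name": "google_containers", "metadata": {"public": "true"}}'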
10. Deploy etcd
10.1 Install etcd
[root@ansible-server ansible]# mkdir -p roles/etcd/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/etcd/
[root@ansible-server etcd]# ls
files tasks templates vars
[root@ansible-server etcd]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.0/etcd-v3.5.0-linux-amd64.tar.gz
[root@ansible-server etcd]# mkdir files/etcd
[root@ansible-server etcd]# tar -xf etcd-v3.5.0-linux-amd64.tar.gz --strip-components=1 -C files/etcd/ etcd-v3.5.0-linux-amd64/etcd{,ctl}
[root@ansible-server etcd]# ls files/etcd/
etcd etcdctl
[root@ansible-server etcd]# rm -f etcd-v3.5.0-linux-amd64.tar.gz
[root@ansible-server etcd]# vim tasks/copy_etcd_file.yml
- name: copy etcd files to etcd
copy:
src: "etcd/{
{ item }}"
dest: /usr/local/bin/
mode: 0755
loop:
- etcd
- etcdctl
when:
- inventory_hostname in groups.etcd
- name: create /opt/cni/bin directory
file:
path: /opt/cni/bin
state: directory
when:
- inventory_hostname in groups.etcd
[root@ansible-server etcd]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O files/cfssl
[root@ansible-server etcd]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O files/cfssljson
#Change the ETCD02 and ETCD03 IP addresses below to your own
[root@ansible-server etcd]# vim vars/main.yml
ETCD_CLUSTER: etcd
K8S_CLUSTER: kubernetes
ETCD_CERT:
- etcd-ca-key.pem
- etcd-ca.pem
- etcd-key.pem
- etcd.pem
ETCD02: 172.31.3.109
ETCD03: 172.31.3.110
[root@ansible-server etcd]# mkdir templates/pki
[root@ansible-server etcd]# vim templates/pki/etcd-ca-csr.json.j2
{
"CN": "{
{ ETCD_CLUSTER }}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
],
"ca": {
"expiry": "876000h"
}
}
[root@ansible-server etcd]# vim templates/pki/ca-config.json.j2
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
[root@ansible-server etcd]# vim templates/pki/etcd-csr.json.j2
{
"CN": "{
{ ETCD_CLUSTER }}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "etcd",
"OU": "Etcd Security"
}
]
}
[root@ansible-server etcd]# vim tasks/create_etcd_cert.yml
- name: copy cfssl and cfssljson tools
copy:
src: "{
{ item }}"
dest: /usr/local/bin
mode: 0755
loop:
- cfssl
- cfssljson
when:
- ansible_hostname=="k8s-etcd01"
- name: create /etc/etcd/ssl directory
file:
path: /etc/etcd/ssl
state: directory
when:
- inventory_hostname in groups.etcd
- name: create pki directory
file:
path: /root/pki
state: directory
when:
- ansible_hostname=="k8s-etcd01"
- name: copy pki files
template:
src: "pki/{
{ item }}.j2"
dest: "/root/pki/{
{ item }}"
loop:
- etcd-ca-csr.json
- ca-config.json
- etcd-csr.json
when:
- ansible_hostname=="k8s-etcd01"
- name: create etcd-ca cert
shell:
chdir: /root/pki
cmd: cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
creates: /etc/etcd/ssl/etcd-ca.pem
when:
- ansible_hostname=="k8s-etcd01"
- name: create etcd cert
shell:
chdir: /root/pki
cmd: "cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,{% for i in groups.etcd %}{
{ hostvars[i].ansible_hostname}},{% endfor %}{% for i in groups.etcd %}{
{ hostvars[i].ansible_default_ipv4.address }}{% if not loop.last %},{% endif %}{% endfor %} -profile={
{ K8S_CLUSTER }} etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd"
creates: /etc/etcd/ssl/etcd-key.pem
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to etcd02
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ ETCD02 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to etcd03
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ ETCD03 }}"
when:
- ansible_hostname=="k8s-etcd01"
[root@ansible-server etcd]# mkdir templates/config
[root@ansible-server etcd]# vim templates/config/etcd.config.yml.j2
name: '{{ inventory_hostname }}'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://{{ ansible_default_ipv4.address }}:2380'
listen-client-urls: 'https://{{ ansible_default_ipv4.address }}:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://{{ ansible_default_ipv4.address }}:2380'
advertise-client-urls: 'https://{{ ansible_default_ipv4.address }}:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: '{% for i in groups.etcd %}{{ hostvars[i].inventory_hostname }}=https://{{ hostvars[i].ansible_default_ipv4.address }}:2380{% if not loop.last %},{% endif %}{% endfor %}'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
peer-transport-security:
cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
peer-client-cert-auth: true
trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
[root@ansible-server etcd]# mkdir files/service
[root@ansible-server etcd]# vim files/service/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
[root@ansible-server etcd]# vim tasks/etcd_config.yml
- name: copy etcd_config file
template:
src: config/etcd.config.yml.j2
dest: /etc/etcd/etcd.config.yml
when:
- inventory_hostname in groups.etcd
- name: copy etcd.service file
copy:
src: service/etcd.service
dest: /lib/systemd/system/etcd.service
when:
- inventory_hostname in groups.etcd
- name: create /etc/kubernetes/pki/etcd directory
file:
path: /etc/kubernetes/pki/etcd
state: directory
when:
- inventory_hostname in groups.etcd
- name: link etcd_ssl to kubernetes pki
file:
src: "/etc/etcd/ssl/{
{ item }}"
dest: "/etc/kubernetes/pki/etcd/{
{ item }}"
state: link
loop:
"{
{ ETCD_CERT }}"
when:
- inventory_hostname in groups.etcd
- name: start etcd
systemd:
name: etcd
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.etcd
[root@ansible-server etcd]# vim tasks/main.yml
- include: copy_etcd_file.yml
- include: create_etcd_cert.yml
- include: etcd_config.yml
[root@ansible-server etcd]# cd ../../
[root@ansible-server ansible]# tree roles/etcd/
roles/etcd/
├── files
│ ├── cfssl
│ ├── cfssljson
│ ├── etcd
│ │ ├── etcd
│ │ └── etcdctl
│ └── service
│ └── etcd.service
├── tasks
│ ├── copy_etcd_file.yml
│ ├── create_etcd_cert.yml
│ ├── etcd_config.yml
│ └── main.yml
├── templates
│ ├── config
│ │ └── etcd.config.yml.j2
│ └── pki
│ ├── ca-config.json.j2
│ ├── etcd-ca-csr.json.j2
│ └── etcd-csr.json.j2
└── vars
└── main.yml
8 directories, 14 files
[root@ansible-server ansible]# vim etcd_role.yml
---
- hosts: etcd
roles:
- role: etcd
[root@ansible-server ansible]# ansible-playbook etcd_role.yml
10.2 Verify etcd
[root@k8s-etcd01 ~]# export ETCDCTL_API=3
[root@k8s-etcd01 ~]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.31.3.108:2379 | a9fef56ff96ed75c | 3.5.0 | 20 kB | false | false | 2 | 8 | 8 | |
| 172.31.3.109:2379 | 8319ef09e8b3d277 | 3.5.0 | 20 kB | true | false | 2 | 8 | 8 | |
| 172.31.3.110:2379 | 209a1f57c506dba2 | 3.5.0 | 20 kB | false | false | 2 | 8 | 8 | |
+-------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
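Besides endpoint status, endpoint health gives a quick go/no-go answer for each member, using the same certificate flags:
[root@k8s-etcd01 ~]# etcdctl --endpoints="172.31.3.108:2379,172.31.3.109:2379,172.31.3.110:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
All three endpoints should report as healthy.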
11. Deploy Docker
#Only this playbook file needs to be created; the docker-binary role was already written in section 9.1
[root@ansible-server ansible]# vim docker_binary_role.yml
---
- hosts: k8s_cluster
roles:
- role: docker-binary
[root@ansible-server ansible]# ansible-playbook docker_binary_role.yml
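A quick spot check on any cluster node confirms the binary install picked up the settings from daemon.json (systemd cgroup driver and the insecure Harbor registry):
[root@k8s-master01 ~]# docker info | grep -E "Server Version|Cgroup Driver|Insecure Registries"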
12. Deploy the master nodes
12.1 Install the master components
[root@ansible-server ansible]# mkdir -p roles/kubernetes-master/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/kubernetes-master/
[root@ansible-server kubernetes-master]# ls
files tasks templates vars
#Change the MASTER01, MASTER02 and MASTER03 IP addresses below to your own
[root@ansible-server kubernetes-master]# vim vars/main.yml
ETCD_CERT:
- etcd-ca-key.pem
- etcd-ca.pem
- etcd-key.pem
- etcd.pem
MASTER01: 172.31.3.101
MASTER02: 172.31.3.102
MASTER03: 172.31.3.103
[root@ansible-server kubernetes-master]# vim tasks/copy_etcd_cert.yml
- name: create /etc/etcd/ssl directory
file:
path: /etc/etcd/ssl
state: directory
when:
- inventory_hostname in groups.master
- name: transfer etcd-ca-key.pem file from etcd01 to master01
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ MASTER01 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to master02
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ MASTER02 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to master03
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ MASTER03 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: create /etc/kubernetes/pki/etcd directory
file:
path: /etc/kubernetes/pki/etcd
state: directory
when:
- inventory_hostname in groups.master
- name: link etcd_ssl to kubernetes pki
file:
src: "/etc/etcd/ssl/{
{ item }}"
dest: "/etc/kubernetes/pki/etcd/{
{ item }}"
state: link
loop:
"{
{ ETCD_CERT }}"
when:
- inventory_hostname in groups.master
[root@ansible-server kubernetes-master]# wget https://dl.k8s.io/v1.22.8/kubernetes-server-linux-amd64.tar.gz
[root@ansible-server kubernetes-master]# mkdir files/bin
[root@ansible-server kubernetes-master]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C files/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@ansible-server kubernetes-master]# ls files/bin/
kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler
[root@ansible-server kubernetes-master]# rm -f kubernetes-server-linux-amd64.tar.gz
[root@ansible-server kubernetes-master]# vim tasks/copy_kubernetes_file.yml
- name: copy kubernetes files to master
copy:
src: "bin/{
{ item }}"
dest: /usr/local/bin/
mode: 0755
loop:
- kube-apiserver
- kube-controller-manager
- kubectl
- kubelet
- kube-proxy
- kube-scheduler
when:
- inventory_hostname in groups.master
- name: create /opt/cni/bin directory
file:
path: /opt/cni/bin
state: directory
when:
- inventory_hostname in groups.master
[root@ansible-server kubernetes-master]# wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O files/cfssl
[root@ansible-server kubernetes-master]# wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O files/cfssljson
[root@ansible-server kubernetes-master]# ls files/
bin cfssl cfssljson
#Change the SERVICE_IP variable below to the Service IP you planned, and set VIP to your keepalived VIP (virtual IP) address
[root@ansible-server kubernetes-master]# vim vars/main.yml
...
SERVICE_IP: 10.96.0.1
VIP: 172.31.3.188
K8S_CLUSTER: kubernetes
KUBERNETES_CERT:
- ca.csr
- ca-key.pem
- ca.pem
- apiserver.csr
- apiserver-key.pem
- apiserver.pem
- front-proxy-ca.csr
- front-proxy-ca-key.pem
- front-proxy-ca.pem
- front-proxy-client.csr
- front-proxy-client-key.pem
- front-proxy-client.pem
- controller-manager.csr
- controller-manager-key.pem
- controller-manager.pem
- scheduler.csr
- scheduler-key.pem
- scheduler.pem
- admin.csr
- admin-key.pem
- admin.pem
- sa.key
- sa.pub
KUBECONFIG:
- controller-manager.kubeconfig
- scheduler.kubeconfig
- admin.kubeconfig
[root@ansible-server kubernetes-master]# mkdir templates/pki
[root@ansible-server kubernetes-master]# vim templates/pki/ca-csr.json.j2
{
"CN": "{
{ K8S_CLUSTER }}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
],
"ca": {
"expiry": "876000h"
}
}
[root@ansible-server kubernetes-master]# vim templates/pki/ca-config.json.j2
{
"signing": {
"default": {
"expiry": "876000h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "876000h"
}
}
}
}
[root@ansible-server kubernetes-master]# vim templates/pki/apiserver-csr.json.j2
{
"CN": "kube-apiserver",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "Kubernetes",
"OU": "Kubernetes-manual"
}
]
}
[root@ansible-server kubernetes-master]# vim templates/pki/front-proxy-ca-csr.json.j2
{
"CN": "{
{ K8S_CLUSTER }}",
"key": {
"algo": "rsa",
"size": 2048
}
}
[root@ansible-server kubernetes-master]# vim templates/pki/front-proxy-client-csr.json.j2
{
"CN": "front-proxy-client",
"key": {
"algo": "rsa",
"size": 2048
}
}
[root@ansible-server kubernetes-master]# vim templates/pki/manager-csr.json.j2
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-controller-manager",
"OU": "Kubernetes-manual"
}
]
}
[root@ansible-server kubernetes-master]# vim templates/pki/scheduler-csr.json.j2
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:kube-scheduler",
"OU": "Kubernetes-manual"
}
]
}
[root@ansible-server kubernetes-master]# vim templates/pki/admin-csr.json.j2
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Beijing",
"L": "Beijing",
"O": "system:masters",
"OU": "Kubernetes-manual"
}
]
}
[root@ansible-server kubernetes-master]# vim tasks/create_kubernetes_cert.yml
- name: create /etc/kubernetes/pki directory
file:
path: /etc/kubernetes/pki
state: directory
when:
- inventory_hostname in groups.master
- name: copy cfssl and cfssljson tools
copy:
src: "{
{ item }}"
dest: /usr/local/bin
mode: 0755
loop:
- cfssl
- cfssljson
when:
- ansible_hostname=="k8s-master01"
- name: create pki directory
file:
path: /root/pki
state: directory
when:
- ansible_hostname=="k8s-master01"
- name: copy pki files
template:
src: "pki/{
{ item }}.j2"
dest: "/root/pki/{
{ item }}"
loop:
- ca-csr.json
- ca-config.json
- apiserver-csr.json
- front-proxy-ca-csr.json
- front-proxy-client-csr.json
- manager-csr.json
- scheduler-csr.json
- admin-csr.json
when:
- ansible_hostname=="k8s-master01"
- name: create ca cert
shell:
chdir: /root/pki
cmd: cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
creates: /etc/kubernetes/pki/ca.pem
when:
- ansible_hostname=="k8s-master01"
- name: create apiserver cert
shell:
chdir: /root/pki
cmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname={{ SERVICE_IP }},{{ VIP }},127.0.0.1,{{ K8S_CLUSTER }},{{ K8S_CLUSTER }}.default,{{ K8S_CLUSTER }}.default.svc,{{ K8S_CLUSTER }}.default.svc.cluster,{{ K8S_CLUSTER }}.default.svc.cluster.local,{% for i in groups.master %}{{ hostvars[i].ansible_default_ipv4.address }}{% if not loop.last %},{% endif %}{% endfor %} -profile={{ K8S_CLUSTER }} apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
creates: /etc/kubernetes/pki/apiserver.pem
when:
- ansible_hostname=="k8s-master01"
- name: create front-proxy-ca cert
shell:
chdir: /root/pki
cmd: cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
creates: /etc/kubernetes/pki/front-proxy-ca.pem
when:
- ansible_hostname=="k8s-master01"
- name: create front-proxy-client cert
shell:
chdir: /root/pki
cmd: cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
creates: /etc/kubernetes/pki/front-proxy-client.pem
when:
- ansible_hostname=="k8s-master01"
- name: create controller-manager cert
shell:
chdir: /root/pki
cmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager
creates: /etc/kubernetes/pki/controller-manager.pem
when:
- ansible_hostname=="k8s-master01"
- name: set-cluster controller-manager.kubeconfig
shell:
cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-credentials controller-manager.kubeconfig
shell:
cmd: kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-context controller-manager.kubeconfig
shell:
cmd: kubectl config set-context system:kube-controller-manager@{{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: use-context controller-manager.kubeconfig
shell:
cmd: kubectl config use-context system:kube-controller-manager@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: create scheduler cert
shell:
chdir: /root/pki
cmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
creates: /etc/kubernetes/pki/scheduler.pem
when:
- ansible_hostname=="k8s-master01"
- name: set-cluster scheduler.kubeconfig
shell:
cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-credentials scheduler.kubeconfig
shell:
cmd: kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-context scheduler.kubeconfig
shell:
cmd: kubectl config set-context system:kube-scheduler@{{ K8S_CLUSTER }} --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: use-context scheduler.kubeconfig
shell:
cmd: kubectl config use-context system:kube-scheduler@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: create admin cert
shell:
chdir: /root/pki
cmd: cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile={{ K8S_CLUSTER }} admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
creates: /etc/kubernetes/pki/admin.pem
when:
- ansible_hostname=="k8s-master01"
- name: set-cluster admin.kubeconfig
shell:
cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/admin.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-credentials admin.kubeconfig
shell:
cmd: kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-context admin.kubeconfig
shell:
cmd: kubectl config set-context kubernetes-admin@{{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: use-context admin.kubeconfig
shell:
cmd: kubectl config use-context kubernetes-admin@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/admin.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: create sa.key
shell:
cmd: openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
creates: /etc/kubernetes/pki/sa.key
when:
- ansible_hostname=="k8s-master01"
- name: create sa.pub
shell:
cmd: openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
creates: /etc/kubernetes/pki/sa.pub
when:
- ansible_hostname=="k8s-master01"
- name: transfer cert files from master01 to master02
synchronize:
src: "/etc/kubernetes/pki/{
{ item }}"
dest: /etc/kubernetes/pki
mode: pull
loop:
"{
{ KUBERNETES_CERT }}"
delegate_to: "{
{ MASTER02 }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer cert files from master01 to master03
synchronize:
src: "/etc/kubernetes/pki/{
{ item }}"
dest: /etc/kubernetes/pki
mode: pull
loop:
"{
{ KUBERNETES_CERT }}"
delegate_to: "{
{ MASTER03 }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer kubeconfig files from master01 to master02
synchronize:
src: "/etc/kubernetes/{
{ item }}"
dest: /etc/kubernetes/
mode: pull
loop:
"{
{ KUBECONFIG }}"
delegate_to: "{
{ MASTER02 }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer kubeconfig files from master01 to master03
synchronize:
src: "/etc/kubernetes/{
{ item }}"
dest: /etc/kubernetes/
mode: pull
loop:
"{
{ KUBECONFIG }}"
delegate_to: "{
{ MASTER03 }}"
when:
- ansible_hostname=="k8s-master01"
#Change SERVICE_SUBNET to your planned Service subnet, POD_SUBNET to your planned Pod subnet, the MASTER variable IPs to the IPs of master02 and master03, HARBOR_DOMAIN to your own Harbor domain name, and CLUSTERDNS to the 10th IP of the Service subnet
[root@ansible-server kubernetes-master]# vim vars/main.yml
...
KUBE_DIRECTROY:
- /etc/kubernetes/manifests/
- /etc/systemd/system/kubelet.service.d
- /var/lib/kubelet
- /var/log/kubernetes
SERVICE_SUBNET: 10.96.0.0/12
POD_SUBNET: 192.168.0.0/12
MASTER:
- 172.31.3.102
- 172.31.3.103
HARBOR_DOMAIN: harbor.raymonds.cc
USERNAME: admin
PASSWORD: 123456
PAUSE_VERSION: 3.5
CLUSTERDNS: 10.96.0.10
PKI_DIR: /etc/kubernetes/pki
K8S_DIR: /etc/kubernetes
[root@ansible-server kubernetes-master]# mkdir templates/service
[root@ansible-server kubernetes-master]# vim templates/service/kube-apiserver.service.j2
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--v=2 \
--logtostderr=true \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--insecure-port=0 \
--advertise-address={{ ansible_default_ipv4.address }} \
--service-cluster-ip-range={{ SERVICE_SUBNET }} \
--service-node-port-range=30000-32767 \
--etcd-servers={% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %} \
--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--client-ca-file=/etc/kubernetes/pki/ca.pem \
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/pki/sa.pub \
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
--authorization-mode=Node,RBAC \
--enable-bootstrap-token-auth=true \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
--requestheader-allowed-names=aggregator \
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-master]# vim templates/service/kube-controller-manager.service.j2
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr={{ POD_SUBNET }} \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
--node-cidr-mask-size=24
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-master]# mkdir files/service/
[root@ansible-server kubernetes-master]# vim files/service/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--leader-elect=true \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-master]# mkdir files/yaml
[root@ansible-server kubernetes-master]# vim files/yaml/bootstrap.secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: bootstrap-token-c8ad9c
namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
description: "The default bootstrap token generated by 'kubelet '."
token-id: c8ad9c
token-secret: 2e4d610cf3e7426e
usage-bootstrap-authentication: "true"
usage-bootstrap-signing: "true"
auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kube-apiserver
[root@ansible-server kubernetes-master]# vim files/service/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-master]# mkdir templates/config
[root@ansible-server kubernetes-master]# vim templates/config/10-kubelet.conf.j2
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image={
{ HARBOR_DOMAIN }}/google_containers/pause:{
{ PAUSE_VERSION }}"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
[root@ansible-server kubernetes-master]# vim templates/config/kubelet-conf.yml.j2
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- {{ CLUSTERDNS }}
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
[root@ansible-server kubernetes-master]# vim templates/config/kube-proxy.conf.j2
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: {{ POD_SUBNET }}
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
[root@ansible-server kubernetes-master]# vim files/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.conf \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-master]# vim tasks/master_config.yml
- name: create kubernetes directory
file:
path: "{
{ item }}"
state: directory
loop:
"{
{ KUBE_DIRECTROY }}"
when:
- inventory_hostname in groups.master
- name: copy kube-apiserver.service
template:
src: service/kube-apiserver.service.j2
dest: /lib/systemd/system/kube-apiserver.service
when:
- inventory_hostname in groups.master
- name: start kube-apiserver
systemd:
name: kube-apiserver
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.master
- name: copy kube-controller-manager.service
template:
src: service/kube-controller-manager.service.j2
dest: /lib/systemd/system/kube-controller-manager.service
when:
- inventory_hostname in groups.master
- name: start kube-controller-manager
systemd:
name: kube-controller-manager
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.master
- name: copy kube-scheduler.service
copy:
src: service/kube-scheduler.service
dest: /lib/systemd/system/
when:
- inventory_hostname in groups.master
- name: start kube-scheduler
systemd:
name: kube-scheduler
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.master
- name: set-cluster bootstrap-kubelet.kubeconfig
shell:
cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-credentials bootstrap-kubelet.kubeconfig
shell:
cmd: kubectl config set-credentials tls-bootstrap-token-user --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-context bootstrap-kubelet.kubeconfig
shell:
cmd: kubectl config set-context tls-bootstrap-token-user@{{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: use-context bootstrap-kubelet.kubeconfig
shell:
cmd: kubectl config use-context tls-bootstrap-token-user@{{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: create user kube config directory
file:
path: /root/.kube
state: directory
when:
- ansible_hostname=="k8s-master01"
- name: copy kubeconfig to user directory
copy:
src: /etc/kubernetes/admin.kubeconfig
dest: /root/.kube/config
remote_src: yes
when:
- ansible_hostname=="k8s-master01"
- name: copy bootstrap.secret.yaml
copy:
src: yaml/bootstrap.secret.yaml
dest: /root
when:
- ansible_hostname=="k8s-master01"
- name: create pod by bootstrap.secret.yaml
shell:
chdir: /root
cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f bootstrap.secret.yaml"
when:
- ansible_hostname=="k8s-master01"
- name: transfer bootstrap-kubelet.kubeconfig file from master01 to master02 master03
synchronize:
src: /etc/kubernetes/bootstrap-kubelet.kubeconfig
dest: /etc/kubernetes/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ MASTER }}"
when:
- ansible_hostname=="k8s-master01"
- name: copy kubelet.service to master
copy:
src: service/kubelet.service
dest: /lib/systemd/system/
when:
- inventory_hostname in groups.master
- name: docker login
shell:
cmd: docker login -u {{ USERNAME }} -p {{ PASSWORD }} {{ HARBOR_DOMAIN }}
when:
- ansible_hostname=="k8s-master01"
- name: download pause image
shell: |
docker pull registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }}
docker tag registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }} {{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}
docker rmi registry.aliyuncs.com/google_containers/pause:{{ PAUSE_VERSION }}
docker push {{ HARBOR_DOMAIN }}/google_containers/pause:{{ PAUSE_VERSION }}
when:
- ansible_hostname=="k8s-master01"
- name: copy 10-kubelet.conf to master
template:
src: config/10-kubelet.conf.j2
dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
when:
- inventory_hostname in groups.master
- name: copy kubelet-conf.yml to master
template:
src: config/kubelet-conf.yml.j2
dest: /etc/kubernetes/kubelet-conf.yml
when:
- inventory_hostname in groups.master
- name: start kubelet for master
systemd:
name: kubelet
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.master
- name: create serviceaccount
shell:
cmd: kubectl -n kube-system create serviceaccount kube-proxy
ignore_errors: yes
when:
- ansible_hostname=="k8s-master01"
- name: create clusterrolebinding
shell:
cmd: kubectl create clusterrolebinding system:kube-proxy --clusterrole system:node-proxier --serviceaccount kube-system:kube-proxy
ignore_errors: yes
when:
- ansible_hostname=="k8s-master01"
- name: get SECRET var
shell:
cmd: kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}'
register: SECRET
when:
- ansible_hostname=="k8s-master01"
- name: get JWT_TOKEN var
shell:
cmd: kubectl -n kube-system get secret/{{ SECRET.stdout }} --output=jsonpath='{.data.token}' | base64 -d
register: JWT_TOKEN
when:
- ansible_hostname=="k8s-master01"
- name: set-cluster kube-proxy.kubeconfig
shell:
cmd: kubectl config set-cluster {{ K8S_CLUSTER }} --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://{{ VIP }}:6443 --kubeconfig={{ K8S_DIR }}/kube-proxy.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-credentials kube-proxy.kubeconfig
shell:
cmd: kubectl config set-credentials {{ K8S_CLUSTER }} --token={{ JWT_TOKEN.stdout }} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: set-context kube-proxy.kubeconfig
shell:
cmd: kubectl config set-context {{ K8S_CLUSTER }} --cluster={{ K8S_CLUSTER }} --user=kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: use-context kube-proxy.kubeconfig
shell:
cmd: kubectl config use-context {{ K8S_CLUSTER }} --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
when:
- ansible_hostname=="k8s-master01"
- name: transfer kube-proxy.kubeconfig files from master01 to master02 master03
synchronize:
src: /etc/kubernetes/kube-proxy.kubeconfig
dest: /etc/kubernetes/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ MASTER }}"
when:
- ansible_hostname=="k8s-master01"
- name: copy kube-proxy.conf to master
template:
src: config/kube-proxy.conf.j2
dest: /etc/kubernetes/kube-proxy.conf
when:
- inventory_hostname in groups.master
- name: copy kube-proxy.service to master
copy:
src: service/kube-proxy.service
dest: /lib/systemd/system/
when:
- inventory_hostname in groups.master
- name: start kube-proxy to master
systemd:
name: kube-proxy
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.master
[root@ansible-server kubernetes-master]# vim tasks/install_automatic_completion_tool.yml
- name: install CentOS or Rocky bash-completion tool
yum:
name: bash-completion
when:
- (ansible_distribution=="CentOS" or ansible_distribution=="Rocky")
- ansible_hostname=="k8s-master01"
- name: install Ubuntu bash-completion tool
apt:
name: bash-completion
force: yes
when:
- ansible_distribution=="Ubuntu"
- ansible_hostname=="k8s-master01"
- name: source completion bash
shell: |
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
args:
executable: /bin/bash
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server kubernetes-master]# vim tasks/main.yml
- include: copy_etcd_cert.yml
- include: copy_kubernetes_file.yml
- include: create_kubernetes_cert.yml
- include: master_config.yml
- include: install_automatic_completion_tool.yml
[root@ansible-server kubernetes-master]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-master/
roles/kubernetes-master/
├── files
│ ├── bin
│ │ ├── kube-apiserver
│ │ ├── kube-controller-manager
│ │ ├── kubectl
│ │ ├── kubelet
│ │ ├── kube-proxy
│ │ └── kube-scheduler
│ ├── cfssl
│ ├── cfssljson
│ ├── service
│ │ ├── kubelet.service
│ │ ├── kube-proxy.service
│ │ └── kube-scheduler.service
│ └── yaml
│ └── bootstrap.secret.yaml
├── tasks
│ ├── copy_etcd_cert.yml
│ ├── copy_kubernetes_file.yml
│ ├── create_kubernetes_cert.yml
│ ├── install_automatic_completion_tool.yml
│ ├── main.yml
│ └── master_config.yml
├── templates
│ ├── config
│ │ ├── 10-kubelet.conf.j2
│ │ ├── kubelet-conf.yml.j2
│ │ └── kube-proxy.conf.j2
│ ├── pki
│ │ ├── admin-csr.json.j2
│ │ ├── apiserver-csr.json.j2
│ │ ├── ca-config.json.j2
│ │ ├── ca-csr.json.j2
│ │ ├── front-proxy-ca-csr.json.j2
│ │ ├── front-proxy-client-csr.json.j2
│ │ ├── manager-csr.json.j2
│ │ └── scheduler-csr.json.j2
│ └── service
│ ├── kube-apiserver.service.j2
│ └── kube-controller-manager.service.j2
└── vars
└── main.yml
10 directories, 32 files
[root@ansible-server ansible]# vim kubernetes_master_role.yml
---
- hosts: master:etcd
roles:
- role: kubernetes-master
[root@ansible-server ansible]# ansible-playbook kubernetes_master_role.yml
12.2 Verify the master nodes
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady <none> 13s v1.22.8
k8s-master02.example.local NotReady <none> 13s v1.22.8
k8s-master03.example.local NotReady <none> 13s v1.22.8
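The nodes show NotReady because no CNI plugin has been installed yet; that is expected at this stage. Two further sanity checks worth running on master01: the bootstrap CSRs should have been auto-approved by the ClusterRoleBindings from bootstrap.secret.yaml, and cluster-info should point at the keepalived VIP.
[root@k8s-master01 ~]# kubectl get csr
[root@k8s-master01 ~]# kubectl cluster-info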
13. Deploy the worker nodes
13.1 Install the node components
[root@ansible-server ansible]# mkdir -p roles/kubernetes-node/{tasks,files,vars,templates}
[root@ansible-server ansible]# cd roles/kubernetes-node/
[root@ansible-server kubernetes-node]# ls
files tasks templates vars
[root@ansible-server kubernetes-node]# mkdir files/bin
[root@ansible-server kubernetes-node]# cp /data/ansible/roles/kubernetes-master/files/bin/{kubelet,kube-proxy} files/bin/
[root@ansible-server kubernetes-node]# ls files/bin/
kubelet kube-proxy
[root@ansible-server kubernetes-node]# vim tasks/copy_kubernetes_file.yaml
- name: copy kubernetes files to node
copy:
src: "bin/{
{ item }}"
dest: /usr/local/bin/
mode: 0755
loop:
- kubelet
- kube-proxy
when:
- inventory_hostname in groups.node
- name: create /opt/cni/bin directory
file:
path: /opt/cni/bin
state: directory
when:
- inventory_hostname in groups.node
#Change the NODE01, NODE02 and NODE03 IP addresses below to match your own hosts
[root@ansible-server kubernetes-node]# vim vars/main.yml
ETCD_CERT:
- etcd-ca-key.pem
- etcd-ca.pem
- etcd-key.pem
- etcd.pem
NODE01: 172.31.3.111
NODE02: 172.31.3.112
NODE03: 172.31.3.113
[root@ansible-server kubernetes-node]# vim tasks/copy_etcd_cert.yaml
- name: create /etc/etcd/ssl directory for node
file:
path: /etc/etcd/ssl
state: directory
when:
- inventory_hostname in groups.node
- name: transfer etcd-ca-key.pem file from etcd01 to node01
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ NODE01 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to node02
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ NODE02 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: transfer etcd-ca-key.pem file from etcd01 to node03
synchronize:
src: "/etc/etcd/ssl/{
{ item }}"
dest: /etc/etcd/ssl/
mode: pull
loop:
"{
{ ETCD_CERT }}"
delegate_to: "{
{ NODE03 }}"
when:
- ansible_hostname=="k8s-etcd01"
- name: create /etc/kubernetes/pki/etcd directory
file:
path: /etc/kubernetes/pki/etcd
state: directory
when:
- inventory_hostname in groups.node
- name: link etcd_ssl to kubernetes pki
file:
src: "/etc/etcd/ssl/{
{ item }}"
dest: "/etc/kubernetes/pki/etcd/{
{ item }}"
state: link
loop:
"{
{ ETCD_CERT }}"
when:
- inventory_hostname in groups.node
[root@ansible-server kubernetes-node]# vim vars/main.yml
...
NODE:
- 172.31.3.111
- 172.31.3.112
- 172.31.3.113
[root@ansible-server kubernetes-node]# vim tasks/copy_kubernetes_cert.yml
- name: create /etc/kubernetes/pki directory to node
file:
path: /etc/kubernetes/pki
state: directory
when:
- inventory_hostname in groups.node
- name: transfer ca.pem file from master01 to node
synchronize:
src: /etc/kubernetes/pki/ca.pem
dest: /etc/kubernetes/pki/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ NODE }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer ca-key.pem file from master01 to node
synchronize:
src: /etc/kubernetes/pki/ca-key.pem
dest: /etc/kubernetes/pki/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ NODE }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer front-proxy-ca.pem file from master01 to node
synchronize:
src: /etc/kubernetes/pki/front-proxy-ca.pem
dest: /etc/kubernetes/pki/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ NODE }}"
when:
- ansible_hostname=="k8s-master01"
- name: transfer bootstrap-kubelet.kubeconfig file from master01 to node
synchronize:
src: /etc/kubernetes/bootstrap-kubelet.kubeconfig
dest: /etc/kubernetes/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ NODE }}"
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server kubernetes-node]# vim vars/main.yml
...
KUBE_DIRECTROY:
- /etc/kubernetes/manifests/
- /etc/systemd/system/kubelet.service.d
- /var/lib/kubelet
- /var/log/kubernetes
HARBOR_DOMAIN: harbor.raymonds.cc
PAUSE_VERSION: 3.5
CLUSTERDNS: 10.96.0.10
POD_SUBNET: 192.168.0.0/12
[root@ansible-server kubernetes-node]# mkdir files/service
[root@ansible-server kubernetes-node]# vim files/service/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-node]# mkdir templates/config
[root@ansible-server kubernetes-node]# vim templates/config/10-kubelet.conf.j2
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image={
{ HARBOR_DOMAIN }}/google_containers/pause:{
{ PAUSE_VERSION }}"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
[root@ansible-server kubernetes-node]# vim templates/config/kubelet-conf.yml.j2
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- {{ CLUSTERDNS }}
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
[root@ansible-server kubernetes-node]# vim templates/config/kube-proxy.conf.j2
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
qps: 5
clusterCIDR: {{ POD_SUBNET }}
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
masqueradeAll: true
minSyncPeriod: 5s
scheduler: "rr"
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
[root@ansible-server kubernetes-node]# vim files/service/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.conf \
--v=2
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
[root@ansible-server kubernetes-node]# vim tasks/node_config.yml
- name: create kubernetes directory to node
file:
path: "{
{ item }}"
state: directory
loop:
"{
{ KUBE_DIRECTROY }}"
when:
- inventory_hostname in groups.node
- name: copy kubelet.service to node
copy:
src: service/kubelet.service
dest: /lib/systemd/system/
when:
- inventory_hostname in groups.node
- name: copy 10-kubelet.conf to node
template:
src: config/10-kubelet.conf.j2
dest: /etc/systemd/system/kubelet.service.d/10-kubelet.conf
when:
- inventory_hostname in groups.node
- name: copy kubelet-conf.yml to node
template:
src: config/kubelet-conf.yml.j2
dest: /etc/kubernetes/kubelet-conf.yml
when:
- inventory_hostname in groups.node
- name: start kubelet for node
systemd:
name: kubelet
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.node
- name: transfer kube-proxy.kubeconfig files from master01 to node
synchronize:
src: /etc/kubernetes/kube-proxy.kubeconfig
dest: /etc/kubernetes/
mode: pull
delegate_to: "{
{ item }}"
loop:
"{
{ NODE }}"
when:
- ansible_hostname=="k8s-master01"
- name: copy kube-proxy.conf to node
template:
src: config/kube-proxy.conf.j2
dest: /etc/kubernetes/kube-proxy.conf
when:
- inventory_hostname in groups.node
- name: copy kube-proxy.service to node
copy:
src: service/kube-proxy.service
dest: /lib/systemd/system/
when:
- inventory_hostname in groups.node
- name: start kube-proxy to node
systemd:
name: kube-proxy
state: started
enabled: yes
daemon_reload: yes
when:
- inventory_hostname in groups.node
[root@ansible-server kubernetes-node]# vim tasks/main.yml
- include: copy_kubernetes_file.yaml
- include: copy_etcd_cert.yaml
- include: copy_kubernetes_cert.yml
- include: node_config.yml
[root@ansible-server kubernetes-node]# cd ../../
[root@ansible-server ansible]# tree roles/kubernetes-node/
roles/kubernetes-node/
├── files
│ ├── bin
│ │ ├── kubelet
│ │ └── kube-proxy
│ └── service
│ ├── kubelet.service
│ └── kube-proxy.service
├── tasks
│ ├── copy_etcd_cert.yaml
│ ├── copy_kubernetes_cert.yml
│ ├── copy_kubernetes_file.yaml
│ ├── main.yml
│ └── node_config.yml
├── templates
│ └── config
│ ├── 10-kubelet.conf.j2
│ ├── kubelet-conf.yml.j2
│ └── kube-proxy.conf.j2
└── vars
└── main.yml
7 directories, 13 files
[root@ansible-server ansible]# vim kubernetes_node_role.yml
---
- hosts: master:node:etcd
roles:
- role: kubernetes-node
[root@ansible-server ansible]# ansible-playbook kubernetes_node_role.yml
13.2 Verify node
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local NotReady <none> 8m25s v1.22.8
k8s-master02.example.local NotReady <none> 8m25s v1.22.8
k8s-master03.example.local NotReady <none> 8m25s v1.22.8
k8s-node01.example.local NotReady <none> 11s v1.22.8
k8s-node02.example.local NotReady <none> 12s v1.22.8
k8s-node03.example.local NotReady <none> 11s v1.22.8
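#Optional spot checks on a worker (not part of the playbooks): confirm both services are active and that kube-proxy really runs in the ipvs mode configured in kube-proxy.conf.j2. The /proxyMode endpoint is served on the metrics address 127.0.0.1:10249; the ipvsadm output is only available if the ipvsadm package is installed on the node.
[root@k8s-node01 ~]# systemctl is-active kubelet kube-proxy
[root@k8s-node01 ~]# curl -s 127.0.0.1:10249/proxyMode
[root@k8s-node01 ~]# ipvsadm -Ln | head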
14. Install Calico
14.1 Install calico
[root@ansible-server ansible]# mkdir -p roles/calico/{tasks,vars,templates}
[root@ansible-server ansible]# cd roles/calico
[root@ansible-server calico]# ls
tasks templates vars
#Set HARBOR_DOMAIN below to your own harbor domain name, and change POD_SUBNET to your planned Pod network segment
[root@ansible-server calico]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
POD_SUBNET: 192.168.0.0/12
[root@ansible-server calico]# cat templates/calico-etcd.yaml.j2
---
# Source: calico/templates/calico-etcd-secrets.yaml
# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: calico-etcd-secrets
namespace: kube-system
data:
# Populate the following with etcd TLS configuration if desired, but leave blank if
# not using TLS for etcd.
# The keys below should be uncommented and the values populated with the base64
# encoded contents of each file that would be associated with the TLS data.
# Example command for encoding a file contents: cat <file> | base64 -w 0
# etcd-key: null
# etcd-cert: null
# etcd-ca: null
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
# Configure this with the location of your etcd cluster.
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
# If you're using TLS enabled etcd uncomment the following.
# You must also populate the Secret below with these files.
etcd_ca: "" # "/calico-secrets/etcd-ca"
etcd_cert: "" # "/calico-secrets/etcd-cert"
etcd_key: "" # "/calico-secrets/etcd-key"
# Typha is disabled.
typha_service_name: "none"
# Configure the backend to use.
calico_backend: "bird"
# Configure the MTU to use for workload interfaces and tunnels.
# By default, MTU is auto-detected, and explicitly setting this field should not be required.
# You can override auto-detection by providing a non-zero value.
veth_mtu: "0"
# The CNI network configuration to install on each node. The special
# values in this config will be automatically populated.
cni_network_config: |-
{
"name": "k8s-pod-network",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "calico",
"log_level": "info",
"log_file_path": "/var/log/calico/cni/cni.log",
"etcd_endpoints": "__ETCD_ENDPOINTS__",
"etcd_key_file": "__ETCD_KEY_FILE__",
"etcd_cert_file": "__ETCD_CERT_FILE__",
"etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
"mtu": __CNI_MTU__,
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "__KUBECONFIG_FILEPATH__"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {
"portMappings": true}
},
{
"type": "bandwidth",
"capabilities": {
"bandwidth": true}
}
]
}
---
# Source: calico/templates/calico-kube-controllers-rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
# Pods are monitored for changing labels.
# The node controller monitors Kubernetes nodes.
# Namespace and serviceaccount labels are used for policy.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
- serviceaccounts
verbs:
- watch
- list
- get
# Watch for changes to Kubernetes NetworkPolicies.
- apiGroups: ["networking.k8s.io"]
resources:
- networkpolicies
verbs:
- watch
- list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
---
# Source: calico/templates/calico-node-rbac.yaml
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
# The CNI plugin needs to get pods, nodes, and namespaces.
- apiGroups: [""]
resources:
- pods
- nodes
- namespaces
verbs:
- get
# EndpointSlices are used for Service-based network policy rule
# enforcement.
- apiGroups: ["discovery.k8s.io"]
resources:
- endpointslices
verbs:
- watch
- list
- apiGroups: [""]
resources:
- endpoints
- services
verbs:
# Used to discover service IPs for advertisement.
- watch
- list
# Pod CIDR auto-detection on kubeadm needs access to config maps.
- apiGroups: [""]
resources:
- configmaps
verbs:
- get
- apiGroups: [""]
resources:
- nodes/status
verbs:
# Needed for clearing NodeNetworkUnavailable flag.
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
spec:
nodeSelector:
kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: docker.io/calico/cni:v3.21.4
command: ["/opt/cni/bin/install"]
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
- mountPath: /calico-secrets
name: etcd-certs
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: docker.io/calico/pod2daemon-flexvol:v3.21.4
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: docker.io/calico/node:v3.21.4
envFrom:
- configMapRef:
# Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
name: kubernetes-services-endpoint
optional: true
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Set noderef for node controller.
- name: CALICO_K8S_NODE_REF
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Never"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the VXLAN tunnel device.
- name: FELIX_VXLANMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Set MTU for the Wireguard tunnel device.
- name: FELIX_WIREGUARDMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
lifecycle:
preStop:
exec:
command:
- /bin/calico-node
- -shutdown
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /bin/calico-node
- -felix-ready
- -bird-ready
periodSeconds: 10
timeoutSeconds: 10
volumeMounts:
# For maintaining CNI plugin API credentials.
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
readOnly: false
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- mountPath: /calico-secrets
name: etcd-certs
- name: policysync
mountPath: /var/run/nodeagent
# For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
# parent directory.
- name: sysfs
mountPath: /sys/fs/
# Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
# If the host is known to mount that filesystem already then Bidirectional can be omitted.
mountPropagation: Bidirectional
- name: cni-log-dir
mountPath: /var/log/calico/cni
readOnly: true
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
- name: sysfs
hostPath:
path: /sys/fs/
type: DirectoryOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Used to access CNI logs.
- name: cni-log-dir
hostPath:
path: /var/log/calico/cni
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0400
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
nodeSelector:
kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
# The controllers must run in the host network namespace so that
# it isn't governed by policy that would prevent it from working.
hostNetwork: true
containers:
- name: calico-kube-controllers
image: docker.io/calico/kube-controllers:v3.21.4
env:
# The location of the etcd cluster.
- name: ETCD_ENDPOINTS
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_endpoints
# Location of the CA certificate for etcd.
- name: ETCD_CA_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_ca
# Location of the client key for etcd.
- name: ETCD_KEY_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_key
# Location of the client certificate for etcd.
- name: ETCD_CERT_FILE
valueFrom:
configMapKeyRef:
name: calico-config
key: etcd_cert
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: policy,namespace,serviceaccount,workloadendpoint,node
volumeMounts:
# Mount in the etcd TLS secrets.
- mountPath: /calico-secrets
name: etcd-certs
livenessProbe:
exec:
command:
- /usr/bin/check-status
- -l
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
timeoutSeconds: 10
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
periodSeconds: 10
volumes:
# Mount in the etcd TLS secrets with mode 400.
# See https://kubernetes.io/docs/concepts/configuration/secret/
- name: etcd-certs
secret:
secretName: calico-etcd-secrets
defaultMode: 0440
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
---
# Source: calico/templates/kdd-crds.yaml
#Modify the following content
[root@ansible-server calico]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2
etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
[root@ansible-server calico]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "{% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %}"#g' templates/calico-etcd.yaml.j2
[root@ansible-server calico]# grep "etcd_endpoints:.*" templates/calico-etcd.yaml.j2
  etcd_endpoints: "{% for i in groups.etcd %}https://{{ hostvars[i].ansible_default_ipv4.address }}:2379{% if not loop.last %},{% endif %}{% endfor %}"
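#For reference, with three etcd hosts in the inventory the Jinja2 loop above renders into a single comma-separated endpoint list; the addresses below are placeholders for your own etcd IPs:
#etcd_endpoints: "https://<etcd01-ip>:2379,https://<etcd02-ip>:2379,https://<etcd03-ip>:2379"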
[root@ansible-server calico]# vim tasks/calico_file.yml
- name: copy calico-etcd.yaml file
template:
src: calico-etcd.yaml.j2
dest: /root/calico-etcd.yaml
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server calico]# vim tasks/config.yml
- name: get ETCD_KEY key
shell:
cmd: cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'
register: ETCD_KEY
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd-key:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '# (etcd-key:) null'
    replace: '\1 {{ ETCD_KEY.stdout }}'
when:
- ansible_hostname=="k8s-master01"
- name: get ETCD_CERT key
shell:
cmd: cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'
register: ETCD_CERT
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd-cert:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '# (etcd-cert:) null'
    replace: '\1 {{ ETCD_CERT.stdout }}'
when:
- ansible_hostname=="k8s-master01"
- name: get ETCD_CA key
shell:
cmd: cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'
when:
- ansible_hostname=="k8s-master01"
register: ETCD_CA
- name: Modify the ".*etcd-ca:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '# (etcd-ca:) null'
    replace: '\1 {{ ETCD_CA.stdout }}'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_ca:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '(etcd_ca:) ""'
replace: '\1 "/calico-secrets/etcd-ca"'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_cert:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '(etcd_cert:) ""'
replace: '\1 "/calico-secrets/etcd-cert"'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*etcd_key:.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '(etcd_key:) ""'
replace: '\1 "/calico-secrets/etcd-key"'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*CALICO_IPV4POOL_CIDR.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '# (- name: CALICO_IPV4POOL_CIDR)'
replace: '\1'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the ".*192.168.0.0.*" line
replace:
path: /root/calico-etcd.yaml
regexp: '# (value:) "192.168.0.0/16"'
replace: ' \1 "{
{ POD_SUBNET }}"'
when:
- ansible_hostname=="k8s-master01"
- name: Modify the "image:" line
replace:
path: /root/calico-etcd.yaml
regexp: '(.*image:) docker.io/calico(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server calico]# vim tasks/download_images.yml
- name: get calico version
shell:
chdir: /root
cmd: awk -F "/" '/image:/{print $NF}' calico-etcd.yaml
register: CALICO_VERSION
when:
- ansible_hostname=="k8s-master01"
- name: download calico image
shell: |
    {% for i in CALICO_VERSION.stdout_lines %}
    docker pull registry.cn-beijing.aliyuncs.com/raymond9/{{ i }}
    docker tag registry.cn-beijing.aliyuncs.com/raymond9/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.cn-beijing.aliyuncs.com/raymond9/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}
when:
- ansible_hostname=="k8s-master01"
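#For reference, for every image name found in calico-etcd.yaml (for example cni:v3.21.4) the Jinja2 loop above expands into one pull/tag/rmi/push sequence, roughly:
#docker pull registry.cn-beijing.aliyuncs.com/raymond9/cni:v3.21.4
#docker tag registry.cn-beijing.aliyuncs.com/raymond9/cni:v3.21.4 harbor.raymonds.cc/google_containers/cni:v3.21.4
#docker rmi registry.cn-beijing.aliyuncs.com/raymond9/cni:v3.21.4
#docker push harbor.raymonds.cc/google_containers/cni:v3.21.4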
[root@ansible-server calico]# vim tasks/install_calico.yml
- name: install calico
shell:
chdir: /root
cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f calico-etcd.yaml"
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server calico]# vim tasks/main.yml
- include: calico_file.yml
- include: config.yml
- include: download_images.yml
- include: install_calico.yml
[root@ansible-server calico]# cd ../../
[root@ansible-server ansible]# tree roles/calico
roles/calico
├── tasks
│ ├── calico_file.yml
│ ├── config.yml
│ ├── download_images.yml
│ ├── install_calico.yml
│ └── main.yml
├── templates
│ └── calico-etcd.yaml.j2
└── vars
└── main.yml
3 directories, 7 files
[root@ansible-server ansible]# vim calico_role.yml
---
- hosts: master:etcd
roles:
- role: calico
[root@ansible-server ansible]# ansible-playbook calico_role.yml
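#Optional: after the play finishes, spot-check on master01 that the placeholders in /root/calico-etcd.yaml were really replaced (a quick manual check, not part of the role):
[root@k8s-master01 ~]# grep -E "etcd-(ca|cert|key):" /root/calico-etcd.yaml | cut -c1-60
[root@k8s-master01 ~]# grep -A1 "CALICO_IPV4POOL_CIDR" /root/calico-etcd.yaml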
14.2 Verify calico
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep calico
calico-kube-controllers-7dd7f59c79-8wtbq 1/1 Running 0 41s
calico-node-2jmgv 1/1 Running 0 41s
calico-node-652nt 1/1 Running 0 41s
calico-node-btzxx 1/1 Running 0 41s
calico-node-scjpl 1/1 Running 0 41s
calico-node-tv28k 1/1 Running 0 41s
calico-node-v9m8g 1/1 Running 0 41s
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.example.local Ready <none> 26m v1.22.8
k8s-master02.example.local Ready <none> 26m v1.22.8
k8s-master03.example.local Ready <none> 26m v1.22.8
k8s-node01.example.local Ready <none> 18m v1.22.8
k8s-node02.example.local Ready <none> 18m v1.22.8
k8s-node03.example.local Ready <none> 18m v1.22.8
15. Install CoreDNS
15.1 Install CoreDNS
[root@ansible-server ansible]# mkdir -p roles/coredns/{tasks,templates,vars}
[root@ansible-server ansible]# cd roles/coredns/
[root@ansible-server coredns]# ls
tasks templates vars
#Change CLUSTERDNS below to the 10th IP address of your planned Service network segment, and set HARBOR_DOMAIN to your own harbor domain name
[root@ansible-server coredns]# vim vars/main.yml
CLUSTERDNS: 10.96.0.10
HARBOR_DOMAIN: harbor.raymonds.cc
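#CLUSTERDNS must match the clusterDNS value the kubelet was configured with (see kubelet-conf.yml.j2 in section 13); a quick consistency check on the ansible host (optional):
[root@ansible-server coredns]# grep CLUSTERDNS /data/ansible/roles/kubernetes-node/vars/main.yml /data/ansible/roles/coredns/vars/main.yml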
[root@ansible-server coredns]# cat templates/coredns.yaml.j2
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: registry.aliyuncs.com/google_containers/coredns:1.8.4
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 192.168.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
[root@ansible-server coredns]# vim templates/coredns.yaml.j2
...
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
        loop ##delete the loop plugin here to avoid forwarding loops inside the cluster
reload
loadbalance
}
...
spec:
selector:
k8s-app: kube-dns
  clusterIP: {{ CLUSTERDNS }} #modify this line
...
[root@ansible-server coredns]# vim tasks/coredns_file.yml
- name: copy coredns.yaml file
template:
src: coredns.yaml.j2
dest: /root/coredns.yaml
when:
- ansible_hostname=="k8s-master01"
[root@ansible-server coredns]# vim tasks/config.yml
- name: Modify the "image:" line
replace:
path: /root/coredns.yaml
regexp: '(.*image:) registry.aliyuncs.com(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}\2'
[root@ansible-server coredns]# vim tasks/download_images.yml
- name: get coredns version
shell:
chdir: /root
cmd: awk -F "/" '/image:/{print $NF}' coredns.yaml
register: COREDNS_VERSION
- name: download coredns image
shell: |
    {% for i in COREDNS_VERSION.stdout_lines %}
    docker pull registry.aliyuncs.com/google_containers/{{ i }}
    docker tag registry.aliyuncs.com/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.aliyuncs.com/google_containers/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}
[root@ansible-server coredns]# vim tasks/install_coredns.yml
- name: install coredns
shell:
chdir: /root
cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f coredns.yaml"
[root@ansible-server coredns]# vim tasks/main.yml
- include: coredns_file.yml
- include: config.yml
- include: download_images.yml
- include: install_coredns.yml
[root@ansible-server coredns]# cd ../../
[root@ansible-server ansible]# tree roles/coredns/
roles/coredns/
├── tasks
│ ├── config.yml
│ ├── coredns_file.yml
│ ├── download_images.yml
│ ├── install_coredns.yml
│ └── main.yml
├── templates
│ └── coredns.yaml.j2
└── vars
└── main.yml
3 directories, 7 files
[root@ansible-server ansible]# vim coredns_role.yml
---
- hosts: master01
roles:
- role: coredns
[root@ansible-server ansible]# ansible-playbook coredns_role.yml
15.2 Verify coredns
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep coredns
coredns-7b7758766c-m5s6g 1/1 Running 0 26s
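#Optional resolution test (assumes the node can pull a small busybox image; adjust the image reference if the nodes can only reach the local harbor):
[root@k8s-master01 ~]# kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local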
16. Install Metrics
16.1 Install metrics
[root@ansible-server ansible]# mkdir -p roles/metrics/{files,vars,tasks}
[root@ansible-server ansible]# cd roles/metrics/
[root@ansible-server metrics]# ls
files tasks vars
#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server metrics]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
[root@ansible-server metrics]# cat files/components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
- namespaces
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: metrics-server:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
k8s-app: metrics-server
name: system:metrics-server
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:metrics-server
subjects:
- kind: ServiceAccount
name: metrics-server
namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
ports:
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: metrics-server
name: metrics-server
namespace: kube-system
spec:
selector:
matchLabels:
k8s-app: metrics-server
strategy:
rollingUpdate:
maxUnavailable: 0
template:
metadata:
labels:
k8s-app: metrics-server
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
periodSeconds: 10
name: metrics-server
ports:
- containerPort: 4443
name: https
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 200Mi
securityContext:
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
volumeMounts:
- mountPath: /tmp
name: tmp-dir
nodeSelector:
kubernetes.io/os: linux
priorityClassName: system-cluster-critical
serviceAccountName: metrics-server
volumes:
      - emptyDir: {}
name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
k8s-app: metrics-server
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
version: v1beta1
versionPriority: 100
#Modify the following content:
[root@k8s-master01 ~]# vim components.yaml
...
spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
- --metric-resolution=15s
#Add the following content
- --kubelet-insecure-tls
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
- --requestheader-username-headers=X-Remote-User
- --requestheader-group-headers=X-Remote-Group
- --requestheader-extra-headers-prefix=X-Remote-Extra-
...
volumeMounts:
- mountPath: /tmp
name: tmp-dir
#Add the following content
- name: ca-ssl
mountPath: /etc/kubernetes/pki
...
volumes:
      - emptyDir: {}
name: tmp-dir
#Add the following content
- name: ca-ssl
hostPath:
path: /etc/kubernetes/pki
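#Because the Deployment now hostPath-mounts /etc/kubernetes/pki, front-proxy-ca.pem must exist under that path on whichever host metrics-server is scheduled to; it was distributed to the masters and nodes in earlier sections, which can be confirmed with an ad-hoc check run from /data/ansible (optional):
[root@ansible-server ansible]# ansible master:node -m shell -a 'ls /etc/kubernetes/pki/front-proxy-ca.pem'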
[root@ansible-server metrics]# vim tasks/metrics_file.yml
- name: copy components.yaml file
copy:
src: components.yaml
dest: /root/components.yaml
[root@ansible-server metrics]# vim tasks/config.yml
- name: Modify the "image:" line
replace:
path: /root/components.yaml
regexp: '(.*image:) k8s.gcr.io/metrics-server(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'
[root@ansible-server metrics]# vim tasks/download_images.yml
- name: get metrics version
shell:
chdir: /root
cmd: awk -F "/" '/image:/{print $NF}' components.yaml
register: METRICS_VERSION
- name: download metrics image
shell: |
    {% for i in METRICS_VERSION.stdout_lines %}
    docker pull registry.aliyuncs.com/google_containers/{{ i }}
    docker tag registry.aliyuncs.com/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.aliyuncs.com/google_containers/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}
[root@ansible-server metrics]# vim tasks/install_metrics.yml
- name: install metrics
shell:
chdir: /root
cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f components.yaml"
[root@ansible-server metrics]# vim tasks/main.yml
- include: metrics_file.yml
- include: config.yml
- include: download_images.yml
- include: install_metrics.yml
[root@ansible-server metrics]# cd ../../
[root@ansible-server ansible]# tree roles/metrics/
roles/metrics/
├── files
│ └── components.yaml
├── tasks
│ ├── config.yml
│ ├── download_images.yml
│ ├── install_metrics.yml
│ ├── main.yml
│ └── metrics_file.yml
└── vars
└── main.yml
3 directories, 7 files
[root@ansible-server ansible]# vim metrics_role.yml
---
- hosts: master01
roles:
- role: metrics
[root@ansible-server ansible]# ansible-playbook metrics_role.yml
16.2 Verify metrics
[root@k8s-master01 ~]# kubectl get pod -n kube-system |grep metrics
metrics-server-6bbd84d79c-mmqfl 1/1 Running 0 30s
[root@k8s-master01 ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master01.example.local 131m 6% 1643Mi 45%
k8s-master02.example.local 121m 6% 1575Mi 43%
k8s-master03.example.local 124m 6% 1491Mi 41%
k8s-node01.example.local 65m 3% 827Mi 22%
k8s-node02.example.local 69m 3% 886Mi 24%
k8s-node03.example.local 66m 3% 845Mi 23%
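#Pod-level metrics should also be available once the APIService is ready:
[root@k8s-master01 ~]# kubectl top pod -n kube-system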
17. Install dashboard
17.1 Install dashboard
[root@ansible-server ansible]# mkdir -p roles/dashboard/{tasks,vars,files,templates}
[root@ansible-server ansible]# cd roles/dashboard/
[root@ansible-server dashboard]# ls
files tasks templates vars
#Set HARBOR_DOMAIN below to your own harbor domain name
[root@ansible-server dashboard]# vim vars/main.yml
HARBOR_DOMAIN: harbor.raymonds.cc
NODEPORT: 30005
[root@ansible-server dashboard]# cat templates/recommended.yaml.j2
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.3.1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
          emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.6
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
          emptyDir: {}
[root@ansible-server dashboard]# vim templates/recommended.yaml.j2
...
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
  type: NodePort #add this line
ports:
- port: 443
targetPort: 8443
      nodePort: {{ NODEPORT }} #add this line
selector:
k8s-app: kubernetes-dashboard
...
[root@ansible-server dashboard]# vim files/admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
[root@ansible-server dashboard]# vim tasks/dashboard_file.yml
- name: copy recommended.yaml file
template:
src: recommended.yaml.j2
dest: /root/recommended.yaml
- name: copy admin.yaml file
copy:
src: admin.yaml
dest: /root/admin.yaml
[root@ansible-server dashboard]# vim tasks/config.yml
- name: Modify the "image:" line
replace:
path: /root/recommended.yaml
regexp: '(.*image:) kubernetesui(/.*)'
    replace: '\1 {{ HARBOR_DOMAIN }}/google_containers\2'
[root@ansible-server dashboard]# vim tasks/download_images.yml
- name: get dashboard version
shell:
chdir: /root
cmd: awk -F "/" '/image:/{print $NF}' recommended.yaml
register: DASHBOARD_VERSION
- name: download dashboard image
shell: |
    {% for i in DASHBOARD_VERSION.stdout_lines %}
    docker pull registry.aliyuncs.com/google_containers/{{ i }}
    docker tag registry.aliyuncs.com/google_containers/{{ i }} {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    docker rmi registry.aliyuncs.com/google_containers/{{ i }}
    docker push {{ HARBOR_DOMAIN }}/google_containers/{{ i }}
    {% endfor %}
[root@ansible-server dashboard]# vim tasks/install_dashboard.yml
- name: install dashboard
shell:
chdir: /root
cmd: "kubectl --kubeconfig=/etc/kubernetes/admin.kubeconfig apply -f recommended.yaml -f admin.yaml"
[root@ansible-server dashboard]# vim tasks/main.yml
- include: dashboard_file.yml
- include: config.yml
- include: download_images.yml
- include: install_dashboard.yml
[root@ansible-server dashboard]# cd ../../
[root@ansible-server ansible]# tree roles/dashboard/
roles/dashboard/
├── files
│ └── admin.yaml
├── tasks
│ ├── config.yml
│ ├── dashboard_file.yml
│ ├── download_images.yml
│ ├── install_dashboard.yml
│ └── main.yml
├── templates
│ └── recommended.yaml.j2
└── vars
└── main.yml
4 directories, 8 files
[root@ansible-server ansible]# vim dashboard_role.yml
---
- hosts: master01
roles:
- role: dashboard
[root@ansible-server ansible]# ansible-playbook dashboard_role.yml
17.2 Log in to dashboard
https://172.31.3.101:30005
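#Optional: confirm the Service is exposed on the expected NodePort before logging in:
[root@k8s-master01 ~]# kubectl get svc kubernetes-dashboard -n kubernetes-dashboard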
View the token value:
[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-xtrmb
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 179e165a-80ae-4db7-b3c3-cf1f5d5047b7
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1411 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImdFQlF3cXJIWk9vTVZXejM4LWMxT3RDUUVFdkswRWpuMFhBV1dxVWVrVW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXh0cm1iIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxNzllMTY1YS04MGFlLTRkYjctYjNjMy1jZjFmNWQ1MDQ3YjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.BM1Ecw1Mdv3CeYKrA_WwEBQIdFTZZ_2HhMIQL9DKy-cAm81FE17NQOKJNrNOC14bssTBHqqMWcQFgUbbzXj4nXJk5WjUV2oi-BQu0FQFPpd0qsvURqDMrS9hgY4bMYtR-MAEpI6-tdXq_OWYefFAurQrtgLrzsg1vnJEvhe1tJUW0Qc65ouyBP0795xVY8xfwlvJOWcTTy4F6sfYmK9NkyjbplaEoT3J1wTbU9be62GN03JxgyftOdChrXMJ-6JMFxps9lMyCQ-dBF2aviAGWzAWIbWiDZdpOU1v9B_fWAYRu081AYebnLJl9aYPLBCLFKWrowzKptIcMj2P6wadyA