Deploying a k8s Cluster with kubeadm

IP           Hostname    Role
10.10.10.21  k8s-master  master node
10.10.10.22  k8s-node1   worker node
10.10.10.23  k8s-node2   worker node

System Initialization

# Disable firewalld and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

# Disable swap (Kubernetes refuses to run with swap on by default; a kubelet flag can allow it)
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
grep -v swap /etc/fstab_bak > /etc/fstab
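
A quick check that swap is really off (swapon -s should print nothing, and free should report 0 swap):
swapon -s
free -m | grep -i swap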

Configure Time Synchronization

CentOS 7 enables the chrony service by default. Run chronyc sources; a line beginning with * means the host is already synchronized with an NTP server.

[root@k8s-master ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* 193.182.111.142 2 6 357 30 -142ms[ -218ms] +/- 151ms
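
If chrony is missing or stopped on a minimal install, it can be set up as follows (assuming the default pool servers in /etc/chrony.conf are reachable):
yum install chrony -y
systemctl enable chronyd && systemctl start chronyd
chronyc sources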

Basic Configuration

Edit the /etc/hosts file:
cat << EOF >> /etc/hosts
10.10.10.21 k8s-master k8s-master.k8s.com
10.10.10.22 k8s-node1 k8s-node1.k8s.com
10.10.10.23 k8s-node2 k8s-node2.k8s.com
EOF  

Set the hostnames
On the master node:
hostnamectl set-hostname k8s-master 

On node1:
hostnamectl set-hostname k8s-node1 

On node2:
hostnamectl set-hostname k8s-node2 

Adjust iptables-related kernel parameters

Some users on CentOS 7 have reported problems with traffic being routed incorrectly because iptables was bypassed. Create the file /etc/sysctl.d/k8s.conf and add the following:

cat << EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
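
To confirm the settings took effect:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward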

Load the IPVS kernel modules

IPVS has been merged into the mainline kernel, so the only prerequisite for enabling IPVS mode in kube-proxy is loading the following kernel modules.
Run the script below on every Kubernetes node:

cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules \
&& bash /etc/sysconfig/modules/ipvs.modules \
&& lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are reloaded automatically after a reboot.
Run lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules loaded correctly.
Every node also needs the ipset package installed, and the ipvsadm management tool is worth installing to make the IPVS proxy rules easy to inspect.

yum install ipset ipvsadm -y

Install Docker

# Install the prerequisite packages
yum install yum-utils device-mapper-persistent-data lvm2 -y
# Add the base yum repo and the Docker repository
yum-config-manager --add-repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker
yum install docker-ce-18.09.8 -y
# Create the daemon.json configuration file
# Note: the cgroup driver is set to systemd here, and since image pulls are slow from inside China, an Aliyun registry mirror is appended at the end.
mkdir /etc/docker
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
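
To confirm Docker came back up with the intended settings (the Cgroup Driver line should read systemd):
docker info | grep -i 'cgroup driver'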

Install kubeadm, kubelet, and kubectl

The official installation documentation: https://kubernetes.io/docs/setup/independent/install-kubeadm/
kubelet: the core component that runs on every node in the cluster, performing operations such as starting pods and containers.
kubeadm: the command-line tool that bootstraps the k8s cluster, used to initialize the Cluster.
kubectl: the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.
# Configure the kubernetes.repo source. The official repo is unreachable from inside China, so the Aliyun yum mirror is used here.

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, and kubectl on all nodes
yum install kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 --disableexcludes=kubernetes -y

# To list the available versions:
# yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Or install the latest version:
# yum install kubelet kubeadm kubectl -y

# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet

Deploy the Master Node

The full official documentation:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

Initialization options explained:
  --apiserver-advertise-address
    Specifies which of the master's interfaces to use for communication with the other cluster nodes. If the master has more than one interface it is best to specify one explicitly; otherwise kubeadm picks the interface with the default gateway.
  --pod-network-cidr
    Specifies the pod network range. Kubernetes supports several network solutions, and each has its own requirements for --pod-network-cidr; it is set to 10.244.0.0/16 here because the flannel network we will use requires exactly this CIDR.
  --image-repository
    The default Kubernetes registry is k8s.gcr.io, and gcr.io is unreachable from inside China. Adding the --image-repository option (default k8s.gcr.io) points it at the Aliyun mirror instead: registry.aliyuncs.com/google_containers.
  --kubernetes-version=v1.13.1
    Disables version detection. The default value, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version skips that network request.
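
Optionally, the required control-plane images can be pre-pulled before initialization (the init output below also points this out via 'kubeadm config images pull'):
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1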

Run the initialization on the master node.
Note that the --image-repository option is used here so that the images needed for initialization are pulled from the Aliyun mirror.
kubeadm init \
  --apiserver-advertise-address=10.10.10.21 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1 \
  --pod-network-cidr=10.244.0.0/16 

Kubernetes wants swap off mainly for performance reasons. If you would rather keep swap enabled, two changes are needed (see the sketch below):
  1. Edit /etc/sysconfig/kubelet
     and add KUBELET_EXTRA_ARGS="--fail-swap-on=false"
  2. Initialize with:
     kubeadm init plus the extra flag --ignore-preflight-errors=Swap
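
A minimal sketch of those two steps, assuming the stock /etc/sysconfig/kubelet shipped by the kubelet package:
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' >> /etc/sysconfig/kubelet
# then re-run the kubeadm init command above with --ignore-preflight-errors=Swap appended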

[root@k8s-master ~]# kubeadm init \
> --apiserver-advertise-address=10.10.10.21 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version v1.13.1 \
> --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.8. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [10.10.10.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [10.10.10.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.10.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.503345 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9orgzu.jxcbihx48hqphehe
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.10.10.21:6443 --token 9orgzu.jxcbihx48hqphehe --discovery-token-ca-cert-hash sha256:3b8d7ce366ba8b0a63d0a2f30e1f41588057217bf94e8263c133d797944a26a6

Configure kubectl

kubectl is the command-line tool for managing a Kubernetes cluster. After the master finishes initializing, some configuration is needed before kubectl can be used. Here it is configured directly for the root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
# Regular users can follow the hint printed at the end of kubeadm init
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Enable kubectl command auto-completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc
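
With the kubeconfig in place, a quick sanity check (the master will show NotReady until the network plugin is deployed below):
kubectl get nodes
kubectl get componentstatuses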

Join the Worker Nodes to the Cluster

A Kubernetes worker node is almost identical to the master: both run the kubelet component.
The only difference is that during kubeadm init, after kubelet starts, the master additionally runs the kube-apiserver, kube-scheduler, and kube-controller-manager system pods.
Run the following on k8s-node1 and k8s-node2 to register them with the cluster:

# Run the following command to join the node to the cluster
kubeadm join 10.10.10.21:6443 --token 9orgzu.jxcbihx48hqphehe --discovery-token-ca-cert-hash sha256:3b8d7ce366ba8b0a63d0a2f30e1f41588057217bf94e8263c133d797944a26a6

# If you did not record the join command printed by kubeadm init, regenerate it with:
kubeadm token create --print-join-command
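
If the CA certificate hash itself needs to be recomputed, the openssl pipeline from the upstream kubeadm documentation can be run on the master:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'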

Running kubeadm join on k8s-node1:
[root@k8s-node1 ~]# kubeadm join 10.10.10.21:6443 --token 9orgzu.jxcbihx48hqphehe --discovery-token-ca-cert-hash sha256:3b8d7ce366ba8b0a63d0a2f30e1f41588057217bf94e8263c133d797944a26a6
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.8. Latest validated version: 18.06
[discovery] Trying to connect to API Server "10.10.10.21:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.10.21:6443"
[discovery] Requesting info from "https://10.10.10.21:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.10.10.21:6443"
[discovery] Successfully established connection with API Server "10.10.10.21:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Run the same command on k8s-node2.
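
Back on the master, confirm both nodes registered (they stay NotReady until the network plugin is installed in the next step):
kubectl get nodes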

Deploy the Network Plugin

For the Kubernetes cluster to work, a pod network must be installed; without one, pods cannot communicate with each other.
Kubernetes supports several network solutions; here we use flannel.
Deploy flannel with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Now check the pod status again; all pods should be in the Running state.
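For example:
kubectl get pods --all-namespaces -o wide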

Test the Cluster Components

First verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:
Deploy an Nginx Deployment with 2 pod replicas.
Reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
kubectl create deployment nginx --image=nginx:alpine
kubectl scale deployment nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -l app=nginx -o wide

Verify that the Nginx pods are running correctly and have been assigned cluster IPs beginning with 10.244., then verify kube-proxy:
the service is exposed as a NodePort (one way of bringing external traffic into the cluster; see https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/),
so it should be reachable from outside the cluster at any NodeIP:Port. If it is, the cluster is working.
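
One concrete check, assuming 10.10.10.22 (a node IP from the table above) is reachable from your workstation; the NodePort is assigned dynamically, so it is looked up first:
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://10.10.10.22:${NODE_PORT}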


Ways to Bring External Traffic into the Cluster

NodePort
A NodePort service is the most basic way to let external traffic reach services inside the cluster. As the name suggests, it opens a specific port on every node (VM), and any traffic sent to that port is forwarded on to the service.
LoadBalancer
A LoadBalancer service is the standard way to expose a service on the public internet (in cloud-native environments). On GKE, for example, this spins up a network load balancer that gives you a single IP address through which all traffic is forwarded to your service.
Ingress
Unlike all of the examples above, Ingress is not actually a type of service. Instead it sits in front of multiple services and acts as a "smart router" or entry point into the cluster. You can do many different things with an Ingress, and there are many kinds of Ingress controllers available with different capabilities.

Enable IPVS mode for kube-proxy

# In the kube-system/kube-proxy ConfigMap, edit config.conf and set mode: "ipvs"
kubectl edit configmap kube-proxy -n kube-system
# Then restart the kube-proxy pods on every node
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
# Check the logs; seeing "Using ipvs Proxier" means it worked (provided the IPVS kernel modules are loaded)
kubectl logs kube-proxy-4t6m7 -n kube-system
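
Once IPVS mode is active, the ipvsadm tool installed earlier can list the virtual servers and their backends:
ipvsadm -Ln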


Reprinted from www.cnblogs.com/outsrkem/p/12636129.html