Binary installation of Kubernetes v1.12.7

Introduction

I. Virtual machine planning

1. Master cluster

IP               Hostname        Time sync role
192.168.1.164    k8s2-master1    server
192.168.1.140    k8s2-master2    client
192.168.1.122    k8s2-master3    client

2. Node cluster

IP               Hostname        Time sync role
192.168.1.146    k8s2-node1      client
192.168.1.231    k8s2-node2      client
192.168.1.221    k8s2-node3      client

3. etcd cluster

IP               Hostname        Time sync role
192.168.1.226    etcd1           client
192.168.1.228    etcd2           client
192.168.1.229    etcd3           client

II. Installation environment

OS version: CentOS 7.6 (minimal install)

Kernel version: 3.10.0-957.el7.x86_64

III. Pre-installation preparation

1. Time synchronization (server/client) on all nodes

Install the ntp package:

yum -y install ntp

On the server (one machine, the one the other nodes will sync from), install chrony:

yum -y install chrony

Edit the configuration file (on the server):

vi /etc/chrony.conf
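
The exact directives depend on your environment; a minimal server-side sketch (assuming the upstream NTP source 202.120.2.101 used in the cron entry below and the 192.168.1.0/24 cluster network) would be:

server 202.120.2.101 iburst

allow 192.168.1.0/24

local stratum 10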

Start chronyd and enable it at boot:

systemctl start chronyd

systemctl enable chronyd

Add a cron job (the server itself syncs against an upstream NTP source every five minutes):

*/5 * * * * /usr/sbin/ntpdate 202.120.2.101 >/dev/null 2>&1

On the clients (all other nodes), install chrony:

yum -y install chrony

Edit the configuration file:

vi /etc/chrony.conf
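
Again only a sketch, assuming the clients sync from k8s2-master1 (192.168.1.164): comment out the default server/pool lines and point at the internal server:

server 192.168.1.164 iburst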

Add a cron job so the clients sync every five minutes:

crontab -e
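
The original entry was shown in a screenshot; a plausible line, assuming the clients also sync from k8s2-master1, would be:

*/5 * * * * /usr/sbin/ntpdate 192.168.1.164 >/dev/null 2>&1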

Start the service and enable it at boot:

systemctl start chronyd

systemctl enable chronyd

2. Install docker-ce on all nodes

Install the prerequisite packages:

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the docker repository:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install docker-ce 18.06:

yum -y install docker-ce-18.06.3.ce-3.el7

Start docker and enable it at boot:

systemctl start docker

systemctl enable docker

3. Disable the firewall on all nodes (it can be re-enabled later with the required ports opened)

systemctl stop firewalld

systemctl disable firewalld

4. Disable swap on all nodes

swapoff -a          # temporary

vi /etc/fstab       # permanent: comment out the swap entry
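
The permanent change can also be made non-interactively; this sketch simply comments out any swap entry in /etc/fstab:

sed -ri 's/.*swap.*/#&/' /etc/fstab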

5. Disable SELinux on all nodes

setenforce 0                   # temporary

vi /etc/sysconfig/selinux      # permanent: set SELINUX=disabled
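
Equivalently, as a non-interactive sketch of the permanent change:

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux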

6. Map IPs to hostnames on all nodes

vi /etc/hosts
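
The entries follow from the planning tables above; append them on every node:

192.168.1.164 k8s2-master1
192.168.1.140 k8s2-master2
192.168.1.122 k8s2-master3
192.168.1.146 k8s2-node1
192.168.1.231 k8s2-node2
192.168.1.221 k8s2-node3
192.168.1.226 etcd1
192.168.1.228 etcd2
192.168.1.229 etcd3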

7. Set up passwordless SSH login (generate the key pair on k8s2-master1)

ssh-keygen -t rsa

ssh-copy-id etcd1

ssh-copy-id etcd2

ssh-copy-id etcd3

ssh-copy-id k8s2-master1

ssh-copy-id k8s2-master2

ssh-copy-id k8s2-master3

ssh-copy-id k8s2-node1

ssh-copy-id k8s2-node2

ssh-copy-id k8s2-node3

8. Load the required kernel modules on all nodes

modprobe br_netfilter

modprobe ip_vs

9. Set kernel parameters on all nodes

cat > /etc/sysctl.d/kubernetes.conf <<EOF

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

EOF

sysctl -p /etc/sysctl.d/kubernetes.conf    # apply immediately

 

Note:

Do not swap the order of steps 8 and 9: the net.bridge.bridge-nf-call-* parameters only exist after br_netfilter is loaded, so sysctl would otherwise fail to apply them (error shown in the original screenshot, which is not included here).

 

 

IV. Required certificates

Files required on each host (under /etc/kubernetes/cert):

Master nodes — 192.168.1.164 (k8s2-master1), 192.168.1.140 (k8s2-master2), 192.168.1.122 (k8s2-master3):

    ca-key.pem
    ca.pem
    ca-config.json (only needed on the host that generates the certificates)
    encryption-config.yaml
    flanneld-key.pem
    flanneld.pem
    kube-controller-manager-key.pem
    kube-controller-manager.kubeconfig
    kube-controller-manager.pem
    kubernetes-key.pem
    kubernetes.pem
    kube-scheduler.kubeconfig
    ~/.kube/config

Node hosts — 192.168.1.146 (k8s2-node1), 192.168.1.231 (k8s2-node2), 192.168.1.221 (k8s2-node3):

    ca-key.pem
    ca.pem
    flanneld-key.pem
    flanneld.pem
    kubelet-bootstrap.kubeconfig
    kubelet-client-2018-12-20-20-10-59.pem
    kubelet-client-current.pem
    kubelet.config.json
    kubelet.crt
    kubelet.key
    kubelet.kubeconfig
    kube-proxy.config.yaml
    kube-proxy.kubeconfig

etcd hosts — 192.168.1.226 (etcd1), 192.168.1.228 (etcd2), 192.168.1.229 (etcd3):

    ca-key.pem
    ca.pem
    etcd-key.pem
    etcd.pem
 

Setting up the etcd cluster with TLS certificates

I. Create the CA certificate and key

The kubernetes components use TLS certificates to encrypt their communication. This document uses the CloudFlare PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key files. The CA is a self-signed certificate used to sign all other TLS certificates created later.

Perform these steps on k8s2-master1.

The certificates only need to be created once; when adding a new node to the cluster later, simply copy the certificates under /etc/kubernetes/cert to the new node.

1. Install CFSSL

curl -o /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

curl -o /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl*

2. Create the CA configuration file

   mkdir -p /etc/kubernetes/cert && cd /etc/kubernetes/cert

cat > ca-config.json <<EOF

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "kubernetes": {

        "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ],

        "expiry": "87600h"

      }

    }

  }

}

EOF

  • ca-config.json: multiple profiles can be defined, each with its own expiry time, usage scenario, and other parameters; a specific profile is selected later when signing certificates;
  • signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
  • server auth: the client can use this CA to verify certificates presented by the server;
  • client auth: the server can use this CA to verify certificates presented by the client.

3. Create the CA certificate signing request

cat > ca-csr.json <<EOF

{

  "CN": "kubernetes",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "4Paradigm"

    }

  ]

}

EOF

  • CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use this field to verify whether a website is legitimate;
  • O (Organization): kube-apiserver extracts this field from the certificate as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization.

Generate the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Generated files: ca.csr, ca.pem, ca-key.pem (four files in the directory counting the ca-csr.json created above).

4. Create the etcd certificate signing request file

cat > etcd-csr.json <<EOF

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

    "192.168.1.226",

    "192.168.1.228",

    "192.168.1.229"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "4Paradigm"

    }

  ]

}

EOF

  • The hosts field lists the etcd node IPs or domain names authorized to use this certificate; the IPs of all three etcd cluster members are listed here.

Generate the etcd certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

    -ca-key=/etc/kubernetes/cert/ca-key.pem \

    -config=/etc/kubernetes/cert/ca-config.json \

    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Generated files: etcd.csr, etcd.pem, etcd-key.pem.

5. Distribute the *.pem certificates to /etc/kubernetes/cert on the three etcd nodes (create the directory on each etcd node first if it does not exist):

scp *.pem etcd1:/etc/kubernetes/cert

scp *.pem etcd2:/etc/kubernetes/cert

scp *.pem etcd3:/etc/kubernetes/cert

etcd uses the certificates as follows:

  • etcd uses ca.pem, etcd-key.pem, and etcd.pem.

II. Deploy the etcd cluster

etcd is installed on all three nodes; run the steps below on each of the three nodes.

1. Download the etcd package

 

yum -y install wget

wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz


tar zxf etcd-v3.3.10-linux-amd64.tar.gz


cp  etcd-v3.3.10-linux-amd64/etcd* /usr/local/bin

 

2. Create the working directory

 

mkdir -p /var/lib/etcd

 

3. Create the systemd unit file (set --name and the single-node URL options to the name and IP of the etcd host you are on; --initial-cluster lists all etcd cluster members)

 

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

 

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

ExecStart=/usr/local/bin/etcd \

  --name etcd1 \

  --cert-file=/etc/kubernetes/cert/etcd.pem \

  --key-file=/etc/kubernetes/cert/etcd-key.pem \

  --peer-cert-file=/etc/kubernetes/cert/etcd.pem \

  --peer-key-file=/etc/kubernetes/cert/etcd-key.pem \

  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \

  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \

  --initial-advertise-peer-urls https://192.168.1.226:2380 \

  --listen-peer-urls https://192.168.1.226:2380 \

  --listen-client-urls https://192.168.1.226:2379,http://127.0.0.1:2379 \

  --advertise-client-urls https://192.168.1.226:2379 \

  --initial-cluster-token etcd-cluster-0 \

  --initial-cluster etcd1=https://192.168.1.226:2380,etcd2=https://192.168.1.228:2380,etcd3=https://192.168.1.229:2380 \

  --initial-cluster-state new \

  --data-dir=/var/lib/etcd

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

EOF

 

  • To secure communication, specify etcd's own certificate and key (cert-file, key-file), the certificate, key, and CA used for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify clients (trusted-ca-file);
  • The hosts field of the etcd-csr.json used to create etcd.pem must contain the IPs of all etcd nodes, otherwise certificate validation fails;
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

4. Start etcd and enable it at boot

systemctl daemon-reload

systemctl enable etcd

systemctl start etcd

 

Status of etcd after startup on etcd1, etcd2, and etcd3 (the original screenshots are not included).

 

Note:

All three nodes must be started at roughly the same time (start one immediately after another), otherwise startup fails while waiting for the other members. If the first start fails, clear the data before retrying: everything under the working directory /var/lib/etcd must be removed, on all three nodes.

 

 

5. Verify the etcd cluster health and check the leader; run on any one etcd node:

 

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem cluster-health

 

 

 

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem member list

 

Setting up the Flannel network

I. Generate the Flannel TLS certificates

 

Flannel is installed on all cluster nodes. The operations below are performed on k8s2-master1; repeat the installation steps on the other nodes (the certificates only need to be generated once).

For clarity, create a flannel directory under /etc/kubernetes/cert and work inside it:

mkdir -p /etc/kubernetes/cert/flannel && cd /etc/kubernetes/cert/flannel

1. Create the certificate signing request

cat > flanneld-csr.json <<EOF

{

  "CN": "flanneld",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "4Paradigm"

    }

  ]

}

EOF

 

  • This certificate is only used by flanneld as a client certificate, so the hosts field is empty.

 

File created above: flanneld-csr.json.

 

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

Generated files: flanneld.csr, flanneld.pem, flanneld-key.pem.

2. Distribute the certificates to /etc/kubernetes/cert/ on all cluster nodes

cp flanneld*.pem /etc/kubernetes/cert

scp flanneld*.pem k8s2-master2:/etc/kubernetes/cert/

scp flanneld*.pem k8s2-master3:/etc/kubernetes/cert/

scp flanneld*.pem k8s2-node1:/etc/kubernetes/cert/

scp flanneld*.pem k8s2-node2:/etc/kubernetes/cert/

scp flanneld*.pem k8s2-node3:/etc/kubernetes/cert/

scp flanneld*.pem etcd1:/etc/kubernetes/cert

scp flanneld*.pem etcd2:/etc/kubernetes/cert

scp flanneld*.pem etcd3:/etc/kubernetes/cert

 

II. Deploy Flannel

 

1. Download and install Flannel

 

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz

cp {flanneld,mk-docker-opts.sh} /usr/local/bin

 

2. Write the network configuration into etcd

 

Run the following two commands once on any one etcd node; they create the flannel network range from which docker subnets are allocated.

 

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem mkdir /kubernetes/network

 

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem mk /kubernetes/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

 

3. Create the systemd unit file (-etcd-endpoints lists the IPs of all etcd cluster nodes)

 

cat > /usr/lib/systemd/system/flanneld.service << EOF

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service

 

[Service]

Type=notify

ExecStart=/usr/local/bin/flanneld \

  -etcd-cafile=/etc/kubernetes/cert/ca.pem \

  -etcd-certfile=/etc/kubernetes/cert/flanneld.pem \

  -etcd-keyfile=/etc/kubernetes/cert/flanneld-key.pem \

  -etcd-endpoints=https://192.168.1.226:2379,https://192.168.1.228:2379,https://192.168.1.229:2379 \

  -etcd-prefix=/kubernetes/network

ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure

 

[Install]

WantedBy=multi-user.target

RequiredBy=docker.service

EOF

 

  • The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; docker later uses the values in that file to configure the docker0 bridge.

 

  • flanneld uses the interface of the system default route to communicate with other nodes; on machines with multiple interfaces (e.g. internal and public networks), the communication interface can be specified with the -iface=enpxx option.

4. Start flanneld and enable it at boot

systemctl daemon-reload

systemctl enable flanneld

systemctl start flanneld

 

Example of a healthy flanneld startup (screenshot omitted):

 

systemctl status -l flanneld

 

 

5. View the subnet information assigned by flannel

cat /run/flannel/docker

cat /run/flannel/subnet.env

 

 

  • /run/flannel/docker holds the subnet options flannel hands to docker; /run/flannel/subnet.env contains the overall flannel network as well as the subnet assigned to this node.

 

6. Check that the flannel network is up
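
The original screenshot is not available; one assumed check is to look at the flannel.1 interface that the vxlan backend creates — its address should fall inside the 172.30.0.0/16 range written to etcd above:

ip addr show flannel.1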

III. Configure docker to use the flannel network

1. Configure docker to use the flannel network (perform on all docker nodes)

vi /usr/lib/systemd/system/docker.service
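
A minimal sketch of the change, assuming the stock docker-ce unit file: in the [Service] section, load the options file written by mk-docker-opts.sh (see the flanneld unit above) and append $DOCKER_NETWORK_OPTIONS to the existing ExecStart line, for example:

EnvironmentFile=/run/flannel/docker

ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS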

2. Restart docker to apply the configuration

systemctl daemon-reload

systemctl restart docker

 

3. Check that the docker network took effect

Start a container:

docker run -itd centos

Check whether its IP address falls inside the flannel-assigned subnet (replace 1c1 with your container's ID prefix):

 

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 1c1

 

 

 

4. View the subnets registered by all cluster hosts

 

etcdctl --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/kubernetes/cert/etcd.pem --key-file=/etc/kubernetes/cert/etcd-key.pem ls /kubernetes/network/subnets

 

 

 

Configuring the Kubernetes master cluster

The kubernetes master nodes run the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

I. Deploy the kubectl command-line tool

  • kubectl is the command-line management tool for the kubernetes cluster. By default it reads the kube-apiserver address, certificates, user name, and so on from ~/.kube/config, so the kubeconfig generated at the end must be named config.

  • ~/.kube/config only needs to be created once; it can then be copied to the other masters.

1. Download kubectl

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz

tar -xzvf kubernetes-server-linux-amd64.tar.gz

 

cd kubernetes/server/bin/

 

cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin

2. Create the admin certificate signing request

To keep the generated files separate, create a master directory under /etc/kubernetes/cert and work inside it:

mkdir -p /etc/kubernetes/cert/master && cd /etc/kubernetes/cert/master

 

cat > admin-csr.json <<EOF

{

  "CN": "admin",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "system:masters",

      "OU": "4Paradigm"

    }

  ]

}

EOF

 

File created above: admin-csr.json.

 

 

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes admin-csr.json | cfssljson -bare admin

 

Generated files: admin.csr, admin.pem, admin-key.pem (copy admin*.pem to /etc/kubernetes/cert/, since the set-credentials command below references that path).

 

 

 

3. Create the ~/.kube/config file

 

Create ~/.kube/ on all three master nodes:

 

mkdir ~/.kube/

 

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/cert/ca.pem \

  --embed-certs=true \

  --server=https://192.168.1.164:6443 \

  --kubeconfig=kubectl.kubeconfig

 

File generated: kubectl.kubeconfig.

 

 

# Set the client authentication parameters

 

kubectl config set-credentials admin \

  --client-certificate=/etc/kubernetes/cert/admin.pem \

  --client-key=/etc/kubernetes/cert/admin-key.pem \

  --embed-certs=true \

  --kubeconfig=kubectl.kubeconfig

 

 

# Set the context parameters

 

kubectl config set-context kubernetes \

  --cluster=kubernetes \

  --user=admin \

  --kubeconfig=kubectl.kubeconfig

# Set the default context

 

kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

 

4. Distribute the ~/.kube/config file

 

cp kubectl.kubeconfig ~/.kube/config

scp kubectl.kubeconfig k8s2-master2:~/.kube/config

scp kubectl.kubeconfig k8s2-master3:~/.kube/config

 

After distribution, check on each master that the file under ~/.kube/ is named config; rename kubectl.kubeconfig to config if needed.

 

II. Deploy kube-apiserver

To keep the generated files separate, create an api-server directory:

mkdir -p /etc/kubernetes/cert/api-server

cd /etc/kubernetes/cert/api-server

1. Create the kube-apiserver certificate signing request (the IP entries in hosts must match your own master IPs):

cat > kubernetes-csr.json <<EOF

{

  "CN": "kubernetes",

  "hosts": [

    "127.0.0.1",

    "192.168.1.164",

    "192.168.1.140",

    "192.168.1.122",

    "kubernetes",

    "kubernetes.default",

    "kubernetes.default.svc",

    "kubernetes.default.svc.cluster",

    "kubernetes.default.svc.cluster.local"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "4Paradigm"

    }

  ]

}

EOF

 

File created above: kubernetes-csr.json.

 

 

 

Generate the certificate and private key:

 

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

 

Generated files: kubernetes.csr, kubernetes.pem, kubernetes-key.pem.

 

 

2. Copy the generated certificate and private key files to each master node:

 

cp kubernetes*.pem /etc/kubernetes/cert

scp kubernetes*.pem k8s2-master2:/etc/kubernetes/cert/

scp kubernetes*.pem k8s2-master3:/etc/kubernetes/cert/

 

3. Create the encryption configuration file

 

cat > encryption-config.yaml <<EOF

kind: EncryptionConfig

apiVersion: v1

resources:

  - resources:

      - secrets

    providers:

      - aescbc:

          keys:

            - name: key1

              secret: $(head -c 32 /dev/urandom | base64)

      - identity: {}

EOF

 

4. Distribute the encryption configuration file to the master nodes

 

cp encryption-config.yaml /etc/kubernetes/cert/

scp encryption-config.yaml k8s2-master2:/etc/kubernetes/cert/

scp encryption-config.yaml k8s2-master3:/etc/kubernetes/cert/

 

5. Create the kube-apiserver systemd unit file (--advertise-address and --bind-address are the IP of the master you are on; --etcd-servers lists the IPs of the etcd nodes)

 

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

 

[Service]

ExecStart=/usr/local/bin/kube-apiserver \

  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \

  --anonymous-auth=false \

  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \

  --advertise-address=192.168.1.164 \

  --bind-address=192.168.1.164 \

  --insecure-port=0 \

  --authorization-mode=Node,RBAC \

  --runtime-config=api/all \

  --enable-bootstrap-token-auth \

  --service-cluster-ip-range=192.168.1.0/24 \

  --service-node-port-range=30000-32700 \

  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \

  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \

  --client-ca-file=/etc/kubernetes/cert/ca.pem \

  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \

  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \

  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \

  --etcd-cafile=/etc/kubernetes/cert/ca.pem \

  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \

  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \

  --etcd-servers=https://192.168.1.226:2379,https://192.168.1.228:2379,https://192.168.1.229:2379 \

  --enable-swagger-ui=true \

  --allow-privileged=true \

  --apiserver-count=3 \

  --audit-log-maxage=30 \

  --audit-log-maxbackup=3 \

  --audit-log-maxsize=100 \

  --audit-log-path=/var/log/kube-apiserver-audit.log \

  --event-ttl=1h \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/var/log/kubernetes \

  --v=2

Restart=on-failure

RestartSec=5

Type=notify

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

EOF

 

  • --experimental-encryption-provider-config: enables the encryption-at-rest feature;
  • --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
  • --enable-admission-plugins: enables admission plugins such as ServiceAccount and NodeRestriction;
  • --service-account-key-file: public key used to verify ServiceAccount tokens; kube-controller-manager's --service-account-private-key-file specifies the matching private key, and the two must be used as a pair;
  • --tls-*-file: the certificate, private key, and CA files used by the apiserver; --client-ca-file is used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and so on);
  • --kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of the corresponding certificate (the kubernetes*.pem certificate above uses the user kubernetes), otherwise calls to the kubelet API return an unauthorized error;
  • --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
  • --insecure-port=0: disables the insecure port (8080);
  • --service-cluster-ip-range: the Service Cluster IP range;
  • --service-node-port-range: the NodePort port range;
  • --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
  • --enable-bootstrap-token-auth: enables kubelet bootstrap token authentication;
  • --apiserver-count=3: the number of kube-apiserver instances running in the cluster.

6. Distribute the kube-apiserver.service file to the other masters

scp /usr/lib/systemd/system/kube-apiserver.service k8s2-master2:/etc/systemd/system/kube-apiserver.service

 

scp /usr/lib/systemd/system/kube-apiserver.service k8s2-master3:/etc/systemd/system/kube-apiserver.service

 

7. Create the log directory

 

mkdir -p /var/log/kubernetes

 

8. Start kube-apiserver and enable it at boot

 

systemctl daemon-reload

systemctl start kube-apiserver

systemctl enable kube-apiserver

 

Healthy startup state of kube-apiserver:
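
The original screenshot is not available; checking with systemctl, as done for the other components in this document, would look like:

systemctl status -l kube-apiserver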

 

 

 

9. Check kube-apiserver and the cluster status

 

ss -tpln | grep kube-apiserver

 

 

 

kubectl cluster-info

 

 

 

10. Grant the kubernetes certificate permission to access the kubelet API

 

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

 

 

 

III. Deploy kube-controller-manager

 

This component runs on all 3 master nodes; after startup a leader is elected by competition and the other instances block. When the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

 

To secure communication, this document first generates an x509 certificate and key. kube-controller-manager uses the certificate in two cases:

 

when communicating with kube-apiserver on the secure port;

when serving prometheus-format metrics on its secure (https) port.

1. Create the kube-controller-manager certificate signing request:

 

To keep the generated certificate and key files separate, create a kube-controller-manager directory under /etc/kubernetes/cert:

mkdir -p /etc/kubernetes/cert/kube-controller-manager

cd /etc/kubernetes/cert/kube-controller-manager

cat > kube-controller-manager-csr.json << EOF

{

    "CN": "system:kube-controller-manager",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "hosts": [

      "127.0.0.1",

      "192.168.1.164",

      "192.168.1.140",

      "192.168.1.122"

    ],

    "names": [

      {

        "C": "CN",

        "ST": "BeiJing",

        "L": "BeiJing",

        "O": "system:kube-controller-manager",

        "OU": "4Paradigm"

      }

    ]

}

EOF

 

File created above: kube-controller-manager-csr.json.

 

 

 

Generate the certificate and private key:

 

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

 

Generated files: kube-controller-manager.csr, kube-controller-manager.pem, kube-controller-manager-key.pem.

 

 

 

2. Distribute the generated certificate and private key to all master nodes

 

cp kube-controller-manager*.pem /etc/kubernetes/cert/

scp kube-controller-manager*.pem k8s2-master2:/etc/kubernetes/cert/

scp kube-controller-manager*.pem k8s2-master3:/etc/kubernetes/cert/

 

3. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/cert/ca.pem \

  --embed-certs=true \

  --server=https://192.168.1.164:6443 \

  --kubeconfig=kube-controller-manager.kubeconfig

 

File generated: kube-controller-manager.kubeconfig.

 

 

 

kubectl config set-credentials system:kube-controller-manager \

  --client-certificate=kube-controller-manager.pem \

  --client-key=kube-controller-manager-key.pem \

  --embed-certs=true \

  --kubeconfig=kube-controller-manager.kubeconfig

 

 

 

kubectl config set-context system:kube-controller-manager \

  --cluster=kubernetes \

  --user=system:kube-controller-manager \

  --kubeconfig=kube-controller-manager.kubeconfig

 

 

 

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

 

 

 

Distribute kube-controller-manager.kubeconfig to all master nodes:

 

cp kube-controller-manager.kubeconfig /etc/kubernetes/cert/

scp kube-controller-manager.kubeconfig k8s2-master2:/etc/kubernetes/cert/

scp kube-controller-manager.kubeconfig k8s2-master3:/etc/kubernetes/cert/

 

4. Create and distribute the kube-controller-manager systemd unit file

 

cat > /usr/lib/systemd/system/kube-controller-manager.service  << EOF

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

 

[Service]

ExecStart=/usr/local/bin/kube-controller-manager \

  --address=127.0.0.1 \

  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \

  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \

  --service-cluster-ip-range=192.168.1.0/24 \

  --cluster-name=kubernetes \

  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \

  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \

  --experimental-cluster-signing-duration=8760h \

  --root-ca-file=/etc/kubernetes/cert/ca.pem \

  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \

  --leader-elect=true \

  --feature-gates=RotateKubeletServerCertificate=true \

  --controllers=*,bootstrapsigner,tokencleaner \

  --horizontal-pod-autoscaler-use-rest-clients=true \

  --horizontal-pod-autoscaler-sync-period=10s \

  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \

  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \

  --use-service-account-credentials=true \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/var/log/kubernetes \

  --v=2

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

EOF

 

  • --address: listen address, set to 127.0.0.1;
  • --kubeconfig: path to the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --cluster-signing-*-file: used to sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: validity period of the TLS Bootstrap certificates;
  • --root-ca-file: CA certificate placed into each container's ServiceAccount, used to verify kube-apiserver's certificate;
  • --service-account-private-key-file: private key used to sign ServiceAccount tokens; must be the pair of the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range, which must match the same parameter on kube-apiserver;
  • --leader-elect=true: enables leader election; the elected leader does the work while the other instances block;
  • --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
  • --controllers=*,bootstrapsigner,tokencleaner: list of controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true

Distribute the kube-controller-manager systemd unit file:

scp /usr/lib/systemd/system/kube-controller-manager.service k8s2-master2:/usr/lib/systemd/system/kube-controller-manager.service

 

scp /usr/lib/systemd/system/kube-controller-manager.service k8s2-master3:/usr/lib/systemd/system/kube-controller-manager.service

 

5. Start the kube-controller-manager service

 

systemctl daemon-reload

systemctl start kube-controller-manager

systemctl enable kube-controller-manager

 

Healthy startup state of kube-controller-manager:

 

 

 

6. Check the kube-controller-manager service
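
The original screenshot is not available; assumed checks, following the pattern used for the other components (service status plus listening port):

systemctl status -l kube-controller-manager

ss -tpln | grep kube-controll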

 

 

 

7. Check which node is the current kube-controller-manager leader

 

kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml

 

 

 

IV. Deploy kube-scheduler

 

This component runs on all 3 master nodes; after startup a leader is elected by competition and the other instances block. When the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

 

To secure communication, this document first generates an x509 certificate and key. kube-scheduler uses the certificate in two cases:

 

when communicating with kube-apiserver on the secure port;

when serving prometheus-format metrics (on port 10251).

1. Create the kube-scheduler certificate signing request

To keep the generated certificate and key files separate, create a kube-scheduler directory under /etc/kubernetes/cert:

mkdir -p /etc/kubernetes/cert/kube-scheduler

cd /etc/kubernetes/cert/kube-scheduler

cat > kube-scheduler-csr.json << EOF

{

    "CN": "system:kube-scheduler",

    "hosts": [

      "127.0.0.1",

      "192.168.1.164",

      "192.168.1.140",

      "192.168.1.122"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

      {

        "C": "CN",

        "ST": "BeiJing",

        "L": "BeiJing",

        "O": "system:kube-scheduler",

        "OU": "4Paradigm"

      }

    ]

}

EOF

 

File created above: kube-scheduler-csr.json.

 

 

 

Generate the certificate and private key:

 

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

 

 

Generated files: kube-scheduler.csr, kube-scheduler.pem, kube-scheduler-key.pem.

 

 

 

2. Create and distribute the kube-scheduler.kubeconfig file

 

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/cert/ca.pem \

  --embed-certs=true \

  --server=https://192.168.1.164:6443 \

  --kubeconfig=kube-scheduler.kubeconfig

 

File generated: kube-scheduler.kubeconfig.

 

 

 

kubectl config set-credentials system:kube-scheduler \

  --client-certificate=kube-scheduler.pem \

  --client-key=kube-scheduler-key.pem \

  --embed-certs=true \

  --kubeconfig=kube-scheduler.kubeconfig

 

 

 

kubectl config set-context system:kube-scheduler \

  --cluster=kubernetes \

  --user=system:kube-scheduler \

  --kubeconfig=kube-scheduler.kubeconfig

 

 

 

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

 

 

 

Distribute the kubeconfig to all master nodes:

 

cp kube-scheduler.kubeconfig /etc/kubernetes/cert/

scp kube-scheduler.kubeconfig k8s2-master2:/etc/kubernetes/cert/

scp kube-scheduler.kubeconfig k8s2-master3:/etc/kubernetes/cert/

 

3. Create and distribute the kube-scheduler systemd unit file

 

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

 

[Service]

ExecStart=/usr/local/bin/kube-scheduler \

  --address=127.0.0.1 \

  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \

  --leader-elect=true \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/var/log/kubernetes \

  --v=2

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

EOF

 

  • --address: serves http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet accept https requests;
  • --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: enables leader election; the elected leader does the work while the other instances block.

Distribute the systemd unit file to all master nodes:

scp /usr/lib/systemd/system/kube-scheduler.service k8s2-master2:/usr/lib/systemd/system/kube-scheduler.service

 

scp /usr/lib/systemd/system/kube-scheduler.service k8s2-master3:/usr/lib/systemd/system/kube-scheduler.service

 

4. Start kube-scheduler and enable it at boot

 

systemctl daemon-reload

systemctl start kube-scheduler

systemctl enable kube-scheduler

 

Healthy startup state of kube-scheduler:

 

systemctl status -l kube-scheduler

 

 

 

5. Check the listening port
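
The original output was a screenshot; an assumed check, consistent with the --address note above (kube-scheduler should be listening on 127.0.0.1:10251):

ss -tpln | grep kube-scheduler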

 

 

 

6. Check which node is the current kube-scheduler leader

 

 kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml

7. Verify on all master nodes that the components are healthy

kubectl get cs

Configuring the Kubernetes node cluster

The kubeadm and kubectl commands below are all executed on k8s2-master1.

The kubernetes node hosts run the following components:

  • docker
  • kubelet
  • kube-proxy
  • flannel

I. Install dependency packages

yum install -y epel-release conntrack ipvsadm ipset jq iptables curl sysstat libseccomp

 

/usr/sbin/modprobe ip_vs

Check whether ip_vs has been loaded:

lsmod | grep ip_vs

II. Deploy the kubelet component

  • kubelet runs on every worker node; it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

 

  • On startup, kubelet automatically registers node information with kube-apiserver; the built-in cadvisor collects and monitors the node's resource usage.

 

For security, this document only opens the secure port that accepts https requests, authenticates and authorizes every request, and rejects unauthorized access (e.g. from apiserver or heapster).

 

1. Download and distribute the kubelet binaries

    

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz

 

tar -xzvf kubernetes-server-linux-amd64.tar.gz

 

cd kubernetes/server/bin/

 

cp kubelet kube-proxy /usr/local/bin

 

scp  kubelet kube-proxy k8s2-node2:/usr/local/bin

 

scp  kubelet kube-proxy k8s2-node3:/usr/local/bin

 

2. Create the kubelet bootstrap kubeconfig file (run on k8s2-master1)

 

Create the kubelet_bootstrap directory:

 

mkdir -p /etc/kubernetes/cert/kubelet_bootstrap

 

cd /etc/kubernetes/cert/kubelet_bootstrap

 

# Create a token

kubeadm token create \

  --description kubelet-bootstrap-token \

  --groups system:bootstrappers:k8s-master1 \

  --kubeconfig ~/.kube/config

 

 

# Set the cluster parameters

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/cert/ca.pem \

  --embed-certs=true \

  --server=https://192.168.1.164:6443 \

  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

 

# Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

  --token=<the token output by the kubeadm token create command above> \

  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set the context parameters

kubectl config set-context default \

  --cluster=kubernetes \

  --user=kubelet-bootstrap \

  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

 

# Set the default context

kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

 

  • Create the kubelet bootstrap kubeconfig three times, replacing k8s-master1 with k8s-master2 and k8s-master3 respectively.

  • A Token, not a certificate, is written into the kubeconfig; the certificates are created later by controller-manager.

Files generated: kubelet-bootstrap-k8s-master1.kubeconfig (and the corresponding files for the other two).

 

 

 

3. View the tokens created by kubeadm for each node

 

kubeadm token list

 

 

 

  • The created token is valid for 1 day; once expired it can no longer be used and is cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
  • After kube-apiserver accepts a kubelet bootstrap token, it sets the requesting user to system:bootstrap:<token-id> and the group to system:bootstrappers.

View the Secret associated with each token (the newly created token appears in the output):

 

kubectl get secrets  -n kube-system

 

 

 

4. Distribute the bootstrap kubeconfig files
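
The original commands were in a screenshot that is not available; a plausible sketch, assuming one bootstrap kubeconfig per node and renaming each to the path the kubelet unit file below expects (/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig):

scp kubelet-bootstrap-k8s-master1.kubeconfig k8s2-node1:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig

scp kubelet-bootstrap-k8s-master2.kubeconfig k8s2-node2:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig

scp kubelet-bootstrap-k8s-master3.kubeconfig k8s2-node3:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig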

 

 

 

5. Create and distribute the kubelet configuration file

 

Create the kubelet configuration template (change the address field to the IP of the corresponding node host):

cat >kubelet.config.json<<EOF

{

  "kind": "KubeletConfiguration",

  "apiVersion": "kubelet.config.k8s.io/v1beta1",

  "authentication": {

    "x509": {

      "clientCAFile": "/etc/kubernetes/cert/ca.pem"

    },

    "webhook": {

      "enabled": true,

      "cacheTTL": "2m0s"

    },

    "anonymous": {

      "enabled": false

    }

  },

  "authorization": {

    "mode": "Webhook",

    "webhook": {

      "cacheAuthorizedTTL": "5m0s",

      "cacheUnauthorizedTTL": "30s"

    }

  },

  "address": "192.168.1.146",

  "port": 10250,

  "readOnlyPort": 0,

  "cgroupDriver": "cgroupfs",

  "hairpinMode": "promiscuous-bridge",

  "serializeImagePulls": false,

  "featureGates": {

    "RotateKubeletClientCertificate": true,

    "RotateKubeletServerCertificate": true

  },

  "clusterDomain": "cluster.local.",

  "clusterDNS": ["10.254.0.2"]

}

EOF

 

  • address: the API listen address; must not be 127.0.0.1, otherwise kube-apiserver, heapster, and other clients cannot call the kubelet API;
  • readOnlyPort=0: disables the read-only port (default 10255), equivalent to leaving it unspecified;
  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: the CA that signs client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;
  • requests that pass neither x509 certificate nor webhook authentication (whether from kube-apiserver or any other client) are rejected with "Unauthorized";
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether the user/group has permission to operate on the resource (RBAC);
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; the validity period depends on kube-controller-manager's --experimental-cluster-signing-duration parameter;
  • kubelet must run as the root account.

Create and distribute the kubelet configuration file to each node:

scp kubelet.config.json k8s2-node1:/etc/kubernetes/cert/kubelet.config.json

 

scp kubelet.config.json k8s2-node2:/etc/kubernetes/cert/kubelet.config.json

 

scp kubelet.config.json k8s2-node3:/etc/kubernetes/cert/kubelet.config.json

6. Create and distribute the kubelet systemd unit file (change --hostname-override to the IP of the corresponding node host)

cat> /usr/lib/systemd/system/kubelet.service <<EOF

[Unit]

Description=Kubernetes Kubelet

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service

 

[Service]

WorkingDirectory=/var/lib/kubelet

ExecStart=/usr/local/bin/kubelet \

  --bootstrap-kubeconfig=/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig \

  --cert-dir=/etc/kubernetes/cert \

  --kubeconfig=/etc/kubernetes/cert/kubelet.kubeconfig \

  --config=/etc/kubernetes/cert/kubelet.config.json \

  --hostname-override=192.168.1.146 \

  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \

  --allow-privileged=true \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/var/log/kubernetes \

  --v=2

Restart=on-failure

RestartSec=5

 

[Install]

WantedBy=multi-user.target

EOF

  • If --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send a TLS Bootstrapping request to kube-apiserver;
  • After K8S approves the kubelet CSR request, it creates the certificate and private key under the --cert-dir directory and then writes the --kubeconfig file.

Create and distribute the kubelet systemd unit file to each node:

scp /usr/lib/systemd/system/kubelet.service k8s2-node2:/usr/lib/systemd/system/kubelet.service

 

scp /usr/lib/systemd/system/kubelet.service k8s2-node3:/usr/lib/systemd/system/kubelet.service

7. Bootstrap Token Auth and granting permissions

On startup, kubelet checks whether the file configured by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it authenticates the Token inside it (the token created earlier with kubeadm); once authenticated, it sets the requesting user to system:bootstrap:<token-id> and the group to system:bootstrappers — this process is called Bootstrap Token Auth.

By default, this user and group have no permission to create CSRs, so kubelet fails to start.

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

 kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers

8. Start the kubelet service (on the node hosts)

mkdir -p /var/log/kubernetes && mkdir -p /var/lib/kubelet

systemctl daemon-reload

systemctl start kubelet

systemctl enable kubelet

 

Healthy kubelet startup state:

 

 

 

After startup, kubelet uses --bootstrap-kubeconfig to send a CSR request to kube-apiserver; once the CSR is approved, kube-controller-manager creates the TLS client certificate and private key for the kubelet and writes the --kubeconfig file.

Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters, otherwise it will not create certificates and keys for TLS Bootstrap.

  • The CSRs of the three worker nodes are all in Pending state at this point.

The kubelet process is now running, but its listening ports are not yet up; the following steps are required.

 

9. Approve the kubelet CSR requests

Approve the CSR requests automatically

Create three ClusterRoleBindings, used respectively to auto-approve client certificates, renew client certificates, and renew server certificates:

cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF

  • auto-approve-csrs-for-group: automatically approve a node's first CSR; note that for the first CSR the requesting group is system:bootstrappers;
  • node-client-cert-renewal: automatically approve renewals of a node's expiring client certificates; the automatically issued certificates have group system:nodes;
  • node-server-cert-renewal: automatically approve renewals of a node's expiring server certificates; the automatically issued certificates have group system:nodes.

Apply the configuration:

kubectl apply -f csr-crb.yaml

 

10. Check kubelet status

 

kubectl get csr    # (the generated token expires after 24h; once it has expired, the same command shows: No resources found)

 

 

 

All nodes are Ready:
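
A quick check (the original output was a screenshot):

kubectl get nodes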

11. API ports exposed by kubelet

ss -tpln | grep kubelet

 

  • 45701: the cadvisor http port;
  • 10248: the healthz http port;
  • 10250: the https API port; note that the read-only port 10255 is not enabled.

Because anonymous authentication is disabled and webhook authorization is enabled, every request to the https API on port 10250 must be authenticated and authorized.

The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:

kubectl describe clusterrole system:kubelet-api-admin

12. kubelet API authentication and authorization

The kubelet configuration file kubelet.config.json sets the following authentication parameters:

  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: the CA that signs client certificates, enabling HTTPS certificate authentication;
  • authentication.webhook.enabled=true: enables HTTPS bearer token authentication;

and the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization.

When kubelet receives a request, it authenticates it with clientCAFile (certificate signature) or checks whether the bearer token is valid; if both fail, the request is rejected with "Unauthorized".

Example of a request passing certificate authentication (screenshot omitted).

After authentication, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC).

Certificate authentication and authorization:

A certificate with sufficient permissions — the admin certificate with the highest privileges, created when deploying the kubectl command-line tool (use the 10250 port of one of your node hosts):

 

curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.1.146:10250/metrics|head

 

  • The values of --cacert, --cert, and --key must be file paths; the ./ in ./admin.pem above must not be omitted, otherwise 401 Unauthorized is returned.

Bearer token authentication and authorization: see https://www.cnblogs.com/harlanzhang/p/10152508.html

 

III. Deploy the kube-proxy component

kube-proxy runs on all worker nodes; it watches the apiserver for changes to services and Endpoints and creates routing rules to load-balance traffic to services.

 

This section covers deploying kube-proxy in ipvs mode.

 

1. Create the kube-proxy certificate

 

To keep track of the files produced while creating the kube-proxy certificate, create a worker_kube-proxy directory under /etc/kubernetes/cert:

 

mkdir /etc/kubernetes/cert/worker_kube-proxy && cd /etc/kubernetes/cert/worker_kube-proxy

 

cat > kube-proxy-csr.json <<EOF

{

  "CN": "system:kube-proxy",

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "BeiJing",

      "L": "BeiJing",

      "O": "k8s",

      "OU": "4Paradigm"

    }

  ]

}

EOF

 

  • CN: specifies that the User of this certificate is system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the Proxy-related kube-apiserver APIs;
  • this certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.

File created above: kube-proxy-csr.json.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \

  -ca-key=/etc/kubernetes/cert/ca-key.pem \

  -config=/etc/kubernetes/cert/ca-config.json \

  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy

 

Generated files: kube-proxy.csr, kube-proxy.pem, kube-proxy-key.pem.

 

 

 

2. Create and distribute the kubeconfig file

 

kubectl config set-cluster kubernetes \

  --certificate-authority=/etc/kubernetes/cert/ca.pem \

  --embed-certs=true \

  --server=https://192.168.1.164:6443 \

  --kubeconfig=kube-proxy.kubeconfig

 

File generated: kube-proxy.kubeconfig.

 

 

 

kubectl config set-credentials kube-proxy \

  --client-certificate=kube-proxy.pem \

  --client-key=kube-proxy-key.pem \

  --embed-certs=true \

  --kubeconfig=kube-proxy.kubeconfig

 

 

 

 

kubectl config set-context default \

  --cluster=kubernetes \

  --user=kube-proxy \

  --kubeconfig=kube-proxy.kubeconfig

 

 

 

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

 

 

 

Distribute the kubeconfig file:

 

scp kube-proxy.kubeconfig k8s2-node1:/etc/kubernetes/cert/

scp kube-proxy.kubeconfig k8s2-node2:/etc/kubernetes/cert/

scp kube-proxy.kubeconfig k8s2-node3:/etc/kubernetes/cert/

 

3. Create the kube-proxy configuration file

 

Create the kube-proxy config template (the bind/healthz/metrics addresses are the IP of the corresponding node host; clusterCIDR is the flannel network range):

 

cat >kube-proxy.config.yaml <<EOF

apiVersion: kubeproxy.config.k8s.io/v1alpha1

bindAddress: 192.168.1.146

clientConnection:

  kubeconfig: /etc/kubernetes/cert/kube-proxy.kubeconfig

clusterCIDR: 172.30.0.0/16

healthzBindAddress: 192.168.1.146:10256

hostnameOverride: 192.168.1.146

kind: KubeProxyConfiguration

metricsBindAddress: 192.168.1.146:10249

mode: "ipvs"

EOF

 

  • bindAddress: listen address;
  • clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs;
  • hostnameOverride: must match the value used by kubelet, otherwise kube-proxy will not find the Node after startup and will not create any ipvs rules;
  • mode: use ipvs mode.

 

 

File generated: kube-proxy.config.yaml.

 

 

 

Create and distribute the kube-proxy configuration file to each node:

 

scp kube-proxy.config.yaml k8s2-node1:/etc/kubernetes/cert/

 

scp kube-proxy.config.yaml k8s2-node2:/etc/kubernetes/cert/

 

scp kube-proxy.config.yaml k8s2-node3:/etc/kubernetes/cert/

 

 

 

4. Create and distribute the kube-proxy systemd unit file (done on k8s2-master1)

 

cat > /usr/lib/systemd/system/kube-proxy.service <<EOF

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

 

[Service]

WorkingDirectory=/var/lib/kube-proxy

ExecStart=/usr/local/bin/kube-proxy \

  --config=/etc/kubernetes/cert/kube-proxy.config.yaml \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/var/lib/kube-proxy/log \

  --v=2

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

EOF

 

Distribute the kube-proxy systemd unit file (after distribution, adjust the node-specific IPs in the config file on each node accordingly):

 

scp /usr/lib/systemd/system/kube-proxy.service k8s2-node1:/usr/lib/systemd/system

 

scp /usr/lib/systemd/system/kube-proxy.service k8s2-node2:/usr/lib/systemd/system

 

scp /usr/lib/systemd/system/kube-proxy.service k8s2-node3:/usr/lib/systemd/system

 

 

 

5. Start the kube-proxy service (first create the working and log directory on each node)

 

mkdir -p /var/lib/kube-proxy/log

 

 

 

systemctl daemon-reload

systemctl enable kube-proxy

systemctl restart kube-proxy

 

Healthy kube-proxy startup state:

 

 

6. View the ipvs routing rules

 

yum -y install ipvsadm

 

ipvsadm -ln

 

 

As shown, all requests to port 443 of the kubernetes cluster IP are forwarded to port 6443 of kube-apiserver.

Congratulations! The node deployment is now complete.

 

IV. Verify cluster functionality

 

1. Check node status

 

kubectl get nodes

 

 

2. Create an nginx web test manifest

 

cat >nginx-web.yml<<EOF

apiVersion: v1

kind: Service

metadata:

  name: nginx-web

  labels:

    tier: frontend

spec:

  type: NodePort

  selector:

    tier: frontend

  ports:

  - name: http

    port: 80

    targetPort: 80

---

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: nginx-con

  labels:

    tier: frontend

spec:

  replicas: 3

  template:

    metadata:

      labels:

        tier: frontend

    spec:

      containers:

      - name: nginx-pod

        image: nginx

        ports:

        - containerPort: 80

EOF

Create it:

 

kubectl create -f nginx-web.yml

 

3. Check connectivity to each Pod IP from the Nodes

 

kubectl get pod -o wide

 

 

 

Ping these three IPs from each node host.

 

4. Check the cluster service

 

kubectl get svc

 

 

 

5. Verify that the service is reachable

 

Access the application from any other host on the LAN using nodeIP:nodePort (the NodePort is shown by kubectl get svc):

 

curl -I 192.168.1.146:31662

 

 

 

From a host on the flannel network, access the application via the cluster IP (screenshot omitted):


Reposted from www.cnblogs.com/lucifer1889/p/11589935.html