Kubernetes (K8s) Installation and Deployment, Part 5: Installing the Master Node

The etcd cluster consists of three members, reusing the same three virtual machines.

As the core of k8s, the master node consists of three components:

kube-apiserver
kube-scheduler
kube-controller-manager

These three components work closely together. Once again, make sure SELinux is disabled and the firewall is stopped, preferably disabled entirely.

1. Create the TLS certificates

These certificates were already created in the first article; there are 8 in total. Here we only verify the count; to verify the certificates themselves, refer to the notes in the first article. Location: the master node on VM 221.

[root@k8s_Master ssl]# ls /etc/kubernetes/ssl
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  kubernetes-key.pem  kubernetes.pem
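To double-check quickly, a small shell sketch can confirm all eight files are present. `check_certs` is a hypothetical helper of ours, not part of any k8s tooling; pass it the ssl directory to inspect:

```shell
#!/usr/bin/env bash
# Sketch: confirm the eight expected certificate/key files exist.
# check_certs is a hypothetical helper; returns the number of missing files.
check_certs() {
  local dir="$1" missing=0
  for f in admin-key.pem admin.pem ca-key.pem ca.pem \
           kube-proxy-key.pem kube-proxy.pem \
           kubernetes-key.pem kubernetes.pem; do
    if [ ! -f "$dir/$f" ]; then
      echo "MISSING: $f"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```

`check_certs /etc/kubernetes/ssl` prints nothing and exits 0 when all eight files are in place; certificate expiry can be inspected separately with `openssl x509 -noout -enddate -in /etc/kubernetes/ssl/ca.pem`.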

2. Download and install the k8s server binaries

We download the tarball from GitHub and extract the binaries from it. Note: this uses the latest version available on GitHub at the time of writing.

Download the Kubernetes (K8s) binaries from https://github.com/kubernetes/kubernetes/releases

Pick the appropriate version there; this article uses 1.19.0-rc.4 as the example and downloads the binaries from the CHANGELOG page.

[root@k8s_Master package]# wget https://dl.k8s.io/v1.19.0-rc.4/kubernetes-server-linux-amd64.tar.gz
[root@k8s_Master package]# tar -xf kubernetes-server-linux-amd64.tar.gz 
[root@k8s_Master package]# cd kubernetes
[root@k8s_Master kubernetes]# ls
addons  kubernetes-src.tar.gz  LICENSES  server
[root@k8s_Master kubernetes]# tar -xf kubernetes-src.tar.gz

Copy the binaries into /usr/local/bin. cp may warn about overwriting, because the earlier kubectl installation already placed some of them; just overwrite. In the command below, the leading backslash in \cp bypasses the common cp -i alias, so existing files are replaced without prompting. Note that the server tarball contains both the server and client binaries, so there is no need to download the client package separately.

[root@k8s_Master kubernetes]# \cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
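Before moving on, it is worth confirming that the copies landed and are executable. A small sketch (`verify_binaries` is our hypothetical helper, not a standard command):

```shell
#!/usr/bin/env bash
# Sketch: check that each copied binary exists and is executable.
# verify_binaries is a hypothetical helper; pass the target directory.
verify_binaries() {
  local dir="$1" rc=0
  for b in kube-apiserver kube-controller-manager kube-scheduler \
           kubectl kube-proxy kubelet; do
    if [ -x "$dir/$b" ]; then
      echo "ok: $b"
    else
      echo "missing or not executable: $b"
      rc=1
    fi
  done
  return "$rc"
}
```

`verify_binaries /usr/local/bin` should print six "ok" lines, and `kube-apiserver --version` should report v1.19.0-rc.4.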

With that, the necessary binaries are in place. Next we create the service units and configuration files for the three components.

3. Create the apiserver service unit

Contents of /usr/lib/systemd/system/kube-apiserver.service:

[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Create the shared file /etc/kubernetes/config with the following contents:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://sz-pg-oam-docker-test-001.tendcloud.com:8080"
KUBE_MASTER="--master=http://192.168.0.221:8080"

Contents of the kube-apiserver configuration file /etc/kubernetes/apiserver:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=192.168.0.221 --bind-address=192.168.0.221 --insecure-bind-address=127.0.0.1"

# The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.0.221:2379,https://192.168.0.222:2379,https://192.168.0.223:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
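The long KUBE_API_ARGS line above is easier to review one flag per line. A quick sketch (`list_flags` is our hypothetical helper; pass it the env file and the variable name):

```shell
#!/usr/bin/env bash
# Sketch: print each flag of an env-file variable on its own line,
# which makes a long line like KUBE_API_ARGS much easier to review.
# list_flags is a hypothetical helper, not part of Kubernetes.
list_flags() {
  local file="$1" var="$2"
  # shellcheck disable=SC1090
  . "$file"                         # load the variable from the env file
  tr ' ' '\n' <<<"${!var}" | sed '/^$/d'   # one flag per line, drop blanks
}
```

For example, `list_flags /etc/kubernetes/apiserver KUBE_API_ARGS` lists each --flag on its own line for inspection.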

Enable the apiserver at boot and start it:

[root@k8s_Master kubernetes]# systemctl daemon-reload
[root@k8s_Master kubernetes]# systemctl enable kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s_Master kubernetes]# systemctl start kube-apiserver
[root@k8s_Master kubernetes]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Service
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-08-14 19:00:02 CST; 12s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 27141 (kube-apiserver)
    Tasks: 10 (limit: 17529)
   Memory: 346.1M
   CGroup: /system.slice/kube-apiserver.service
           └─27141 /usr/local/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://192.168.0.221:2379,https://192.168.0.222:2379,https://192.168.0.223:2379 --advertise-address=192.168.0.221 --bind-address=192.168.0.221 --insecure-bind-address=127.0.0.1 -->

Aug 14 19:00:03 k8s_Master kube-apiserver[27141]: I0814 19:00:03.922512   27141 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
Aug 14 19:00:03 k8s_Master kube-apiserver[27141]: I0814 19:00:03.922556   27141 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
Aug 14 19:00:03 k8s_Master kube-apiserver[27141]: I0814 19:00:03.929670   27141 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
Aug 14 19:00:03 k8s_Master kube-apiserver[27141]: I0814 19:00:03.934245   27141 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
Aug 14 19:00:03 k8s_Master kube-apiserver[27141]: I0814 19:00:03.934273   27141 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
Aug 14 19:00:04 k8s_Master kube-apiserver[27141]: I0814 19:00:04.635972   27141 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
Aug 14 19:00:04 k8s_Master kube-apiserver[27141]: I0814 19:00:04.701499   27141 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
Aug 14 19:00:04 k8s_Master kube-apiserver[27141]: W0814 19:00:04.903769   27141 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.0.221]
Aug 14 19:00:04 k8s_Master kube-apiserver[27141]: I0814 19:00:04.904462   27141 controller.go:606] quota admission added evaluator for: endpoints
Aug 14 19:00:04 k8s_Master kube-apiserver[27141]: I0814 19:00:04.908683   27141 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io

Check the ports with netstat -antpu; ports 6443 and 8080 should both be listening, which indicates the apiserver installed successfully.

[root@k8s_Master kubernetes]# netstat -antpu|grep 6443
tcp        0      0 192.168.0.221:6443      0.0.0.0:*               LISTEN      27141/kube-apiserve 
tcp        0      0 192.168.0.221:33338     192.168.0.221:6443      ESTABLISHED 27141/kube-apiserve 
tcp        0      0 192.168.0.221:6443      192.168.0.221:33338     ESTABLISHED 27141/kube-apiserve 
[root@k8s_Master kubernetes]# netstat -antpu|grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      27141/kube-apiserve
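Beyond netstat, bash can probe a TCP port directly through its /dev/tcp pseudo-device. A small sketch (`port_open` is our hypothetical helper, and this relies on bash, not POSIX sh):

```shell
#!/usr/bin/env bash
# Sketch: probe a TCP port using bash's /dev/tcp pseudo-device.
# port_open is a hypothetical helper; returns 0 if the port accepts
# a connection, non-zero otherwise.
port_open() {
  local host="$1" port="$2"
  # Open (and immediately close, via the subshell) fd 3 to the target.
  (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}
```

For example, `port_open 192.168.0.221 6443 && echo "apiserver: 6443 up"`.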

4. Configure and start kube-controller-manager

Contents of the service unit /usr/lib/systemd/system/kube-controller-manager.service (note: some of these files may already exist, in which case just verify their contents):

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Contents of the configuration file /etc/kubernetes/controller-manager:

###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --leader-elect=true"

Enable controller-manager at boot and start it:

[root@k8s_Master kubernetes]# systemctl daemon-reload
[root@k8s_Master kubernetes]# systemctl enable kube-controller-manager
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s_Master kubernetes]# systemctl start kube-controller-manager
[root@k8s_Master kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-08-14 19:06:46 CST; 6s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 27370 (kube-controller)
    Tasks: 7 (limit: 17529)
   Memory: 21.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─27370 /usr/local/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.0.221:8080 --address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster>

Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: W0814 19:06:47.609699   27370 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so requ>
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: W0814 19:06:47.609714   27370 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: I0814 19:06:47.609729   27370 controllermanager.go:175] Version: v1.19.0-rc.4
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: I0814 19:06:47.610445   27370 secure_serving.go:197] Serving securely on [::]:10257
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: I0814 19:06:47.610507   27370 tlsconfig.go:240] Starting DynamicServingCertificateController
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: I0814 19:06:47.610780   27370 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: I0814 19:06:47.610838   27370 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...
Aug 14 19:06:47 k8s_Master kube-controller-manager[27370]: E0814 19:06:47.611591   27370 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get "http://192.168.0.221:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-ma>
Aug 14 19:06:50 k8s_Master kube-controller-manager[27370]: E0814 19:06:50.172180   27370 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get "http://192.168.0.221:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-ma>
Aug 14 19:06:52 k8s_Master kube-controller-manager[27370]: E0814 19:06:52.347854   27370 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get "http://192.168.0.221:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-ma>

5. Configure and start kube-scheduler

Contents of the service unit /usr/lib/systemd/system/kube-scheduler.service:

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/local/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Contents of the related configuration file /etc/kubernetes/scheduler:

###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1"

Create the kube user (required because the unit runs with User=kube), then enable the scheduler at boot and start it:

[root@k8s_Master kubernetes]# useradd kube
[root@k8s_Master kubernetes]# systemctl daemon-reload
[root@k8s_Master kubernetes]# systemctl enable kube-scheduler
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s_Master kubernetes]# systemctl start kube-scheduler
[root@k8s_Master kubernetes]# systemctl status kube-scheduler
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-08-14 19:16:21 CST; 5s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 27679 (kube-scheduler)
    Tasks: 9 (limit: 17529)
   Memory: 16.1M
   CGroup: /system.slice/kube-scheduler.service
           └─27679 /usr/local/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.0.221:8080 --leader-elect=true --address=127.0.0.1

Aug 14 19:16:25 k8s_Master kube-scheduler[27679]: E0814 19:16:25.742638   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "http://192.168.0.221:8080/api/v1/persi>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.078687   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "http://192.168.0.221:8080/apis/storage.k8s.io/v1/storage>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.351974   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "http://192.168.0.221:8080/apis/apps/v1/statefulsets?limit=>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.357784   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "http://192.168.0.221:8080/apis/apps/v1/replicasets?limit=500>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.462878   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "http://192.168.0.221:8080/api/v1/nodes?limit=500&resourceVersion=0": dia>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.471460   27679 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "http://192.168.0.221:8080/api/v1/pods?fieldSelector=status.ph>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.518839   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "http://192.168.0.221:8080/apis/storage.k8s.io/v1/csinodes?limit=50>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.521530   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "http://192.168.0.221:8080/api/v1/persistentvolum>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.617104   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "http://192.168.0.221:8080/api/v1/pods?limit=500&resourceVersion=0": dial t>
Aug 14 19:16:26 k8s_Master kube-scheduler[27679]: E0814 19:16:26.625039   27679 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "http://192.168.0.221:8080/api/v1/services?limit=500&resourceVersio>

6. Verify the services once everything is started

First check the listening ports with ss -tanl; mine look like this:

[root@k8s_Master kubernetes]# ss -tanl
State                               Recv-Q                               Send-Q                                                              Local Address:Port                                                               Peer Address:Port                               
LISTEN                              0                                    128                                                                     127.0.0.1:10251                                                                   0.0.0.0:*                                  
LISTEN                              0                                    128                                                                 192.168.0.221:6443                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                 192.168.0.221:2379                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                     127.0.0.1:2379                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                       0.0.0.0:5355                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                     127.0.0.1:10252                                                                   0.0.0.0:*                                  
LISTEN                              0                                    128                                                                 192.168.0.221:2380                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                     127.0.0.1:8080                                                                    0.0.0.0:*                                  
LISTEN                              0                                    128                                                                       0.0.0.0:22                                                                      0.0.0.0:*                                  
LISTEN                              0                                    128                                                                          [::]:5355                                                                       [::]:*                                  
LISTEN                              0                                    128                                                                             *:10257                                                                         *:*                                  
LISTEN                              0                                    128                                                                             *:10259                                                                         *:*                                  
LISTEN                              0                                    128                                                                          [::]:22                                                                         [::]:* 
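For reference, the component ports in the output above map out as follows (a summary we compiled from this listing, assuming the default ports of this version; 2379/2380 belong to etcd and 22 to sshd):

```shell
#!/usr/bin/env bash
# Summary of the master component ports seen in the ss output above
# (uses a bash 4+ associative array).
declare -A expected_ports=(
  [6443]="kube-apiserver (secure)"
  [8080]="kube-apiserver (insecure, loopback only)"
  [10252]="kube-controller-manager (insecure)"
  [10257]="kube-controller-manager (secure)"
  [10251]="kube-scheduler (insecure)"
  [10259]="kube-scheduler (secure)"
)
for p in "${!expected_ports[@]}"; do
  printf '%-6s %s\n' "$p" "${expected_ports[$p]}"
done
```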

Use kubectl get to check component status; make sure every component reports Healthy and etcd health is true:

[root@k8s_Master kubernetes]# kubectl get componentstatuses
2020-08-14 19:19:21.593962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}

With that, the master node installation is complete. Be careful when creating the configuration files. If anything fails, use journalctl -xe -u <service-name> to view the errors, and check /var/log/messages for more detail; then address each issue as it comes up.

Notes:

1. Watch the punctuation when copying the configuration files.
2. The kube user must be created, otherwise the scheduler will not start.

Bonus:

source <(kubectl completion bash)

Running the command above enables shell auto-completion for kubectl, which has a great many subcommands.
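To make completion survive new shells, add that line to ~/.bashrc. An idempotent sketch avoids duplicate entries (`append_once` is our hypothetical helper):

```shell
#!/usr/bin/env bash
# Sketch: append a line to a file only if it is not already present.
# append_once is a hypothetical helper; -qxF matches the whole line as
# a fixed string, so repeated runs never add duplicates.
append_once() {
  local line="$1" file="$2"
  grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
```

For example: `append_once 'source <(kubectl completion bash)' ~/.bashrc`.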

Reposted from blog.csdn.net/baidu_38432732/article/details/107999448