Installing Docker and Kubernetes on CentOS 8

Install Docker, Kubernetes, and related tools on CentOS 8.

1. Install CentOS 8

a. Download the ISO image

Full version: http://mirrors.aliyun.com/centos/8.2.2004/isos/x86_64/CentOS-8.2.2004-x86_64-dvd1.iso
Minimal version: http://mirrors.aliyun.com/centos/8.2.2004/isos/x86_64/CentOS-8.2.2004-x86_64-minimal.iso

b. Install from the image

Installation tutorial link

c. Bring up the network interface

After a minimal install, if the machine cannot reach the external network,
edit the NIC configuration:

vim /etc/sysconfig/network-scripts/ifcfg-ens33

Change ONBOOT=no to ONBOOT=yes, then reload the network configuration:

nmcli c reload
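As a non-interactive alternative, a sketch (it assumes the interface and connection are both named ens33):

# flip ONBOOT in the ifcfg file, then reload and activate the connection
sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-ens33
nmcli c reload
nmcli c up ens33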

2. Install docker-ce

Install Docker from a yum repository.

a. Download the docker-ce yum repo

curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo

# official Docker package download site
https://download.docker.com

b. Install the extra dependency (containerd.io >= 1.2.2-3)

yum install -y https://download.docker.com/linux/fedora/30/x86_64/stable/Packages/containerd.io-1.2.6-3.3.fc30.x86_64.rpm

c. Install docker-ce

yum install -y docker-ce

d. Enable Docker at boot and start it

systemctl enable docker && systemctl start docker
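A quick sanity check that the daemon is running (optional):

# should print "active" and the client/server version info
systemctl is-active docker
docker version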

3. Install kubeadm and related tools

a. Disable SELinux and reboot

This is required so that containers can read files on the host filesystem.

vim /etc/sysconfig/selinux

Change SELINUX=enforcing to SELINUX=disabled:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
# SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Reboot Linux.
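If you prefer to avoid a reboot, a sketch that takes effect immediately: setenforce 0 switches SELinux to permissive mode for the current boot, and the sed edit makes the change permanent (note that /etc/sysconfig/selinux is a symlink to /etc/selinux/config):

# permissive now, disabled after the next reboot
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config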

b. Add the Kubernetes repo

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
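The same repo file can also be written non-interactively, which is convenient when preparing several nodes (a sketch mirroring the contents above):

cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=kubernetes Repository
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF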

c. Install kubelet, kubeadm, and kubectl

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
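It is also common to enable kubelet on the Master right after installing it, so kubeadm init can take it over later (a sketch; kubelet will keep restarting until kubeadm init runs, which is expected):

systemctl enable kubelet && systemctl start kubelet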

d. Export the default config and pull the Kubernetes images

Modify the configuration and pull the images.
First change the Docker daemon config to use a mirror hosted in China for faster pulls:

echo '{"registry-mirrors": ["https://registry.docker-cn.com"]}' > /etc/docker/daemon.json
# restart the docker service
systemctl restart docker
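To confirm the mirror is in effect, check the "Registry Mirrors" section of docker info (a quick check):

docker info | grep -A 1 'Registry Mirrors'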

Export the default kubeadm configuration:

kubeadm config print init-defaults > ./init-default.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Create a trimmed config (./init-config.yaml) based on the defaults above.
For imageRepository, make sure the chosen repository actually hosts images matching your Kubernetes version.

vim ./init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: docker.io/aiotceo
kubernetesVersion: v1.18.0
networking:
  podSubnet: "192.166.0.0/16"
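Before pulling, you can list exactly which images kubeadm will use with this configuration (a quick check):

kubeadm config images list --config=./init-config.yaml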

Pull the Kubernetes images:

kubeadm config images pull --config=./init-config.yaml
# images required by Kubernetes
k8s.gcr.io/kube-apiserver:v1.18.3
k8s.gcr.io/kube-controller-manager:v1.18.3
k8s.gcr.io/kube-scheduler:v1.18.3
k8s.gcr.io/kube-proxy:v1.18.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

e. Run kubeadm init to install the Master

Disable swap. Kubernetes versions after 1.8 require swap to be turned off.

If a node runs on traditional disk swap, resource guarantees are lost, which can cause unpredictable performance and I/O problems and makes it unsafe to share the server.

swapoff -a

Kubernetes 1.8+ requires swap to be disabled; if it is still on, the installation fails with:

running with swap on is not supported. Please disable swap
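Note that swapoff -a only disables swap for the current boot; to keep it off after a reboot, comment out the swap entry in /etc/fstab as well (a sketch):

# comment out any swap line in /etc/fstab (idempotent)
sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab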

Change the hostname

The default hostname is usually localhost.localdomain.
Node hostnames must be unique within the cluster, so it is best to set a proper hostname before initializing.

# show the current hostname
hostname

# rename the host
hostnamectl set-hostname new-hostname

Initialize the cluster with kubeadm init, using the configuration file created above:

kubeadm init --config=init-config.yaml

As the command output instructs, copy the admin kubeconfig into the regular user's home directory:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Kubernetes is now installed on the Master, but the cluster cannot work yet: it still lacks Nodes and a container network.
Note: save the last few lines of the output (kubeadm join ...); they are needed to add nodes to the cluster.
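If those last lines are lost, a fresh join command (including a new token) can be generated on the Master at any time:

kubeadm token create --print-join-command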

f. Install a Node and join the cluster

Configure the Docker and Kubernetes repos
(see section 2, Install docker-ce)

Install kubeadm and related tools

yum install kubeadm kubelet --disableexcludes=kubernetes

# enable docker & kubelet at boot and start them
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Create the configuration file ./join-config.yaml:

apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
    bootstrapToken:
        apiServerEndpoint: 192.168.233.130:6443
        token: bflf3m.jk6bkreuskodlm7w
        unsafeSkipCAVerification: true
    tlsBootstrapToken: bflf3m.jk6bkreuskodlm7w

apiServerEndpoint: the Master server's IP and port (the port defaults to 6443)
token: the cluster's bootstrap token
tlsBootstrapToken: the token used for TLS bootstrapping
unsafeSkipCAVerification: skip CA certificate verification
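For reference, the same join can also be expressed directly on the command line instead of a config file (a sketch using the values above):

kubeadm join 192.168.233.130:6443 --token bflf3m.jk6bkreuskodlm7w --discovery-token-unsafe-skip-ca-verification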

# list the current tokens (valid for 24h)
[root@localhost /]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
bflf3m.jk6bkreuskodlm7w   14h         2020-06-18T01:06:04-04:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

Run kubeadm join with the configuration file to add the node to the cluster:

kubeadm join --config=./join-config.yaml

On the Master server, check the node list:

[root@localhost /]# kubectl get nodes
NAME                    STATUS   ROLES    AGE    VERSION
k8s-node-1              Ready    <none>   125m   v1.18.3
localhost.localdomain   Ready    master   9h     v1.18.3

g. Install a network plugin

Install a CNI network plugin on the Master:

# kubectl apply -f [podnetwork.yaml]
# example: Weave Net
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
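After applying the manifest, you can watch the Weave Net pods come up; they carry the name=weave-net label visible in the describe output later in this section:

kubectl -n kube-system get pods -l name=weave-net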

h. Verify the cluster installation

Run kubectl get pods --all-namespaces to check the status of every pod in the cluster:

[root@localhost /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
default       mysql-x8gs2                                     1/1     Running   0          75m
default       myweb-cmkp9                                     1/1     Running   0          52m
default       myweb-r2mtt                                     1/1     Running   0          52m
kube-system   coredns-6b4b4997cc-742jv                        1/1     Running   0          8h
kube-system   coredns-6b4b4997cc-swx64                        1/1     Running   0          8h
kube-system   etcd-localhost.localdomain                      1/1     Running   0          115m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          116m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          115m
kube-system   kube-proxy-s7bk6                                1/1     Running   0          98m
kube-system   kube-proxy-xcxdc                                1/1     Running   0          8h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          116m
kube-system   weave-net-gd7r5                                 2/2     Running   0          132m
kube-system   weave-net-zksgw                                 2/2     Running   0          98m

Inspect the status of a problematic pod:

kubectl --namespace=kube-system describe pod <pod_name>
[root@localhost /]# kubectl --namespace=kube-system describe pod weave-net-gd7r5
Name:                 weave-net-gd7r5
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 localhost.localdomain/192.168.233.130
Start Time:           Wed, 17 Jun 2020 07:46:03 -0400
Labels:               controller-revision-hash=6768fc7ccf
                      name=weave-net
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   192.168.233.130
IPs:
  IP:           192.168.233.130
Controlled By:  DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://7adc715594cdffb666e6622185af4a7b214126e8077106c4f2275a5377e0d5e0
    Image:         docker.io/weaveworks/weave-kube:2.6.5
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:703a045a58377cb04bc85d0f5a7c93356d5490282accd7e5b5d7a99fe2ef09e2
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Running
      Started:      Wed, 17 Jun 2020 07:48:13 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      10m
    Readiness:  http-get http://127.0.0.1:6784/status delay=0s timeout=1s period=10s #success=1 #failure=3

......

  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-fq98w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-fq98w
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 :NoExecute
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:          <none>

Reset the node:

kubeadm reset -f
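kubeadm reset does not clean up iptables rules or CNI configuration; its own output suggests doing that manually. A hedged cleanup sketch (only run this on a node you really intend to wipe):

# flush iptables rules and remove CNI config and the local kubeconfig
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d $HOME/.kube/config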


Reposted from blog.csdn.net/qq_40601372/article/details/106793068