Installing Kubernetes 1.15 with kubeadm


Requirements
Before you begin, the machines used to deploy the Kubernetes cluster must meet the following conditions:

  1. One or more machines running CentOS 7.x, x86_64
  2. Hardware: 2 GB RAM or more, 2 or more CPUs, 30 GB of disk or more
  3. Full network connectivity between all machines in the cluster
  4. Internet access, needed to pull images
  5. Swap disabled

Kubernetes (k8s) is an open-source platform for automated container management.
Advantages of using k8s:

  1. Automated container deployment and replication
  2. Scale the number of containers up or down at any time
  3. Rolling upgrades
  4. Automatic service discovery
  5. And more...

In practice, Kubernetes needs only one deployment file and one command to deploy a complete cluster of multi-tier containers (frontend, backend, and so on):
kubectl create -f single-config-file.yaml
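For illustration, a minimal deployment file of this kind might look like the sketch below (hypothetical; the filename single-config-file.yaml and the nginx image are placeholders, not files from this guide):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80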
kubectl is the command-line program for interacting with the Kubernetes API. Some core concepts are introduced next.

Master (control-plane node)
API Server: exposes the Kubernetes API, mainly handling REST operations and updating objects in Etcd; the single entry point for all create, read, update, and delete operations on resources.
Scheduler: binds Pods to Nodes; handles resource scheduling.
Controller Manager: runs all other cluster-level functions; the automation control center for resource objects.
Etcd: all persistent state is stored in Etcd.

Node (worker node)
Kubelet: manages Pods along with their containers, images, and volumes; carries out node-level management for the cluster.
Kube-proxy: provides network proxying and load balancing, implementing communication with Services.
Docker Engine: manages the containers on the node.

Pod
The Pod is the foundation of every workload type in a k8s cluster.
A Pod is the smallest unit for running and deploying an application or service in a k8s cluster, and it can hold multiple containers.
The Pod design lets multiple containers in one Pod share a network address and filesystem, as the sketch below illustrates.
Pod controllers: Deployment, Job, DaemonSet, and StatefulSet (formerly PetSet).
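As a hedged sketch of the shared-network/filesystem idea (the pod name, images, and paths are illustrative only), two containers in one Pod can exchange data through a shared emptyDir volume and reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}        # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # can read /data/msg written by the other container
    volumeMounts:
    - name: shared-data
      mountPath: /data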


Environment plan
192.168.2.243    k8s-master (4 CPU, 4 GB RAM, 50 GB disk)
    kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet
192.168.2.244    node1 (2 CPU, 2 GB RAM, 30 GB disk)
    kubelet, kube-proxy, docker, flannel
192.168.2.245    node2 (2 CPU, 2 GB RAM, 30 GB disk)
    kubelet, kube-proxy, docker, flannel

Part 1

Set the hostname (run the matching command on each machine):
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2

1. Edit the hosts file [can be skipped in a non-cluster environment]
vim /etc/hosts 
192.168.2.243     master
192.168.2.244     node1
192.168.2.245     node2

2. Copy the file to the other Linux machines:
scp /etc/hosts root@node1:/etc/hosts
scp /etc/hosts root@node2:/etc/hosts

3. Install dependency packages
yum -y install conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget git vim net-tools

4. Switch the firewall to iptables and set empty rules
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

5. Disable the swap partition and SELinux (with the default configuration, kubelet will not start otherwise; use free -m to confirm swap is off, as in the check below)
swapoff -a && sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab 
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config 
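A quick verification (not part of the original steps):
free -m | grep -i swap     # the Swap line should show 0 total
getenforce                 # should print Permissive now (Disabled after the next reboot)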

6. Set the time zone and stop services the system does not need
timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
systemctl stop postfix && systemctl disable postfix

7. Tune kernel parameters for Kubernetes (the entries marked [important] are mandatory; they pass bridged IPv4 traffic to the iptables chains)

cat > kubernetes.conf <<EOF
# Pass bridged IPv4 traffic to iptables chains [important]
net.bridge.bridge-nf-call-iptables=1
# Pass bridged IPv6 traffic to ip6tables chains [important]
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Avoid swap; use it only when the system is out of memory
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# On OOM, invoke the OOM killer rather than panicking
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
# Disable IPv6 [important]
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

8. Copy the tuning file into /etc/sysctl.d/ so it is applied at every boot
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

9. Load it manually so it takes effect immediately
sysctl -p /etc/sysctl.d/kubernetes.conf
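A quick sanity check that the key parameters took effect:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# expected output:
# net.bridge.bridge-nf-call-iptables = 1
# net.ipv4.ip_forward = 1
# If sysctl reports that the net.bridge keys do not exist, load the bridge module first
# with modprobe br_netfilter (done in step 12 below) and re-run sysctl -p.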

10. Configure log storage
# Since CentOS 7 moved to systemd, two logging systems run in parallel: the default rsyslogd and systemd-journald. systemd-journald is the better choice, so we make it the default and keep only one way of persisting logs.
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d

cat >/etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress archived logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Cap total disk usage at 10G
SystemMaxUse=10G

# Cap each journal file at 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald
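To confirm journald is now persisting logs to disk, two quick checks:
journalctl --disk-usage    # reports the disk usage of the archived and active journals
ls /var/log/journal        # should now contain a machine-id subdirectory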

11. Upgrade the Linux kernel to the 4.4 series
# The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable.
# Install the repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Install the kernel. Afterwards, check that the matching kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default (the string must match your installed version exactly)
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
# Reboot the server
reboot
# Check whether the kernel upgrade succeeded
[root@k8s-master ~]# uname -a
Linux k8s-master 4.4.196-1.el7.elrepo.x86_64 #1 SMP Mon Oct 7 16:17:40 EDT 2019 x86_64 x86_64 x86_64 GNU/Linux
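If you are unsure of the exact menu entry string, a hedged alternative is to list the entries and select by index:
awk -F\' '/^menuentry/ {print $2}' /boot/grub2/grub.cfg
grub2-set-default 0    # 0 = first entry, normally the newly installed kernel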


12. Prerequisites for kube-proxy to use ipvs (see the note after the module check below)
modprobe br_netfilter

cat >/etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack_ipv4
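Note: loading these modules is only the prerequisite; kube-proxy still defaults to iptables mode. Once the cluster is up, a commonly used way to switch it to ipvs (an optional step, outside this guide's flow) is:
kubectl edit cm kube-proxy -n kube-system                  # set mode: "ipvs" in config.conf
kubectl delete pod -n kube-system -l k8s-app=kube-proxy    # recreate the kube-proxy pods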


Part 2

1. Install Docker
# Be sure to install Docker 18.09; otherwise initializing the k8s master warns: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.3. Latest validated version: 18.09

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
#yum install docker-ce -y
yum -y install docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
systemctl start docker
systemctl enable docker
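Verify that the pinned version was installed:
docker --version    # should report Docker version 18.09.0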


2. Change the Docker cgroup driver to systemd
# Per the CRI installation documentation, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver keeps nodes more stable under resource pressure, so change the Docker cgroup driver to systemd on every node.

Create or edit /etc/docker/daemon.json:

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

3. Restart Docker:
systemctl restart docker

Check the result:
docker info | grep Cgroup
Cgroup Driver: systemd


Part 3

Deploying Kubernetes with kubeadm
1. Configure the k8s repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Test whether https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need alternate network access (e.g. a proxy).
curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

2. Install the pinned k8s components: kubeadm, kubelet, and kubectl
# Versions change frequently, so pin the version here (on my first attempt without pinning, the install failed with a version-too-high error; the latest was 1.16 at the time)
yum makecache fast
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl start kubelet && systemctl enable kubelet

# Everything above must be done on all three servers. (At this point kubelet will keep restarting until kubeadm init/join provides its configuration; this is expected.)

3. Initialize the Kubernetes master
# Run this on the master node only. Change the apiserver address below to your own master's address. Also, before initializing, make sure each node's hostname matches the name mapped to it in /etc/hosts.
Initialize the master:
[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=10.12.236.200 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# The end of the output looks like this:
......
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.12.236.200:6443 --token f106ws.8apa5sfegzsegw1b --discovery-token-ca-cert-hash sha256:f5ec112917e99aa995c049ac83460129b3b6cf3caac2281f1555d2309b9a7ada 

# After initialization succeeds, save the kubeadm join ... line; the worker nodes will need it shortly. If you lose it, you can look the token up on the master with kubeadm token list.

If initialization fails with the following error:
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Workaround:
# add the flag --ignore-preflight-errors=all
kubeadm init \
--apiserver-advertise-address=10.12.236.200 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

# Reset a node (run this if you need to re-initialize it; note that kubeadm reset does not clean up iptables or IPVS rules on its own)
#kubeadm reset


4. Check that the images finished downloading
[root@k8s-master ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.15.0             d235b23c3570        3 months ago        82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.0             201c7a840312        3 months ago        207MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.0             2d3813851e87        3 months ago        81.1MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.0             8328bb49b652        3 months ago        159MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        9 months ago        40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        10 months ago       258MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        22 months ago       742kB

5. Then follow the prompts from the init output:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

# A token is valid for 24 hours by default; once it expires it can no longer be used.
6. If more nodes need to join later, proceed as follows:
Generate a new token:
[root@k8s-master ~]# kubeadm token create
xqubh3.kn2wd0zu8b3cy8ao
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
f106ws.8apa5sfegzsegw1b   23h       2019-10-15T12:00:11+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
xqubh3.kn2wd0zu8b3cy8ao   23h       2019-10-15T12:08:49+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

Get the SHA256 hash of the CA certificate:
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
f5ec112917e99aa995c049ac83460129b3b6cf3caac2281f1555d2309b9a7ada
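Alternatively, kubeadm can generate the token and print a ready-made join command in one step, which avoids computing the hash by hand:
kubeadm token create --print-join-command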


7. Join node1 to the cluster (run this on both worker nodes)
# A node joins the cluster using the token printed at the end of the kubeadm init output:
[root@k8s-node01 ~]# kubeadm join 10.12.236.200:6443 --token q6cd0u.jlx616vnwmbm9kus \
    --discovery-token-ca-cert-hash sha256:43b3c117f6752bd919908d033a826b8dccb8ca2fc19ab90aca3502d13b14dfc7 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Join node2 to the cluster
[root@node2 ~]# kubeadm join 10.12.236.200:6443 --token q6cd0u.jlx616vnwmbm9kus \
>     --discovery-token-ca-cert-hash sha256:43b3c117f6752bd919908d033a826b8dccb8ca2fc19ab90aca3502d13b14dfc7 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# After joining, each node pulls the following two images and starts the corresponding two containers:
[root@node2 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.15.0             d235b23c3570        5 months ago        82.4MB
registry.aliyuncs.com/google_containers/pause        3.1                 da86e6ba6ca1        23 months ago       742kB
[root@node2 ~]# docker ps -a
CONTAINER ID        IMAGE                                                COMMAND                  CREATED              STATUS              PORTS               NAMES
e549feb9eebf        registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-m5qgc_kube-system_d0f4d8fa-6987-47fb-816f-f13f1205d47b_0
898fb710294d        registry.aliyuncs.com/google_containers/pause:3.1    "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-m5qgc_kube-system_d0f4d8fa-6987-47fb-816f-f13f1205d47b_0

8. If both nodes joined without errors, go back to the master and list the nodes:
[root@k8s-master kubernetes]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master kubernetes]# source ~/.bash_profile 
[root@k8s-master kubernetes]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m56s   v1.15.0
node1        NotReady   <none>   6m12s   v1.15.0
node2        NotReady   <none>   5m41s   v1.15.0

# Every node shows STATUS NotReady because no network add-on is installed yet, so install the flannel network add-on first.
9. Install the network add-on (on the master)
The flannel add-on comes from https://github.com/coreos/flannel. Run the following command on the master node to install it:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

## After it completes, check on the master whether the flannel image has been pulled:
docker image ls
# If it shows up, the image has been pulled.
10. List the pods in the kube-system namespace to check whether flannel started correctly:
[root@master ~]# kubectl get pods -n kube-system 
NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-s4pb7          1/1     Running   0          7m22s
coredns-bccdc95cf-sqbqv          1/1     Running   0          7m23s
etcd-master                      1/1     Running   0          6m21s
kube-apiserver-master            1/1     Running   0          6m23s
kube-controller-manager-master   1/1     Running   1          6m43s
kube-flannel-ds-amd64-24vsq      1/1     Running   0          77s
kube-flannel-ds-amd64-4c58l      1/1     Running   0          77s
kube-flannel-ds-amd64-92l8f      1/1     Running   0          77s
kube-proxy-bz4p7                 1/1     Running   0          7m23s
kube-proxy-m5qgc                 1/1     Running   0          3m31s
kube-proxy-sw5rf                 1/1     Running   0          3m36s
kube-scheduler-master            1/1     Running   0          6m33s


# Or: kubectl get pod -n kube-system
If the flannel pods show STATUS Running, they started correctly. Otherwise, follow the logs with journalctl -f or inspect a container's output with docker logs; some concrete examples follow.
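For example (the pod name is illustrative; substitute one from the listing above):
journalctl -u kubelet -f                                                 # follow kubelet logs on the affected node
kubectl -n kube-system logs kube-flannel-ds-amd64-24vsq -c kube-flannel  # logs of one flannel pod
kubectl -n kube-system describe pod kube-flannel-ds-amd64-24vsq          # events, e.g. image pull failures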


# With the network add-on installed, list the node status again:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   21h   v1.15.0
node1        Ready    <none>   21h   v1.15.0
node2        Ready    <none>   21h   v1.15.0
# The status is now Ready.

11. Check the cluster health status:
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

12. Finally, set up command completion for kubectl and Docker:
yum install -y epel-release bash-completion
source /usr/share/bash-completion/completions/docker
source /usr/share/bash-completion/bash_completion

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
# That completes the kubeadm installation of the k8s cluster.
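As an optional smoke test (the deployment name and nginx image are arbitrary choices, not part of the original guide):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc
# then curl any node's IP on the NodePort shown for the nginx service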

# Below are the plain commands from parts 1 and 2; just copy and paste them.
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
reboot

yum -y install conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget git vim net-tools
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

swapoff -a && sed -i '/swap/ s/^\(.*\)$/#\1/g' /etc/fstab 
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config 

timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond
systemctl stop postfix && systemctl disable postfix

cat > kubernetes.conf <<EOF
# Pass bridged IPv4 traffic to iptables chains [important]
net.bridge.bridge-nf-call-iptables=1
# Pass bridged IPv6 traffic to ip6tables chains [important]
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Avoid swap; use it only when the system is out of memory
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# On OOM, invoke the OOM killer rather than panicking
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
# Disable IPv6 [important]
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

sysctl -p /etc/sysctl.d/kubernetes.conf


mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d

cat >/etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress archived logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Cap total disk usage at 10G
SystemMaxUse=10G

# Cap each journal file at 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

modprobe br_netfilter

cat >/etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF


chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod |grep -e ip_vs -e nf_conntrack_ipv4

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
systemctl start docker
systemctl enable docker

cat >/etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

systemctl restart docker


 
