Deploying a Kubernetes Cluster from Binaries

Reference video: https://ke.qq.com/course/276857

1. Deployment Steps

(Figure: overview of the deployment workflow)

2. Environment Planning


master: 172.16.38.208
node1: 172.16.38.174
node2: 172.16.38.234

2.1. Set the hostnames

hostnamectl set-hostname master    # on the master
hostnamectl set-hostname node1     # on node1
hostnamectl set-hostname node2     # on node2

2.2. Add hostname resolution

echo "172.16.38.174 node1" >>/etc/hosts
echo "172.16.38.234 node1" >>/etc/hosts

2.3. Disable SELinux and the firewall

systemctl disable firewalld
systemctl stop firewalld
setenforce 0    # turn SELinux off for the current boot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # keep it off after reboot

2.4. Install Docker on the nodes and configure a domestic registry mirror
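
The commands are not spelled out here; a minimal sketch for CentOS 7 (the docker-ce repo and the mirror URL below are assumptions, substitute your preferred accelerator):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{ "registry-mirrors": ["https://registry.docker-cn.com"] }
EOF
systemctl enable docker
systemctl start docker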

3. Self-Signed TLS Certificates


3.1. Install the certificate tool cfssl

mkdir ssl
cd ssl/
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x ./*
mv cfssl_linux-amd64 /usr/local/bin/cfssl            # generates certificates
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson    # writes the JSON output to files
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo    # inspects certificate info
#cfssl print-defaults config > config.json            # prints a config template
#cfssl print-defaults csr > csr.json                  # prints a CSR template

3.2. Generate the CA certificate

vim ca-config.json

{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

vim ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
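
This writes ca.pem, ca-key.pem, and ca.csr into the current directory; the result can be inspected with the tool installed above:

cfssl-certinfo -cert ca.pem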

3.3. Generate the server certificate

The hosts list must cover every address the apiserver is reached at. Note: the first IP of the service cluster range (10.10.10.1 here) and the short name "kubernetes" are added below so that in-cluster clients can validate the certificate.

vim server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "172.16.38.208",
      "172.16.38.174",
      "172.16.38.234",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json |cfssljson -bare server

3.4. Generate the admin certificate

vim admin-csr.json

{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json |cfssljson -bare admin

3.5. Generate the kube-proxy certificate

vim kube-proxy-csr.json

{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json |cfssljson -bare kube-proxy

3.6. Remove unneeded files

ls |grep -v pem |xargs rm -rf    # keep only the *.pem files
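
After the cleanup, only the four key pairs should remain:

ls    # admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem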

4. Deploy the etcd Cluster

mkdir -p /opt/kubernetes/{bin,cfg,ssl}

4.1. Download the etcd v3.2.29 binary package (on the master)

Download from: https://github.com/etcd-io/etcd/releases/

tar xvf etcd-v3.2.29-linux-amd64.tar.gz
mv etcd-v3.2.29-linux-amd64/{etcd,etcdctl} /opt/kubernetes/bin/
cp ssl/ca*.pem ssl/server*.pem /opt/kubernetes/ssl/

4.2. Write the etcd configuration file

The master is etcd01 on 172.16.38.208; its values follow the same template as the node configs shown in section 4.7:

cat > /opt/kubernetes/cfg/etcd << EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.208:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.208:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.208:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.208:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

4.3. Write etcd.service

A typical unit file for this layout (the flag set is the standard etcd one; it reads the variables from the config file above and the certificates generated in section 3):

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/server.pem \\
--key-file=/opt/kubernetes/ssl/server-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/server.pem \\
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4.4. Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd    # blocks until the other members join; press Ctrl+C to get the shell back
ps -ef |grep etcd


4.5. Configure passwordless SSH login

ssh-keygen -t rsa    # press Enter through every prompt
ssh-copy-id root@node1    # copy the public key
ssh-copy-id root@node2
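
Verify the passwordless login:

ssh node1 hostname    # should print node1 without prompting for a password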

4.6. Copy the files to both nodes

rsync -avzP /opt/kubernetes node1:/opt/
rsync -avzP /opt/kubernetes node2:/opt/
scp /usr/lib/systemd/system/etcd.service node1:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service node2:/usr/lib/systemd/system/

4.7. Edit the etcd configuration file on each node

On node1:

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.174:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.174:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.174:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.174:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd

On node2:

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.38.234:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.38.234:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.38.234:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.38.234:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.38.208:2380,etcd02=https://172.16.38.174:2380,etcd03=https://172.16.38.234:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd

4.8. Add environment variables (master)

echo "PATH=$PATH:/opt/kubernetes/bin" >>/etc/profile
source /etc/profile

4.9. Verify the cluster

cd /opt/kubernetes/ssl

etcdctl --ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379" \
cluster-health

(expected output: all three members report as healthy)

5. Deploy the Flannel Network

5.1. Download the flannel binary package and copy it to the nodes

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar xvf flannel-v0.9.1-linux-amd64.tar.gz
scp flanneld mk-docker-opts.sh node1:/opt/kubernetes/bin/
scp flanneld mk-docker-opts.sh node2:/opt/kubernetes/bin/

5.2. Write the allocated subnet into etcd for flanneld

cd /opt/kubernetes/ssl/
etcdctl --ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan" }}'
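
To confirm the key was written, read it back with the same etcdctl options:

etcdctl --ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379" \
get /coreos.com/network/config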

5.3. Write the flanneld configuration file (run on the nodes)

cat > /opt/kubernetes/cfg/flanneld << EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

5.4. Write the flanneld.service unit file

cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.5. Modify docker.service

Change two lines in the [Service] section:

EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS


5.6. Start the services

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

5.7. Check the interfaces (docker0 and flannel.1 are on the same network)

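A quick check from the shell (interface names are flannel's and Docker's defaults):

ip addr show flannel.1
ip addr show docker0

Both interfaces should carry addresses from the per-node subnet flannel allocated out of 172.17.0.0/16, and a container on one node should be able to ping a container IP on the other.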

5.8. Copy the configuration files to the other node and repeat the same steps

scp cfg/flanneld 172.16.38.234:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/{docker.service,flanneld.service} 172.16.38.234:/usr/lib/systemd/system/
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

6. Deploy the Master Node Components

6.1. Download the master binary package

wget https://storage.googleapis.com/kubernetes-release/release/v1.14.2/kubernetes-server-linux-amd64.tar.gz
tar xvf kubernetes-server-linux-amd64.tar.gz
mv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*

6.2. Create the TLS bootstrapping token

cd /opt/kubernetes/cfg/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

The token.csv columns are: token, user name, user ID, group.

6.3. The apiserver.sh script

vim /opt/kubernetes/bin/apiserver.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"192.168.1.195"}
ETCD_SERVERS=${2:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

6.4. The controller-manager.sh script

vim /opt/kubernetes/bin/controller-manager.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

6.5. The scheduler.sh script

vim /opt/kubernetes/bin/scheduler.sh

#!/bin/bash

MASTER_ADDRESS=${1:-"127.0.0.1"}

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

6.6. Run the scripts

cd /opt/kubernetes/bin/
chmod +x *.sh
./apiserver.sh 172.16.38.208 https://172.16.38.208:2379,https://172.16.38.174:2379,https://172.16.38.234:2379
./controller-manager.sh
./scheduler.sh
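
A quick sanity check that all three components stayed up:

ps -ef | egrep 'kube-apiserver|kube-controller-manager|kube-scheduler'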

6.7. Check the status of the master components

[root@master bin]# kubectl get cs

(all components should report Healthy)

7. Create the kubeconfig Files for the Nodes

7.1. Set the API server endpoint

export KUBE_APISERVER="https://172.16.38.208:6443"

7.2. Create the kubelet kubeconfig

7.2.1. Set the cluster parameters

cd /opt/kubernetes/ssl
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

7.2.2. Set the client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
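
This reuses the BOOTSTRAP_TOKEN exported in section 6.2; in a fresh shell, recover it from the token file first:

BOOTSTRAP_TOKEN=$(cut -d',' -f1 /opt/kubernetes/cfg/token.csv)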

7.2.3. Set the context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

7.2.4. Switch to the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
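
Optionally confirm what was embedded (certificate data is omitted from the output):

kubectl config view --kubeconfig=bootstrap.kubeconfig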

7.3. Create the kube-proxy kubeconfig

7.3.1. Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

cp /root/ssl/kube-proxy* /opt/kubernetes/ssl/
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

7.4. Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to all nodes

mv bootstrap.kubeconfig kube-proxy.kubeconfig ../cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node1:/opt/kubernetes/cfg/
scp ../cfg/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/opt/kubernetes/cfg/

8. Deploy the Node Components

8.1. Add the role binding

The kubelet-bootstrap user from token.csv needs the system:node-bootstrapper cluster role in order to submit certificate signing requests:

[root@master ~]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

8.2. Send kubelet and kube-proxy to the nodes

[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node1:/opt/kubernetes/bin/
[root@master ~]# scp kubernetes/server/bin/{kubelet,kube-proxy} node2:/opt/kubernetes/bin/

8.3. On node1, write the kubelet.sh script

vim /opt/kubernetes/bin/kubelet.sh

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

8.4. Write the proxy.sh script

vim /opt/kubernetes/bin/proxy.sh

#!/bin/bash

NODE_ADDRESS=${1:-"192.168.1.200"}

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

8.5. Run the scripts

chmod +x *.sh
./kubelet.sh 172.16.38.174 10.10.10.2
./proxy.sh 172.16.38.174
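
Check that both services came up:

ps -ef | egrep 'kubelet|kube-proxy'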

8.6. View the CSR list

[root@master ~]# kubectl get csr

(the node's CSR appears with condition Pending)

8.7. Approve the request; the status changes to Approved

[root@master ~]# kubectl  certificate approve node-csr-x4F5fniCL-kj0F_Dl-g2RKUWESv3kKC6nS7J-ZrE81U

8.8. Send the scripts to node2

[root@node1 bin]# scp kubelet.sh proxy.sh 172.16.38.234:/opt/kubernetes/bin/

8.9. Run the scripts on node2

[root@node2 bin]# ./kubelet.sh 172.16.38.234 10.10.10.2
[root@node2 bin]# ./proxy.sh 172.16.38.234

8.10. Approve node2's CSR on the master

[root@master ~]# kubectl get csr
[root@master ~]# kubectl  certificate approve node-csr-XjrmhFhj9gGdryQGduOvlA3eJ0THSXWiyRbcTpjyUeo

8.11. View the cluster node information

[root@master ~]# kubectl get nodes

(both nodes should be listed with STATUS Ready)

9. Run a Test Instance to Check Cluster Status

kubectl run nginx --image=nginx --replicas=3

The pods pull the nginx image as they are created; if the download is slow, switch to a domestic registry mirror. (kubectl run with --replicas works on v1.14; newer kubectl versions drop the flag in favor of kubectl create deployment.)

Check which node each pod landed on:

[root@master ~]# kubectl get pod -o wide


9.1. Expose a port for external access

kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get svc

Port 88 is the cluster-internal service port; 44353 is the NodePort assigned for external access.
[root@node1 bin]# curl 10.10.10.31:88
(the curl returns the nginx welcome page HTML)

9.2. Access from an external browser (any node's IP works)

(browsing to http://<node-ip>:44353 shows the nginx welcome page)


Reprinted from: blog.csdn.net/anqixiang/article/details/105570091