Kubernetes continually evicts pods when a node's inodes are exhausted

Container technologies such as Kubernetes let all business processes run in a shared resource pool, improving utilization and saving cost. To keep different workloads from interfering with each other, however, this places higher demands on the isolation provided by the underlying Docker and Kubernetes layers, and Kubernetes, as a relatively young technology, is not yet fully mature in this respect. Recently a staging cluster ran into exactly such an incident: its inodes were exhausted.

Symptoms

In the test cluster, many pods were being evicted:

[root@node01 ~]$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
default-http-backend-78d96f979f-5ljx4   1/1       Running   0          8d
perfcounter-proxy-8b884c4ff-2ng4j       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-5hq5k       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-66qfw       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-6hf7f       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-6knrm       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-6m9p5       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-768g6       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-7d74k       0/2       Evicted   0          20h
perfcounter-proxy-8b884c4ff-998kx       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-bmvjc       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-cbh6m       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-cd8jb       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-d2m25       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-dgtkk       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-ftf2r       0/2       Evicted   0          20h
perfcounter-proxy-8b884c4ff-hdz9x       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-hgftx       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-ks5sq       0/2       Evicted   0          1d
perfcounter-proxy-8b884c4ff-kwf6x       0/2       Evicted   0          23h
perfcounter-proxy-8b884c4ff-lnqct       2/2       Running   0          20h
perfcounter-proxy-8b884c4ff-ngs9s       0/2       Evicted   0          2d

Pod eviction normally happens after a node goes unready, for example when its disk fills up or its network fails: Kubernetes evicts the pods on the unhealthy node so that the number of serving replicas is not affected. But evictions as frequent as those shown above clearly indicate a problem: the node's state is unstable and keeps flapping. Running kubectl describe node node04 revealed that node04 was running out of inodes:

Events:
  Type     Reason                 Age               From                            Message
  ----     ------                 ----              ----                            -------
  Warning  EvictionThresholdMet   3d (x3 over 4d)   kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    3d (x3 over 4d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  3d (x3 over 4d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure
  Warning  EvictionThresholdMet   2d (x8 over 2d)   kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    2d (x2 over 2d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  2d (x2 over 2d)   kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure
  Warning  EvictionThresholdMet   20h (x9 over 1d)  kubelet, node04.kscn  Attempting to reclaim nodefsInodes
  Normal   NodeHasDiskPressure    20h (x6 over 1d)  kubelet, node04.kscn  Node node04.kscn status is now: NodeHasDiskPressure
  Normal   NodeHasNoDiskPressure  20h (x6 over 1d)  kubelet, node04.kscn  Node node04.kscn status is now: NodeHasNoDiskPressure

Logging into node04 confirmed that free inodes on the /home partition were indeed running low:

[root@node04 gaorong]# df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/vda1      1310720 150802 1159918   12% /
devtmpfs       4116012    359 4115653    1% /dev
tmpfs          4118414     61 4118353    1% /dev/shm
tmpfs          4118414   2083 4116331    1% /run
tmpfs          4118414     15 4118399    1% /sys/fs/cgroup
/dev/vdb        819200 762459   56741   94% /home

Solution

The first step is to determine which service is exhausting inodes, i.e. which directory holds the most of them; the directory then points to the service. Using the inodes tool to display per-directory inode counts showed that /home/docker/aufs was the largest consumer:

[root@node04 gaorong]# inodes  --d /home/docker/aufs/ -t 50000 -e 500
------------------------------------------
[CONFIG] Directory to scan specified as /home/docker/aufs/
------------------------------------------
[CONFIG] Tree directories above 50000 inodes
------------------------------------------
[CONFIG] Exclude directories below 100 inodes
------------------------------------------
    INODE USAGE SUMMARY
------------------------------------------
    INODES |       SIZE | DIRECTORY           
------------------------------------------
  493221   |   16G      | diff                
  --5417   |   --130M   | --016fe5bdd62c9264bcda0c44ef1548aaf8b82acfee2b0b8943e7394218118550
  --166    |   --65M    | --047d364456b37521445751910a4251faa6adcd50908f4d2f064ecbe80f20d332
  --10536  |   --203M   | --0a0b37e475c91a49cea4d732f83e6f87010edae86e25f4c7e14203e66c4122ea
   ... (several lines omitted) ...
  --2050   |   --64M    | --f5fa7a7efabad90d1dfa4519f5c7fae611bdbfc9a9b43eb58b8c8bc035a15336
  --8714   |   --304M   | --f7618789addc900474c931ce9bbdc52abd5bf61ab98cd23d46e6189bd41094d1
  --241    |   --174M   | --faf7ea4dc438088688dded3164ce7a11018a7a5dc346d621b43965bc3b0e60cb
  --10921  |   --325M   | --fb08d1ed58bcdd3374ed835fcb27554ab52bcdd4822d502eb00066dec4d70650
  386      |   1.6M     | layers              
  179712   |   6.4G     | mnt                 
  --7110   |   --467M   | --06feaf7d4336e97f3b11440865b97c0fadedd5488126215216be616c656e82f5
  --10764  |   --337M   | --26f394cad00c1876f832e2a4fb83816253dd8d70303a0d3b96a46edaf3564a05
  --18914  |   --1.2G   | --3912a58cf254540753d8accf69ed7a8c1c9d9539d73d35be9bd105dde94effe5
  --2965   |   --168M   | --401c0221bb62e36b3fcfb181420b64ec21147783860c72732299dba2f69f0280
  --3759   |   --85M    | --773333aef4b7f00f1f585b4c41fcbe20906c28b53999cfb78df268034d9b59e4
  --18692  |   --626M   | --79a76870faa7ed5f218b87e942d12523e5a9dd14cdb8449ccea5af279f45b526
  --26462  |   --758M   | --81d5c11dcc68df6ceee547fe84236c0234e4a63844e4ffc19047b6846185871b
  --2333   |   --24M    | --91bc77e078602f1e7285d2d742368a7a17ad0c0c3c736305fcbe68a11aa23e4c
  --10623  |   --218M   | --97a5ff749684b366c4a3ecff5536990e1db907577065deb7839e9da3c116ff93
  --18914  |   --1.2G   | --a4b945841c02a8a38463096f5cc1fd314563d94c78f8ccc92221b7f2571b2c3c
  --6749   |   --157M   | --aaa03c1ad6cd6fb332cce4c033eb74e2ae9f62364d5ea6e9d434aa88b6925644
  --17989  |   --488M   | --b6fb0a75102f0f25aa14a2f76b3013fab19c3f49b9ab49a2a8c230d52b84ff74
  --28536  |   --674M   | --d01bdbddb09cf36411c6179cb9657390eb4ed0f440f8aa05c94e38f7de49439c
  --5284   |   --113M   | --e3c555a3df172a2cd852cff210b43693bbc1cfa0b7cb5dbaee7645a4abdcfdb0
------------------------------------------
673320     | 22G        | /home/docker/aufs/  
------------------------------------------
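If the inodes helper shown above is not available on a node, a rough equivalent can be pieced together from standard tools: every file, directory, and symlink consumes one inode, so counting entries with find approximates a directory's inode usage (hard links are counted once per link, so this is only an estimate). A minimal sketch, demonstrated against an illustrative throwaway tree:

```shell
# Approximate a directory's inode usage by counting its entries.
count_inodes() {
  find "$1" | wc -l
}

# Demo against a throwaway tree (paths are illustrative):
mkdir -p /tmp/inode-demo/a /tmp/inode-demo/b
touch /tmp/inode-demo/a/f1 /tmp/inode-demo/a/f2
count_inodes /tmp/inode-demo   # prints 5: the root dir, a, b, f1, f2
```

On a real node this could be looped over /home/docker/*/ and piped through sort -rn to find the heaviest subtree.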

When Docker uses aufs as its storage driver, this directory stores image layers; every file in every image has a corresponding file here. Running out of inodes because aufs creates so many small files is a fairly common problem; a block-device storage driver such as devicemapper does not suffer from it. So for this directory it is enough to clean up unused images and tune the kubelet's image-gc-high-threshold and image-gc-low-threshold settings.
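The cleanup and threshold tuning just described can be sketched as follows (the threshold values are illustrative examples, not recommendations):

```shell
# One-off: remove images not referenced by any container, freeing the
# corresponding inodes under /home/docker/aufs/diff.
docker image prune -a -f

# Ongoing: kubelet image GC thresholds, expressed as percent of disk usage.
# GC kicks in once usage exceeds the high threshold and deletes unused
# images until usage drops back to the low threshold.
#   kubelet --image-gc-high-threshold=85 --image-gc-low-threshold=80
```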
The scan above attributes 673320 inodes to aufs, but df -i reported 762459 inodes in use on /home, so something else must account for a sizable share as well. A closer look turned up another heavy directory, /home/docker/volumes:

[root@node04 docker]# inodes  --d /home/docker/volumes -t 50000 -e 500
------------------------------------------
[CONFIG] Directory to scan specified as /home/docker/volumes
------------------------------------------
[CONFIG] Tree directories above 50000 inodes
------------------------------------------
[CONFIG] Exclude directories below 500 inodes
------------------------------------------
    INODE USAGE SUMMARY
------------------------------------------
    INODES |       SIZE | DIRECTORY           
------------------------------------------
  10995    |   715M     | 0afe5e70b2b22d1bee735977a8e931e2f2a65da5d79c08babf08a4de1a69877a
  6576     |   378M     | 0d0453ed64d4e830f408cf920a5ae13cefe1d4bfe3444464b9e612e699872a17
  8497     |   414M     | 1708cd1ce59818132e93bada3db8926a9eb03e08f553ad66f9df738424408704
  6862     |   289M     | 2940dea37d29501423d6eb50056034c89a9caf92cd2cd567ea3ffc02b54c813d
  3224     |   102M     | 2fe657a647b3c8b9b9a4d552cdeee682bd465edff14d90477a3277d79eeed807
  9522     |   409M     | 354c182b47707cc173de72ccf7ad99cabccc61d1f44e70a72d66a32334894d14
  15204    |   533M     | 4d49d25d55ab02f59fef56d984bfd811c3a2c8ec02378c0d7e860fac9df9d3ee
  9546     |   407M     | 4e1d2bb4ab8ca730df17b8a16910740dba8046cf8d9dc71625a017d67e16f95f
  9205     |   408M     | 51acc932cc72f620c9fded3cb93ea877467b78364731d35e58c08a72b2b247c2
  9801     |   481M     | 56a0e0a016c8c65cc27430dcd4053a18dec81bfd444da505148fe3fc30f4506c
  17675    |   473M     | 6ee2b44c4351790ef36bb7a1774fa2647b6d4c9ecc5dce1bcb33a9a522ea0808
  9035     |   484M     | 7928e4c94867ee49de041565e92a4c36517a6a7f904149dbd5b7df331b4f6a0e
  7788     |   285M     | 85c76690344f6b11ccaa95153df0b317b6ade6a6f6179e19ad3c40b9c2ba4d27
  11193    |   856M     | 8d51785afbeb6754e70286c73ee1fa5044b8eab02b606d692e64eea342303449
  3054     |   79M      | 9252da456a675c2c6519568b1bae3c7e410f1d5dc89e47f4e13bdd26964367e4
  8604     |   377M     | 962d4c2e6dd47f642aafe49557b660b9767b859c472ae0002c4f2ef4789bb350
  5366     |   132M     | 9bb9e9d8eff02f63eeea8dee9c351d6608b3e061d659a013ee5370b961961707
  9171     |   480M     | b8edb07ae19b7e5920b3aa254fb437670cc2f21f570cde693a778400a8e2784f
  15088    |   592M     | bd017c95ed823d8bbe7e4d7ee8bb40160c86b613fe9683c9a09a1e4dde3f017e
  7813     |   285M     | bf89566ba7852b07dc52fa5ef9d3a467dbcc5608532f6c2b0682feebed508401
  14454    |   1.2G     | c5a858a1ea25032f90d97b3765bebd32ffcecd5b911087451cd6d4fdaee1c92c
  10427    |   467M     | c9548dfca3e3de3dd1387c47a1a6771fe8aaf3908ab17a58287dad8511fa416c
  4530     |   215M     | cc4bde58318575c1c5fa8dde53e05d1f082b5a9944cecec23ac50989cebd963c
  8036     |   381M     | ced3dd5ecabdde6e313db4514290b30f735f8f18431ac6cb848d65d980e101b5
  6439     |   374M     | d0b3ca6b65cf9a686f0d521a813cb27d91e9dfd6078b80171e1dc16fdba6a922
  15891    |   589M     | e6292e24cc7d4a5ec43ed5ede58e9356b369e8e72f24327d297dd668fc20a640
  11088    |   831M     | ef175210467e4b58fb2215e35d679435699f8ce1f043a6225ff4fa1033c0e345
------------------------------------------
259083     | 15G        | /home/docker/volumes
------------------------------------------

The /home/docker/volumes directory is created by Docker to back anonymous volumes declared with the VOLUME instruction. Let's look at what is inside:

[root@node04 volumes]# tree -L 3
.
├── 0ad528934774355e22f4afa2df56a85cbe18bed0f922f37d054c1c0362baf648
│   └── _data
│       └── luhualin
├── 16d08b2bdb76ebeede9b6ee6da1378e44cf47ba825d6545f1a0ead8e21f9ffc0
│   └── _data
├── 1708cd1ce59818132e93bada3db8926a9eb03e08f553ad66f9df738424408704
│   └── _data
│       ├── global-bigdata-micsql
│       └── global-bigdata-micsql.tmp
├── 1a5c420084d42c370ac670d6e483cc2a960f4da71cf0fa0a765e1bf099a7c027
│   └── _data
│       ├── data
│       ├── meta
│       └── wal
├── 1fca6975f0af913c116aad7273a32f0a2795e62f3439ab779b39524c9434cf5d
│   └── _data
│       └── ContainerCloud
├── 3bb2d14dd39cca6d93296c742bbce8446c79a5c441b81c60d07a661a9a252b63
│   └── _data
├── 3c4845e864469457b3dac0a077e7f542db0dcfd9e20fbb112fde0e514d23a261
│   └── _data

├── 53ea91b9a833cb676d71cf336ffab97f20fe326bbd165427b64c9685e26a0f1f
│   └── _data
│       └── ContainerCloud
 ... (several lines omitted) ...
├── fc82dfd844191eb8baf1a68d3378e8fe0553d9cc705eb2fdfd3e5398a17783e2
│   └── _data
│       ├── k8s-node-frigga
│       └── k8s-node-frigga.tmp
├── fd3dec13f142dc59ad8dbbc2acf3841fd893f6d548ebe74aa2348af8f26fa448
│   └── _data
└── metadata.db

There are plenty of small files in there, which explains the high inode consumption. But why is data ending up here at all?

[root@node04 volumes]# docker volume ls
DRIVER              VOLUME NAME
local               3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae
local               2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260
 ... (several lines omitted) ...
local               ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4

Running docker ps -a | grep -v NAME | awk '{print $1}' | xargs docker inspect | grep 3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae identifies which container created these volumes; then inspect that container:

[root@node04 global-bigdata]# docker inspect 25b3d4902305
[
    {
        "Id": "25b3d4902305258371fbc71851dcb4aade62e393f567750f4776b7a3843e1ae4",
        "Created": "2019-02-27T06:58:27.091296766Z",
        "Path": "/docker-entrypoint.sh",
        "Args": [
            "zkServer.sh",
            "start-foreground"
        ],
       ... (several lines omitted) ...
        "GraphDriver": {
            "Data": null,
            "Name": "aufs"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/volumes/kubernetes.io~secret/default-token-h7vj9",
                "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
                "Mode": "ro,rslave",
                "RW": false,
                "Propagation": "rslave"
            },
            {
                "Type": "bind",
                "Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/home/kubelet/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/containers/zk/1d590aa4",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4",
                "Source": "/home/docker/volumes/ac04c0925a0203cd87c55eaf1c6f094cf7e0b2cf3173c80d9de459f2aca1ccd4/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae",
                "Source": "/home/docker/volumes/3a42fa60c84ebf1363c9b65c874a86e5663c883f2629edd0ac22575893f9a9ae/_data",    <- here 
                "Destination": "/datalog",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Type": "volume",
                "Name": "2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260",
                "Source": "/home/docker/volumes/2714f98661dfb0c6f94c99a0acf2c33f30d491667ce251e6499b069497095260/_data",
                "Destination": "/logs",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "zk-7848b46c9d-6nqhz",
            "Domainname": "",
            "User": "0",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "2181/tcp": {},
                "2888/tcp": {},
                "3888/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Cmd": [
                "zkServer.sh",
                "start-foreground"
            ],
            "Healthcheck": {
                "Test": [
                    "NONE"
                ]
            },
            "ArgsEscaped": true,
            "Volumes": {
                "/data": {},
                "/datalog": {},
                "/logs": {}
            },
            "WorkingDir": "/zookeeper-3.4.13",
            "Entrypoint": [
                "/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {
                "annotation.io.kubernetes.container.hash": "ee6d75c4",
                "annotation.io.kubernetes.container.ports": "[{\"containerPort\":2181,\"protocol\":\"TCP\"}]",
                "annotation.io.kubernetes.container.restartCount": "0",
                "annotation.io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "annotation.io.kubernetes.container.terminationMessagePolicy": "File",
                "annotation.io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.container.logpath": "/var/log/pods/1461d38d-3a5d-11e9-a499-fa163e08f614/zk/0.log",
                "io.kubernetes.container.name": "zk",
                "io.kubernetes.docker.type": "container",
                "io.kubernetes.pod.name": "zk-7848b46c9d-6nqhz",
                "io.kubernetes.pod.namespace": "rpc",
                "io.kubernetes.pod.uid": "1461d38d-3a5d-11e9-a499-fa163e08f614",
                "io.kubernetes.sandbox.id": "a0fd95b0bb06a573e385f23dc21187c23b8232d59a252d0d8790770e946851a5"
            }
        }
    }
]
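The volume-to-container lookup above can also be scripted. The sketch below filters docker-inspect-style JSON for mounts of type "volume" using python3 (assumed to be present on the node; the sample JSON document is illustrative):

```shell
# Print the names of local volumes a container mounts, given
# `docker inspect` JSON on stdin.
extract_volumes() {
  python3 -c '
import json, sys
for container in json.load(sys.stdin):
    for mount in container.get("Mounts", []):
        if mount.get("Type") == "volume":
            print(mount["Name"])
'
}

# Live usage would be: docker inspect 25b3d4902305 | extract_volumes
# Demo with a trimmed-down sample document:
echo '[{"Mounts":[{"Type":"volume","Name":"abc123"},{"Type":"bind","Source":"/etc/hosts"}]}]' \
  | extract_volumes   # prints abc123
```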

This confirms that the container does mount that directory; its image is zookper. Let's see how its Dockerfile is written:

[root@node04 global-bigdata]# docker image  history zookper
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
06b178591ab3        3 weeks ago         /bin/sh -c #(nop)  CMD ["zkServer.sh" "start…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["/docker-entr…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop) COPY file:e241c4b758b1c071…   1.13kB              
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  EXPOSE 2181 2888 3888        0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  VOLUME [/data /datalog /l…   0B                    <- here 
<missing>           3 weeks ago         /bin/sh -c #(nop) WORKDIR /zookeeper-3.4.13     0B                  
<missing>           3 weeks ago         |2 DISTRO_NAME=zookeeper-3.4.13 GPG_KEY=C61B…   61.1MB              
<missing>           3 weeks ago         /bin/sh -c #(nop)  ARG DISTRO_NAME=zookeeper…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  ARG GPG_KEY=C61B346552DC5…   0B                  
<missing>           3 weeks ago         /bin/sh -c set -ex;     adduser -D "$ZOO_USE…   4.83kB              
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV ZOO_USER=zookeeper ZO…   0B                  
<missing>           3 weeks ago         /bin/sh -c apk add --no-cache     bash     s…   4.12MB              
<missing>           3 weeks ago         /bin/sh -c set -x  && apk add --no-cache   o…   79.5MB              
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_ALPINE_VERSION=8…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_VERSION=8u191       0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV PATH=/usr/local/sbin:…   0B                  
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV JAVA_HOME=/usr/lib/jv…   0B                  
<missing>           3 weeks ago         /bin/sh -c {   echo '#!/bin/sh';   echo 'set…   87B                 
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV LANG=C.UTF-8             0B                  
<missing>           4 weeks ago         /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B                  
<missing>           4 weeks ago         /bin/sh -c #(nop) ADD file:2a1fc9351afe35698…   5.53MB    

So the Docker image declares a VOLUME, but when the container was started nothing was mounted over it. Docker therefore created a mount point in the default location, /home/docker/volumes, and the container wrote large numbers of small files into it, consuming the filesystem's inodes.
The official Dockerfile documentation says:

The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can’t be guaranteed to be available on all hosts. For this reason, you can’t mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.

In other words, when starting this image we must override that volume mount point; otherwise a default mount point is generated under /home/docker/volumes. In Kubernetes terms, we must explicitly declare a volume mounted at that path in the pod spec. It can be an emptyDir, a PV, whatever; as long as the path is overridden, nothing is written to the default location.
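For the zookeeper pod above, the override could look like the following sketch (the volume names and the choice of emptyDir are illustrative; a PV would work the same way):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zk
spec:
  containers:
  - name: zk
    image: zookper
    volumeMounts:            # cover every VOLUME path declared in the image
    - name: data
      mountPath: /data
    - name: datalog
      mountPath: /datalog
    - name: logs
      mountPath: /logs
  volumes:
  - name: data
    emptyDir: {}
  - name: datalog
    emptyDir: {}
  - name: logs
    emptyDir: {}
```

With emptyDir the data still lands on local disk, but under the kubelet's pod directory, where Kubernetes tracks it and cleans it up when the pod is deleted.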

If the mount point is not overridden and the default is used, the anonymous volume's lifecycle follows the pod's: deleting the pod deletes the volume, including its data under /home/docker/volumes. Now consider this scenario: if Docker and the kubelet share the same disk partition (the default), a pod that writes heavily to an anonymous volume will trigger nodefs/inode pressure, causing the kubelet to evict the pod. After eviction the node becomes Ready again, and after a while the scheduler may place the pod back on the same node, where it is evicted again, and so on in a loop. Kubernetes cannot tell which data caused the nodefs pressure, so it cannot clean up intelligently; it can only evict, and eviction does not address the root cause. In this situation an operator has to step in. The bottom line: always mount such VOLUME paths onto an explicitly declared volume in the pod YAML.


Reposted from www.cnblogs.com/gaorong/p/10472009.html