Kubernetes CKA Certification Ops Engineer Notes - Building a Kubernetes Cluster

1. Two Ways to Deploy K8s in Production

  • kubeadm
    kubeadm is a tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
    Docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
  • Binary packages
    Recommended: download the official release binaries and deploy each component manually to assemble the Kubernetes cluster.
    Download: https://github.com/kubernetes/kubernetes/releases

2. Recommended Server Hardware

(Image: recommended server hardware sizing table.)

3. Quickly Deploying a K8s Cluster with kubeadm

# Create a Master node
kubeadm init
# Join a Node to the current cluster
kubeadm join <Master IP and port>

3.1 Installation Requirements

Before starting, the machines that will form the Kubernetes cluster must meet the following requirements (a quick check script follows the list):

  • One or more machines running CentOS 7.x x86_64 (7.5-7.8 recommended)
  • Hardware: 2 GB+ RAM, 2+ CPUs, 30 GB+ disk; 2C/4G recommended
  • Full network connectivity between all machines in the cluster
  • Internet access for pulling images; if a machine cannot reach the internet, pull the images on one that can and move them over with docker pull/save/load
  • Swap disabled; otherwise memory overflow spills to disk, slowing the node and degrading the cluster
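
A quick way to sanity-check a machine against this list (a minimal sketch using standard tools):

# quick preflight check (run on each machine)
nproc                                     # CPU count: want >= 2
free -m | awk '/^Mem:/{print $2 " MB"}'   # RAM: want >= 2048
df -h /                                   # root disk: want >= 30G
swapon --show                             # should print nothing once swap is disabled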

3.2 Preparing the Environment

Kubernetes architecture diagram

Role         IP
k8s-master   10.0.0.61
k8s-node1    10.0.0.62
k8s-node2    10.0.0.63

Disable the firewall (run on all nodes):
[root@k8s-cka-master01 ~]# systemctl stop firewalld
[root@k8s-cka-master01 ~]# systemctl disable firewalld

Disable SELinux (run on all nodes):
[root@k8s-cka-master01 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config     # permanent (this loose pattern also rewrites 'enforcing' inside comments, as the file below shows)
[root@k8s-cka-master01 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     disabled - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of disabled.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

[root@k8s-cka-master01 ~]# setenforce 0   # temporary
setenforce: SELinux is disabled   # shown here because SELinux is already disabled; on a system where it is still enabled the command just succeeds silently

Disable swap (run on all nodes):
[root@k8s-cka-master01 ~]# swapoff -a   # temporary
[root@k8s-cka-master01 ~]# vim /etc/fstab  # permanent: comment out the swap entry
[root@k8s-cka-master01 ~]# cat /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Sun Jul 26 16:55:01 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos_mobanji01-root /                       xfs     defaults        0 0
UUID=95be88d4-4459-4996-a8cf-6d2fa2aa6344 /boot                   xfs     defaults        0 0
#/dev/mapper/centos_mobanji01-swap swap                    swap    defaults        0 0
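
Instead of editing /etc/fstab by hand, the swap entry can be commented out with one sed (a sketch; it comments every line mentioning swap, so check the result afterwards):

[root@k8s-cka-master01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab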

Set the hostname (run on all nodes, each with its own name):
[root@k8s-cka-master01 ~]# hostnamectl set-hostname k8s-master
[root@k8s-cka-master01 ~]# bash
bash
[root@k8s-master ~]# 

Add hosts entries on the master (master node only):
[root@k8s-master ~]# cat >> /etc/hosts << EOF   
10.0.0.61 k8s-master
10.0.0.62 k8s-node1
10.0.0.63 k8s-node2
EOF
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.61 k8s-master
10.0.0.62 k8s-node1
10.0.0.63 k8s-node2

Pass bridged IPv4 traffic to iptables chains (run on all nodes):
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]# sysctl --system        # apply
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
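
If the two net.bridge keys do not show up in the output, the br_netfilter kernel module is probably not loaded; loading it and re-running sysctl is the usual fix (a common extra step, not part of the original transcript):

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # reload on boot
[root@k8s-master ~]# sysctl --system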

Time synchronization (run on all nodes):
[root@k8s-master ~]# yum install ntpdate -y
Loaded plugins: fastestmirror
Determining fastest mirrors
....
Installed:
  ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2  
                                                               
Complete!
[root@k8s-master ~]# ntpdate time.windows.com
21 Nov 18:13:28 ntpdate[3724]: adjust time server 52.231.114.183 offset 0.007273 sec

3.3 Installing Docker/kubeadm/kubelet (all nodes)

Kubernetes' default CRI (container runtime) here is Docker, so install Docker first.

3.3.1 Install Docker

[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-master ~]# yum -y install docker-ce
[root@k8s-master ~]# systemctl enable docker && systemctl start docker

Configure a registry mirror to speed up image downloads:

[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# docker info

3.3.2 Add the Alibaba Cloud YUM Repository

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.3.3 Install kubeadm, kubelet, and kubectl

Because releases change frequently, pin an explicit version:

[root@k8s-master ~]# yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
[root@k8s-master ~]# systemctl enable kubelet
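
To see which versions the repository offers before pinning (standard yum usage):

[root@k8s-master ~]# yum list kubeadm --showduplicates | sort -r | head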

3.4 Deploying the Kubernetes Master

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

Run on 10.0.0.61 (the master).

[root@k8s-master ~]# kubeadm init \
>  --apiserver-advertise-address=10.0.0.61 \
>  --image-repository registry.aliyuncs.com/google_containers \
>  --kubernetes-version v1.19.0 \
>  --service-cidr=10.96.0.0/12 \
>  --pod-network-cidr=10.244.0.0/16 \
>  --ignore-preflight-errors=all
  • --apiserver-advertise-address: the address the cluster advertises
  • --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so use the Alibaba Cloud mirror instead
  • --kubernetes-version: the K8s version, matching what was installed above
  • --service-cidr: the cluster-internal virtual network, the unified entry point for reaching Pods
  • --pod-network-cidr: the Pod network; must match the CNI component's YAML deployed below

Or bootstrap with a configuration file:

[root@k8s-master ~]# vi kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
imageRepository: registry.aliyuncs.com/google_containers 
networking:
  podSubnet: 10.244.0.0/16 
  serviceSubnet: 10.96.0.0/12 

[root@k8s-master ~]# kubeadm init --config kubeadm.conf --ignore-preflight-errors=all  


[root@k8s-master ~]# docker image ls
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
10.0.0.65/library/tomcat                                          <none>              b9479c1bc4c6        12 months ago       459MB
10.0.0.65/library/nginx                                           <none>              c345bbeb41b9        12 months ago       397MB
calico/node                                                       v3.16.5             c1fa37765208        12 months ago       163MB
calico/pod2daemon-flexvol                                         v3.16.5             178cfd5d2400        12 months ago       21.9MB
calico/cni                                                        v3.16.5             9165569ec236        12 months ago       133MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.19.0             bc9c328f379c        15 months ago       118MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.19.0             09d665d529d0        15 months ago       111MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.19.0             1b74e93ece2f        15 months ago       119MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.19.0             cbdc8369d8b1        15 months ago       45.7MB
registry.aliyuncs.com/google_containers/etcd                      3.4.9-1             d4ca8726196c        17 months ago       253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        17 months ago       45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        21 months ago       683kB

kubeadm init workflow

With kubeadm, the control-plane components run as containers (from images); the kubelet is not containerized and is managed with systemctl.
With a binary install, every component runs as a daemon managed with systemctl.

kubeadm init workflow:
1. Preflight checks, e.g. whether swap is off and the machine meets the requirements
2. Pull the images: kubeadm config images pull
3. Generate certificates, saved under /etc/kubernetes/pki (for k8s and etcd)
4. [kubeconfig] Generate kubeconfig files so the other components can connect to the apiserver
5. [kubelet-start] Generate the kubelet config file and start the kubelet
6. [control-plane] Start the master node components
7. Store some config files in ConfigMaps for other nodes to pull when they bootstrap
8. [mark-control-plane] Taint the master node so Pods are not scheduled onto it
9. [bootstrap-token] Enable automatic certificate issuance for kubelets
10. [addons] Install the CoreDNS and kube-proxy add-ons
Finally, copy the kubeconfig used by kubectl to its default path:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

It also prints the command other nodes use to join the master:
kubeadm join 10.0.0.61:6443 --token 3abvx6.5em4yxz9fwzsroyi \
    --discovery-token-ca-cert-hash sha256:a2ce75ca1be016a9a679e2c5c4e9e4f8c97ed148dad6936edda6947a7e33aa97

Problems encountered

# Problem: controller-manager and scheduler report Unhealthy
[root@k8s-master kubernetes]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}

# Fix:
# comment out the - --port=0 flag in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests
[root@k8s-master kubernetes]# cd
[root@k8s-master ~]# cd /etc/kubernetes/manifests/
[root@k8s-master manifests]# ll
total 16
-rw------- 1 root root 2109 Nov 21 23:18 etcd.yaml
-rw------- 1 root root 3166 Nov 21 23:18 kube-apiserver.yaml
-rw------- 1 root root 2858 Nov 21 23:18 kube-controller-manager.yaml
-rw------- 1 root root 1413 Nov 21 23:18 kube-scheduler.yaml
[root@k8s-master manifests]# vim kube-controller-manager.yaml 
[root@k8s-master ~]# cat /etc/kubernetes/manifests/kube-controller-manager.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    #- --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
...
[root@k8s-master manifests]# vim kube-scheduler.yaml 
[root@k8s-master ~]# cat /etc/kubernetes/manifests/kube-scheduler.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    #- --port=0
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
    imagePullPolicy: IfNotPresent
    ...
# Problem solved
[root@k8s-master manifests]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

3.5 Joining Kubernetes Nodes

Run on 10.0.0.62/63 (the nodes).
To add new nodes to the cluster, run the kubeadm join command that kubeadm init printed:

[root@k8s-node1 ~]# kubeadm join 10.0.0.61:6443 --token 3abvx6.5em4yxz9fwzsroyi \
     --discovery-token-ca-cert-hash sha256:a2ce75ca1be016a9a679e2c5c4e9e4f8c97ed148dad6936edda6947a7e33aa97

Problem encountered:

# Problem: after joining a node with kubeadm join, running kubectl get nodes on it fails:
[root@k8s-node2 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Fix: copy the master's kubeconfig (/etc/kubernetes/admin.conf or ~/.kube/config) to the node and move it into place
[root@k8s-master ~]# scp .kube/config root@10.0.0.62:~
The authenticity of host '10.0.0.62 (10.0.0.62)' can't be established.
ECDSA key fingerprint is SHA256:OWuZy2NmY2roM1RqIamUATXYA+wqXai6nqsA1LesvjU.
ECDSA key fingerprint is MD5:04:af:eb:98:a5:8d:e0:a4:b4:16:29:80:8e:f9:e6:fc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.62' (ECDSA) to the list of known hosts.
root@10.0.0.62's password: 
Permission denied, please try again.
root@10.0.0.62's password: 
config                                                    100% 5565     1.3MB/s   00:00
[root@k8s-node1 ~]# mv config .kube/
or
[root@k8s-node2 ~]# mv admin.conf /etc/kubernetes/
[root@k8s-node2 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   11h   v1.19.0
k8s-node1    Ready    <none>   11h   v1.19.0
k8s-node2    Ready    <none>   11h   v1.19.0
[root@k8s-node2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 
# Problem solved

The default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created:

$ kubeadm token create
$ kubeadm token list
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

$ kubeadm join 192.168.31.61:6443 --token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:63bca849e0e01691ae14eab449570284f0c3ddeea590f8da988c07fe2729e924

Or generate the complete join command in one step: kubeadm token create --print-join-command

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/

3.6 Deploying the Container Network (CNI)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
Note: deploy only one of the following; Calico is recommended.
Calico is a pure layer-3 data center networking solution with broad platform support, including Kubernetes and OpenStack.
On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that forwards traffic, and each vRouter propagates the routes of its local workloads across the Calico network via BGP.
Calico also implements Kubernetes network policy, providing ACL functionality.
https://docs.projectcalico.org/getting-started/kubernetes/quickstart

$ wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network definition (CALICO_IPV4POOL_CIDR) so it matches the --pod-network-cidr given to kubeadm init (10.244.0.0/16); a non-interactive way to make the edit is sketched below.
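
A sed sketch for that edit (the commented-out block and its indentation vary between calico.yaml releases, so diff the file before applying):

# uncomment the CALICO_IPV4POOL_CIDR block and set the subnet
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml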
After the edit, apply the manifest:

[root@k8s-master ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-z6npb   1/1     Running   0          8m18s
calico-node-4pwdc                         1/1     Running   0          8m18s
calico-node-9r6zd                         1/1     Running   0          8m18s
calico-node-vqzdj                         1/1     Running   0          8m18s
coredns-6d56c8448f-gcgrh                  1/1     Running   0          26m
coredns-6d56c8448f-tbsmv                  1/1     Running   0          26m
etcd-k8s-master                           1/1     Running   0          26m
kube-apiserver-k8s-master                 1/1     Running   0          26m
kube-controller-manager-k8s-master        1/1     Running   0          26m
kube-proxy-5qpgc                          1/1     Running   0          22m
kube-proxy-q2xfq                          1/1     Running   0          22m
kube-proxy-tvzpd                          1/1     Running   0          26m
kube-scheduler-k8s-master                 1/1     Running   0          26m

3.7 Testing the Kubernetes Cluster

  • Verify Pods run
  • Verify Pod network connectivity
  • Verify DNS resolution

Create a Pod in the cluster and verify it runs:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
[root@k8s-master ~]# kubectl create deployment web --image=nginx
deployment.apps/web created
[root@k8s-master ~]# kubectl get pods
NAME                  READY   STATUS              RESTARTS   AGE
web-96d5df5c8-ghb6g   0/1     ContainerCreating   0          17s
[root@k8s-master ~]# kubectl expose deployment web --port=80 --target-port=80 --type=NodePort
service/web exposed
[root@k8s-master ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-ghb6g   1/1     Running   0          2m38s
[root@k8s-master ~]# kubectl get pods,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-ghb6g   1/1     Running   0          2m48s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        12h
service/web          NodePort    10.96.132.243   <none>        80:31340/TCP   16s

# View the pod's logs
[root@k8s-master ~]# kubectl logs web-96d5df5c8-ghb6g -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/11/22 03:30:28 [notice] 1#1: using the "epoll" event method
2021/11/22 03:30:28 [notice] 1#1: nginx/1.21.4
2021/11/22 03:30:28 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2021/11/22 03:30:28 [notice] 1#1: OS: Linux 3.10.0-1160.45.1.el7.x86_64
2021/11/22 03:30:28 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/11/22 03:30:28 [notice] 1#1: start worker processes
2021/11/22 03:30:28 [notice] 1#1: start worker process 31
2021/11/22 03:30:28 [notice] 1#1: start worker process 32
10.0.0.62 - - [22/Nov/2021:03:34:04 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
2021/11/22 03:34:04 [error] 31#31: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.0.0.62, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "10.0.0.62:31340", referrer: "http://10.0.0.62:31340/"
10.0.0.62 - - [22/Nov/2021:03:34:04 +0000] "GET /favicon.ico HTTP/1.1" 404 153 "http://10.0.0.62:31340/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"




10.0.0.62 - - [22/Nov/2021:03:38:29 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
10.0.0.62 - - [22/Nov/2021:03:38:30 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
10.244.169.128 - - [22/Nov/2021:03:40:10 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
10.244.169.128 - - [22/Nov/2021:03:40:10 +0000] "GET /favicon.ico HTTP/1.1" 404 153 "http://10.0.0.63:31340/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
2021/11/22 03:40:10 [error] 31#31: *3 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.244.169.128, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "10.0.0.63:31340", referrer: "http://10.0.0.63:31340/"
10.244.169.128 - - [22/Nov/2021:03:40:14 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"
10.244.169.128 - - [22/Nov/2021:03:40:15 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0" "-"

# Verify Pod network connectivity
[root@k8s-master ~]# kubectl get pods -o wide
NAME                  READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
web-96d5df5c8-ghb6g   1/1     Running   0          11m   10.244.36.66   k8s-node1   <none>           <none>
[root@k8s-node1 kubernetes]# ping 10.244.36.66
PING 10.244.36.66 (10.244.36.66) 56(84) bytes of data.
64 bytes from 10.244.36.66: icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from 10.244.36.66: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 10.244.36.66: icmp_seq=3 ttl=64 time=0.145 ms
^C
--- 10.244.36.66 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.063/0.098/0.145/0.035 ms

# Verify DNS resolution
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        12h
web          NodePort    10.96.132.243   <none>        80:31340/TCP   11m
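
Listing the Services alone does not prove DNS works; a quick in-cluster check (a sketch using the commonly recommended busybox:1.28 image, whose nslookup is known to behave; the web Service name should resolve via CoreDNS):

[root@k8s-master ~]# kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup web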

Access URL: http://NodeIP:Port

3.8 Deploying the Dashboard

$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

By default the Dashboard is reachable only from inside the cluster. Change its Service to type NodePort to expose it externally:

$ vi recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort
...
$ kubectl apply -f recommended.yaml
$ kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-gl8nr   1/1     Running   0          13m
kubernetes-dashboard-7f99b75bf4-89cds        1/1     Running   0          13m
[root@k8s-master ~]# vim kubernertes-dashboard.yaml 
[root@k8s-master ~]# kubectl apply -f kubernertes-dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@k8s-master ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-jxb4b   1/1     Running   0          3m28s
kubernetes-dashboard-5dbf55bd9d-zpr7t        1/1     Running   0          3m28s
[root@k8s-master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.108.245.81   <none>        8000/TCP        5h47m
kubernetes-dashboard        NodePort    10.99.22.58     <none>        443:30001/TCP   5h47m

Access URL: https://NodeIP:30001

Log in from the client over HTTPS.

Create a service account and bind it to the default cluster-admin cluster role:

# Create the user
$ kubectl create serviceaccount dashboard-admin -n kube-system
# Grant permissions
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# Retrieve the user's token
$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-wnt6h
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: f2277e2c-4c4d-401c-a0fa-f48110f6e259

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImVJVEZNbVB3Y0xXMWVWMU1xYk9RdUVPdFhvM1ByTUdZY2xjS0I0anhjMlkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4td250NmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjIyNzdlMmMtNGM0ZC00MDFjLWEwZmEtZjQ4MTEwZjZlMjU5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.hcxJhsSGcgwpoQnRqeWIZLFRZM4kp0itTPYJjFuSpUOLObiLfBcQqXq4zCNAAYI3axpilv0kQA5A-l_clqpTmEKQzSa2wwx5KA-V-KNHVJ9animI691INL_5Qe9O7qyF6QybnOeVXm6K-VaC2a-njAigF_0VcSweX3VdVg9Qr0ck_RsbyerP-Hhxuo-6uep_V0AfeD4ex3OE8DTlZtktAjvvalm0hNPMq-cWVdPENe-ml7Gk0NC8iyNGNbvkwk4-z3vYj2C_4Vx3JxXTAiPRqneg_NSQKsR6H7lvZ6bPvG1OW1CeZ52JiFVErdSowRh32G5sIoF7dzQFBLYiAk6mYw


Log in to the Dashboard with the token printed above.
Pod logs can be viewed from the Dashboard as well.
Some problems encountered during setup
1. calico-node pod in CrashLoopBackOff

  • The manifest is misconfigured
  • The host NIC has an unusual name; pin the interface autodetection:
    - name: IP_AUTODETECTION_METHOD
      value: "interface=eth0"

2. Image pulls are slow or time out (ImagePullBackOff)
Pull the images manually with docker pull on each node.

3. kubeadm init/join fails
kubeadm reset clears the machine's kubeadm state so you can retry.

4. Troubleshooting a failing calico pod:
kubectl describe pod calico-node-b2lkr -n kube-system
kubectl logs calico-node-b2lkr -n kube-system

5. kubectl get cs health checks fail
vi /etc/kubernetes/manifests/kube-scheduler.yaml
# comment out the --port=0 flag
kubectl delete pod kube-scheduler-k8s-master -n kube-system

4. The K8s CNI Network Model

How can containers on two Docker hosts reach each other?
The obstacles today:
1. The two Docker hosts' networks are independent of each other.
2. Container IPs across multiple Docker hosts need unified management.
3. Each Docker host's subnet is a Docker-internal network.
For container 1 to reach container 2, traffic must traverse the host network: container1 -> docker1 <-> docker2 -> container2.
Technologies that implement this kind of cross-host container communication include Flannel, Calico, and others.

Criteria for choosing a CNI component:

  • Cluster size
  • Feature set
  • Performance

Q:
1. How do we manage all the K8s node subnets centrally so every container gets a distinct IP?
2. How do we know which Docker host to forward to?
3. How is the forwarding done (from a container on one Docker host to a container on another)?

A:
1. Assign each Docker host a unique subnet.
2. Record which subnet belongs to which Docker host.
3. Use iptables, or treat the host as a router and program its routing table (see the sketch below).
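
A host-route sketch of answer 3, assuming host A (10.0.0.62) owns Pod subnet 10.244.1.0/24 and host B (10.0.0.63) owns 10.244.2.0/24 (the subnets here are illustrative):

# on host A: reach host B's Pod subnet via host B
ip route add 10.244.2.0/24 via 10.0.0.63
# on host B: the mirror-image route
ip route add 10.244.1.0/24 via 10.0.0.62
# let each kernel forward packets between interfaces
sysctl -w net.ipv4.ip_forward=1

This is essentially what Calico's BGP mode or Flannel's host-gw backend programs automatically.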

K8s assumes a flat network.

That is, any network component deployed must satisfy the following requirements:

  • One IP per Pod
  • All Pods can communicate directly with any other Pod
  • All nodes can communicate directly with all Pods
  • The IP a Pod sees for itself is the same IP that other Pods and nodes use to reach it

Mainstream network components include Flannel, Calico, and others.

Calico

Calico is a pure layer-3 data center networking solution with broad platform support, including Kubernetes and OpenStack.

On each compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that forwards traffic, and each vRouter propagates the routes of its local workloads across the Calico network via BGP.

Calico also implements Kubernetes network policy, providing ACL functionality.

Deploying Calico:

wget https://docs.projectcalico.org/manifests/calico.yaml

After downloading, edit the Pod network definition (CALICO_IPV4POOL_CIDR) to match what was passed to kubeadm init

kubectl apply -f calico.yaml
kubectl get pods -n kube-system

Flannel

Flannel is a network component maintained by CoreOS. It gives each Pod a globally unique IP and uses etcd to store the mapping between Pod subnets and node IPs. The flanneld daemon runs on every host, maintaining the etcd records and routing packets.

Deploying Flannel:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.11.0-amd64#g" kube-flannel.yml
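
After the image substitution, apply the manifest the same way as with Calico:

kubectl apply -f kube-flannel.yml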

5. The kubectl Command-Line Tool

kubectl connects to the K8s cluster using a kubeconfig authentication file; kubeconfig files are generated and modified with the kubectl config subcommands.

The kubeconfig file that authenticates to K8s

apiVersion: v1
kind: Config
# clusters
clusters:
- cluster:
    certificate-authority-data:
    server: https://192.168.31.61:6443
  name: kubernetes

# contexts
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes

# current context
current-context: kubernetes-admin@kubernetes

# client credentials
users:
- name: kubernetes-admin
  user:
    client-certificate-data:
    client-key-data:
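
A few standard kubectl config subcommands for working with this file:

kubectl config view           # show the merged kubeconfig (credentials redacted)
kubectl config get-contexts   # list available contexts
kubectl config use-context kubernetes-admin@kubernetes   # switch the current context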

Official documentation: https://kubernetes.io/zh/docs/reference/kubectl/overview/

Type                    Command         Description
Basic commands          create          Create a resource from a file or from stdin
Basic commands          expose          Create a Service for a Deployment or Pod
Basic commands          run             Run a particular image in the cluster
Basic commands          set             Set specific features on objects
Basic commands          explain         Show documentation for a resource
Basic commands          get             Display one or many resources
Basic commands          edit            Edit a resource in the system editor
Basic commands          delete          Delete resources by file, stdin, resource name, or label selector
Deployment commands     rollout         Manage the rollout of Deployment/DaemonSet resources (status, history, rollback, etc.)
Deployment commands     rolling-update  (deprecated) Rolling update of ReplicationControllers
Deployment commands     scale           Scale the Pod count of a Deployment, ReplicaSet, RC, or Job
Deployment commands     autoscale       Configure autoscaling for a Deploy/RS/RC (requires metrics-server and HPA)
Cluster management      certificate     Modify certificate resources
Cluster management      cluster-info    Display cluster information
Cluster management      top             Show resource usage (requires metrics-server)
Cluster management      cordon          Mark a node as unschedulable
Cluster management      uncordon        Mark a node as schedulable
Cluster management      drain           Evict applications from a node to prepare it for maintenance
Cluster management      taint           Update the taints on a node
Troubleshooting/debug   describe        Show detailed information about a resource
Troubleshooting/debug   logs            Show container logs in a Pod; use -c to pick a container if the Pod has several
Troubleshooting/debug   attach          Attach to a container inside a Pod
Troubleshooting/debug   exec            Execute a command inside a container
Troubleshooting/debug   port-forward    Create a local port mapping to a Pod
Troubleshooting/debug   proxy           Run a proxy to the Kubernetes API server
Troubleshooting/debug   cp              Copy files or directories into or out of containers
Advanced commands       apply           Create/update resources from a file or stdin
Advanced commands       patch           Update fields of a resource using a patch
Advanced commands       replace         Replace a resource from a file or stdin
Advanced commands       convert         Convert object definitions between API versions
Settings commands       label           Set or update labels on a resource
Settings commands       annotate        Set or update annotations on a resource
Settings commands       completion      Shell completion for kubectl: source <(kubectl completion bash) (requires bash-completion)
Other commands          api-resources   List all resource types
Other commands          api-versions    Print the supported API versions
Other commands          config          Modify kubeconfig files (API access settings such as credentials)
Other commands          help            Help for any command
Other commands          version         Show the kubectl and K8s server versions

Resources that kubectl create can create
Available Commands:

clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
cronjob Create a cronjob with the specified name.
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name
[root@k8s-master ~]# yum install bash-completion -y
[root@k8s-master ~]# source <(kubectl completion bash)
If completion still does not work, run bash to start a fresh shell, then source the completion script again.
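
To make completion survive new logins, append it to the shell profile (standard practice from the kubectl docs):

[root@k8s-master ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc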


[root@k8s-master ~]# kubectl version 
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}


A quick first exercise: deploy a website
Deploy an image with a Deployment controller:
kubectl create deployment web --image=lizhenliang/java-demo
kubectl get deploy,pods
Expose the Pods with a Service:
kubectl expose deployment web --port=80 --type=NodePort --target-port=8080 --name=web
kubectl get service
Access the application:
http://NodeIP:Port # the port is assigned randomly; look it up with kubectl get svc

[root@k8s-master ~]# kubectl create deployment my-dep --image=lizhenliang/demo --replicas=3
deployment.apps/my-dep created
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
my-dep-99596b7c8-8tt6f   0/1     ContainerCreating   0          26s
my-dep-99596b7c8-9xgn4   0/1     ImagePullBackOff    0          26s
my-dep-99596b7c8-lpbzt   0/1     ContainerCreating   0          26s
web-96d5df5c8-ghb6g      1/1     Running             0          2d18h
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
my-dep-99596b7c8-8tt6f   0/1     ImagePullBackOff    0          64s
my-dep-99596b7c8-9xgn4   0/1     ImagePullBackOff    0          64s
my-dep-99596b7c8-lpbzt   0/1     ContainerCreating   0          64s
web-96d5df5c8-ghb6g      1/1     Running             0          2d18h
[root@k8s-master ~]# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   0/3     3            0           96s
web      1/1     1            1           2d18h
[root@k8s-master ~]# kubectl delete deployments my-dep
deployment.apps "my-dep" deleted
[root@k8s-master ~]# kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           2d18h
[root@k8s-master ~]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-96d5df5c8-ghb6g   1/1     Running   0          2d18h

[root@k8s-master ~]# kubectl create deployment my-dep --image=lizhenliang/java-demo --replicas=3
deployment.apps/my-dep created
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-5f8dfc8c78-dvxp8   0/1     ContainerCreating   0          7s
my-dep-5f8dfc8c78-f4ln4   0/1     ContainerCreating   0          7s
my-dep-5f8dfc8c78-j9fqp   0/1     ContainerCreating   0          7s
web-96d5df5c8-ghb6g       1/1     Running             0          2d18h
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-5f8dfc8c78-dvxp8   0/1     ContainerCreating   0          48s
my-dep-5f8dfc8c78-f4ln4   0/1     ContainerCreating   0          48s
my-dep-5f8dfc8c78-j9fqp   0/1     ContainerCreating   0          48s
web-96d5df5c8-ghb6g       1/1     Running             0          2d18h
[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5f8dfc8c78-dvxp8   1/1     Running   0          69s
my-dep-5f8dfc8c78-f4ln4   1/1     Running   0          69s
my-dep-5f8dfc8c78-j9fqp   1/1     Running   0          69s
web-96d5df5c8-ghb6g       1/1     Running   0          2d18h

[root@k8s-master ~]# kubectl expose deployment my-dep --port=80 --target-port=8080 --type=NodePort
service/my-dep exposed
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3d6h
my-dep       NodePort    10.111.199.51   <none>        80:31734/TCP   18s
web          NodePort    10.96.132.243   <none>        80:31340/TCP   2d18h
[root@k8s-master ~]# kubectl get ep
NAME         ENDPOINTS                                                 AGE
kubernetes   10.0.0.61:6443                                            3d6h
my-dep       10.244.169.134:8080,10.244.36.69:8080,10.244.36.70:8080   40s
web          10.244.36.66:80                                           2d18h


Resource Concepts

  • Pod
    • The smallest deployable unit
    • A group of one or more containers
    • Containers in a Pod share a network namespace
    • Pods are ephemeral
  • Controllers
    • Deployment: stateless application deployment
    • StatefulSet: stateful application deployment
    • DaemonSet: ensures every Node runs a copy of a Pod
    • Job: one-off tasks
    • CronJob: scheduled tasks

Higher-level objects that deploy and manage Pods

  • Service
    • Keeps track of Pods so they are not lost as they are replaced
    • Defines an access policy for a set of Pods
  • Label: labels attached to a resource, used to associate, query, and filter objects
    Label management:
    kubectl get pods --show-labels # show resource labels
    kubectl get pods -l app=my-dep # list resources matching a label
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
my-dep-5f8dfc8c78-dvxp8   1/1     Running   0          13m     app=my-dep,pod-template-hash=5f8dfc8c78
my-dep-5f8dfc8c78-f4ln4   1/1     Running   0          13m     app=my-dep,pod-template-hash=5f8dfc8c78
my-dep-5f8dfc8c78-j9fqp   1/1     Running   0          13m     app=my-dep,pod-template-hash=5f8dfc8c78
web-96d5df5c8-ghb6g       1/1     Running   0          2d18h   app=web,pod-template-hash=96d5df5c8
[root@k8s-master ~]# kubectl get pods -l app=my-dep
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5f8dfc8c78-dvxp8   1/1     Running   0          14m
my-dep-5f8dfc8c78-f4ln4   1/1     Running   0          14m
my-dep-5f8dfc8c78-j9fqp   1/1     Running   0          14m
  • Namespaces: logically isolate objects
    Namespaces were introduced to:
    • Isolate resources
    • Apply access control per namespace

The -n flag selects the namespace.

[root@k8s-master ~]# kubectl get namespaces 
NAME                   STATUS   AGE
default                Active   3d7h
kube-node-lease        Active   3d7h
kube-public            Active   3d7h
kube-system            Active   3d7h
kubernetes-dashboard   Active   2d14h
[root@k8s-master ~]# kubectl create namespace test
namespace/test created
[root@k8s-master ~]# kubectl get namespaces 
NAME                   STATUS   AGE
default                Active   3d7h
kube-node-lease        Active   3d7h
kube-public            Active   3d7h
kube-system            Active   3d7h
kubernetes-dashboard   Active   2d15h
test                   Active   8s
[root@k8s-master ~]# kubectl create deployment my-dep --image=lizhenliang/java-demo --replicas=3 -n test
deployment.apps/my-dep created
[root@k8s-master ~]# kubectl get pods -n test
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-5f8dfc8c78-58sdk   0/1     ContainerCreating   0          21s
my-dep-5f8dfc8c78-77cld   0/1     ContainerCreating   0          21s
my-dep-5f8dfc8c78-965w7   0/1     ContainerCreating   0          21s
[root@k8s-master ~]# kubectl get pods -n default 
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5f8dfc8c78-dvxp8   1/1     Running   0          62m
my-dep-5f8dfc8c78-f4ln4   1/1     Running   0          62m
my-dep-5f8dfc8c78-j9fqp   1/1     Running   0          62m
web-96d5df5c8-ghb6g       1/1     Running   0          2d19h
[root@k8s-master ~]# kubectl get pods,deployment -n test
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-dep-5f8dfc8c78-58sdk   1/1     Running   0          69s
pod/my-dep-5f8dfc8c78-77cld   1/1     Running   0          69s
pod/my-dep-5f8dfc8c78-965w7   1/1     Running   0          69s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-dep   3/3     3            3           69s
[root@k8s-master ~]# kubectl edit svc my-dep
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-11-24T21:47:28Z"
  labels:
    app: my-dep
  name: my-dep
  namespace: default
  resourceVersion: "207081"
  selfLink: /api/v1/namespaces/default/services/my-dep
  uid: 34ae1f85-94a5-4c67-bfb4-9f73e2277f55
spec:
  clusterIP: 10.111.199.51
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31734
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-dep
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
  

Homework:

1. Build a K8s cluster with kubeadm.

2. Create a namespace, then create a pod in it:

  • Namespace name: cka
  • Pod name: pod-01
  • Image: nginx

3. Create a deployment and expose it with a Service:

  • Name: aliang-666
  • Image: nginx

4. List the pods with a given label in a namespace:

  • Namespace: kube-system
  • Label: k8s-app=kube-dns
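
One possible solution sketch for tasks 2-4 (plain kubectl commands; the names come straight from the tasks above):

kubectl create namespace cka
kubectl run pod-01 --image=nginx -n cka
kubectl create deployment aliang-666 --image=nginx
kubectl expose deployment aliang-666 --port=80 --type=NodePort
kubectl get pods -n kube-system -l k8s-app=kube-dns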

Reposted from blog.csdn.net/dws123654/article/details/121278586