A Highly Available Multi-Web-Service Cluster Built on K8S


Part 0. Project overview and supporting documentation

This project builds a highly available cluster of multiple web services on Kubernetes (with Docker providing the underlying container runtime). Topics covered: configuring static IP addresses, passwordless SSH, setting up Docker, setting up Kubernetes, deploying an NFS service on Linux, deploying Ansible on Linux, deploying GitLab on Linux, deploying Harbor on Linux, deploying Jenkins, HPA, Kubernetes probes, Ingress load balancing, and deploying Prometheus.
Click here for the supporting documentation and troubleshooting notes.

Part 1. Preparation

0. IP allocation

(screenshot: IP address assignment for each machine)

1. Set the hostname (see the machine-name table; do this on every machine)

hostnamectl set-hostname [hostname]
su

2. Stop NetworkManager (on every machine)

service NetworkManager stop
systemctl disable NetworkManager

3. Stop the firewall (on every machine)

service firewalld stop
systemctl disable firewalld

4. Disable SELinux (on every machine)

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

5. Disable swap (on every machine)

swapoff -a
#comment out the swap mount: add a # at the start of the swap line
vim /etc/fstab


6. Load the required kernel modules (on every machine)

modprobe br_netfilter

echo "modprobe br_netfilter" >> /etc/profile
 
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
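
To confirm the module is loaded and the sysctl values took effect, a quick check (a minimal sketch; the expected values match the file above):

lsmod | grep br_netfilter                  # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # expect ... = 1
sysctl net.ipv4.ip_forward                 # expect ... = 1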

7. Configure time synchronization (on every machine)

#install the ntpdate command
yum install ntpdate -y
#sync with an internet time source
ntpdate cn.pool.ntp.org
#resync every hour via cron
crontab -e
* */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
service crond restart

Configure the EPEL repo (on every machine)

yum install epel-release -y

8. Configure a static IP address (on every machine; note that each machine's IP differs!)

vim /etc/sysconfig/network-scripts/ifcfg-ens33

(screenshot: example ifcfg-ens33 configuration)
After editing, restart the network interface:

ifdown ens33
ifup ens33
service network restart

9. Add /etc/hosts entries

vim /etc/hosts

(screenshot: /etc/hosts entries mapping each IP to its hostname)

10. Set up passwordless SSH (every machine needs a passwordless channel to every other machine)

Generate a key pair

ssh-keygen
#press Enter at every prompt

Copy the public key (master copies to node-1, node-2 and other; the other three machines each copy theirs to every machine except themselves)

ssh-copy-id [target machine]
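
If you would rather not run ssh-copy-id by hand for every target, a small loop does the same thing (a sketch; it assumes the hostnames below already resolve via /etc/hosts and it will still prompt for each machine's password):

for host in master node-1 node-2 other; do
    ssh-copy-id root@$host    # skip the entry for the machine you are running on
done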

Test the login

[root@master ~]# ssh node-1
Last login: Wed Sep 20 14:36:09 2023 from master
[root@node-1 ~]# ip add show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c8:c1:95 brd ff:ff:ff:ff:ff:ff
    inet 192.168.224.147/24 brd 192.168.224.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec8:c195/64 scope link 
       valid_lft forever preferred_lft forever
[root@node-1 ~]# exit
logout
Connection to node-1 closed.
[root@master ~]# 

If SSH gives you trouble, see this article.

Part 2. Configure Docker (needed on master, node-1, node-2, other and test)

1. Install Docker prerequisites and configure the Aliyun Docker repo

yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet ipvsadm

2. Configure the Aliyun repo needed to install the Kubernetes components

[root@master ~]# vim  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

3. Install Docker

yum install docker-ce-20.10.6 -y

4. Start Docker and enable it at boot

systemctl start docker && systemctl enable docker.service

5. Configure Docker registry mirrors and the cgroup driver

vim  /etc/docker/daemon.json 
 
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 

6. The Docker cgroup driver is switched to systemd (the default is cgroupfs); kubelet uses systemd and the two must match. Reload and restart Docker:

systemctl daemon-reload  && systemctl restart docker
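
To verify that Docker picked up the systemd cgroup driver (a quick sketch):

docker info | grep -i "cgroup driver"    # expect: Cgroup Driver: systemd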

Part 3. Configure Kubernetes

1. Install the packages needed to initialize Kubernetes (on master, node-1 and node-2)

yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

2. Enable kubelet at boot

systemctl enable kubelet 

3. Prepare for kubeadm initialization of the cluster

Download the k8simage-1-20-6.tar.gz image bundle on the master node.

Copy the file from the master node to the node machines:

[root@master ~]# scp k8simage-1-20-6.tar.gz node-1:/root
[root@master ~]# scp k8simage-1-20-6.tar.gz node-2:/root

Load the images (on master, node-1 and node-2)

docker load -i k8simage-1-20-6.tar.gz

4. Use kubeadm to initialize the Kubernetes cluster (on the master node only)

kubeadm config print init-defaults > kubeadm.yaml
[root@master ~]# vim kubeadm.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.224.146         #IP of the control-plane node
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master                        #hostname of the control-plane node
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master  
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # change this to the Aliyun registry
kind: ClusterConfiguration
kubernetesVersion: v1.20.6
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16         #pod network CIDR; this line must be added
scheduler: {}
#append the following lines
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

5. Initialize Kubernetes from kubeadm.yaml (on the master node only)

[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification

Set up the kubectl config file; this authorizes kubectl to manage the cluster with the admin certificate:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

6. Join the worker nodes

Scale out the cluster by adding the worker nodes.
On the master, print the join command:

[root@master~]# kubeadm token create --print-join-command

The output looks like this:

kubeadm join 192.168.224.146:6443 --token vulvta.9ns7da3saibv4pg1     --discovery-token-ca-cert-hash sha256:72a0896e27521244850b8f1c3b600087292c2d10f2565adb56381f1f4ba7057a
Run it on each of the two node machines:
kubeadm join 192.168.224.146:6443 --token vulvta.9ns7da3saibv4pg1     --discovery-token-ca-cert-hash sha256:72a0896e27521244850b8f1c3b600087292c2d10f2565adb56381f1f4ba7057a

Check that the nodes are bound to the control plane (note: at this point STATUS should be NotReady; mine shows Ready because the screenshot was taken after the whole project was finished)

kubectl get nodes

The node information is now listed. node-1 and node-2 have an empty ROLES column, which marks them as worker nodes.
You can set their ROLES to worker as follows:

[root@master ~]# kubectl label node node-1 node-role.kubernetes.io/worker=worker

[root@master ~]# kubectl label node node-2 node-role.kubernetes.io/worker=worker

Why are the nodes NotReady?
The NotReady status means no network plugin has been installed yet.
Install the Calico network add-on: upload calico.yaml to the master and apply it.

wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate
[root@k8smaster ~]# kubectl apply -f  calico.yaml

Check the cluster status again

[root@k8smaster ~]# kubectl get nodes

STATUS is now Ready, so the Kubernetes cluster is running normally.
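
If the nodes stay NotReady for a few minutes after applying calico.yaml, it usually just means the Calico and CoreDNS pods are still starting; a quick way to watch them (sketch):

kubectl get pod -n kube-system -o wide | grep -E 'calico|coredns'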

Part 4. Deploy Ansible to automate operations on the cluster, and deploy the firewall server

1. Set up passwordless SSH from the Ansible host to the other machines

In this setup Ansible runs on the other machine, so the passwordless channels already exist. If you use a dedicated machine instead, it needs passwordless SSH to master, node-1 and node-2.

2. Verify the setup

(screenshot: passwordless SSH from the Ansible host succeeding)

3. Install Ansible on the other machine

 yum  install ansible -y

4. Write the host inventory

[root@other .ssh]# vim /etc/ansible/hosts
#append the following lines at the end
[master]
192.168.224.146
[node-1]
192.168.224.147
[node-2]
192.168.224.148

5. Test

[root@other]# ansible all -m shell -a "ip add show ens33"

Yellow output is a normal response; any other color or an error means the setup is not working.
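
The ping module is another quick connectivity check against the same inventory (a sketch):

ansible all -m ping                  # every host should answer "pong"
ansible all -m shell -a "uptime"     # any other ad-hoc command works the same way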

Part 5. Deploy an NFS server to provide data for the whole web cluster; all the web pods access it through PVs, PVCs and volume mounts.

1. Set up the NFS server

[root@other ~]# yum install nfs-utils -y

It is recommended to install nfs-utils on every node in the cluster, because creating volumes on the node servers requires NFS support.

yum install nfs-utils -y
service nfs restart

2. Configure the shared directory

On the other machine:

mkdir /web
echo "welcome to zyj999" >/web/index.html
vim /etc/exports
#add the following line (export the directory to every machine on 192.168.224.0/24)
/web   192.168.224.0/24(rw,no_root_squash,sync)

3. Reload NFS and re-export the shared directory

systemctl enable nfs
service nfs restart
exportfs -r   #re-export all shared directories
exportfs -v   #show the exported directories
/web            192.168.224.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
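
Before mounting from the nodes, you can confirm the export is visible over the network (a sketch, assuming the NFS server is 192.168.224.160 as in this setup):

showmount -e 192.168.224.160
# Export list for 192.168.224.160:
# /web 192.168.224.0/24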

4. From any node in the cluster, test mounting the directory shared by the NFS server

[root@node-1 ~]# mkdir /nfstest
[root@node-1 ~]# mount 192.168.224.160:/web   /nfstest
[root@node-1 ~]# df -Th|grep nfs
192.168.224.160:/web      nfs4       17G  1.5G   16G    9% /nfstest

5. Unmount

[root@k8snode1 ~]# umount  /nfstest

6. Create a PV that uses the NFS server's shared directory

[root@master pv]# vim nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # storage class name; the PVC must request the same one
  nfs:
    path: "/web"       # directory exported by NFS
    server: 192.168.224.160   # IP address of the NFS server
    readOnly: false   # not read-only
 
[root@master pv]# kubectl apply -f nfs-pv.yml 
[root@master pv]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-web   10Gi       RWX            Retain           Available           nfs                     5s

7. Create a PVC that uses the PV

[root@master pv]# vim nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi
  storageClassName: nfs #request the nfs storage class, matching the PV
[root@master pv]# kubectl apply -f nfs-pvc.yml 
[root@master pv]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            6s

8. Create pods that use the PVC

[root@k8smaster pv]# vim nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx     #no tag means latest
          imagePullPolicy: IfNotPresent   #recommended for every image: use the local copy if present, otherwise pull from the registry
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
 
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml  
[root@k8smaster pv]# kubectl get pod -o wide

Note: all three pods must be in Running state; otherwise the creation failed.

9. Test access

On master, node-1 and node-2, curl any of the pod IPs shown above; getting output means the setup works.

curl [IP]
welcome to zyj999

10. Verify real-time updates

Append content on the NFS server:

[root@other]# echo "hello,world" >> /usr/share/nginx/html/index.html

Access it again:

[root@k8snode1 ~]# curl 10.244.84.135
welcome to zyj999
hello,world

Seeing the appended "hello,world" confirms the mount works and changes are picked up in real time.

Next, build the CI/CD environment. CI/CD is a method of frequently delivering applications to customers by introducing automation into the application development stages; its core concepts are continuous integration, continuous delivery and continuous deployment.

Part 6. Deploy GitLab (on the test machine)

0. Official instructions: https://gitlab.cn/install/

1. Install and configure the required dependencies

sudo yum install -y curl policycoreutils-python openssh-server perl

2. Configure the JiHu GitLab package repository

curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
yum install gitlab-jh -y
#set the external port
[root@test ~]# vim /etc/gitlab/gitlab.rb 
external_url 'http://localhost:8981'
#reconfigure
[root@test ~]# gitlab-ctl reconfigure

3. Look up the initial root password

[root@test ~]# cat /etc/gitlab/initial_root_password 

4. Test access

Open 192.168.224.133:8981 in a browser.
If the machine has less than 4 GB of RAM, GitLab often fails to start; free up some memory and retry a few times.

Because of my laptop's hardware and the small amount of memory given to the VM, I only got to the page a handful of times; I will cover GitLab in a dedicated post later.
(screenshot: the normal GitLab login page)

Part 7. Deploy Jenkins (inside Kubernetes)

1. Install git

[root@master]# yum install git -y

2. Download the YAML files

[root@master]# git clone https://github.com/scriptcamp/kubernetes-jenkins
[root@master]# cd kubernetes-jenkins/
[root@master kubernetes-jenkins]# ls
deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml

3. Create the namespace

[root@master kubernetes-jenkins]# vim namespace.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@master kubernetes-jenkins]# kubectl apply -f namespace.yaml 

4. List the namespaces; if devops-tools appears, it was created successfully.

[root@master kubernetes-jenkins]# kubectl get ns
NAME                   STATUS   AGE
default                Active   22h
devops-tools           Active   19s
ingress-nginx          Active   139m
kube-node-lease        Active   22h
kube-public            Active   22h
kube-system            Active   22h

5. Create the service account, cluster role and binding

[root@master kubernetes-jenkins]# vim serviceAccount.yaml 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"] 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
 
[root@master kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml 

6. Create the volume that stores Jenkins data

[root@master kubernetes-jenkins]# vim volume.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1   # change to the name of a node in your cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[root@master kubernetes-jenkins]# kubectl apply -f volume.yaml 
[root@master kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@master kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [k8snode1]
Message:           
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>

7. Deploy Jenkins

[root@master kubernetes-jenkins]# vim deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
            fsGroup: 1000 
            runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home         
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
              claimName: jenkins-pv-claim

[root@master kubernetes-jenkins]# kubectl apply -f deployment.yaml 
[root@master kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@master kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s

8. Create the Service that publishes the Jenkins pod

[root@masterkubernetes-jenkins]# vim service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path:   /
      prometheus.io/port:   '8080'
spec:
  selector: 
    app: jenkins-server
  type: NodePort  
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
[root@master kubernetes-jenkins]# kubectl apply -f service.yaml 
[root@master kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

9. Exec into the pod to get the login password

[root@master kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q  -n devops-tools -- bash
bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
b0232e2dad164f89ad2221e4c46b0d46
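
If exec-ing into the pod is inconvenient, the same initial password is also printed in the pod's startup log (a sketch; use whatever pod name kubectl get pod -n devops-tools shows):

kubectl logs jenkins-7fdc8dd5fd-bg66q -n devops-tools
# look for the block "Please use the following password to proceed to installation"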

10. Access Jenkins from a Windows machine at the host IP plus the NodePort

http://192.168.224.146:32000/login?from=%2F

Part 8. Deploy Harbor (Docker and Docker Compose must already be installed)

1. Install Harbor: download the offline installer package from the Harbor website or GitHub

[root@other harbor]# ls
harbor-offline-installer-v2.4.1.tgz

2. Extract it

[root@other harbor]# tar xf harbor-offline-installer-v2.4.1.tgz 
[root@other harbor]# ls
harbor  harbor-offline-installer-v2.4.1.tgz
[root@other harbor]# cd harbor
[root@other harbor]# ls
common.sh  harbor.v2.4.1.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@other harbor]# pwd
/root/harbor/harbor

3. Edit the configuration file

[root@other harbor]# vim harbor.yml
# Configuration file of Harbor
 
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.224.149  # change to this host's IP address
 
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5000  # changed to a non-default port
 
#https can be left disabled entirely
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
 
# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal
 
# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433
 
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345  #login password
 
# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900
 
# The default data volume
data_volume: /data

4. Run the installation script

[root@other harbor]# ./install.sh
✔ ----Harbor has been installed and started successfully.----

5. Configure start on boot

[root@other harbor]# vim /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
 
touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d
 

6. Make the rc scripts executable

[root@other harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local

7. Log in

http://192.168.224.149:5000/
Username: admin  Password: Harbor12345

8. Push a test image

First create a project named test in Harbor for push/pull testing.
Using nginx as an example: take the nginx image, tag it as 192.168.224.149:5000/test/nginx1:v1 (which effectively creates a new image with that name), and push it to the Harbor registry.

[root@other harbor]# docker image ls | grep nginx
nginx                           latest    605c77e624dd   17 months ago   141MB
goharbor/nginx-photon           v2.4.1    78aad8c8ef41   18 months ago   45.7MB
 
[root@other harbor]# docker tag nginx:latest 192.168.224.149:5000/test/nginx1:v1
 
[root@other harbor]# docker image ls | grep nginx
192.168.224.149:5000/test/nginx1   v1        605c77e624dd   17 months ago   141MB
nginx                            latest    605c77e624dd   17 months ago   141MB
goharbor/nginx-photon            v2.4.1    78aad8c8ef41   18 months ago   45.7MB
[root@other harbor]# docker push 192.168.224.149:5000/test/nginx1:v1
The push refers to repository [192.168.224.149:5000/test/nginx1]
Get https://192.168.224.149:5000/v2/: http: server gave HTTP response to HTTPS client

The push fails because Docker insists on HTTPS. On the Harbor host itself, declare the registry as insecure and log in first:


[root@other harbor]# vim /etc/docker/daemon.json 
{
"insecure-registries":["192.168.224.149:5000"]
} 

[root@other harbor]# docker login 192.168.224.149:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@other harbor]# docker push 192.168.224.149:5000/test/nginx1:v1
The push refers to repository [192.168.224.149:5000/test/nginx1]
d874fd2bc83b: Pushed 
32ce5f6a5106: Pushed 
f1db227348d0: Pushed 
b8d6e692a25e: Pushed 
e379e8aedd4d: Pushed 
2edcec3590a4: Pushed 
v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570

Part 9. HPA

Package the web API service written in Go into an image and deploy it in Kubernetes as the web application. Use HPA so that when CPU utilization reaches 50% the workload scales horizontally, with a minimum of 1 and a maximum of 10 pods.

1. Log every node in the cluster into Harbor so it can pull images back from it.

[root@node-1 ~]# vim /etc/docker/daemon.json 
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "insecure-registries":["192.168.224.149:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 
[root@node-2 ~]# cat /etc/docker/daemon.json 
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "insecure-registries":["192.168.224.149:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"]
} 

2. Reload the configuration and restart Docker

systemctl daemon-reload  && systemctl restart docker

3. Log in to Harbor

[root@master mysql]# docker login 192.168.224.149:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded
 
[root@node-1 ~]# docker login 192.168.224.149:5000
Username: admin   
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded
 
[root@node-2 ~]# docker login 192.168.224.149:5000
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
 
Login Succeeded

4. Test: pull the nginx image from Harbor

[root@k8snode1 ~]# docker pull 192.168.224.149:5000/test/nginx1:v1
 
[root@k8snode1 ~]# docker images


5. Build the image

[root@other ~]# cd go
[root@other go]# ls
scweb  Dockerfile
[root@other go]# cat Dockerfile 
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/scweb"]
 
[root@other go]# docker build -t scweb:1.1 .
 
[root@other go]# docker image ls | grep scweb
scweb                            1.1       f845e97e9dfd   4 hours ago      214MB
 
[root@other go]#  docker tag scweb:1.1 192.168.224.149:5000/test/web:v2
 
[root@other go]# docker image ls | grep web
192.168.224.149:5000/test/web    v2        00900ace4935   4 minutes ago   214MB
scweb                            1.1       00900ace4935   4 minutes ago   214MB

6. Push the image

[root@other go]# docker push 192.168.224.149:5000/test/web:v2
The push refers to repository [192.168.224.149:5000/test/web]
3e252407b5c2: Pushed 
193a27e04097: Pushed 
b13a87e7576f: Pushed 
174f56854903: Pushed 
v2: digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29 size: 1153

[root@node-1 ~]# docker login 192.168.224.149:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@node-1 ~]# docker pull 192.168.224.149:5000/test/web:v2
v2: Pulling from test/web
2d473b07cdd5: Pull complete 
bc5e56dd1476: Pull complete 
694440c745ce: Pull complete 
78694d1cffbb: Pull complete 
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.224.149:5000/test/web:v2
192.168.224.149:5000/test/web:v2

[root@node-1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
192.168.224.149:5000/test/web                                                  v2         f845e97e9dfd   4 hours ago     214MB

[root@node-2 ~]# docker login 192.168.224.149:5000
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

7. Pull the image on node-2

 
[root@node-2 ~]# docker pull 192.168.224.149:5000/test/web:v2
v2: Pulling from test/web
2d473b07cdd5: Pull complete 
bc5e56dd1476: Pull complete 
694440c745ce: Pull complete 
78694d1cffbb: Pull complete 
Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
Status: Downloaded newer image for 192.168.224.149:5000/test/web:v2
192.168.224.149:5000/test/web:v2
 
[root@node-2 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
192.168.224.149:5000/test/web                                                  v2         f845e97e9dfd   4 hours ago     214MB

HPA: a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment) in order to automatically scale the workload to match demand.
https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

8. Install metrics-server

Download the components.yaml manifest:

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

9. Edit components.yaml

[root@master ~]# vim components.yaml


   - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent

Apply the manifest

[root@master metrics]# kubectl apply -f components.yaml 

Check the result

[root@master metrics]# kubectl get pod -n kube-system


10. Confirm metrics-server is installed: check its pod and apiservice

[root@master HPA]# kubectl get pod -n kube-system|grep metrics

[root@master HPA]# kubectl get apiservice |grep metrics


 
[root@master HPA]# kubectl top node
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master      349m         17%    1160Mi          67%       
node-1      271m         13%    1074Mi          62%       
node-2      226m         11%    1224Mi          71%  
 
Check that the image reached the node machines:

[root@node-1 ~]# docker images|grep metrics

11. Start the web app from a YAML file and expose it

[root@master hpa]# vim my-web.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.224.149:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
 
[root@master HPA]# kubectl apply -f my-web.yaml 
deployment.apps/myweb created
service/myweb-svc created

12. Create the HPA

[root@master HPA]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/myweb autoscaled
 
[root@master HPA]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          3d2h
myweb-svc    NodePort    10.102.83.168   <none>        8000:30001/TCP   15s

[root@master HPA]# kubectl get hpa

(screenshot: kubectl get hpa and kubectl get pod output)
Because there is no traffic yet, the initial three pods have been scaled down to one.
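
To watch the HPA scale up, you can generate some load against the Service, as in the Kubernetes HPA walkthrough linked above (a sketch; stop it with Ctrl+C and the replicas drop back after a few minutes):

kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://myweb-svc:8000; done"

# in a second terminal
kubectl get hpa myweb --watch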

Part 10. Run a MySQL pod to provide database service for the web application.

1. Define the MySQL Deployment

[root@master mysql]# vim mysql-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7.42
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD   
          value: "123456"
        ports:
        - containerPort: 3306
---
#define the MySQL Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007
[root@k8smaster mysql]# kubectl apply -f mysql-deployment.yaml 

2. Check that it is running

[root@master mysql]# kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP          28h
svc-mysql        NodePort    10.105.96.217   <none>        3306:30007/TCP   10m

[root@master mysql]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
mysql-5f9bccd855-6kglf              1/1     Running   0          8m59s

3. Exec into the container and test

[root@master mysql]# kubectl exec -it mysql-5f9bccd855-6kglf -- bash
bash-4.2# mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.42 MySQL Community Server (GPL)

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

mysql> exit
Bye
bash-4.2# exit
exit
[root@master mysql]# 
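
Because the Service is a NodePort on 30007, a MySQL client outside the cluster can also reach it through any node's IP (a sketch, assuming a mysql client is installed on the machine you run it from):

mysql -h 192.168.224.146 -P 30007 -uroot -p123456 -e "select version();"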

Part 11. Deploy stateful MySQL on Kubernetes

1. Create the base directories on the NFS server

[root@nfs data]# pwd
/data
[root@nfs data]# mkdir db replica  replica-3
[root@nfs data]# ls
db  replica  replica-3

2. On the master node, create the ConfigMap for MySQL

[root@master mysql]# vim mysql-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  primary.cnf: |
    # Apply this config only on the primary.
    [mysqld]
    log-bin
  replica.cnf: |
    # Apply this config only on replicas.
    [mysqld]
    super-read-only
    
[root@master mysql]# kubectl apply -f mysql-configmap.yaml 
[root@k8smaster mysql]# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      6d22h
mysql              2      5s

3. Create the headless Service that gives the StatefulSet members stable DNS entries

[root@k8smaster mysql]# vim mysql-services.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must connect to the primary: mysql-0.mysql
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
    app.kubernetes.io/name: mysql
    readonly: "true"
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql

[root@k8smaster mysql]# kubectl apply -f mysql-services.yaml 
 
[root@k8smaster mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    6d22h
mysql        ClusterIP   None            <none>        3306/TCP   7s
mysql-read   ClusterIP   10.102.31.144   <none>        3306/TCP   7s

4. Create the StatefulSet

[root@k8smaster mysql]# cat mysql-pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    path: "/data/db"       # nfs共享的目录
    server: 192.168.2.121   # nfs服务器的ip地址
 
[root@k8smaster mysql]# kubectl apply -f mysql-pv.yaml 
[root@k8smaster mysql]# kubectl patch pv jenkins-pv-volume -p '{"metadata":{"finalizers":null}}'
persistentvolume/jenkins-pv-volume patched
 
[root@k8smaster mysql]# kubectl patch pv mysql-pv -p '{"metadata":{"finalizers":null}}'
persistentvolume/mysql-pv patched
 
[root@master mysql]# vim mysql-statefulset.yaml 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
      app.kubernetes.io/name: mysql
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
        app.kubernetes.io/name: mysql
    spec:
      initContainers:
      - name: init-mysql
        image: mysql:5.7.42
        imagePullPolicy: IfNotPresent
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Generate the MySQL server-id from the pod ordinal index.
          [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          echo [mysqld] > /mnt/conf.d/server-id.cnf
          # Add an offset to avoid the reserved value server-id=0.
          echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
          # Copy the appropriate conf.d file from the config map to the emptyDir.
          if [[ $ordinal -eq 0 ]]; then
            cp /mnt/config-map/primary.cnf /mnt/conf.d/
          else
            cp /mnt/config-map/replica.cnf /mnt/conf.d/
          fi         
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql
        image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
        command:
        - bash
        - "-c"
        - |
          set -ex
          # Skip the clone if data already exists.
          [[ -d /var/lib/mysql/mysql ]] && exit 0
          # Skip the clone on the primary (ordinal index 0).
          [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
          ordinal=${BASH_REMATCH[1]}
          [[ $ordinal -eq 0 ]] && exit 0
          # Clone data from the previous peer.
          ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
          # Prepare the backup.
          xtrabackup --prepare --target-dir=/var/lib/mysql               
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7.42
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
      - name: xtrabackup
        image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql
 
          # Determine the binlog position of the cloned data, if any.
          if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing replica. (Need to remove the trailing semicolon!)
            cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_slave_info xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from the primary. Parse the binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm -f xtrabackup_binlog_info xtrabackup_slave_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi
 
          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
 
            echo "Initializing replication from clone position"
            mysql -h 127.0.0.1 \
                  -e "$(<change_master_to.sql.in), \
                          MASTER_HOST='mysql-0.mysql', \
                          MASTER_USER='root', \
                          MASTER_PASSWORD='', \
                          MASTER_CONNECT_RETRY=10; \
                        START SLAVE;" || exit 1
            # In case of container restart, attempt this at most once.
            mv change_master_to.sql.in change_master_to.sql.orig
          fi
 
          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"         
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

[root@master mysql]# kubectl apply -f mysql-statefulset.yaml 

5. Check the rollout

[root@master mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          21m
mysql-1   0/2     Pending   0          2m34s
[root@master mysql]# kubectl describe  pod mysql-1
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  58s (x4 over 3m22s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
[root@k8smaster mysql]# vim mysql-pv-2.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-2
spec:
  capacity:
    storage: 1Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    path: "/data/replica"       # nfs共享的目录
    server: 192.168.224.160   # nfs服务器的ip地址
[root@master mysql]# kubectl apply -f mysql-pv-2.yaml 
persistentvolume/mysql-pv-2 created
[root@master mysql]# kubectl get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
mysql-pv     1Gi        RWO            Retain           Bound    default/data-mysql-0                           24m
mysql-pv-2   1Gi        RWO            Retain           Bound    default/data-mysql-1                           7s
[root@master mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          25m
mysql-1   1/2     Running   0          7m20s
[root@master mysql]# vim mysql-pv-3.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-3
spec:
  capacity:
    storage: 1Gi 
  accessModes:
    - ReadWriteOnce
  nfs:
    path: "/data/replicai-3"       # nfs共享的目录
    server: 192.168.224.160   # nfs服务器的ip地址
[root@master mysql]# kubectl apply -f mysql-pv-3.yaml 
persistentvolume/mysql-pv-3 created
[root@master mysql]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
mysql-0   2/2     Running   0          29m
mysql-1   2/2     Running   0          11m
mysql-2   0/2     Pending   0          3m46s
[root@master mysql]# kubectl describe pod mysql-2
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  2m13s (x4 over 4m16s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  47s (x2 over 2m5s)     default-scheduler  0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.
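
Once the replicas are Running (in my case mysql-2 stayed Pending because the small VMs ran out of CPU and memory), replication can be checked the same way as in the upstream StatefulSet tutorial: write through the primary mysql-0.mysql and read back through mysql-read (a sketch):

# write through the primary
kubectl run mysql-client --image=mysql:5.7.42 -i --rm --restart=Never -- \
  mysql -h mysql-0.mysql -e "CREATE DATABASE IF NOT EXISTS test; CREATE TABLE IF NOT EXISTS test.messages (message VARCHAR(250)); INSERT INTO test.messages VALUES ('hello');"

# read through the read-only service (served by any replica)
kubectl run mysql-client-2 --image=mysql:5.7.42 -i --rm --restart=Never -- \
  mysql -h mysql-read -e "SELECT * FROM test.messages"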

Part 12. Use probes (liveness, readiness, startup) with httpGet and exec handlers to monitor the web pods; on failure they are restarted immediately, making the pods more reliable.

1. Start a web service (note: the earlier probe-less web Deployment must be deleted before starting this one)

[root@master probe]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 192.168.224.149:5000/test/web:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5   
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
[root@k8smaster probe]# kubectl apply -f my-web.yaml 

2. Check the probes

[root@master probe]# kubectl describe pod myweb-744595d5c5-dl5m8

The Liveness, Readiness and Startup entries in the describe output confirm that the probes are attached.

Part 13. Use Ingress to load-balance the web services, and use the Dashboard to oversee the whole cluster's resources.

The ingress controller is essentially nginx, and it performs the load balancing; an Ingress is the Kubernetes object that manages the nginx configuration (nginx.conf) inside the cluster and feeds parameters to the ingress controller.

1. Prepare the files

[root@master ingress]# ls
ingress-controller-deploy.yaml         kube-webhook-certgen-v1.1.0.tar.gz  sc-nginx-svc-1.yaml
ingress-nginx-controllerv1.1.0.tar.gz  sc-ingress.yaml

ingress-controller-deploy.yaml is the YAML file used to deploy the ingress controller
ingress-nginx-controllerv1.1.0.tar.gz is the ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz is the kube-webhook-certgen image
sc-ingress.yaml is the config file that creates the Ingress
sc-nginx-svc-1.yaml starts the sc-nginx-svc-1 Service and its pods
nginx-deployment-nginx-svc-2.yaml starts the nginx-deployment-nginx-svc-2 Service and its pods

2. Install the ingress controller

scp the images to all the node servers:

[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz node-1:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                  100%  276MB 101.1MB/s   00:02    
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz node-2:/root
ingress-nginx-controllerv1.1.0.tar.gz                                                  100%  276MB  98.1MB/s   00:02    
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-1:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                     100%   47MB  93.3MB/s   00:00    
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node-2:/root
kube-webhook-certgen-v1.1.0.tar.gz                                                     100%   47MB  39.3MB/s   00:01    

Load the images on all the node servers:

[root@node-1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@node-1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
[root@node-2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@node-2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
 
[root@node-1 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0     ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1     c41e9fcadf5a   20 months ago   47.7MB
 
[root@node-2 ~]# docker images
REPOSITORY                                                                     TAG        IMAGE ID       CREATED         SIZE
nginx                                                                          latest     605c77e624dd   17 months ago   141MB
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller   v1.1.0     ae1a7201ec95   19 months ago   285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen       v1.1.1     c41e9fcadf5a   20 months ago   47.7MB

3. On the master, apply the YAML file to create the ingress controller

[root@master ingress]# kubectl apply -f ingress-controller-deploy.yaml 

4. Check the ingress controller's namespace

[root@k8smaster ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   20h
ingress-nginx          Active   30s
kube-node-lease        Active   20h
kube-public            Active   20h
kube-system            Active   20h

You can see ingress-nginx has been created.

5. Check the ingress controller's Services

[root@master ingress]# kubectl get svc -n ingress-nginx

6. Check the ingress controller's pods

[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-9sg56        0/1     Completed   0          80s
ingress-nginx-admission-patch-8sctb         0/1     Completed   1          80s
ingress-nginx-controller-6c8ffbbfcf-bmdj9   1/1     Running     0          80s
ingress-nginx-controller-6c8ffbbfcf-j576v   1/1     Running     0          80s

7. Create the pods and the Service that exposes them

[root@master new]# vim sc-nginx-svc-1.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy
  labels:
    app: sc-nginx-feng
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng
  template:
    metadata:
      labels:
        app: sc-nginx-feng
    spec:
      containers:
      - name: sc-nginx-feng
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name:  sc-nginx-svc
  labels:
    app: sc-nginx-svc
spec:
  selector:
    app: sc-nginx-feng
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
[root@master new]# kubectl apply -f sc-nginx-svc-1.yaml 

[root@master ingress]# kubectl get pod


[root@master ingress]# kubectl get svc


8. Describe the Service and check that the Endpoints show the expected pod IPs and ports

[root@master ingress]# kubectl describe svc sc-nginx-svc


9. Access the Service's cluster IP

[root@master ingress]# curl 10.111.154.100


10. Create an Ingress that ties the ingress controller to the Services

Write a YAML file to create the Ingress:

[root@master ingress]# vim sc-ingress.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx  #annotation: this Ingress is associated with the ingress controller
spec:
  ingressClassName: nginx  #bind to the ingress controller
  rules:
  - host: www.feng.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc
            port:
              number: 80
  - host: www.zhang.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: sc-nginx-svc-2
            port:
              number: 80
[root@k8smaster ingress]# kubectl apply -f sc-ingress.yaml 
ingress.networking.k8s.io/sc-ingress created

11. Check the Ingress

[root@master ingress]# kubectl get ingress


12. Check whether the ingress controller's nginx.conf contains the rules from the Ingress

[root@master ingress]# kubectl get pod -n ingress-nginx


[root@master ingress]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-6c8ffbbfcf-bmdj9 -- bash


Get the NodePort that the ingress controller's Service exposes on the hosts; accessing a host on that port verifies that the ingress controller can load-balance.

[root@master ingress]# kubectl get svc -n ingress-nginx


13. Access by domain name from another host or a Windows machine

[root@test ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.224.147 www.feng.com
192.168.224.148 www.zhang.com

Because the load balancing is configured by domain name, you must use the domain names in the browser rather than IP addresses; the ingress controller balances over HTTP, i.e. layer-7 load balancing.
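
From a machine without those /etc/hosts entries you can still test by passing the Host header explicitly (a sketch, hitting one of the node IPs on port 80 just like the curl below):

curl -H "Host: www.feng.com" http://192.168.224.147/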

 
[root@test ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Accessing www.zhang.com returns a 503 error from nginx, because the backend Service sc-nginx-svc-2 does not exist yet (it is created in step 14).

[root@test ~]# curl www.zhang.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>

14. Start the second Service and its pods, using PV + PVC + NFS (the NFS server, PV and PVC must be prepared in advance)

[root@master pv]# pwd
/root/pv
[root@master pv]# ls
nfs-pvc.yml  nfs-pv.yml  nginx-deployment.yml
 
[root@master pv]# vim nfs-pv.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # storage class name; the PVC must request the same one
  nfs:
    path: "/web"       # directory exported by NFS
    server: 192.168.224.160   # IP address of the NFS server
    readOnly: false   # not read-only
 
[root@master pv]# kubectl apply -f nfs-pv.yml
[root@master pv]# kubectl apply -f nfs-pvc.yml
[root@master pv]# kubectl get pv

[root@master pv]# kubectl get pvc


[root@master ingress]# vim nginx-deployment-nginx-svc-2.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
 name: nginx-deployment
 labels:
   app: nginx
spec:
 replicas: 3
 selector:
   matchLabels:
     app: sc-nginx-feng-2
 template:
   metadata:
     labels:
       app: sc-nginx-feng-2
   spec:
     volumes:
       - name: sc-pv-storage-nfs
         persistentVolumeClaim:
           claimName: pvc-web
     containers:
       - name: sc-pv-container-nfs
         image: nginx
         imagePullPolicy: IfNotPresent
         ports:
           - containerPort: 80
             name: "http-server"
         volumeMounts:
           - mountPath: "/usr/share/nginx/html"
             name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
 name:  sc-nginx-svc-2
 labels:
   app: sc-nginx-svc-2
spec:
 selector:
   app: sc-nginx-feng-2
 ports:
 - name: name-of-service-port
   protocol: TCP
   port: 80
   targetPort: 80

[root@master ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml 
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.105.213.95   <none>        80:31457/TCP,443:32569/TCP   24m
ingress-nginx-controller-admission   ClusterIP   10.98.225.196   <none>        443/TCP                      24m
 
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                        ADDRESS                       PORTS   AGE
sc-ingress   nginx   www.feng.com,www.zhang.com   192.168.224.147,192.168.224.148   80      18m
 

You can reach the site either on the NodePort the ingress controller exposes on the hosts (31457 above) or directly on port 80.

15. Exposing services through the ingress controller means you do not need to use ports above 30000; you can go straight to 80 or 443, which is an advantage over exposing each service with a NodePort Service.

 
[root@test ~]# curl www.zhang.com
welcome to zyj999
hello,world
[root@test ~]# curl www.feng.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

16. Use the Dashboard to oversee the whole cluster

First download the recommended.yaml file:

[root@master dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
[root@master dashboard]# ls
recommended.yaml

17. Launch recommended.yaml!

[root@master dashboard]# kubectl apply -f recommended.yaml 

18. Check whether the Dashboard pods started

[root@master dashboard]# kubectl get ns
NAME                   STATUS   AGE
default                Active   18h
ingress-nginx          Active   13h
kube-node-lease        Active   18h
kube-public            Active   18h
kube-system            Active   18h
kubernetes-dashboard   Active   9s

kubernetes-dashboard is the Dashboard's own namespace.

[root@master dashboard]# kubectl get pod -n kubernetes-dashboard


19. Check the Dashboard's Service. Because it is published as type ClusterIP, machines outside the cluster cannot reach it from a browser, so it needs to be changed to NodePort.

[root@master dashboard]# kubectl get svc -n kubernetes-dashboard


20. Delete the Dashboard Service that was created

[root@master dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
service "kubernetes-dashboard" deleted
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.110.32.41   <none>        8000/TCP   5m39s

21. Create a NodePort Service

[root@master dashboard]# vim dashboard-svc.yml
[root@master dashboard]# cat dashboard-svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
 
[root@master dashboard]# kubectl apply -f dashboard-svc.yml
service/kubernetes-dashboard created
 
[root@master dashboard]# kubectl get svc -n kubernetes-dashboard

(screenshot: the kubernetes-dashboard Service, now of type NodePort)

22. Accessing the dashboard requires the right permissions, so create a kubernetes-dashboard administrator role

[root@master dashboard]# vim dashboard-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
 
[root@k8smaster dashboard]# kubectl apply -f dashboard-svc-account.yaml 

23. Get the name of the dashboard's Secret object

[root@master dashboard]# kubectl get secret -n kube-system|grep admin|awk '{print $1}'

(screenshot: the dashboard-admin token secret name)

 
[root@master dashboard]# kubectl describe secret dashboard-admin-token-4tvc9 -n kube-system

(screenshot: the secret's token value)
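The two steps above can be combined into one line that prints the token directly (a sketch; it assumes the secret name matches the dashboard-admin-token-* pattern shown above):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}') | grep '^token'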

24. Open it in a browser (make sure to use the correct NodePort)

[root@master dashboard]# kubectl get svc -n kubernetes-dashboard

(screenshot: the NodePort assigned to the kubernetes-dashboard Service)
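Because dashboard-svc.yml does not pin a nodePort, the port is assigned automatically; instead of reading it off the screenshot it can be printed directly (a sketch):

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'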
Browse to the host's IP plus that port number:

https://192.168.224.146:30421/#/login
(screenshots: dashboard login page)
Enter the token obtained above.
(screenshot: token entered on the login page)

(screenshot: dashboard overview after login)
Login succeeded! The page shows 404 because nothing has been configured yet.

XIV. Monitoring Kubernetes with Prometheus

Install Prometheus to monitor cluster-wide resources (CPU, memory, network bandwidth, web services, database services, disk I/O, and so on).

1. Pull the images on every node in advance

docker pull prom/node-exporter 
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:6.1.4
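The images have to exist on every node that will run the pods. One way to fan the pulls out from the master is a small ssh loop (a sketch, assuming the node-1 and node-2 hostnames from the hosts file and the passwordless SSH set up earlier; ansible works just as well):

for h in node-1 node-2; do
  ssh "$h" 'docker pull prom/node-exporter && docker pull prom/prometheus:v2.0.0 && docker pull grafana/grafana:6.1.4'
done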

Check the images:

[root@master ~]# docker images

(screenshot: docker images output)

2. Deploy node-exporter as a DaemonSet

[root@master prometheus]# vim node-exporter.yaml 
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
[root@master prometheus]# kubectl apply -f node-exporter.yaml

[root@master prometheus]# kubectl get pods -A

(screenshot: kubectl get pods -A output)

[root@master prometheus]# kubectl get daemonset -A

(screenshot: kubectl get daemonset -A output)

[root@master prometheus]# kubectl get service -A

(screenshot: kubectl get service -A output)
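Every node should now be serving metrics on NodePort 31672; a quick spot check from any machine that can reach a node (the node IP below is assumed from this cluster):

curl -s http://192.168.224.146:31672/metrics | head -n 5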

3. Deploy Prometheus

[root@master prometheus]# vim rbac-setup.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@k8smaster prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@k8smaster prometheus]# vim configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
 
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
 
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
 
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

[root@master prometheus]# kubectl apply -f configmap.yaml
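Note that the 'kubernetes-pods' job above only keeps pods carrying the prometheus.io/scrape annotation. As an illustration only (a hypothetical pod, not part of this project), these are the annotations it rewrites into the scrape target:

apiVersion: v1
kind: Pod
metadata:
  name: demo-metrics-pod           # hypothetical example
  annotations:
    prometheus.io/scrape: "true"   # required by the keep rule
    prometheus.io/port: "9100"     # rewritten into __address__
    prometheus.io/path: "/metrics" # rewritten into __metrics_path__
spec:
  containers:
  - name: exporter
    image: prom/node-exporter      # any image exposing metrics on the port above
    ports:
    - containerPort: 9100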
[root@master prometheus]# vim prometheus.deploy.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
 labels:
   name: prometheus-deployment
 name: prometheus
 namespace: kube-system
spec:
 replicas: 1
 selector:
   matchLabels:
     app: prometheus
 template:
   metadata:
     labels:
       app: prometheus
   spec:
     containers:
     - image: prom/prometheus:v2.0.0
       name: prometheus
       command:
       - "/bin/prometheus"
       args:
       - "--config.file=/etc/prometheus/prometheus.yml"
       - "--storage.tsdb.path=/prometheus"
       - "--storage.tsdb.retention=24h"
       ports:
       - containerPort: 9090
         protocol: TCP
       volumeMounts:
       - mountPath: "/prometheus"
         name: data
       - mountPath: "/etc/prometheus"
         name: config-volume
       resources:
         requests:
           cpu: 100m
           memory: 100Mi
         limits:
           cpu: 500m
           memory: 2500Mi
     serviceAccountName: prometheus
     volumes:
     - name: data
       emptyDir: {}
     - name: config-volume
       configMap:
         name: prometheus-config

[root@master prometheus]# kubectl apply -f prometheus.deploy.yml
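The data volume in this Deployment is an emptyDir, so collected metrics are lost whenever the pod is rescheduled. If persistence is wanted, a PVC-backed volume could replace it; this is only a sketch, and the claim name and storageClassName are assumptions that should be adapted to the NFS storage this cluster already provides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data            # assumed name
  namespace: kube-system
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: nfs            # assumed; use the cluster's actual StorageClass
  resources:
    requests:
      storage: 10Gi

# then, in prometheus.deploy.yml, swap the emptyDir volume for:
#      - name: data
#        persistentVolumeClaim:
#          claimName: prometheus-data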
[root@master prometheus]# vim prometheus.svc.yml 
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
[root@master prometheus]# kubectl apply -f prometheus.svc.yml
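Prometheus should now answer on the pinned NodePort 30003; a quick check (the node IP is assumed from this cluster):

kubectl get pods -n kube-system -l app=prometheus
curl -s http://192.168.224.146:30003/metrics | head -n 5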

4. Deploy Grafana

[root@master prometheus]# vim grafana-deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:   # skip the mount for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
        #emptyDir: {}
 
[root@master prometheus]# kubectl apply -f grafana-deploy.yaml
[root@master prometheus]# vim grafana-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core
[root@master prometheus]# kubectl apply -f grafana-svc.yaml 
[root@master prometheus]# vim grafana-ing.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         backend:
          serviceName: grafana
          servicePort: 3000
 
[root@master prometheus]# kubectl apply -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
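As the warning says, extensions/v1beta1 Ingress is removed in Kubernetes v1.22+. On newer clusters an equivalent manifest in networking.k8s.io/v1 would look roughly like this (a sketch; the ingressClassName value is an assumption and must match the installed controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  ingressClassName: nginx          # assumed; match your controller's class
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000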

5. Check and test

[root@master prometheus]# kubectl get pods -A

(screenshot: kubectl get pods -A output)

[root@master mysql]# kubectl get svc -A

(screenshot: kubectl get svc -A output)

6. Access the web UIs

Metrics collected by node-exporter:
http://192.168.224.146:31672/metrics
(screenshot: node-exporter metrics output)
The Prometheus web UI:
http://192.168.224.146:30003
(screenshot: Prometheus web UI)
The Grafana web UI:
http://192.168.224.146:30562
(screenshot: Grafana login page)
Username: admin; password: *******
