KubeEdge from Beginner to Proficient: a KubeEdge v1.3 Deployment Guide

KubeEdge is an open source system that extends native containerized application orchestration and device management to hosts at the edge. It is built on Kubernetes and provides core infrastructure support for networking, application deployment, and metadata synchronization between the cloud and the edge. It also supports MQTT, allowing developers to write custom logic and enable communication with resource-constrained devices at the edge. KubeEdge consists of a cloud part and an edge part, both of which are now open source. This article compiles and deploys KubeEdge on a CentOS 8.0 system.

One, system configuration

1.1 Cluster environment

Host name   Role    IP              Workload
ke-cloud    Cloud   192.168.1.66    k8s, docker, cloudcore
ke-edge1    Edge    192.168.1.56    docker, edgecore
ke-edge2    Edge    192.168.1.218   docker, edgecore

1.2 Disable the firewall at boot

# systemctl disable firewalld
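
This only disables the firewall at the next boot; to stop the currently running firewall as well:

# systemctl stop firewalld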

1.3 Permanently disable SELinux

Edit the file /etc/selinux/config (on CentOS, /etc/sysconfig/selinux is a symlink to it) and change SELINUX to disabled, as follows:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
SELINUX=disabled
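
The config change only takes effect after a reboot; to put SELinux into permissive mode immediately as well:

# setenforce 0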

1.4 Disable system swap (optional)

Since Kubernetes 1.8, kubelet requires system swap to be disabled; with the default configuration, kubelet will fail to start if swap is on. Comment out the swap entry in /etc/fstab as follows:

# sed -i 's/.*swap.*/#&/' /etc/fstab
#/dev/mapper/centos-swap swap           swap   defaults     0 0
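
To turn swap off immediately without rebooting, and confirm that it is off:

# swapoff -a
# free -m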

1.5 Install Docker on all machines

# yum install wget container-selinux -y
# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
# yum erase runc -y
# rpm -ivh containerd.io-1.2.6-3.3.el7.x86_64.rpm
Note: the steps above are not required on CentOS 7.
# update-alternatives --set iptables /usr/sbin/iptables-legacy
# yum install -y yum-utils device-mapper-persistent-data lvm2 && yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo && yum makecache
# yum -y install docker-ce
# systemctl enable docker.service && systemctl start docker

Note: to install a specific version of Docker:
# yum -y install docker-ce-18.06.3.ce
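
Verify that Docker is installed and running:

# docker --version
# systemctl status docker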

1.6 Restart the system

# reboot

Two, deploy K8s on the cloud node

2.1 Configure Yum source

[root@ke-cloud ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
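
Verify that the new repository is visible:

[root@ke-cloud ~]# yum repolist | grep kubernetes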

2.2 Install kubeadm, kubelet, and kubectl

[root@ke-cloud ~]# yum makecache
[root@ke-cloud ~]# yum install -y kubelet kubeadm kubectl ipvsadm

Note: to install a specific version of kubeadm:
[root@ke-cloud ~]# yum install kubelet-1.17.0-0.x86_64 kubeadm-1.17.0-0.x86_64 kubectl-1.17.0-0.x86_64

2.3 Configure kernel parameters

[root@ke-cloud ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

[root@ke-cloud ~]# sysctl --system
[root@ke-cloud ~]# modprobe br_netfilter
[root@ke-cloud ~]# sysctl -p /etc/sysctl.d/k8s.conf

Load the ipvs-related kernel modules. They must be reloaded after each reboot (the modprobe commands can be added to /etc/rc.local to run automatically at boot; a systemd alternative is sketched after the check below):
[root@ke-cloud ~]# modprobe ip_vs
[root@ke-cloud ~]# modprobe ip_vs_rr
[root@ke-cloud ~]# modprobe ip_vs_wrr
[root@ke-cloud ~]# modprobe ip_vs_sh
[root@ke-cloud ~]# modprobe nf_conntrack_ipv4

Check whether the modules loaded successfully:
[root@ke-cloud ~]# lsmod | grep ip_vs
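
On a systemd-based system such as CentOS 8, an alternative to /etc/rc.local is a modules-load.d drop-in, which systemd reads at every boot (note: on CentOS 8 / kernel 4.18+ the nf_conntrack_ipv4 module has been merged into nf_conntrack, so adjust the module name if modprobe reports it missing):

[root@ke-cloud ~]# cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF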

2.4 Pull images

Use the following command to view the image versions of the k8s components corresponding to the current kubeadm version:

[root@ke-cloud ~]# kubeadm config images list
I0716 20:10:22.666500    6001 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
W0716 20:10:23.059486    6001 validation.go:28] Cannot validate kubelet config - no validator is available
W0716 20:10:23.059501    6001 validation.go:28] Cannot validate kube-proxy config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.9
k8s.gcr.io/kube-controller-manager:v1.17.9
k8s.gcr.io/kube-scheduler:v1.17.9
k8s.gcr.io/kube-proxy:v1.17.9
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5

Use the kubeadm config images pull command to pull the images listed above:

[root@ke-cloud ~]# kubeadm config images pull
I0716 20:11:12.188139    6015 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
W0716 20:11:12.580861    6015 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0716 20:11:12.580877    6015 validation.go:28] Cannot validate kubelet config - no validator is available
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.17.9
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.17.9
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.17.9
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.17.9
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.5

Check the downloaded images:

[root@ke-cloud ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.17.9             ddc09a4c2193        19 hours ago        117MB
k8s.gcr.io/kube-controller-manager   v1.17.9             c7f1dde319ee        19 hours ago        161MB
k8s.gcr.io/kube-apiserver            v1.17.9             7417868987f3        19 hours ago        171MB
k8s.gcr.io/kube-scheduler            v1.17.9             f7b1228fa995        19 hours ago        94.4MB
k8s.gcr.io/coredns                   1.6.5               70f311871ae1        8 months ago        41.6MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        8 months ago        288MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB

2.5 Configure kubelet (optional)

Configuring kubelet on the cloud side is not strictly necessary; it mainly serves to verify that the K8s cluster is deployed correctly, and it also lets you run applications such as Dashboard in the cloud.

Get Docker's cgroup driver:

[root@ke-cloud ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f4)
[root@ke-cloud ~]# echo $DOCKER_CGROUPS
cgroupfs

Configure the cgroup driver for kubelet:

[root@ke-cloud ~]# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF

Start kubelet

[root@ke-cloud ~]# systemctl daemon-reload
[root@ke-cloud ~]# systemctl enable kubelet && systemctl start kubelet

Special note: if you run systemctl status kubelet at this point, you will see an error. The error resolves itself after kubeadm init generates the CA certificates, so it can be ignored for now.
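
If you want to inspect the error in the meantime, the kubelet logs are in the systemd journal:

[root@ke-cloud ~]# journalctl -u kubelet --no-pager | tail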

2.6 Initialize the cluster

Use the kubeadm init command to initialize the cluster. After initialization completes, be sure to record the kubeadm join command printed at the end of the output, as shown below:

[root@ke-cloud ~]# kubeadm init --kubernetes-version=v1.17.9 \
                      --pod-network-cidr=10.244.0.0/16 \
                      --apiserver-advertise-address=192.168.1.66 \
                      --ignore-preflight-errors=Swap

[init] Using Kubernetes version: v1.17.9

...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.66:6443 --token mskyxo.1gvtjenik78cm1ip \
    --discovery-token-ca-cert-hash sha256:89fd70b84d6beaff3c0223f288a675080034d108b5673cf66502267291078a04
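
If this join command is lost, a new token and join command can be generated on the master at any time:

[root@ke-cloud ~]# kubeadm token create --print-join-command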

Further configure kubectl

[root@ke-cloud ~]# rm -rf $HOME/.kube
[root@ke-cloud ~]# mkdir -p $HOME/.kube
[root@ke-cloud ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@ke-cloud ~]# chown $(id -u):$(id -g) $HOME/.kube/config

View the node:

[root@ke-cloud ~]# kubectl get node
NAME       STATUS     ROLES    AGE    VERSION
ke-cloud   NotReady   master   3m3s   v1.17.0

2.7 Configure network plug-in (optional)

Special note: the flannel manifest is updated frequently. If the configuration below fails, manually download the latest version of the yaml file from https://raw.githubusercontent.com/coreos/flannel/master/Documentation/

Download the yaml file of the flannel plugin

[root@ke-cloud ~]# cd ~ && mkdir flannel && cd flannel
[root@ke-cloud ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Apply it:

[root@ke-cloud ~]# kubectl apply -f ~/flannel/kube-flannel.yml

Check:

[root@ke-cloud ~]# kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
ke-cloud   Ready    master   12h   v1.17.0

[root@ke-cloud ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-6955765f44-k65xx           1/1     Running   0          12h
coredns-6955765f44-r2q6g           1/1     Running   0          12h
etcd-ke-cloud                      1/1     Running   0          12h
kube-apiserver-ke-cloud            1/1     Running   0          12h
kube-controller-manager-ke-cloud   1/1     Running   0          12h
kube-flannel-ds-amd64-lrsrh        1/1     Running   0          12h
kube-proxy-vr44d                   1/1     Running   0          12h
kube-scheduler-ke-cloud            1/1     Running   0          12h

[root@ke-cloud ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12h

Note: the node shows as Ready only after the network plug-in has been installed and configured.

2.8 K8s install dashboard (optional)

Reference: https://github.com/kubernetes/dashboard

Three, installation and configuration of KubeEdge

3.1 Cloud configuration

The cloud side is responsible for compiling related components of KubeEdge and running cloudcore.

3.1.1 Preparation

Download golang

[root@ke-cloud ~]# wget https://golang.google.cn/dl/go1.14.4.linux-amd64.tar.gz
[root@ke-cloud ~]# tar -zxvf go1.14.4.linux-amd64.tar.gz -C /usr/local

Configure golang environment

[root@ke-cloud ~]# vim /etc/profile
Add at the end of the file:
# golang env
export GOROOT=/usr/local/go
export GOPATH=/data/gopath
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

[root@ke-cloud ~]# source /etc/profile
[root@ke-cloud ~]# mkdir -p /data/gopath && cd /data/gopath
[root@ke-cloud ~]# mkdir -p src pkg bin
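
Verify the Go installation:

[root@ke-cloud ~]# go version
go version go1.14.4 linux/amd64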

Download KubeEdge source code

[root@ke-cloud ~]# git clone https://github.com/kubeedge/kubeedge $GOPATH/src/github.com/kubeedge/kubeedge

3.1.2 Deploy cloudcore

KubeEdge can be deployed locally (this requires compiling cloudcore and edgecore separately; deployment reference: https://docs.kubeedge.io/en/latest/setup/local.html), or it can be deployed with keadm (deployment reference: https://docs.kubeedge.io/en/latest/setup/keadm.html). This article chooses the simpler keadm deployment method.

Compile keadm

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/kubeedge
[root@ke-cloud ~]# make all WHAT=keadm

Note: the compiled binaries are placed under ./_output/local/bin. To compile cloudcore and edgecore separately:
[root@ke-cloud ~]# make all WHAT=cloudcore && make all WHAT=edgecore

Create cloud node

[root@ke-cloud ~]# keadm init --advertise-address="192.168.1.66"

Kubernetes version verification passed, KubeEdge installation will start...

...

KubeEdge cloudcore is running, For logs visit:  /var/log/kubeedge/cloudcore.log
CloudCore started
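
Confirm that cloudcore is running and follow its log (the log path is printed by keadm above):

[root@ke-cloud ~]# ps -ef | grep cloudcore
[root@ke-cloud ~]# tail -f /var/log/kubeedge/cloudcore.log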

3.2 Edge configuration

The edge side is also configured with keadm; the binary executables compiled on the cloud side can be copied to the edge side with the scp command.

3.2.1 Obtain the token from the cloud

Running keadm gettoken on the cloud side returns a token, which will be used when joining edge nodes.

[root@ke-cloud ~]# keadm gettoken
8ca5a29595498fbc0648ca59208681f9d18dae86ecff10e70991cde96a6f4199.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTUwMzU0Mjh9.YR-N628S5wEFLifC0sM9t-IuIWkgSK-kizFnyAy5Q50

3.2.2 Join edge node

keadm join installs edgecore and mqtt. It also provides a flag through which a specific version can be set.

[root@ke-edge1 ~]# keadm join --cloudcore-ipport=192.168.1.66:10000 --token=8ca5a29595498fbc0648ca59208681f9d18dae86ecff10e70991cde96a6f4199.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTUwMzU0Mjh9.YR-N628S5wEFLifC0sM9t-IuIWkgSK-kizFnyAy5Q50

Host has mosquit+ already installed and running. Hence skipping the installation steps !!!

...

KubeEdge edgecore is running, For logs visit:  /var/log/kubeedge/edgecore.log

Important notes: * The --cloudcore-ipport flag is mandatory. * The --token flag is required if you want the edge node's certificate to be applied for automatically. * The cloud and edge sides should use the same KubeEdge version.
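
On the edge node, confirm in the same way that edgecore is running and inspect its log (the log path is printed by keadm above):

[root@ke-edge1 ~]# ps -ef | grep edgecore
[root@ke-edge1 ~]# tail -f /var/log/kubeedge/edgecore.log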

3.3 Verification

After edgecore starts, the edge node communicates with cloudcore in the cloud, and K8s then incorporates the edge node into the cluster as a node.

[root@ke-cloud ~]# kubectl get node
NAME       STATUS   ROLES        AGE   VERSION
ke-cloud   Ready    master       13h   v1.17.0
ke-edge1   Ready    agent,edge   64s   v1.17.1-kubeedge-v1.3.1

[root@ke-cloud ~]# kubectl get pods -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-6955765f44-k65xx           1/1     Running   0          13h
coredns-6955765f44-r2q6g           1/1     Running   0          13h
etcd-ke-cloud                      1/1     Running   0          13h
kube-apiserver-ke-cloud            1/1     Running   0          13h
kube-controller-manager-ke-cloud   1/1     Running   0          13h
kube-flannel-ds-amd64-fgtdq        0/1     Error     5          5m55s
kube-flannel-ds-amd64-lrsrh        1/1     Running   0          13h
kube-proxy-vr44d                   1/1     Running   0          13h
kube-scheduler-ke-cloud            1/1     Running   0          13h

Note: if the flannel network plugin was configured in the K8s cluster (see 2.7), the flannel pod scheduled onto the edge node will fail to start, because the edge node does not run kubelet. This does not affect the use of KubeEdge and can be ignored, or flannel can be kept off edge nodes as sketched below.
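
A minimal sketch of that workaround, assuming the kube-flannel.yml you applied defines a nodeAffinity block on the DaemonSet (the coreos manifest of this period does) and that KubeEdge labels edge nodes with node-role.kubernetes.io/edge:

[root@ke-cloud ~]# kubectl -n kube-system edit daemonset kube-flannel-ds-amd64

Under affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions, add:

      - key: node-role.kubernetes.io/edge
        operator: DoesNotExist

With this expression the DaemonSet no longer schedules flannel pods onto edge nodes.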

Four, run the KubeEdge example

Selected example: KubeEdge Counter Demo. The counter is a pseudo device, so users can run this demo without any additional physical devices. The counter runs on the edge; from the cloud side, the user can control it from a web page and also read the counter value there. The schematic diagram is as follows:

[Schematic diagram: the web application on the cloud side controls the counter running on the edge]

Detailed document reference: https://github.com/kubeedge/examples/tree/master/kubeedge-counter-demo

4.1 Preparation

1) This example requires KubeEdge v1.2.1 or later.

[root@ke-cloud ~]# kubectl get node
NAME       STATUS   ROLES        AGE   VERSION
ke-cloud   Ready    master       13h   v1.17.0
ke-edge1   Ready    agent,edge   64s   v1.17.1-kubeedge-v1.3.1

Note: the verification that follows uses the edge node ke-edge1. If you follow this article for your own verification, change the edge node name in the subsequent configuration to match your actual environment.

2) Download the sample code:

[root@ke-cloud ~]# git clone https://github.com/kubeedge/examples.git $GOPATH/src/github.com/kubeedge/examples

4.2 Create device model and device

1) Create a device model

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
[root@ke-cloud crds~]# kubectl create -f kubeedge-counter-model.yaml

2) Create device

Modify matchExpressions according to your actual situation:

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
[root@ke-cloud crds~]# vim kubeedge-counter-instance.yaml
apiVersion: devices.kubeedge.io/v1alpha1
kind: Device
metadata:
  name: counter
  labels:
    description: 'counter'
    manufacturer: 'test'
spec:
  deviceModelRef:
    name: counter-model
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: 'kubernetes.io/hostname'
        operator: In
        values:
        - ke-edge1

status:
  twins:
    - propertyName: status
      desired:
        metadata:
          type: string
        value: 'OFF'
      reported:
        metadata:
          type: string
        value: '0'

[root@ke-cloud crds~]# kubectl create -f kubeedge-counter-instance.yaml
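
Verify that the model and device instance were created (devicemodels and devices are the plural resource names defined by the KubeEdge CRDs):

[root@ke-cloud crds~]# kubectl get devicemodels
[root@ke-cloud crds~]# kubectl get devices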

4.3 Deploy the cloud application

1) Modify the code

The cloud application web-controller-app controls the pi-counter-app application on the edge. The program listens on port 80 by default; here it is changed to 8089, as shown below:

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/web-controller-app
[root@ke-cloud web-controller-app~]# vim main.go
package main

import (
        "github.com/astaxie/beego"
        "github.com/kubeedge/examples/kubeedge-counter-demo/web-controller-app/controller"
)

func main() {
        beego.Router("/", new(controllers.TrackController), "get:Index")
        beego.Router("/track/control/:trackId", new(controllers.TrackController), "get,post:ControlTrack")

        beego.Run(":8089")
}

2) Build an image

Note: when building the image, copy the source code to the corresponding path under GOPATH; if go mod is enabled, disable it (for example, export GO111MODULE=off).

[root@ke-cloud web-controller-app~]# make all
[root@ke-cloud web-controller-app~]# make docker

3) Deploy web-controller-app

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
[root@ke-cloud crds~]# kubectl apply -f kubeedge-web-controller-app.yaml
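
Check the deployment and try the page on the port configured above; the address assumes the pod uses the host network of the cloud node, as its pod IP in section 4.5 suggests:

[root@ke-cloud crds~]# kubectl get pods | grep kubeedge-counter-app
[root@ke-cloud crds~]# curl -s http://192.168.1.66:8089 | head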

4.4 Deploy the edge application

The pi-counter-app application on the edge is controlled by the cloud application; it communicates with the MQTT server to implement a simple counting function.

1) Modify the code and build the image

You need to change GOARCH in the Makefile to amd64 so the container can run on an x86-64 edge node:

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/counter-mapper
[root@ke-cloud counter-mapper~]# vim Makefile
.PHONY: all pi-execute-app docker clean
all: pi-execute-app

pi-execute-app:
        GOARCH=amd64 go build -o pi-counter-app main.go

docker:
        docker build . -t kubeedge/kubeedge-pi-counter:v1.0.0

clean:
        rm -f pi-counter-app

[root@ke-cloud counter-mapper~]# make all
[root@ke-cloud counter-mapper~]# make docker

2) Deploy Pi Counter App

[root@ke-cloud ~]# cd $GOPATH/src/github.com/kubeedge/examples/kubeedge-counter-demo/crds
[root@ke-cloud crds~]# kubectl apply -f kubeedge-pi-counter-app.yaml

Note: to prevent the Pod deployment from getting stuck in `ContainerCreating`, publish the image to the edge node directly with docker save, scp, and docker load:
[root@ke-cloud ~]# docker save -o kubeedge-pi-counter.tar kubeedge/kubeedge-pi-counter:v1.0.0
[root@ke-cloud ~]# scp kubeedge-pi-counter.tar [email protected]:/root
[root@ke-edge1 ~]# docker load -i kubeedge-pi-counter.tar

4.5 Experience the Demo

Now the cloud and edge parts of the KubeEdge demo have been deployed, as follows:

[root@ke-cloud ~]# kubectl get pods -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
kubeedge-counter-app-758b9b4ffd-f8qjj   1/1     Running   0          26m     192.168.1.66   ke-cloud   <none>           <none>
kubeedge-pi-counter-c69698d6-rb4xz      1/1     Running   0          2m      192.168.1.56   ke-edge1   <none>           <none>

We can now test the demo:

1) Execute ON command

Select ON on the web page and click Execute. You can view the execution result with the following command on the edge node:

[root@ke-edge1 ~]# docker logs -f counter-container-id
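
The container ID (or name) can be found on the edge node with:

[root@ke-edge1 ~]# docker ps | grep counter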

[Screenshot: counter log output on the edge node]

2) View counter STATUS

Select STATUS on the web page and click Execute; the current status of the counter is returned on the web page, as shown below:

[Screenshot: counter status displayed on the web page]

3) Execute the OFF command

Select OFF on the web page and click Execute. You can view the execution result with the following command on the edge node:

[root@ke-edge1 ~]# docker logs -f counter-container-id

[Screenshot: counter log output after the OFF command]

End~

Official document:  https://kubeedge.io/zh/blog/kubeedge-deployment-manual/

Welcome everyone to join the discussion!

Origin: blog.csdn.net/wxb880114/article/details/109193543