K8s Installation and Deployment

Copyright notice: This is the blogger's original article; do not reproduce without the blogger's permission. https://blog.csdn.net/qq_40195432/article/details/84988405

k8s-init.sh

#!/bin/bash

Install basic tools and disable the firewall

yum install vim wget git net-tools lrzsz unzip zip libtool* ntp -y
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux
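Note that the sed edit only takes effect after a reboot; to switch SELinux off in the running session as well, one extra line can be added here:

setenforce 0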
################################################

Configure the hosts file

cat >> /etc/hosts << EOF
192.168.1.100 master
192.168.1.101 k8s1
192.168.1.102 k8s2
EOF
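These names only resolve usefully if each machine's hostname matches its entry; one way to set them (run the matching line on the corresponding host):

hostnamectl set-hostname master   # on 192.168.1.100
hostnamectl set-hostname k8s1     # on 192.168.1.101
hostnamectl set-hostname k8s2     # on 192.168.1.102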
####################################################
echo "nameserver 114.114.114.114" >> /etc/resolv.conf
################################################

Configure the Aliyun yum repos for k8s and Docker, and import the yum GPG keys

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat > kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF
rpm --import http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
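Before installing anything, a quick sanity check that both repos are reachable can save time (optional):

yum clean all
yum repolist | grep -E 'kubernetes|docker-ce'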
#################################################

Install docker-18.06 and kubelet-1.12.2-0

yum -y remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
yum install -y yum-utils device-mapper-persistent-data lvm2
yum -y install docker-ce-18.06.0.ce-3.el7 kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2
systemctl enable kubelet.service
systemctl enable docker.service
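The image-caching script further below assumes the Docker daemon is already running; if everything is run in one session, start it explicitly first:

systemctl start docker.service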
################################################

Configure the kubelet's swap handling

echo KUBELET_EXTRA_ARGS="--fail-swap-on=false" > /etc/sysconfig/kubelet
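--fail-swap-on=false only tells the kubelet to tolerate enabled swap. The upstream recommendation is to disable swap outright instead, which would look like this (optional alternative):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # keep swap off across reboots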

cat >> /etc/security/limits.conf << EOF
* soft nofile 262140
* hard nofile 262140
root soft nofile 262140
root hard nofile 262140
* soft core unlimited
* hard core unlimited
root soft core unlimited
root hard core unlimited
EOF
###############################################
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
###############################################
mkdir -pv /home/k8s
cd /home/k8s
cat > /home/k8s/image-init.sh << EOF
docker pull mirrorgooglecontainers/kube-apiserver:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.2

docker tag mirrorgooglecontainers/kube-proxy:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
docker tag mirrorgooglecontainers/kube-apiserver:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-apiserver:v1.12.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.12.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.12.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.12.2
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.2
EOF
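Then run the script once on every node so all the images are cached locally; a suggested invocation:

bash /home/k8s/image-init.sh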
###############################################
docker -v
kubelet --version
At this point all the basic preparation is done. Next, let's install the cluster!
#################################################

1. Basic preparation and the base installation are done.
2. Now run the Docker pull script to cache the images.
3. Import the images:
[root@master k8s]# docker load --input nginx-ingress-controller.tar
a0a0851f9638: Loading layer [==================================================>] 52.12MB/52.12MB
a1580168d7c5: Loading layer [==================================================>] 31.74kB/31.74kB
fc49d3f2539c: Loading layer [==================================================>] 1.891MB/1.891MB
20706f1034da: Loading layer [==================================================>] 397.2MB/397.2MB
788172f73e09: Loading layer [==================================================>] 3.072kB/3.072kB
e0c5b54f3061: Loading layer [==================================================>] 3.072kB/3.072kB
5428a3856b67: Loading layer [==================================================>] 799.7kB/799.7kB
a5002ff4a9fc: Loading layer [==================================================>] 33.15MB/33.15MB
034d7d167301: Loading layer [==================================================>] 61.44kB/61.44kB
6679ebebb97e: Loading layer [==================================================>] 33MB/33MB
7056b839edcc: Loading layer [==================================================>] 3.072kB/3.072kB
40998b31d24d: Loading layer [==================================================>] 3.072kB/3.072kB
Loaded image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0

Check that all of these images were imported successfully:
[root@master k8s]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.12.2 15e9da1ca195 5 weeks ago 96.5MB
k8s.gcr.io/kube-apiserver v1.12.2 51a9c329b7c5 5 weeks ago 194MB
k8s.gcr.io/kube-controller-manager v1.12.2 15548c720a70 5 weeks ago 164MB
k8s.gcr.io/kube-scheduler v1.12.2 d6d57c76136c 5 weeks ago 58.3MB
quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.20.0 a3f21ec4bd11 8 weeks ago 513MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 3 months ago 39.2MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 11 months ago 742kB

4. Initialize the cluster:
[root@master k8s]# kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.100 --ignore-preflight-errors=Swap
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.1.100 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 22.002021 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: augswm.kfiora4s8vc884vj
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.1.100:6443 --token augswm.kfiora4s8vc884vj --discovery-token-ca-cert-hash sha256:d2d167c13319e7162a19e456bebdd1ed3dfa97175c700cbfb54f5f7955f6a940
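The token in this join command expires after 24 hours by default; if it has expired by the time a node joins, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command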

5. Configure environment variables and install the flannel plugin:

[root@master k8s]# kubectl get cs
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@master k8s]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master k8s]# source ~/.bash_profile
[root@master k8s]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
[root@master k8s]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2018-12-02 22:35:17--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.108.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10599 (10K) [text/plain]
Saving to: 'kube-flannel.yml'

100%[===================================================================================================================================================>] 10,599      --.-K/s   in 0.001s

2018-12-02 22:35:18 (10.3 MB/s) - 'kube-flannel.yml' saved [10599/10599]
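Before applying the manifest, it's worth checking that the Network value inside kube-flannel.yml matches the --pod-network-cidr passed to kubeadm init (a quick check):

grep -n '"Network"' kube-flannel.yml   # expect "Network": "10.244.0.0/16"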

[root@master k8s]# vim kube-flannel.yml
[root@master k8s]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@master k8s]# kubectl get ds -l app=flannel -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-flannel-ds-amd64 1 1 1 1 1 beta.kubernetes.io/arch=amd64 24s
kube-flannel-ds-arm 0 0 0 0 0 beta.kubernetes.io/arch=arm 24s
kube-flannel-ds-arm64 0 0 0 0 0 beta.kubernetes.io/arch=arm64 24s
kube-flannel-ds-ppc64le 0 0 0 0 0 beta.kubernetes.io/arch=ppc64le 24s
kube-flannel-ds-s390x 0 0 0 0 0 beta.kubernetes.io/arch=s390x 24s
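To confirm the flannel pod itself reaches Running before moving on (optional):

kubectl get pod -n kube-system -l app=flannel -o wide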

6. Check the master node's status:
[root@master k8s]# kubectl describe node master
Name: master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=master
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"a6:f4:6d:62:84:4f"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.1.100
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 02 Dec 2018 22:33:03 +0800
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


OutOfDisk False Sun, 02 Dec 2018 22:36:44 +0800 Sun, 02 Dec 2018 22:32:56 +0800 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Sun, 02 Dec 2018 22:36:44 +0800 Sun, 02 Dec 2018 22:32:56 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 02 Dec 2018 22:36:44 +0800 Sun, 02 Dec 2018 22:32:56 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 02 Dec 2018 22:36:44 +0800 Sun, 02 Dec 2018 22:32:56 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 02 Dec 2018 22:36:44 +0800 Sun, 02 Dec 2018 22:36:14 +0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.1.100
Hostname: master
Capacity:
cpu: 1
ephemeral-storage: 17878Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1868660Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 16871797528
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1766260Ki
pods: 110
System Info:
Machine ID: 121e426a736f4a6794877bd08ccfdb3e
System UUID: 564D18FA-C5A1-88F8-2D20-C53DF278CAB5
Boot ID: 367cc475-d82c-43e3-b16b-bff71614e2e4
Kernel Version: 3.10.0-327.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.0
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits


kube-system coredns-576cbf47c7-v7lrg 100m (10%) 0 (0%) 70Mi (4%) 170Mi (9%)
kube-system coredns-576cbf47c7-x47xt 100m (10%) 0 (0%) 70Mi (4%) 170Mi (9%)
kube-system etcd-master 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-master 250m (25%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-master 200m (20%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-flannel-ds-amd64-rjcxr 100m (10%) 100m (10%) 50Mi (2%) 50Mi (2%)
kube-system kube-proxy-99zb8 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-master 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (85%) 100m (10%)
memory 190Mi (11%) 390Mi (22%)
Events:
Type Reason Age From Message


Normal Starting 3m52s kubelet, master Starting kubelet.
Normal NodeHasSufficientDisk 3m52s (x6 over 3m52s) kubelet, master Node master status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 3m52s (x6 over 3m52s) kubelet, master Node master status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m52s (x5 over 3m52s) kubelet, master Node master status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m52s (x6 over 3m52s) kubelet, master Node master status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 3m52s kubelet, master Updated Node Allocatable limit across pods
Normal Starting 3m22s kube-proxy, master Starting kube-proxy.

[root@master k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kube-system coredns-576cbf47c7-v7lrg 1/1 Running 0 3m37s 10.244.0.2 master
kube-system coredns-576cbf47c7-x47xt 1/1 Running 0 3m37s 10.244.0.3 master
kube-system etcd-master 1/1 Running 0 2m54s 192.168.1.100 master
kube-system kube-apiserver-master 1/1 Running 0 2m47s 192.168.1.100 master
kube-system kube-controller-manager-master 1/1 Running 0 2m59s 192.168.1.100 master
kube-system kube-flannel-ds-amd64-rjcxr 1/1 Running 0 67s 192.168.1.100 master
kube-system kube-proxy-99zb8 1/1 Running 0 3m37s 192.168.1.100 master
kube-system kube-scheduler-master 1/1 Running 0 2m30s 192.168.1.100 master
[root@master k8s]#

[root@master k8s]# kubectl describe node master | grep Taint
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master k8s]# kubectl taint nodes master node-role.kubernetes.io/master-
node/master untainted
[root@master k8s]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-mfmz9:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
[ root@curl-5cc7b478b6-mfmz9:/ ]$ exit
Session ended, resume using 'kubectl attach curl-5cc7b478b6-mfmz9 -c curl -i -t' command when the pod is running
[root@master k8s]#
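The curl test deployment can be cleaned up once DNS is confirmed working (optional):

kubectl delete deployment curl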

Run the join command on the other two nodes to join them to the master (both nodes must run it):

kubeadm join 192.168.1.100:6443 --token augswm.kfiora4s8vc884vj --discovery-token-ca-cert-hash sha256:d2d167c13319e7162a19e456bebdd1ed3dfa97175c700cbfb54f5f7955f6a940


[root@k8s1 k8s]# kubeadm join 192.168.1.100:6443 --token augswm.kfiora4s8vc884vj --discovery-token-ca-cert-hash sha256:d2d167c13319e7162a19e456bebdd1ed3dfa97175c700cbfb54f5f7955f6a940
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with the following methods:

  1. Run 'modprobe -- ' to load missing kernel modules;
  2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "192.168.1.100:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.100:6443"
[discovery] Requesting info from "https://192.168.1.100:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.100:6443"
[discovery] Successfully established connection with API Server "192.168.1.100:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s1" as an annotation

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@k8s1 k8s]#

Check on the master node (a newly joined node takes a few minutes to go from NotReady to Ready):

[root@master k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s1 Ready <none> 2m25s v1.12.2
k8s2 Ready <none> 2m18s v1.12.3
master Ready master 10m v1.12.2

[root@master k8s]# wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
--2018-11-21 15:40:43--  https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.160.112, 2404:6800:4008:803::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.160.112|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19149273 (18M) [application/x-tar]
Saving to: 'helm-v2.11.0-linux-amd64.tar.gz'

100%[===================================================================================================================================================>] 19,149,273  11.7MB/s   in 1.6s

2018-11-21 15:40:45 (11.7 MB/s) - 'helm-v2.11.0-linux-amd64.tar.gz' saved [19149273/19149273]

[root@master k8s]# tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
linux-amd64/
linux-amd64/tiller
linux-amd64/README.md
linux-amd64/helm
linux-amd64/LICENSE
[root@master k8s]# ll
total 431392
-rw-r--r--. 1 root root  19149273 Sep 26 02:18 helm-v2.11.0-linux-amd64.tar.gz
-rw-r--r--. 1 root root     10599 Nov 21 15:03 kube-flannel.yml
-rw-r--r--. 1 root root 422579800 Oct 26 23:16 kubernetes-server-linux-amd64.tar.gz
drwxr-xr-x. 2 root root        60 Sep 26 02:17 linux-amd64

[root@master k8s]# cd linux-amd64/
[root@master linux-amd64]# ll
total 62288
-rwxr-xr-x. 1 root root 32062656 Sep 26 02:16 helm
-rw-r--r--. 1 root root    11343 Sep 26 02:17 LICENSE
-rw-r--r--. 1 root root     3126 Sep 26 02:17 README.md
-rwxr-xr-x. 1 root root 31701376 Sep 26 02:16 tiller
[root@master linux-amd64]# cp -ar helm /usr/bin/
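At this point only the client binary is installed (Tiller comes later), so a quick check that it runs looks like:

helm version --client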
[root@master linux-amd64]# cd ..
[root@master k8s]# vim rbac-config.yaml
[root@master k8s]# rz

[root@master k8s]# cat rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@master k8s]#

[root@master k8s]# kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@master k8s]# helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run helm init with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
[root@master k8s]#

[root@master k8s]# docker pull mirrorgooglecontainers/defaultbackend:1.4
1.4: Pulling from mirrorgooglecontainers/defaultbackend
5f68dfd9f8d7: Pull complete
Digest: sha256:05cb942c5ff93ebb6c63d48737cd39d4fa1c6fa9dc7a4d53b2709f5b3c8333e8
Status: Downloaded newer image for mirrorgooglecontainers/defaultbackend:1.4
[root@master k8s]# docker tag mirrorgooglecontainers/defaultbackend:1.4 k8s.gcr.io/defaultbackend:1.4
[root@master k8s]# helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.11.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!

[root@master k8s]# kubectl get pod -n kube-system -l app=helm
NAME READY STATUS RESTARTS AGE
tiller-deploy-7876979db7-m42zc 1/1 Running 0 45s
[root@master k8s]# helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
[root@master k8s]# kubectl label node master node-role.kubernetes.io/edge=
node/master labeled
[root@master k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s1 Ready <none> 10m v1.12.2
k8s2 Ready <none> 9m54s v1.12.3
master Ready edge,master 18m v1.12.2
[root@master k8s]# helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
...Unable to get an update from the "stable" chart repository (https://kubernetes-charts.storage.googleapis.com):
read tcp 192.168.1.100:54828->172.217.160.80:443: read: connection reset by peer
Update Complete. ⎈ Happy Helming!⎈
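If the stable repo keeps failing against the googleapis URL, one possible workaround is repointing it at the Aliyun mirror used earlier (assuming the mirror carries the charts you need):

helm repo remove stable
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts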
[root@master k8s]# vim ingress-nginx.yaml
[root@master k8s]# cat ingress-nginx.yaml
controller:
  service:
    externalIPs:
    - 192.168.1.100
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
[root@master k8s]#

[root@master k8s]# helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@master k8s]# helm install stable/nginx-ingress -n nginx-ingress --namespace ingress-nginx -f ingress-nginx.yaml
NAME: nginx-ingress
LAST DEPLOYED: Sun Dec 2 22:56:15 2018
NAMESPACE: ingress-nginx
STATUS: DEPLOYED

RESOURCES:
==> v1/ServiceAccount
NAME AGE
nginx-ingress 1s

==> v1beta1/Role
nginx-ingress 1s

==> v1/Service
nginx-ingress-controller 1s
nginx-ingress-default-backend 1s

==> v1beta1/Deployment
nginx-ingress-controller 1s
nginx-ingress-default-backend 1s

==> v1/Pod(related)

NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-8658f85fd-4t2f9 0/1 ContainerCreating 0 1s
nginx-ingress-default-backend-684f76869d-5dkh2 0/1 ContainerCreating 0 1s

==> v1/ConfigMap

NAME AGE
nginx-ingress-controller 1s

==> v1beta1/ClusterRole
nginx-ingress 1s

==> v1beta1/ClusterRoleBinding
nginx-ingress 1s

==> v1beta1/RoleBinding
nginx-ingress 1s

NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w nginx-ingress-controller'

An example Ingress that makes use of the controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: exampleService
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
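Such a Secret doesn't have to be written by hand; given an existing certificate/key pair (the paths below are placeholders), kubectl can generate it:

kubectl create secret tls example-tls -n foo --cert=path/to/tls.crt --key=path/to/tls.key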

Every node must import the dashboard image:

[root@master k8s]# docker load --input k8s.gcr.io_kubernetes-dashboard-amd64_v1.10.0.tar
5f222ffea122: Loading layer [==================================================>] 123MB/123MB
Loaded image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
[root@master k8s]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.12.2 15e9da1ca195 5 weeks ago 96.5MB
k8s.gcr.io/kube-apiserver v1.12.2 51a9c329b7c5 5 weeks ago 194MB
k8s.gcr.io/kube-controller-manager v1.12.2 15548c720a70 5 weeks ago 164MB
k8s.gcr.io/kube-scheduler v1.12.2 d6d57c76136c 5 weeks ago 58.3MB
quay.io/kubernetes-ingress-controller/nginx-ingress-controller 0.20.0 a3f21ec4bd11 8 weeks ago 513MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 3 months ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 3 months ago 122MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 11 months ago 742kB
mirrorgooglecontainers/defaultbackend 1.4 846921f0fe0e 13 months ago 4.84MB
k8s.gcr.io/defaultbackend 1.4 846921f0fe0e 13 months ago 4.84MB
radial/busyboxplus curl 71fa7369f437 4 years ago 4.21MB
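The kubernetes-dashboard.yaml values file used below was not shown in the original post; judging from the output (image v1.10.0, dashboard URL https://k8s.zhang.com), a plausible sketch in the same heredoc style as the earlier files might be:

cat > kubernetes-dashboard.yaml << EOF
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.0
ingress:
  enabled: true
  hosts:
  - k8s.zhang.com
rbac:
  clusterAdminRole: true
EOF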

[root@master k8s]# helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
NAME: kubernetes-dashboard
LAST DEPLOYED: Sun Dec 2 23:06:01 2018
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME AGE
kubernetes-dashboard 1s

==> v1beta1/Deployment
kubernetes-dashboard 1s

==> v1beta1/Ingress
kubernetes-dashboard 1s

==> v1/Secret
kubernetes-dashboard 1s

==> v1/ServiceAccount
kubernetes-dashboard 1s

==> v1beta1/ClusterRoleBinding
kubernetes-dashboard 1s

NOTES:


*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***


From outside the cluster, the server URL(s) are:
https://k8s.zhang.com

[root@master k8s]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-5jqd8 kubernetes.io/service-account-token 3 33s
[root@master k8s]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-5jqd8
Name: kubernetes-dashboard-token-5jqd8
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: c7af05f0-f643-11e8-a686-000c2978cab5

Type: kubernetes.io/service-account-token

Data

ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi01anFkOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImM3YWYwNWYwLWY2NDMtMTFlOC1hNjg2LTAwMGMyOTc4Y2FiNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.0AafHARCF9PriYKZ7E99_e8eH-SBUiWfE17Pf_g2wFVb9Y5xNGpNvbtrpWCWA0FtM7rk0Clrwh9CvLZHFrWFCdIxNs584XpLw-6mJZu-LPcFktNVoeO3JxrvQW-LJwxtS76A-ptIW-JVQ9RDIa6Slb0v8htONZRQty-eAFnmFFyfXfAggFR2lAtsAUxD_VlRJNbq-SskGA7uMx_EJwvsFQU8_W4GVyZqOEjc0xtLCy2F9cNGv7yhUYQVxKP-anG0jlAwJj2Qjh6lSprKNpsMdsiHXkhDalrQFu_upgx0HB3eb-fCzTYMPG5o55qgZCE_mFDLPH1OpuijhqNfB31NGg
[root@master k8s]#
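The token can also be extracted in one step instead of reading it out of the describe output (a convenience one-liner):

kubectl -n kube-system get secret kubernetes-dashboard-token-5jqd8 -o jsonpath='{.data.token}' | base64 -d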

[root@master k8s]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-v7lrg 1/1 Running 0 34m
coredns-576cbf47c7-x47xt 1/1 Running 0 34m
etcd-master 1/1 Running 0 33m
kube-apiserver-master 1/1 Running 0 33m
kube-controller-manager-master 1/1 Running 0 33m
kube-flannel-ds-amd64-h8wvj 1/1 Running 0 26m
kube-flannel-ds-amd64-qsgmg 1/1 Running 0 26m
kube-flannel-ds-amd64-rjcxr 1/1 Running 0 31m
kube-proxy-7d9tq 1/1 Running 0 26m
kube-proxy-82b49 1/1 Running 0 26m
kube-proxy-99zb8 1/1 Running 0 34m
kube-scheduler-master 1/1 Running 0 32m
kubernetes-dashboard-5746dd4544-2sbdr 1/1 Running 0 83s
tiller-deploy-7876979db7-m42zc 1/1 Running 0 17m
[root@master k8s]#

Configure the hosts file on your local machine (the one running the browser), pointing the dashboard hostname at the ingress external IP:
192.168.1.100 k8s.zhang.com

Open in a browser: https://k8s.zhang.com/

[root@master k8s]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-v7lrg 1/1 Running 0 38m
coredns-576cbf47c7-x47xt 1/1 Running 0 38m
etcd-master 1/1 Running 0 37m
kube-apiserver-master 1/1 Running 0 37m
kube-controller-manager-master 1/1 Running 0 37m
kube-flannel-ds-amd64-h8wvj 1/1 Running 0 30m
kube-flannel-ds-amd64-qsgmg 1/1 Running 0 30m
kube-flannel-ds-amd64-rjcxr 1/1 Running 0 35m
kube-proxy-7d9tq 1/1 Running 0 30m
kube-proxy-82b49 1/1 Running 0 30m
kube-proxy-99zb8 1/1 Running 0 38m
kube-scheduler-master 1/1 Running 0 36m
kubernetes-dashboard-5746dd4544-2sbdr 1/1 Running 0 5m21s
tiller-deploy-7876979db7-m42zc 1/1 Running 0 21m
