Installing Kubernetes 1.13 the easy way with kubeadm

Edit the hosts file:
cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2

Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
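
You can confirm the state with getenforce (a quick check, not in the original text); it should report Permissive after setenforce 0, or Disabled after a reboot with the config change:

getenforce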


Create /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Load the br_netfilter module and apply the settings:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
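
As a quick sanity check (an addition, not from the original), you can query the keys back; each should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward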

1.2 Prerequisites for enabling IPVS in kube-proxy
Since IPVS has been merged into the mainline kernel, enabling IPVS mode for kube-proxy requires the following kernel modules to be loaded. Run this script on all Kubernetes nodes (node1 and node2):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm):

yum -y install ipset ipvsadm


Add the Docker yum repository:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

List the available Docker versions:
yum list docker-ce.x86_64  --showduplicates |sort -r

yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-18.06.1.ce-3.el7

systemctl start docker
systemctl enable docker


Installing Kubernetes requires the kubelet, kubeadm, and related packages. The official yum repository is packages.cloud.google.com, which is not reachable from mainland China, so we use the Aliyun mirror instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum makecache fast
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service

...
Installed:
  kubeadm.x86_64 0:1.13.0-0                                    kubectl.x86_64 0:1.13.0-0                                                           kubelet.x86_64 0:1.13.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0                                  kubernetes-cni.x86_64 0:0.6.0-0
 

Disable system swap as follows:

  swapoff -a

Edit /etc/fstab and comment out the automatic swap mount so swap stays off after a reboot; the commented entry is sketched below.
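
On a default CentOS 7 install the commented-out entry looks roughly like this (the device path is an assumption; yours may differ):

# /dev/mapper/centos-swap swap swap defaults 0 0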
Use free -m to confirm that swap is off. To adjust the swappiness parameter, add the following line to /etc/sysctl.d/k8s.conf:

  vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.

Modify /etc/sysconfig/kubelet so kubelet tolerates enabled swap:

echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet


Run kubeadm init:
kubeadm init \
   --kubernetes-version=v1.13.0 \
   --pod-network-cidr=10.244.0.0/16 \
   --apiserver-advertise-address=10.199.5.54 \
   --ignore-preflight-errors=Swap
   
The output lists the images that need to be pulled:
kube-apiserver:v1.13.0
kube-controller-manager:v1.13.0
kube-scheduler:v1.13.0
kube-proxy:v1.13.0
pause:3.1
etcd:3.2.24
coredns:1.2.6

Use a script to download the images from a mirror and retag them as k8s.gcr.io:
[root@k8s-master ~]# cat k8s.sh
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.0
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.0
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.0
docker pull mirrorgooglecontainers/kube-proxy:v1.13.0
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-proxy:v1.13.0  k8s.gcr.io/kube-proxy:v1.13.0
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0
docker tag mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1


docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.0
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.0
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.0
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.0
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6


Verify that the images were downloaded:
docker images

Run kubeadm init:
kubeadm init \
   --kubernetes-version=v1.13.0 \
   --pod-network-cidr=10.244.0.0/16 \
   --apiserver-advertise-address=192.168.142.132 \
   --ignore-preflight-errors=Swap

Output:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.142.132:6443 --token 2ac8ey.l5run8ujg7wkkykg --discovery-token-ca-cert-hash sha256:5183cb2ecab2084e80caa926d93d6e0ae515dfa646088b5efa5ff13a8ac11347
The kubeadm join line above is the command and token that nodes use to join the cluster.


The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status and confirm that each component is Healthy:
kubectl get cs
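
On a healthy cluster the output should look roughly like this (illustrative):
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}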


This part can be skipped ###################
If cluster initialization runs into problems, clean up with the following commands:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
######################## Note: after a reset you must redo everything created above: re-run kubeadm init, re-copy and re-chown the kubeconfig, and re-run kubectl apply -f kube-flannel.yml.

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

To change the configuration, download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1    ### specify the local network interface
......
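
After editing, re-apply the manifest with the same command as before:

kubectl apply -f kube-flannel.yml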

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state.

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m    10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m    10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m     192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>


On a cluster initialized with kubeadm, Pods are not scheduled to the master node for security reasons; in other words, the master does not take workloads. This is because the current master node node1 carries the node-role.kubernetes.io/master:NoSchedule taint:

kubectl describe node node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove the taint so that node1 can take workloads:

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted


2.5 Testing DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$

Inside the container, run nslookup kubernetes.default to confirm resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local


2.6 Adding nodes to the Kubernetes cluster

Next we add the node1 and node2 hosts to the Kubernetes cluster. On node1 and node2, run the join command that kubeadm init generated:
 kubeadm join 10.0.0.11:6443 --token i4us8x.pw2f3botcnipng8e --discovery-token-ca-cert-hash sha256:d16ac747c2312ae829aa29a3596f733f920ca3d372d9f1b34d33c938be067e51

Check the nodes:
 kubectl get nodes
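
Once the nodes have joined, the output should look roughly like this (illustrative; names, ages, and versions will vary):
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   30m   v1.13.0
node2   Ready    <none>   2m    v1.13.0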
==============================================================================
To remove a node (here k8s-node1) from the cluster, run the following on the master node:

kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node1

On node2 (the worker, i.e. the Docker machine), run:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/


On node1 (the Kubernetes master), run:

kubectl delete node node2

To re-add a node, remove the files under /etc/kubernetes/manifests/ (which also stops the apiserver and frees its port) and delete /var/lib/etcd; see the sketch below.
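
A minimal sketch of that cleanup, assuming the default paths:

# removing the static pod manifests stops the apiserver and frees its port
rm -rf /etc/kubernetes/manifests/*
# remove the old etcd data
rm -rf /var/lib/etcd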
===============================================================================

2. Change the kube-proxy configuration to enable IPVS

kubectl edit configmap kube-proxy -n kube-system
Find the following section:

    ipvs:
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"                          # set this
    nodePortAddresses: null

The mode field was originally empty, which defaults to iptables mode; change it to ipvs.

Then restart the kube-proxy pods on every cluster node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'

Check that the kube-proxy pods have restarted:
kubectl get pod -n kube-system | grep kube-proxy

Check the logs to confirm kube-proxy started in IPVS mode:
kubectl logs kube-proxy-pf55q -n kube-system
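
If IPVS is in effect, the log should contain a line similar to this (illustrative):

Using ipvs Proxier.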

scheduler defaults to empty, which means the round-robin (rr) load-balancing algorithm.
ipvsadm works in user space and writes rules for the IPVS kernel framework; the rules define which addresses are cluster services and which are the backend real servers. We can create a cluster service with the ipvsadm command:

# ipvsadm -A -t 192.168.2.xx:80 -s rr                 # create a virtual service with the rr scheduler
# ipvsadm -a -t 192.168.2.xx:80 -r 192.168.10.xx
# ipvsadm -a -t 192.168.2.xx:80 -r 192.168.11.xx      # add two real servers
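
To list the resulting rules (also handy for inspecting what kube-proxy programs in IPVS mode):

ipvsadm -Ln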


3.1 Installing Helm

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.0-linux-amd64.tar.gz
tar -zxvf helm-v2.12.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
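
A quick check that the client binary is on the PATH (helm v2; not in the original text):

helm version --client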

Because Helm pulls from storage.googleapis.com by default, if the machine you are running on cannot reach that domain, you can install with the following commands instead:

    helm init --client-only --stable-repo-url https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts/
    helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
    helm repo update
# Install the server side (Tiller)
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
 
# Install a TLS-enabled Tiller; reference: https://github.com/gjmzj/kubeasz/blob/master/docs/guide/helm.md
helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 --tiller-tls-cert /etc/kubernetes/ssl/tiller001.pem --tiller-tls-key /etc/kubernetes/ssl/tiller001-key.pem --tls-ca-cert /etc/kubernetes/ssl/ca.pem --tiller-namespace kube-system --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts


    # First remove the original repo
    helm repo remove stable
    # Add the new repo address
    helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    # Update the repos
    helm repo update

☆ Search charts
helm search mysql

☆ Inspect package details
helm inspect stable/mysql

☆ Deploy a package
helm install stable/mysql

Before deploying, you can customize the package options:
# Query the supported options
helm inspect values stable/mysql

# Customize the root password and persistent storage
helm install --name db-mysql --set mysqlRootPassword=anoyi  stable/mysql
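
You can then list the deployed releases (helm v2):

helm ls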

Uninstall Helm:
helm reset --force
===========================================================================================================#######################
    If the client and server versions differ, commands will fail with an error like:
    helm get cs
    Error: incompatible versions client[v2.10.0] server[v2.9.0]

Solution:

helm init --upgrade
Initialize helm (client only):
helm init --client-only
# enables the helm push command
helm plugin install https://github.com/chartmuseum/helm-push
Update the repos:
helm repo remove stable
   ########   result: the stable repo is removed (local can be removed the same way)
# The Google repo is not reachable from mainland China without a proxy. We use our company's internal repo; you can use Alibaba's instead:
helm repo add ali https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Search for a chart:

helm search nginx
###################################################################################################################

Uninstall Kubernetes and clean up:
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
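
If IPVS mode was enabled, you may also want to clear any leftover IPVS rules (assuming ipvsadm is installed):

ipvsadm --clear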

Guide: http://www.cnblogs.com/benjamin77/p/9783797.html

Reposted from blog.csdn.net/zsw208703/article/details/88062028