kubernetes: installing with kubeadm behind the Great Firewall

Behind the Great Firewall, nothing works without a way over the wall. If you have a server with unrestricted internet access, or a proxy that provides it, read on.

1. Disable the firewall

systemctl disable firewalld  
systemctl stop firewalld

2. Install docker

yum install docker-1.12.6-32.git88a4867.el7.centos.x86_64
systemctl start docker
systemctl enable docker 

3. The key step: on a CentOS machine that can reach the open internet, download the rpm packages:

yum install yum-utils
yumdownloader kubelet kubeadm socat

    This downloads five rpm packages in total. Copy them over to the target server.
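The download-and-transfer step can be sketched like this. TARGET_HOST is a placeholder, and the `touch` lines stand in for the rpms yumdownloader would actually fetch (the exact five filenames depend on what the repo resolves):

```shell
# Sketch of bundling the downloaded rpms for transfer. The touch lines are
# stand-ins for the real rpms; TARGET_HOST is a placeholder hostname.
mkdir -p /tmp/k8s-rpms && cd /tmp/k8s-rpms
touch kubelet.rpm kubeadm.rpm kubectl.rpm kubernetes-cni.rpm socat.rpm
tar czf k8s-rpms.tar.gz ./*.rpm
# On a real run you would now copy the bundle over:
echo "scp k8s-rpms.tar.gz root@TARGET_HOST:/root/"
```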

4. Back on the target server, install kubeadm:

 rpm -i *.rpm
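One step the post skips but that kubeadm installs of this era generally need before init: enabling the kubelet service. Since that requires root and systemd, the sketch below only writes the commands into a helper script rather than running them:

```shell
# Write a small helper that enables the kubelet unit; on the target host you
# would run it (or the two systemctl lines directly) as root before kubeadm init.
# Note: this step is standard kubeadm practice, not from the original post.
cat > /tmp/enable-kubelet.sh <<'EOF'
#!/bin/sh
systemctl enable kubelet
systemctl start kubelet
EOF
chmod +x /tmp/enable-kubelet.sh
```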

5. Pull the docker images. See the companion post "Taking the roundabout route: a mirror of the gcr.io kubernetes images".

images=(
  kube-apiserver-amd64:v1.6.4
  kube-proxy-amd64:v1.6.4
  kube-controller-manager-amd64:v1.6.4
  kube-scheduler-amd64:v1.6.4
  kubernetes-dashboard-amd64:v1.6.2
  k8s-dns-sidecar-amd64:1.14.4
  k8s-dns-kube-dns-amd64:1.14.4
  k8s-dns-dnsmasq-nanny-amd64:1.14.4
  etcd-amd64:3.0.17
  pause-amd64:3.0
)
for imageName in "${images[@]}"; do
  docker pull rickgong/$imageName
  docker tag rickgong/$imageName gcr.io/google_containers/$imageName
done
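To sanity-check the name mapping before pulling anything, the loop can be dry-run with echo instead of docker (trimmed to two images for brevity):

```shell
# Dry run of the mirror-to-gcr.io retag loop: print each pull/tag pair instead
# of invoking docker, to verify the name mapping.
cmds=""
for imageName in etcd-amd64:3.0.17 pause-amd64:3.0; do
  cmds="$cmds docker pull rickgong/$imageName ; docker tag rickgong/$imageName gcr.io/google_containers/$imageName ;"
done
echo "$cmds"
```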

    At this point, running docker images should show the following:

docker.io/rickgong/kubernetes-dashboard-amd64            v1.6.2      
gcr.io/google_containers/kubernetes-dashboard-amd64      v1.6.2      
docker.io/rickgong/k8s-dns-sidecar-amd64                 1.14.4      
gcr.io/google_containers/k8s-dns-sidecar-amd64           1.14.4      
docker.io/rickgong/k8s-dns-kube-dns-amd64                1.14.4      
gcr.io/google_containers/k8s-dns-kube-dns-amd64          1.14.4      
docker.io/rickgong/k8s-dns-dnsmasq-nanny-amd64           1.14.4      
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64     1.14.4      
docker.io/rickgong/kube-apiserver-amd64                  v1.6.4      
gcr.io/google_containers/kube-apiserver-amd64            v1.6.4      
docker.io/rickgong/kube-proxy-amd64                      v1.6.4      
gcr.io/google_containers/kube-proxy-amd64                v1.6.4      
docker.io/rickgong/kube-controller-manager-amd64         v1.6.4      
gcr.io/google_containers/kube-controller-manager-amd64   v1.6.4      
docker.io/rickgong/kube-scheduler-amd64                  v1.6.4      
gcr.io/google_containers/kube-scheduler-amd64            v1.6.4      
docker.io/rickgong/etcd-amd64                            3.0.17      
gcr.io/google_containers/etcd-amd64                      3.0.17      
docker.io/rickgong/pause-amd64                           3.0         
gcr.io/google_containers/pause-amd64                     3.0         

With everything in place, run the initialization:

kubeadm init --kubernetes-version v1.6.4

You should see output similar to the following; be sure to save the token somewhere safe:

[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 40.501453 seconds
[token] Using token: 2cdac4.90143c596cb731c9
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 2cdac4.90143c596cb731c9 192.168.85.69:6443

Run the commands from the hint above:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
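For root-only sessions there is a standard alternative to copying the file: point kubectl at the admin config via the KUBECONFIG environment variable (kubeadm convention, not from the original post):

```shell
# Alternative for root sessions: reference the admin kubeconfig in place
# instead of copying it to ~/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "KUBECONFIG=$KUBECONFIG"
```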

At this point the node is visible:

[root@rick-centlos ~]# kubectl get node
NAME           STATUS     AGE       VERSION
rick-centlos   NotReady   21m       v1.7.2

Optional: if you have only one server but still want to run pods, the following command allows pods to be scheduled on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-

Note that the node is still NotReady. That is because no CNI implementation (flannel, weave, etc.) has been installed yet. Taking weave as an example:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
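The odd-looking URL works by base64-encoding the full `kubectl version` output (newlines stripped) into the `k8s-version` query parameter. A simulation with a fixed string standing in for the live kubectl call:

```shell
# Simulate how the weave URL's k8s-version parameter is built: the kubectl
# version text is base64-encoded with newlines removed. A fixed string stands
# in for real kubectl output here.
fake_version='Client Version: version.Info{GitVersion:"v1.7.2"}'
blob=$(printf '%s' "$fake_version" | base64 | tr -d '\n')
echo "https://cloud.weave.works/k8s/net?k8s-version=$blob"
```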

After a minute or so, the node becomes Ready:

[root@rick-centlos ~]# kubectl get node
NAME           STATUS    AGE       VERSION
rick-centlos   Ready     26m       v1.7.2

Install the dashboard:

kubectl apply -f "https://raw.githubusercontent.com/rickgong/k8s/master/kubernetes/kubernetes-dashboard.yaml"

Then visit http://192.168.85.69:30405/#!/pod?namespace=kube-system (substitute your own IP). Done.


Finally, I recommend installing weave scope, a fantastic tool:

 kubectl apply --namespace kube-system -f "https://raw.githubusercontent.com/rickgong/k8s/master/kubernetes/weave-scope.yaml"

Then open the scope app in a browser: http://192.168.85.69:30406 (substitute your own IP).

The final result:

[root@rick-centlos ~]# kubectl get po --all-namespaces -o wide
NAMESPACE       NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
default         default-http-backend-2198840601-xkp11       1/1       Running   0          52m       10.32.0.9       rick-centlos
default         weave-scope-agent-b5ggg                     1/1       Running   1          19h       192.168.85.69   rick-centlos
default         weave-scope-app-1076900133-3sq9s            1/1       Running   1          19h       10.32.0.6       rick-centlos
kube-system     etcd-rick-centlos                           1/1       Running   1          21h       192.168.85.69   rick-centlos
kube-system     kube-apiserver-rick-centlos                 1/1       Running   1          21h       192.168.85.69   rick-centlos
kube-system     kube-controller-manager-rick-centlos        1/1       Running   1          21h       192.168.85.69   rick-centlos
kube-system     kube-dns-2838158301-3b48w                   3/3       Running   3          21h       10.32.0.2       rick-centlos
kube-system     kube-proxy-vhvv2                            1/1       Running   1          21h       192.168.85.69   rick-centlos
kube-system     kube-scheduler-rick-centlos                 1/1       Running   1          21h       192.168.85.69   rick-centlos
kube-system     kubernetes-dashboard-4169076864-fxm97       1/1       Running   1          21h       10.32.0.7       rick-centlos
kube-system     weave-net-lc4sq                             2/2       Running   2          21h       192.168.85.69   rick-centlos
kube-system     weave-scope-agent-83r35                     1/1       Running   1          20h       192.168.85.69   rick-centlos
kube-system     weave-scope-app-1076900133-f86qd            1/1       Running   1          20h       10.32.0.8       rick-centlos
nginx-ingress   nginx-ingress-controller-2245217873-4t3f6   1/1       Running   0          51m       10.32.0.11      rick-centlos
nginx-ingress   nginx-ingress-controller-2245217873-qg00v   1/1       Running   0          51m       10.32.0.10      rick-centlos
[root@rick-centlos ~]# kubectl get service --all-namespaces -o wide
NAMESPACE       NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE       SELECTOR
default         default-http-backend   10.96.123.174   <none>        80/TCP           53m       k8s-app=default-http-backend
default         kubernetes             10.96.0.1       <none>        443/TCP          21h       <none>
default         weave-scope-app        10.109.111.38   <nodes>       80:30406/TCP     19h       app=weave-scope,name=weave-scope-app,weave-cloud-component=scope,weave-scope-component=app
kube-system     kube-dns               10.96.0.10      <none>        53/UDP,53/TCP    21h       k8s-app=kube-dns
kube-system     kubernetes-dashboard   10.98.166.163   <nodes>       80:30405/TCP     21h       k8s-app=kubernetes-dashboard
nginx-ingress   nginx-ingress          10.103.82.143   <nodes>       8080:30080/TCP   52m       k8s-app=nginx-ingress-lb
[root@rick-centlos ~]# kubectl get deployment --all-namespaces
NAMESPACE       NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default         default-http-backend       1         1         1            1           54m
default         weave-scope-app            1         1         1            1           19h
kube-system     kube-dns                   1         1         1            1           21h
kube-system     kubernetes-dashboard       1         1         1            1           21h
kube-system     weave-scope-app            1         1         1            1           20h
nginx-ingress   nginx-ingress-controller   2         2         2            2           53m
[root@rick-centlos ~]# kubectl get ds --all-namespaces
NAMESPACE     NAME                DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
default       weave-scope-agent   1         1         1         1            1           <none>          19h
kube-system   kube-proxy          1         1         1         1            1           <none>          21h
kube-system   weave-net           1         1         1         1            1           <none>          21h
kube-system   weave-scope-agent   1         1         1         1            1           <none>          20h
[root@rick-centlos ~]# kubectl get rs --all-namespaces
NAMESPACE       NAME                                  DESIRED   CURRENT   READY     AGE
default         default-http-backend-2198840601       1         1         1         55m
default         weave-scope-app-1076900133            1         1         1         19h
kube-system     kube-dns-2838158301                   1         1         1         21h
kube-system     kubernetes-dashboard-4169076864       1         1         1         21h
kube-system     weave-scope-app-1076900133            1         1         1         20h
nginx-ingress   nginx-ingress-controller-2245217873   2         2         2         54m

Done.

Reposted from rickgong.iteye.com/blog/2387740