Setting up Kubernetes v1.5.2 on a three-node CentOS 7 cluster, deploying nginx and tomcat — a quick, practical guide to the Kubernetes tooling

Copyright notice: this is an original post by the author; do not repost without permission. https://blog.csdn.net/c5113620/article/details/82718230

Install VMware

First install one VM with CentOS 7 in minimal mode. After the installation, reboot, log in as root with your password, and you are at the console.

ip addr
//shows the IP; the minimal install does not ship ifconfig and similar tools

Configure networking to come up at boot

cd /etc/sysconfig/network-scripts
vi ifcfg-ens33
ONBOOT=yes
service network restart
//after this you can connect with Xshell.
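For reference, a freshly generated ifcfg-ens33 on a minimal install looks roughly like this (the values besides ONBOOT are whatever the installer wrote for your machine; only ONBOOT needs flipping to yes):

```
TYPE=Ethernet
BOOTPROTO=dhcp
NAME=ens33
DEVICE=ens33
ONBOOT=yes
```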

Install the necessary tools

yum upgrade
yum install net-tools   //use yum search ifconfig to find which package provides a command
yum groupinstall "Development Tools"   //optional: installs gcc and other development tools

Once one machine is configured, shut it down and use VMware's clone feature to make two copies; the three machines are then ready. Take a snapshot of each first so you can roll back and retry.

The Kubernetes cluster overall: one master, two nodes

 - master&etcd    192.168.204.130
 - node      192.168.204.131
 - node      192.168.204.132

Install Kubernetes on the master

systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes etcd docker flannel

Install Kubernetes on the nodes

systemctl stop firewalld && systemctl disable firewalld
yum install -y kubernetes  docker flannel

Edit the master configuration; the changes are mostly IP addresses

//etcd config
vi /etc/etcd/etcd.conf    //both keys already exist; only the IP needs changing

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.204.130:2379"


//apiserver config
vi /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.204.130:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.204.0/24"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
//SecurityContextDeny and ServiceAccount are removed from KUBE_ADMISSION_CONTROL here, because otherwise kubectl create fails with:
Error from server (ServerTimeout): error when creating "/opt/dockerconfig/nginx-pod.yaml": 
No API token found for service account "default",retry after the token is automatically created and added to the service account


//kubelet config
vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.204.130"
KUBELET_API_SERVER="--api-servers=http://192.168.204.130:8080"


//config file
vi /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.204.130:8080"

//scheduler and proxy can be left at their defaults, or optionally:
vi /etc/kubernetes/scheduler
vi /etc/kubernetes/proxy

KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
KUBE_PROXY_ARGS="--address=0.0.0.0"

//flannel config
vi /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://192.168.204.130:2379"
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --iface=ens33"

With the master configured, start the etcd service first

systemctl start etcd
//check the etcd cluster status; it should print "cluster is healthy"
etcdctl cluster-health
//list the etcd cluster members; there is only one here — if it shows up, etcd is configured correctly
etcdctl member list
//store the cluster network settings; /atomic.io/network is the default etcd prefix configured above, and flannel will automatically hand out addresses in the 172.17.0.0 range inside the cluster
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16", "SubnetMin": "172.17.1.0", "SubnetMax": "172.17.254.0"}'
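To double-check that the key landed, you can read it straight back (same /atomic.io/network prefix as above):

```
etcdctl get /atomic.io/network/config
# should echo the JSON written by the mk command
```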

Add the redhat-uep.pem certificate file; without it, pulling images after kubectl create fails with:

failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt is a symlink to /etc/rhsm/ca/redhat-uep.pem
/etc/rhsm/ca/redhat-uep.pem does not exist

There are two ways to get the redhat-uep.pem file; then put it into /etc/rhsm/ca/ (e.g. with Xshell's Xftp)
1. Download and extract:
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio xxx.rpm | cpio -idmv
2. Write it yourself from:
https://github.com/candlepin/python-rhsm/blob/master/etc-conf/ca/redhat-uep.pem
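Method 1 can be scripted end to end. A sketch, run as root (the filename matches the wget above; the symlink path is the one from the error message — if docker did not create the link itself, recreate it):

```
cd /tmp
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
# rpm2cpio extracts the rpm payload under the current directory (./etc/rhsm/ca/...)
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -idmv
mkdir -p /etc/rhsm/ca
cp etc/rhsm/ca/redhat-uep.pem /etc/rhsm/ca/
# recreate the symlink docker expects, in case it is missing
mkdir -p /etc/docker/certs.d/registry.access.redhat.com
ln -sf /etc/rhsm/ca/redhat-uep.pem /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt
```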

Start the Kubernetes components; tailf /var/log/messages shows all their logs

systemctl restart kube-apiserver
//after this, if http://192.168.204.130:8080/ and http://192.168.204.130:8080/healthz/ping return content, kube-apiserver started successfully
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl restart kube-proxy
systemctl restart kubelet
systemctl restart flanneld
systemctl restart docker
//after these, run ps aux | grep docker and check whether the dockerd-current process was started with --bip=172.17.11.1/24 --ip-masq=true --mtu=1472
//if so, flannel has taken over Docker's IP configuration
//also check in ifconfig that flannel0 (172.17.11.0) and docker0 (172.17.11.1) are on the same subnet
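The same subnet check works without ifconfig, since ip ships even with the minimal install (the 172.17.11.x addresses are just what this cluster happened to be assigned):

```
ip -4 addr show flannel0 | grep inet
ip -4 addr show docker0 | grep inet
# both addresses should fall in the same /24, e.g. 172.17.11.0 and 172.17.11.1
```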

If everything above checks out, the master is done

Node configuration is much like the master's, with only a few changes

//kubelet config
vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=192.168.204.131"        //the node's own IP: 131 or 132
KUBELET_API_SERVER="--api-servers=http://192.168.204.130:8080"   //the master's IP


//config file
vi /etc/kubernetes/config

KUBE_MASTER="--master=http://192.168.204.130:8080"

//scheduler and proxy can be left at their defaults, or optionally:
vi /etc/kubernetes/scheduler
vi /etc/kubernetes/proxy

KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
KUBE_PROXY_ARGS="--address=0.0.0.0"

//flannel config
vi /etc/sysconfig/flanneld

FLANNEL_ETCD_ENDPOINTS="http://192.168.204.130:2379"
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --iface=ens33"

With both nodes configured, start the services

systemctl restart kube-proxy
systemctl restart kubelet
systemctl restart flanneld
systemctl restart docker

Check docker and flannel the same way as on the master

List the nodes from the master

kubectl get nodes

To list the nodes from a node, add -s to specify the API server

kubectl -s 192.168.204.130:8080 get nodes

Check the version

kubectl version

Deploy nginx: write three YAML files

//nginx-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

//nginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80

//nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: nginx-pod

Create and deploy nginx with kubectl; under the hood the deployment is completed by a docker pull of the nginx image

kubectl create -f nginx-pod.yaml
//this prints "created" right away, but the image still has to be pulled, which takes a while; use
kubectl describe pod nginx-pod
//to see the actual status; if you hit the open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt error, see above
//once Status is Running and the IP and Container ID are filled in, it is basically done
//you can also check that the image was pulled: docker ps -a should show nginx
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-service.yaml


//if anything goes wrong, swap create for delete in the commands above to remove and recreate

//list what was deployed
kubectl get pods
kubectl get rc
kubectl get service
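The create-then-describe dance can be wrapped in a small wait loop. This helper is my own sketch, not a kubectl feature; it polls any command until its output contains a pattern:

```shell
# wait_for PATTERN CMD...: run CMD repeatedly until its output contains PATTERN
wait_for() {
  pattern=$1; shift
  until "$@" 2>/dev/null | grep -q "$pattern"; do
    sleep 5
  done
}

# intended use on the master, with the pod name from nginx-pod.yaml:
#   wait_for Running kubectl describe pod nginx-pod
```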

Access nginx to verify

http://192.168.204.131:30001/
http://192.168.204.132:30001/
If the pages do not open, try the following on every node (https://github.com/kubernetes/kubernetes/issues/40182)
iptables -P FORWARD ACCEPT
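The browser check can also be scripted from any machine that reaches the nodes (curl is present on most CentOS installs; the IPs are the two nodes used throughout this post):

```
curl -sI http://192.168.204.131:30001/ | head -n 1
curl -sI http://192.168.204.132:30001/ | head -n 1
# a healthy service answers HTTP/1.1 200 OK on both
```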

If all of the above works, deploy tomcat

//tomcat-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: docker.io/tomcat
        ports:
        - containerPort: 8080


//tomcat-service.yaml

apiVersion: v1
kind: Service
metadata:
 name: myweb
spec:
 type: NodePort
 ports:
 - port: 8080
   targetPort: 8080
   nodePort: 31111
 selector:
  app: myweb

Run the commands

kubectl create -f tomcat-deployment.yaml
kubectl create -f tomcat-service.yaml

kubectl describe deployment myweb

Access tomcat

http://192.168.204.131:31111/
http://192.168.204.132:31111/

Other notes

kubectl get svc  //shows the service port mappings

etcdctl --endpoints http://192.168.204.130:2379 ls /  //from any machine with etcd installed, browse another server's etcd store

netstat -antp | grep kube-proxy  //shows the nodePort ports kube-proxy is listening on

repair.go:122] the cluster IP 10.51.0.1 for service kubernetes/default is not within the service CIDR 10.52.0.0/16; please recreate
//if the apiserver logs the error above, check KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.98.0/24" in /etc/kubernetes/apiserver
//clear the cached entry
etcdctl rm /registry/services/specs/default/kubernetes
//and restart
