CentOS 7.2 + k8s 1.4.1 + Docker 1.12 cluster deployment

Copyright notice: this is an original post by the author and may not be reposted without permission. https://blog.csdn.net/biao_java/article/details/78553574

Lab environment:

Operating system: centos7.2_x64

master: 192.168.72.183

minion01: 192.168.72.184 (container subnet: 172.17.0.0/16)

Set up the hosts:

1. On the master:

a) Set the hostname: hostnamectl --static set-hostname k8s-master

b) Disable the firewall:

i. systemctl disable firewalld.service

ii. systemctl stop firewalld.service

c) Add the hosts entries: echo '192.168.72.183    k8s-master

192.168.72.183   etcd

192.168.72.183   registry

192.168.72.184   k8s-node-1' >> /etc/hosts

2. On minion01:

a) Set the hostname: hostnamectl --static set-hostname k8s-node-1

b) Disable the firewall:

i. systemctl disable firewalld.service

ii. systemctl stop firewalld.service

c) Add the hosts entries: echo '192.168.72.183    k8s-master

192.168.72.183   etcd

192.168.72.183   registry

192.168.72.184   k8s-node-1' >> /etc/hosts
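A quick way to confirm the host settings took effect is to check the hostname, firewall state, and /etc/hosts name resolution on each machine; a minimal sketch:

# Confirm the static hostname was applied

hostnamectl status | grep "Static hostname"

# Confirm the firewall is stopped and the hosts entries resolve

systemctl is-active firewalld.service

ping -c 1 etcd

ping -c 1 k8s-node-1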

Passwordless SSH setup:

1. Generate an SSH key pair on the master

a) Run ssh-keygen; it creates two files under ~/.ssh/: id_rsa.pub (the public key) and id_rsa (the private key).

b) Append the public key to ~/.ssh/authorized_keys on every minion node (create the file if it does not exist):

i. scp ~/.ssh/id_rsa.pub root@192.168.72.184:~/.ssh/

ii. On the minion: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

2. Test SSH

From the master, run ssh 192.168.72.184; if it logs in to the minion without prompting for a password, the setup worked.
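As an alternative to the manual scp/cat steps above, ssh-copy-id (shipped with openssh-clients on CentOS 7) performs the same append in one command; a minimal sketch:

# Copy the master's public key into the minion's authorized_keys

ssh-copy-id root@192.168.72.184

# Then repeat the passwordless login test

ssh 192.168.72.184 hostname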

Etcd installation:

1. Install etcd on the master

a) yum install etcd -y

2. The etcd installed by yum keeps its default configuration in /etc/etcd/etcd.conf. Edit the file so that the uncommented ETCD_* settings match the listing below:

vi /etc/etcd/etcd.conf

# [member]

ETCD_NAME=master

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

#ETCD_WAL_DIR=""

#ETCD_SNAPSHOT_COUNT="10000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

#ETCD_CORS=""

#

#[cluster]

#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"

#ETCD_INITIAL_CLUSTER_STATE="new"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

3. Start etcd and verify its status

a) systemctl start etcd

b) etcdctl set testdir/testkey0 0

0

c) etcdctl get testdir/testkey0

0

d) etcdctl -C http://etcd:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379

cluster is healthy

e) etcdctl -C http://etcd:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379

cluster is healthy
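Beyond the key read/write test, the member list gives another sanity check, and enabling etcd at boot keeps it available after the OS restart performed later; a brief sketch:

# List cluster members (a single member in this setup)

etcdctl -C http://etcd:2379 member list

# Start etcd automatically at boot

systemctl enable etcd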

Docker installation:

1. Install Docker on the master

a) yum install docker

2. Edit the Docker configuration file so that images can be pulled from the private registry

vi /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

if [ -z "${DOCKER_CERT_PATH}" ]; then

    DOCKER_CERT_PATH=/etc/docker

fi

OPTIONS='--insecure-registry registry:5000'

3. Enable Docker at boot and start the service

a) chkconfig docker on

b) systemctl start docker.service

4. Check the Docker version

a) docker version
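To confirm the daemon picked up the insecure-registry flag, inspect docker info. Note that /etc/sysconfig/docker now contains two OPTIONS= assignments and the later one wins when the file is sourced, so if registry:5000 does not show up among the insecure registries, merge the flags into a single OPTIONS line. A minimal check:

# The output should list registry:5000 among the insecure registries

docker info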

Kubernetes cluster deployment:

1. Install kubernetes on the master

a) yum install kubernetes

2. Configure and start kubernetes

a) vi /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.

KUBE_API_PORT="--port=8080"

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies

#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!

KUBE_API_ARGS=""

b) vi /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"

3. Start the services and enable them at boot (restart the OS first); a quick health check follows this list:

a) systemctl enable kube-apiserver.service

b) systemctl start kube-apiserver.service

c) systemctl enable kube-controller-manager.service

d) systemctl start kube-controller-manager.service

e) systemctl enable kube-scheduler.service

f) systemctl start kube-scheduler.service
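Once the three master services are up, the apiserver answers on the insecure 8080 port and reports component health; a quick check, assuming kubectl from the kubernetes package is on the PATH:

# API server version over the insecure port

curl http://k8s-master:8080/version

# Health of scheduler, controller-manager and etcd as seen by the apiserver

kubectl -s http://k8s-master:8080 get componentstatuses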

4. Install docker/kubernetes on the node, following the master installation above

a) After installing k8s, configure and start kubernetes

vi /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

#   kube-apiserver.service

#   kube-controller-manager.service

#   kube-scheduler.service

#   kubelet.service

#   kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://k8s-master:8080"

 

b) vi /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=k8s-node-1"

# location of the api-server

KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!

KUBELET_ARGS=""
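The kubelet configuration above points the pod-infrastructure (pause) image at registry.access.redhat.com; pulling it ahead of time on the node makes the first pod start faster and surfaces any registry or network problem early. A minimal sketch:

# Pre-pull the image referenced by KUBELET_POD_INFRA_CONTAINER

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest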

5. Start the kubelet and kube-proxy services and enable them at boot

a) systemctl enable kubelet.service

b) systemctl start kubelet.service

c) systemctl enable kube-proxy.service

d) systemctl start kube-proxy.service
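A quick node-side check that both services came up, before verifying from the master:

# Both should report "active"

systemctl is-active kubelet.service kube-proxy.service

# Recent kubelet log lines, useful if the node fails to register

journalctl -u kubelet.service --no-pager -n 20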

6. On the master, run kubectl get no to check whether the deployment succeeded

a) List the cluster nodes: kubectl get no

b) Browse the API: http://192.168.72.183:8080/swagger-ui/
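The same information is available over the REST API from any machine that can reach the master; a sketch using the insecure 8080 port configured earlier:

# List registered nodes via the API

curl http://k8s-master:8080/api/v1/nodes

# Show details for a single node

kubectl describe node k8s-node-1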

7. Install the Flannel network

a) Run the following command on both the master and the node to install it

yum install flannel

b) Configure Flannel: on both the master and the node, edit /etc/sysconfig/flanneld and point FLANNEL_ETCD_ENDPOINTS at the etcd server as shown below

vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key.  This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass

#FLANNEL_OPTIONS=""

c) Set the flannel key in etcd

Run: etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

{ "Network": "10.0.0.0/16" }

d) After starting Flannel, docker and the kubernetes services need to be restarted in order. On the master, run:

i. systemctl enable flanneld.service

ii. systemctl start flanneld.service

iii. service docker restart

iv. systemctl restart kube-apiserver.service

v. systemctl restart kube-controller-manager.service

vi. systemctl restart kube-scheduler.service

e) On the node, run:

i. systemctl enable flanneld.service

ii. systemctl start flanneld.service

iii. service docker restart

iv. systemctl restart kubelet.service

v. systemctl restart kube-proxy.service
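After the restarts, flanneld records the subnet it leased in /run/flannel/subnet.env, and docker0 should move inside the 10.0.0.0/16 range; a rough verification sketch (the interface name depends on the flannel backend, flannel0 for the default udp backend):

# Subnet leased to this host

cat /run/flannel/subnet.env

# Flannel and docker interfaces should both sit inside 10.0.0.0/16

ip addr show flannel0

ip addr show docker0

# Cross-host connectivity can then be checked by pinging a container IP on the other node.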
