I. Two ways to create resources
1. Command-based:
- Simple, intuitive, and fast.
- Suitable for temporary tests or experiments.
2. Configuration-manifest-based:
- Configuration files describe the desired final state of the application.
- Configuration files serve as templates, so resource creation can be repeated across deployments.
- They can be managed as code, the same way deployments are managed.
- Suitable for formal, cross-environment, large-scale deployments.
- This approach requires familiarity with the configuration file syntax, which takes some effort to learn.
Environment Introduction
Host | IP address | Service
---|---|---
master | 192.168.1.21 | k8s
node01 | 192.168.1.22 | k8s
node02 | 192.168.1.23 | k8s
II. Configuration manifests (yml, yaml)
In k8s, yaml-format files are generally used to create Pods that match our expectations, so a yaml file is commonly called a resource manifest.
/etc/kubernetes/manifests/ is where k8s stores its (yml, yaml) files.
** kubectl explain deployment (appending a resource category to explain shows the parameters, i.e. how that resource should be defined)
kubectl explain deployment.metadata (appending a field tagged <Object> to the resource category shows what the next level of fields are and how to define them)
kubectl explain deployment.metadata.ownerReferences (field names at different levels can be chained to view a field's contents; a leading [] marks a list of objects)
1. Common fields in yaml files and their roles
(1) apiVersion: API version information
(Defines which group and version the object belongs to, which directly determines the version that is ultimately served.)
[root@master manifests]# kubectl api-versions
// list all current API versions
(2) kind: the category of the resource object
(Defines what category the created object belongs to — a Pod, Service, Deployment, or other object; it can be customized according to the fixed syntax.)
(3) metadata: metadata, such as the name field (required)
It provides the following fields:
creationTimestamp: "2019-06-24T12:18:48Z"
generateName: myweb-5b59c8b9d-
labels: (the object's labels)
  pod-template-hash: 5b59c8b9d
  run: myweb
name: myweb-5b59c8b9d-gwzz5 (the Pod object's name; within the same category a Pod object's name is unique and cannot be repeated)
namespace: default (the namespace the object belongs to; names can repeat across namespaces; this is a k8s-level namespace, not to be confused with container namespaces)
ownerReferences:
- apiVersion: apps/v1
  blockOwnerDeletion: true
  controller: true
  kind: ReplicaSet
  name: myweb-5b59c8b9d
  uid: 37f38f64-967a-11e9-8b4b-000c291028e5
resourceVersion: "943"
selfLink: /api/v1/namespaces/default/pods/myweb-5b59c8b9d-gwzz5
uid: 37f653a6-967a-11e9-8b4b-000c291028e5
annotations: (resource annotations; must be defined in advance, the default is none)
Each resource can be referenced by a path built from these identifiers: /apis/GROUP/VERSION/namespaces/NAMESPACE/CATEGORY/NAME (the core group uses /api/VERSION/..., as in the selfLink above).
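As a sketch of how such a path is assembled (a hypothetical helper, not part of kubectl or any k8s library):

```python
def api_path(group: str, version: str, namespace: str, plural: str, name: str) -> str:
    """Build the REST path for a namespaced k8s object.

    Core-group objects (group == "") live under /api/<version>/...;
    all other groups live under /apis/<group>/<version>/...
    """
    prefix = f"/api/{version}" if group == "" else f"/apis/{group}/{version}"
    return f"{prefix}/namespaces/{namespace}/{plural}/{name}"

# Matches the selfLink shown above (core group is the empty string):
print(api_path("", "v1", "default", "pods", "myweb-5b59c8b9d-gwzz5"))
# A Deployment in the apps group:
print(api_path("apps", "v1", "default", "deployments", "web"))
```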
(4) spec: the state the user desires
(This field is the most important, because spec defines the target state, the 'desired state'. The fields nested under spec vary from resource to resource, and precisely because each resource's spec differs, k8s carries a built-in description of each spec that can be queried with kubectl explain.)
(5) status: the state the resource is currently in
(The 'current state'. This field is generated and maintained by the k8s cluster; it cannot be customized and is read-only.)
2. Write a yaml file
[root@master ~]# vim web.yaml
kind: Deployment #the resource object is a controller
apiVersion: extensions/v1beta1 #API version
metadata: #describes the kind (resource type)
  name: web #controller name
spec:
  replicas: 2 #number of replicas
  template: #template
    metadata:
      labels: #labels
        app: web_server
    spec:
      containers: #specify the containers
      - name: nginx #container name
        image: nginx #image used
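Note that extensions/v1beta1 for Deployment is deprecated and removed in newer k8s releases; on a current cluster the same object would be written against apps/v1, which additionally requires an explicit selector (a sketch — adjust to your cluster version):

```yaml
kind: Deployment
apiVersion: apps/v1            # replaces extensions/v1beta1 on newer clusters
metadata:
  name: web
spec:
  replicas: 2
  selector:                    # required in apps/v1; must match the template labels
    matchLabels:
      app: web_server
  template:
    metadata:
      labels:
        app: web_server
    spec:
      containers:
      - name: nginx
        image: nginx
```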
Apply it:
[root@master ~]# kubectl apply -f web.yaml
Check:
[root@master ~]# kubectl get deployments. -o wide
// view the controller information
[root@master ~]# kubectl get pod -o wide
// view the Pod information
3. Write a service.yaml file
[root@master ~]# vim web-svc.yaml
kind: Service #the resource object is a Service
apiVersion: v1 #API version
metadata:
  name: web-svc
spec:
  selector: #label selector
    app: web_server #must match the label in web.yaml
  ports: #ports
  - protocol: TCP
    port: 80 #the Service's port
    targetPort: 80 #the container's port
When the label and the label selector carry the same content, the two resource objects become associated.
When a Service resource object is created, its default type is ClusterIP, meaning it can be accessed from any node within the cluster. Its role is to provide a unified access endpoint for the real backend Pods. If the service should be reachable from outside the cluster, the type should be changed to NodePort.
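The association itself is plain equality matching on labels; a toy Python sketch (not the real kube-controller implementation — the names here are invented for illustration):

```python
def selects(selector: dict, pod_labels: dict) -> bool:
    """A Service selects a Pod when every selector key/value pair
    also appears in the Pod's labels (equality-based selection)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

service_selector = {"app": "web_server"}
pods = [
    {"name": "web-1", "labels": {"app": "web_server", "pod-template-hash": "5b59c8b9d"}},
    {"name": "other", "labels": {"app": "db"}},
]
# Only Pods whose labels match become endpoints of the Service:
endpoints = [p["name"] for p in pods if selects(service_selector, p["labels"])]
print(endpoints)  # → ['web-1']
```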
(1) Apply it:
[root@master ~]# kubectl apply -f web-svc.yaml
(2) Check:
[root@master ~]# kubectl get svc
// view the Service information
(3) Access it:
[root@master ~]# curl 10.111.193.168
4. Making the service accessible from outside the cluster
(1) Modify the web-svc.yaml file
kind: Service #the resource object is a Service
apiVersion: v1 #API version
metadata:
  name: web-svc
spec:
  type: NodePort #added: change the network type
  selector: #label selector
    app: web_server #must match the label in web.yaml
  ports: #ports
  - protocol: TCP
    port: 80 #the Service's port
    targetPort: 80 #the container's port
    nodePort: 30086 #the cluster-wide mapped port; the range is 30000-32767
(2) Refresh:
[root@master ~]# kubectl apply -f web-svc.yaml
(3) Check:
[root@master ~]# kubectl get svc
(4) Test in a browser
III. A small experiment
Continue from the experiment in the previous blog post.
1. Use a yaml file to create a Deployment resource object; it must use the v1 version of your personal private image, with 3 replicas.
Write the yaml file:
[root@master ~]# vim www.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: xgp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: www_server
    spec:
      containers:
      - name: web
        image: 192.168.1.21:5000/web:v1
(1) Apply it:
[root@master ~]# kubectl apply -f www.yaml
(2) Check:
[root@master ~]# kubectl get deployments. -o wide
// view the controller information
[root@master ~]# kubectl get pod -o wide
// view the Pod information
(3) Access it
2. Use a yaml file to create a Service resource object associated with the Deployment above, with type: NodePort and port 30123.
Write the service file:
[root@master ~]# vim www-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: www-svc
spec:
  type: NodePort
  selector:
    app: www_server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30123
Apply it:
[root@master ~]# kubectl apply -f www-svc.yaml
Check:
[root@master ~]# kubectl get svc
Access it
IV. Summary
1. The role of a Pod
In k8s, the Pod is the smallest unit of management; a Pod often contains one or more containers. In most cases, a Pod holds only a single container.
Each Pod contains one special Pause container plus one or more service containers. The Pause container is derived from the pause-amd64 image and plays a very important role in the Pod:
- The Pause container serves as the Pod's root container. Its state represents the state of the entire Pod, independent of operations on the other containers.
- The service containers in a Pod share the Pause container's IP. Each Pod is assigned a unique IP address, and every container in the Pod shares the same network namespace, including the IP address and network ports. Containers inside a Pod can communicate with each other via localhost. k8s requires the underlying network to support communication between any two Pods in the cluster.
- All containers in a Pod can access shared volumes, which allows them to share data. Volumes also persist a Pod's data, so it is not lost when a container needs to restart.
2. The role of a Service
A Service is an abstraction over the real backend services; a single Service can represent multiple identical backend services.
A Service provides a fixed access endpoint for the Pods behind a Pod controller. The work of a Service also depends on a k8s add-on, CoreDNS, which resolves a Service's address via DNS.
NodePort-type Services
clusterIP: specifies the Service's IP within the service network; by default it is allocated dynamically.
NodePort builds on the ClusterIP type by additionally exposing a nodePort in the node's network namespace, so users outside the cluster can reach the cluster. A user request thus flows: Client -> NodeIP:NodePort -> ClusterIP:ServicePort -> PodIP:ContainerPort.
NodePort can be understood as an enhancement of ClusterIP: an external client can reach the clusterIP via any node's nodeIP, and the clusterIP then load-balances across the Pods.
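That hop chain can be written out as a toy lookup (all addresses are illustrative, loosely mirroring this environment; the real rewriting is DNAT done by kube-proxy, not Python):

```python
def nodeport_hops(client_dst: str) -> list:
    """Trace Client -> NodeIP:NodePort -> ClusterIP:ServicePort -> PodIP:ContainerPort."""
    # kube-proxy rewrites the node-level destination to the Service's clusterIP:
    nodeport_to_cluster = {"192.168.1.22:30086": "10.111.193.168:80"}
    # the Service then load-balances to one of its Pod endpoints (one shown):
    cluster_to_pod = {"10.111.193.168:80": "10.244.1.5:80"}
    hops = [client_dst]
    hops.append(nodeport_to_cluster[hops[-1]])
    hops.append(cluster_to_pod[hops[-1]])
    return hops

print(" -> ".join(nodeport_hops("192.168.1.22:30086")))
```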
3. Traffic flow
After we create a service, a user first visits the nginx reverse proxy's IP; nginx then uses the "IP and port mapping exposed by NodePort" to reach the k8s backend server (the master node), and the master node uses its "ip + port mapping" information to reach the backend k8s nodes.