New CKA 2020.12 Exam Paper 3 + Reference Answers
- CKA 2020.12 Practice Notes
- Task 1
- Task 1 Reference Answer
- Task 2
- Task 2 Reference Answer
- Task 3
- Task 3 Reference Answer
- Task 4
- Task 4 Reference Answer
- Task 5
- Task 5 Reference Answer
- Task 6
- Task 6 Reference Answer
- Task 7
- Task 7 Reference Answer
- Task 8
- Task 8 Reference Answer
- Task 9
- Task 9 Reference Answer
- Task 10
- Task 10 Reference Answer
- Task 11
- Task 11 Reference Answer
- Task 12
- Task 12 Reference Answer
- Task 13
- Task 13 Reference Answer
- Task 14
- Task 14 Reference Answer
- Task 15
- Task 15 Reference Answer
- Task 16
- Task 16 Reference Answer
- Task 17
- Task 17 Reference Answer
The new CKA exam officially went live on September 1, 2020!
Exam format: online
Duration: 2 hours
Certification validity: 3 years
Software version: Kubernetes v1.19
Retake policy: one retake allowed
Experience level: intermediate
Number of questions: 17
Passing score: 66 or above
CKA 2020.12 Practice Notes
Task 1
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
- Deployment
- StatefulSet
- DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Task 1 Reference Answer
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create serviceaccount cicd-token --namespace=app-team1
kubectl create rolebinding deployment-clusterrole --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token --namespace=app-team1
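The same binding can also be written declaratively. A sketch of the equivalent RoleBinding manifest (the imperative commands above are the expected exam answer; this is just for reference):

```yaml
# Equivalent RoleBinding manifest (sketch; mirrors the kubectl commands above)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-clusterrole
  namespace: app-team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
```

The grant can be verified with: kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1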
Task 2
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
Task 2 Reference Answer
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
Task 3
Given an existing Kubernetes cluster running version 1.18.8, upgrade all of the Kubernetes control plane and node components on the master node only, to version 1.19.0.
You are also expected to upgrade kubelet and kubectl on the master node.
Task 3 Reference Answer
kubectl cordon k8s-master
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force
apt-get update && apt-get install -y kubeadm=1.19.0-00 kubelet=1.19.0-00 kubectl=1.19.0-00
kubeadm upgrade apply 1.19.0 --etcd-upgrade=false
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master
Task 4
Create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUIN00601/ca.crt
Client certificate: /opt/KUIN00601/etcd-client.crt
Client key: /opt/KUIN00601/etcd-client.key

Task 4 Reference Answer
#backup
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
#restore
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
Task 5
Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace.
Ensure that the new NetworkPolicy:
- does not allow access to Pods not listening on port 9000
- does not allow access from Pods not in namespace internal
Task 5 Reference Answer
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000
Task 6
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled
Task 6 Reference Answer
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
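The expose command assumes the nginx container already declares a port named http, which the task asks you to add first (e.g. via kubectl edit deployment front-end). A sketch of the port specification to add under the nginx container:

```yaml
# Port specification to add to the nginx container (sketch)
ports:
- name: http
  containerPort: 80
  protocol: TCP
```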
Task 7
Create a new nginx Ingress resource as follows:
- Name: pong
- Namespace: ing-internal
- Exposing service hi on path /hi using service port 5678
The availability of service hi can be checked using the following command, which should return hi:
curl -kL <INTERNAL_IP>/hi
Task 7 Reference Answer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
Task 8
Scale the deployment loadbalancer to 6 pods.
Task 8 Reference Answer
kubectl scale deploy loadbalancer --replicas=6
Task 9
Schedule a pod as follows:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=spinning
Task 9 Reference Answer
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
# edit pod.yaml to add the nodeSelector shown below, then: kubectl apply -f pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    role: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx
Task 10
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum
Task 10 Reference Answer
kubectl get nodes --no-headers | grep -w Ready | wc -l
kubectl describe nodes | grep -i Taints | grep -c NoSchedule
# Subtract the second count from the first and write the result to /opt/nodenum
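The counting step can be sketched in shell. The kubectl invocations are commented out because they need a live cluster; the stand-in counts below are illustrative only:

```shell
# On the exam node (requires a live cluster):
# ready=$(kubectl get nodes --no-headers | grep -w Ready | wc -l)
# tainted=$(kubectl describe nodes | grep -i Taints | grep -c NoSchedule)
# Illustrative stand-in values for the two counts above:
ready=3
tainted=1
# On the exam, redirect this to /opt/nodenum instead of stdout
echo $((ready - tainted))
```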
Task 11
Create a pod named kucc1 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
Task 11 Reference Answer
kubectl run kucc1 --image=nginx --dry-run=client -o yaml > kucc1.yaml
# edit kucc1.yaml to add the remaining containers, then:
kubectl apply -f kucc1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
Task 12
Create a persistent volume with name app-config, of capacity 1Gi and access mode ReadWriteOnce. The type of volume is hostPath and its location is /srv/app-config.
Task 12 Reference Answer
refer to: https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/srv/app-config"
Task 13
Create a new PVC:
- Name: pv-volume
- Class: csi-hostpath-sc
- Capacity: 10Mi
Create a new Pod which mounts the PVC as a volume:
- Name: web-server
- Image: nginx
- Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PVC to a capacity of 70Mi and record that change.
Task 13 Reference Answer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
# Create the Pod
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
kubectl edit pvc pv-volume --record
Task 14
Monitor the logs of pod foobar and:
- Extract log lines corresponding to error unable-to-access-website
- Write them to /opt/KUTR00101/foobar
Task 14 Reference Answer
kubectl logs foobar | grep unable-to-access-website > /opt/KUTR00101/foobar
cat /opt/KUTR00101/foobar
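The grep filter can be exercised against a made-up log; the sample lines below are illustrative, not from the real pod:

```shell
# Create a small sample log (hypothetical contents) and apply the same filter
cat <<'EOF' > /tmp/foobar.log
2020-12-01 INFO startup complete
2020-12-01 ERROR unable-to-access-website
2020-12-01 INFO retrying connection
EOF
# Only the matching line survives the filter
grep unable-to-access-website /tmp/foobar.log
```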
Task 15
Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command:
/bin/sh -c 'tail -n+1 -f /var/log/legacy-app.log'
Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.
- Don’t modify the existing container.
- Don’t modify the path of the log file, both containers must access it at /var/log/legacy-app.log.
Task 15 Reference Answer
kubectl get pod legacy-app -o yaml > legacy-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/legacy-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
# Verify:
kubectl logs <pod_name> -c <container_name>
Task 16
From the pod label name=cpu-user, find pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUT00401/KUT00401.txt (which already exists).
Task 16 Reference Answer
kubectl top pod -l name=cpu-user -A
echo '<pod_name>' >> /opt/KUT00401/KUT00401.txt
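Picking the top consumer can also be scripted; the sample kubectl top output below is made up for illustration:

```shell
# Hypothetical sample of `kubectl top pod -l name=cpu-user` output
cat <<'EOF' > /tmp/top.txt
cpu-user-1   45m   12Mi
cpu-user-2   120m  20Mi
cpu-user-3   8m    10Mi
EOF
# Sort numerically on the CPU column (sort -n reads the leading digits of "120m"),
# then print the pod name from the first line
sort -k2 -nr /tmp/top.txt | head -1 | awk '{print $1}'
```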
Task 17
A Kubernetes worker node, named wk8s-node-0, is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
Task 17 Reference Answer
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet