Deploying the EFK add-on on Kubernetes v1.12

Copyright notice: please credit the source when reposting; otherwise you bear all consequences. https://blog.csdn.net/ljx1528/article/details/85548485

1. The EFK add-on lives in: kubernetes/cluster/addons/fluentd-elasticsearch

[root@k8s_master ~]# cd /root/kubernetes/cluster/addons/fluentd-elasticsearch/
[root@k8s_master fluentd-elasticsearch]# ls *.yaml
es-service.yaml      fluentd-es-configmap.yaml  kibana-deployment.yaml
es-statefulset.yaml  fluentd-es-ds.yaml         kibana-service.yaml

2. Edit the manifests (swap in images you can actually pull)

[root@k8s_master fluentd-elasticsearch]# cp es-statefulset.yaml{,.bak}
[root@k8s_master fluentd-elasticsearch]# diff es-statefulset.yaml{,.bak}
76c76
<       - image: longtds/elasticsearch:v6.2.5
---
>       - image: registry.cn-hangzhou.aliyuncs.com/zhangbohan/k8s-elasticsearch:v6.2.5

[root@k8s_master fluentd-elasticsearch]# cp fluentd-es-ds.yaml{,.bak}
[root@k8s_master fluentd-elasticsearch]# diff fluentd-es-ds.yaml{,.bak}
79c79
<         image: netonline/fluentd-elasticsearch:v2.2.0
---
>         image: registry.cn-hangzhou.aliyuncs.com/k8s-yun/fluentd-elasticsearch:v2.2.0

[root@k8s_master fluentd-elasticsearch]# cp kibana-deployment.yaml{,.bak}
[root@k8s_master fluentd-elasticsearch]# diff kibana-deployment.yaml{,.bak}
24c24
<         image: docker.elastic.co/kibana/kibana-oss:6.3.2
---
>         image: registry.cn-hangzhou.aliyuncs.com/xyz10/kibana-oss:6.2.3
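The three image swaps above can also be done non-interactively with sed instead of an editor. A minimal sketch, demonstrated on a sample manifest line so it runs anywhere (the image names are the ones from the diffs above; the real files live in the addon directory):

```shell
# Demonstrate the substitution on a sample line from kibana-deployment.yaml.
# In the diffs above, "<" is the edited line and ">" is the backup, so we
# replace the old image with the new one. "#" is used as the sed delimiter
# because the image references contain "/".
tmpfile=$(mktemp)
printf '        image: registry.cn-hangzhou.aliyuncs.com/xyz10/kibana-oss:6.2.3\n' > "$tmpfile"

sed -i 's#registry.cn-hangzhou.aliyuncs.com/xyz10/kibana-oss:6.2.3#docker.elastic.co/kibana/kibana-oss:6.3.2#' "$tmpfile"

cat "$tmpfile"   # prints:         image: docker.elastic.co/kibana/kibana-oss:6.3.2
rm -f "$tmpfile"
```

The same one-liner, pointed at the real YAML files, applies each swap in place while `cp file{,.bak}` keeps a backup for the `diff` check shown above.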

3. Label the Node (otherwise the fluentd-es pods will not be scheduled)

[root@k8s_master ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
192.168.1.204   Ready    <none>   3d22h   v1.12.0

[root@k8s_master ~]# kubectl label nodes 192.168.1.204 beta.kubernetes.io/fluentd-ds-ready=true
node "192.168.1.204" labeled
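The label is required because the DaemonSet in fluentd-es-ds.yaml carries a nodeSelector on exactly this key, so fluentd pods land only on labeled nodes. The relevant fragment looks roughly like this (a sketch of the v1.12-era addon manifest; check your own fluentd-es-ds.yaml for the exact form):

```yaml
# fluentd-es-ds.yaml (excerpt, sketch): the DaemonSet only schedules
# onto nodes carrying the label set by the kubectl label command above.
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
```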

4. Apply the manifests

[root@k8s_master fluentd-elasticsearch]# pwd
/root/kubernetes/cluster/addons/fluentd-elasticsearch
[root@k8s_master fluentd-elasticsearch]# ls *.yaml
es-service.yaml      fluentd-es-configmap.yaml  kibana-deployment.yaml
es-statefulset.yaml  fluentd-es-ds.yaml         kibana-service.yaml

[root@k8s_master fluentd-elasticsearch]# kubectl create -f .

5. Check the results

[root@k8s_master fluentd-elasticsearch]# kubectl get pods -n kube-system -o wide|grep -E 'elasticsearch|fluentd|kibana'
elasticsearch-logging-0                 1/1     Running   0          96m     172.30.13.12   192.168.1.204   <none>
elasticsearch-logging-1                 1/1     Running   0          91m     172.30.13.18   192.168.1.204   <none>
fluentd-es-v2.2.0-5b9vl                 1/1     Running   0          96m     172.30.13.13   192.168.1.204   <none>
kibana-logging-56db5cfd5d-xpznk         1/1     Running   0          96m     172.30.13.17   192.168.1.204   <none>

[root@k8s_master fluentd-elasticsearch]# kubectl get service  -n kube-system|grep -E 'elasticsearch|kibana'
elasticsearch-logging   ClusterIP   10.254.7.31      <none>          9200/TCP            96m
kibana-logging          ClusterIP   10.254.55.179    <none>          5601/TCP            96m

Note: the Kibana dashboard only becomes reachable once the Kibana pod has finished starting.

6. Access Kibana

[root@k8s_master fluentd-elasticsearch]# kubectl cluster-info|grep -E 'Elasticsearch|Kibana'
Elasticsearch is running at https://192.168.1.203:6443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.1.203:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
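Both URLs follow the apiserver's generic service-proxy pattern, `https://<apiserver>/api/v1/namespaces/<namespace>/services/<service>/proxy`. A small sketch that builds such a URL (the host and service names are this article's example values):

```shell
# Build the apiserver proxy URL for a namespaced service.
svc_proxy_url() {
  local apiserver=$1 ns=$2 svc=$3
  printf 'https://%s/api/v1/namespaces/%s/services/%s/proxy\n' "$apiserver" "$ns" "$svc"
}

svc_proxy_url 192.168.1.203:6443 kube-system kibana-logging
# -> https://192.168.1.203:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
```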

6.1 Access via kubectl proxy

6.1.1 Start the proxy

[root@k8s_master fluentd-elasticsearch]# nohup kubectl proxy --address='192.168.1.203' --port=8086 --accept-hosts='^*$' &  # run in the background
Starting to serve on 192.168.1.203:8086

Open this URL in a browser: http://192.168.1.203:8086/api/v1/namespaces/kube-system/services/kibana-logging/proxy
Note: any port may be used for the proxy, as long as it does not conflict with something already listening.

7. Web UI walkthrough

On the Settings -> Indices page, create an index (roughly the equivalent of a database in MySQL): check "Index contains time-based events", keep the default logstash-* pattern, and click Create.
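The default logstash-* pattern works because the fluentd Elasticsearch output, with `logstash_format` enabled (as in the stock fluentd-es-configmap.yaml), writes one index per day named logstash-YYYY.MM.DD. A quick sketch of what those index names look like:

```shell
# Print today's index name in the logstash-YYYY.MM.DD form that the
# fluentd elasticsearch output creates (UTC dates, matching the plugin default).
date -u +logstash-%Y.%m.%d
```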
