Kubernetes EFK (Elasticsearch + Fluentd + Kibana) Logging Stack Deployment (September 2020, No-Pitfalls Edition)


I. Deployment Environment:

• K8s nodes: 1 master (2 CPUs / 2 GB RAM), 2 worker nodes (2 CPUs / 3 GB RAM each)
• CentOS 7.8 (kernel 5.8.5-1.el7.elrepo.x86_64)
• Kubernetes v1.19.0
• Docker 19.03.12
• Elasticsearch image: elasticsearch:6.8.8
• Fluentd image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
• Kibana image: kibana:6.8.8

II. Deployment Notes (Pitfalls):

1. Make sure the Kubernetes environment itself is healthy: check the logs of kube-dns, kube-proxy, kube-apiserver, and the other core components for anomalies. When I first deployed, CentOS was running the stock 3.10.x kernel, which has networking bugs under Kubernetes and caused frequent network failures, so I strongly recommend upgrading CentOS to a 4.4.x or newer kernel. A quick pre-flight check is sketched below.
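A minimal pre-flight sketch (assumes a kubeadm-style cluster where the kube-proxy pods carry the k8s-app=kube-proxy label; adjust the selector to your setup):

[root@k8s-master ~]# uname -r                                                  # should report 4.4.x or newer
[root@k8s-master ~]# kubectl get pods -n kube-system                           # all core components should be Running
[root@k8s-master ~]# kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20    # scan for network errors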

2. Elasticsearch needs the kernel parameter vm.max_map_count to be at least 262144. Before deploying, run the following on every node:

sysctl -w vm.max_map_count=262144
sysctl -a | grep vm.max_map_count    # verify

A change made this way is lost when the machine reboots, so also append the following line to the end of /etc/sysctl.conf:

vm.max_map_count=262144
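As a non-interactive recap, one way to apply and persist the setting in a single pass (assumes root on each node):

sysctl -w vm.max_map_count=262144                      # apply to the running kernel immediately
echo "vm.max_map_count=262144" >> /etc/sysctl.conf     # persist across reboots
sysctl -p                                              # reload /etc/sysctl.conf and print the values it set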

3. "Kibana server is not ready yet" appears when Kibana's ELASTICSEARCH_URL is misconfigured, or when the Elasticsearch deployment itself has problems (insufficient memory, vm.max_map_count too low, an expired license, and so on).

4. Watch the version compatibility across the EFK stack: Elasticsearch and Kibana must be the exact same version, and the Fluentd image must be compatible with your Kubernetes version. In my first few attempts Kibana pulled no data from Elasticsearch and no index pattern could be created, and the cause was a Fluentd version that did not match.

5. After deploying the EFK components, check that they are running (192.168.100.1 is my master node's external IP; substitute your own node IP plus the NodePorts defined below):
5.1. Check the Elasticsearch cluster health: http://192.168.100.1:31200/_cluster/health?pretty
5.2. Check the status of the Elasticsearch X-Pack components: http://192.168.100.1:31200/_xpack?pretty
5.3. Check whether Elasticsearch is receiving index data, which tells you whether all three EFK components are working: http://192.168.100.1:31200/_cat/indices
5.4. Open Kibana: http://192.168.100.1:31601
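The same checks can be run from a shell with curl; a minimal sketch (replace 192.168.100.1 with your own node IP):

[root@k8s-master ~]# curl -s 'http://192.168.100.1:31200/_cluster/health?pretty'
# "status" : "green" (or "yellow" on a single-node setup) indicates a healthy cluster
[root@k8s-master ~]# curl -s 'http://192.168.100.1:31200/_cat/indices'
# once Fluentd starts shipping logs, logstash-YYYY.MM.DD indices should appear here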

III. No-Pitfalls YAML Manifests:

1. Create the namespace

[root@k8s-master efk-yaml]# cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: efk
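Applying and verifying it (expected results shown as comments):

[root@k8s-master efk-yaml]# kubectl apply -f ns.yaml    # expect: namespace/efk created
[root@k8s-master efk-yaml]# kubectl get ns efk          # STATUS should be Active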

2. Create Elasticsearch

[root@k8s-master efk-yaml]# cat elasticsearch-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: efk
spec:
  selector:
    matchLabels:
      component: elasticsearch
  template:
    metadata:
      labels:
        component: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:6.8.8
        imagePullPolicy: "IfNotPresent"
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: efk
  labels:
    service: elasticsearch
spec:
  type: NodePort
  selector:
    component: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 31200
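Before moving on, it is worth confirming that Elasticsearch actually came up; a quick check along these lines (pod names and the IP will differ in your environment):

[root@k8s-master efk-yaml]# kubectl apply -f elasticsearch-deploy.yaml
[root@k8s-master efk-yaml]# kubectl -n efk get pods -l component=elasticsearch    # wait for READY 1/1
[root@k8s-master efk-yaml]# curl -s http://192.168.100.1:31200/
# a JSON banner containing "cluster_name" and "number" : "6.8.8" means ES is serving requests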

3. Create Kibana

[root@k8s-master efk-yaml]# cat kibana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: efk
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:6.8.8
        imagePullPolicy: "IfNotPresent"
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch.efk.svc.cluster.local:9200    # the Elasticsearch Service address (or node IP + port)
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: efk
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 31601
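One way to watch Kibana come up (kubectl logs accepts deployment/NAME; the quoted log line is what Kibana 6.x typically prints once it is ready):

[root@k8s-master efk-yaml]# kubectl apply -f kibana-deploy.yaml
[root@k8s-master efk-yaml]# kubectl -n efk logs deploy/kibana --tail=20
# look for "Server running at http://0:5601"; if the browser keeps showing
# "Kibana server is not ready yet", re-check ELASTICSEARCH_URL (see note 3 in section II)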

4. Create the Fluentd RBAC authorization

[root@k8s-master efk-yaml]# cat fluentd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system    # must be the namespace of the k8s core components (kube-system)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd    # ClusterRoles are cluster-scoped; they take no namespace
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system    # must be the namespace of the k8s core components (kube-system)
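kubectl auth can-i can confirm the binding took effect; a quick sanity check:

[root@k8s-master efk-yaml]# kubectl apply -f fluentd-rbac.yaml
[root@k8s-master efk-yaml]# kubectl auth can-i list pods --as=system:serviceaccount:kube-system:fluentd
# expected answer: yes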

5. Create Fluentd

[root@k8s-master efk-yaml]# cat fluentd-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system    # must be the namespace of the k8s core components (kube-system)
  labels:
    k8s-app: fluentd-efk
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-efk
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-efk
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: "node-role.kubernetes.io/master"
        effect: "NoSchedule"
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        imagePullPolicy: "IfNotPresent"
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.efk.svc.cluster.local"    # the Elasticsearch Service address or IP
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_UID
          value: "0"
        - name: FLUENT_SYSTEMD_CONF
          value: disable
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 200m
            memory: 100Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /data/docker/containers    # Docker's container log directory on the node; if you did not change Docker's data root at install time, the default is /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /data/docker/containers    # must match the mountPath above: Docker's container log directory on the node

Apply the five YAML files above in order with kubectl apply -f <yaml-file>; the full sequence is spelled out below.
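Spelled out, the apply sequence plus a rollout check (file names as used above):

[root@k8s-master efk-yaml]# kubectl apply -f ns.yaml
[root@k8s-master efk-yaml]# kubectl apply -f elasticsearch-deploy.yaml
[root@k8s-master efk-yaml]# kubectl apply -f kibana-deploy.yaml
[root@k8s-master efk-yaml]# kubectl apply -f fluentd-rbac.yaml
[root@k8s-master efk-yaml]# kubectl apply -f fluentd-daemonset.yaml
[root@k8s-master efk-yaml]# kubectl -n kube-system get ds fluentd
# DESIRED/READY should equal your node count (3 here: the master toleration lets Fluentd run there too)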

IV. Check the Kubernetes deployment status after rollout

All of the components above are now deployed and running normally.
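For reference, a final verification pass along the lines of section II.5 (IPs and pod names are environment-specific):

[root@k8s-master efk-yaml]# kubectl get pods -n efk                                    # elasticsearch and kibana pods Running
[root@k8s-master efk-yaml]# kubectl get pods -n kube-system -l k8s-app=fluentd-efk    # one fluentd pod per node
[root@k8s-master efk-yaml]# curl -s 'http://192.168.100.1:31200/_cat/indices'
# logstash-* indices here confirm the Fluentd -> Elasticsearch pipeline; finally open
# http://192.168.100.1:31601 and create the logstash-* index pattern in Kibana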

END.
