Configuration reference: https://banzaicloud.com/docs/one-eye/logging-operator/configuration/fluentd/
GitHub: https://github.com/banzaicloud/logging-operator/
Official demo: https://banzaicloud.com/docs/one-eye/logging-operator/quickstarts/es-nginx/
Logging operator plugins: https://banzaicloud.com/docs/one-eye/logging-operator/configuration/plugins/

Further reading:
1. https://jishuin.proginn.com/p/763bfbd572ae
2. https://blog.csdn.net/tao12345666333/article/details/116178235
3. https://www.jianshu.com/p/9c18c4a2b7ed

Prerequisites
Kubernetes 1.23
Helm 3.8
A persistent volume

  • Fluent Operator vs. logging-operator
Both can deploy Fluent Bit and Fluentd automatically. logging-operator always deploys Fluent Bit and Fluentd together, whereas Fluent Operator treats the two as pluggable rather than tightly coupled: you can deploy only Fluentd or only Fluent Bit as needed, which is more flexible.
In logging-operator, every log collected by Fluent Bit must pass through Fluentd before it reaches its final destination, so under heavy load Fluentd becomes a potential single point of failure. In Fluent Operator, Fluent Bit can ship logs directly to the destination, avoiding that risk.
logging-operator defines five CRDs (loggings, outputs, flows, clusteroutputs, and clusterflows), while Fluent Operator defines 13. The richer CRD set lets users configure Fluentd and Fluent Bit more flexibly, and the CRD fields are named after the corresponding Fluentd and Fluent Bit options, so the configuration stays clear and close to the native components.
Both borrow Fluentd's label router plugin to isolate logs per tenant.
  • logging operator architecture diagram
  • The Logging Operator has only five core CRDs:
  logging: defines the base configuration of the log collector (Fluent Bit) and forwarder (Fluentd) services;
  flow: defines namespace-level log filtering, parsing, and routing rules;
  clusterflow: defines cluster-level log filtering, parsing, and routing rules;
  output: defines a namespace-level log destination and its parameters;
  clusteroutput: defines a cluster-level log destination and its parameters, and can be referenced by flows in other namespaces.
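To make the relationships concrete, here is a minimal, illustrative sketch of how a logging, an output, and a flow reference each other. All names (demo-logging, demo-output, demo-flow) and the nullout sink are made up for this sketch; the real configurations used later in this post are more complete.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: demo-logging          # illustrative name
spec:
  controlNamespace: logging   # namespace the operator reconciles against
  fluentd: {}
  fluentbit: {}
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: demo-output           # illustrative name
  namespace: default
spec:
  nullout: {}                 # discards logs; stands in for a real sink such as elasticsearch
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: demo-flow             # illustrative name
  namespace: default
spec:
  match:
    - select:
        labels:
          app: demo
  localOutputRefs:
    - demo-output             # a flow routes matched logs to outputs by name
```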
  • Install with Helm
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com
# Update the repo
helm repo update
# Set variables
pro=logging-operator
chart_version=3.17.5
mkdir -p /data/$pro
cd /data/$pro
# Download the chart
helm pull banzaicloud-stable/$pro --version=$chart_version
# Extract values.yaml from the chart
tar zxvf $pro-$chart_version.tgz --strip-components 1 $pro/values.yaml
cat > /data/$pro/start.sh << EOF
helm upgrade --install --wait $pro $pro-$chart_version.tgz \
--create-namespace \
-f values.yaml \
-n logging
EOF
bash /data/logging-operator/start.sh
# kubectl get pod -n logging
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
logging       logging-operator-8547f7d6c6-2sk8w    1/1     Running             0          13m
# kubectl get crds | grep logging
clusterflows.logging.banzaicloud.io              2022-05-08T12:57:37Z
clusteroutputs.logging.banzaicloud.io            2022-05-08T12:57:37Z
eventtailers.logging-extensions.banzaicloud.io   2022-05-08T12:57:37Z
flows.logging.banzaicloud.io                     2022-05-08T12:57:37Z
hosttailers.logging-extensions.banzaicloud.io    2022-05-08T12:57:37Z
loggings.logging.banzaicloud.io                  2022-05-08T12:57:37Z
outputs.logging.banzaicloud.io                   2022-05-08T12:57:37Z
  • Flow and ClusterFlow configuration: a Flow is effective only within its own namespace, while a ClusterFlow applies to the whole cluster
cat > /data/logging-operator/clusterflow.yaml << 'EOF'
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: clusterflow
spec:
  filters:
    - parser:
        remove_key_name_field: true
        parse:
          type: nginx
    - tag_normaliser:
        format: ${namespace_name}.${pod_name}.${container_name}
  match:
    - exclude:
        labels:
          component: reloader
        namespaces:
          - stage
    - select:
        labels:
          app: nginx
        namespaces:
          - default
          - prod
          - dev
          - test
EOF

Hands-on

1. Deploy nginx
2. Deploy Elasticsearch
3. Install logging-operator with Helm
4. Configure logging, flow, output, clusteroutput, clusterflow, HostTailer, EventTailer, and so on.
5. Verify

  • Deploy nginx
cat > /data/logging-operator/nginx.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: banzaicloud/log-generator:0.3.2
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx-service
  namespace: default
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
kubectl apply -f /data/logging-operator/nginx.yaml
  • Deploy Elasticsearch
mkdir -p /var/lib/container/elasticsearch/data \
&& chmod 777 /var/lib/container/elasticsearch/data
cat > /data/logging-operator/elasticsearch.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-password
  namespace: logging
data:
  ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
        - name: elasticsearch-data
          hostPath:
            path: /var/lib/container/elasticsearch/data
        - name: localtime
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
            - name: xpack.security.enabled
              value: "true"
            - name: discovery.type
              value: single-node
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: ELASTIC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-password
                  key: ES_PASSWORD
          name: elasticsearch
          image: elasticsearch:7.13.1
          imagePullPolicy: Always
          ports:
            - containerPort: 9200
            - containerPort: 9300
          resources:
            requests:
              memory: 1000Mi
              cpu: 200m
            limits:
              memory: 1000Mi
              cpu: 500m
          volumeMounts:
            - name: elasticsearch-data
              mountPath: /usr/share/elasticsearch/data
            - name: localtime
              mountPath: /etc/localtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: logging
spec:
  ports:
    - port: 9200
      protocol: TCP
      targetPort: 9200
  selector:
    app: elasticsearch
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
#      nodeSelector:
#        es: log
      containers:
        - name: kibana
          image: kibana:7.13.1
          #image: docker.elastic.co/kibana/kibana:7.13.1
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 1000m
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: elasticsearch-password
                  key: ES_PASSWORD
          ports:
            - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
    - port: 5601
      nodePort: 5601   # note: outside the default NodePort range (30000-32767); widen the range or change the port if apply fails
  type: NodePort
  selector:
    app: kibana
EOF
kubectl apply -f /data/logging-operator/elasticsearch.yaml
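The ES_PASSWORD value in the Secret above is just the plain-text password encoded as base64. It can be reproduced (or reversed) locally:

```shell
# Encode the plain-text password; printf avoids the trailing newline that echo would add
printf '%s' 'Elasticsearch2O21' | base64
# -> RWxhc3RpY3NlYXJjaDJPMjE=

# Decode it back
printf '%s' 'RWxhc3RpY3NlYXJjaDJPMjE=' | base64 --decode
# -> Elasticsearch2O21
```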
  • Configure logging, flow, output, etc.
cat > /data/logging-operator/es-output.yaml << 'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-password
  namespace: default
data:
  ES_PASSWORD: RWxhc3RpY3NlYXJjaDJPMjE=
type: Opaque
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: es-output
  namespace: default
spec:
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local
    port: 9200
    scheme: http
    #ssl_verify: false
    #ssl_version: TLSv1_2
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
    logstash_format: true
    logstash_prefix: nginx   # index name prefix
    user: elastic
    password:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_PASSWORD
EOF
kubectl apply -f /data/logging-operator/es-output.yaml
cat > /data/logging-operator/es-flow.yaml << 'EOF'
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
  localOutputRefs:
    - es-output
  match:
    - select:
        labels:
          app: nginx
EOF
kubectl apply -f /data/logging-operator/es-flow.yaml
cat > /data/logging-operator/config.yaml << 'EOF'
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: fluent-logging
spec:
  fluentd:
    disablePvc: true
#    bufferStorageVolume:
#      hostPath:
#        path: ""
#      pvc:
#        spec:
#          accessModes:
#            - ReadWriteOnce
#          resources:
#            requests:
#              storage: 50Gi
#          storageClassName: csi-rbd
#          volumeMode: Filesystem
    scaling:
#      replicas: 3
      drain:
        enabled: true
        image:
          repository: ghcr.io/banzaicloud/fluentd-drain-watch
          tag: latest
    livenessProbe:
      periodSeconds: 60
      initialDelaySeconds: 600
      exec:
        command:
          - "/bin/sh"
          - "-c"
          - >
            LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300};
            if [ ! -e /buffers ]; then exit 1; fi;
            touch -d "${LIVENESS_THRESHOLD_SECONDS} seconds ago" /tmp/marker-liveness;
            if [ -z "$(find /buffers -type d -newer /tmp/marker-liveness -print -quit)" ]; then exit 1; fi;
    fluentOutLogrotate:
      enabled: true
      path: /fluentd/log/out
      age: "10"
      size: "10485760"
    image:
      repository: banzaicloud/fluentd
      tag: v1.10.4-alpine-1
      pullPolicy: IfNotPresent
    configReloaderImage:
      repository: jimmidyson/configmap-reload
      tag: v0.4.0
      pullPolicy: IfNotPresent
    bufferVolumeImage:
      repository: quay.io/prometheus/node-exporter
      tag: v1.1.2
      pullPolicy: IfNotPresent
#    logLevel: debug
    security:
      roleBasedAccessControlCreate: true
    readinessDefaultCheck:
      bufferFileNumber: true
      bufferFileNumberMax: 5000
      bufferFreeSpace: true
      bufferFreeSpaceThreshold: 90
      failureThreshold: 1
      initialDelaySeconds: 5
      periodSeconds: 30
      successThreshold: 3
      timeoutSeconds: 3
  fluentbit:
    filterKubernetes:
      Kube_URL: "https://kubernetes.default.svc:443"
      Use_Kubelet: "false"
      tls.verify: "false"
      Kube_CA_File: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      Kube_Token_File: /var/run/secrets/kubernetes.io/serviceaccount/token
      Match: kubernetes.*
      Kube_Tag_Prefix: kubernetes.var.log.containers
      Merge_Log: "false"
      Merge_Log_Trim: "true"
      Kubelet_Port: "10250"
    inputTail:
      Skip_Long_Lines: "true"
      Parser: cri
      Refresh_Interval: "60"
      Rotate_Wait: "5"
      Mem_Buf_Limit: "128M"
      Docker_Mode: "false"
      Tag: "kubernetes.*"
    bufferStorage:
      storage.backlog.mem_limit: 10M
      storage.path: /var/log/log-buffer
    bufferStorageVolume:
      hostPath:
        path: "/var/log/log-buffer"
    positiondb:
      hostPath:
        path: "/var/log/positiondb"
    image:
      repository: fluent/fluent-bit
      tag: 1.8.15-debug
      pullPolicy: IfNotPresent
    enableUpstream: false
    logLevel: debug
    network:
      dnsPreferIpv4: true
      connectTimeout: 30
      keepaliveIdleTimeout: 60
      keepalive: true
    tls:
      enabled: false
    tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    security:
      podSecurityPolicyCreate: true
      roleBasedAccessControlCreate: true
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      podSecurityContext:
        fsGroup: 101
    forwardOptions:
      Require_ack_response: true
    metrics:
      interval: 60s
      path: /api/v1/metrics/prometheus
      port: 2020
      serviceMonitor: false
  controlNamespace: logging
  watchNamespaces: ["default","kube-system","logging"]
EOF
kubectl apply -f /data/logging-operator/config.yaml
  • Verify
kubectl logs -f -n logging fluent-logging-fluentd-configcheck-31c22f37

The Fluentd configuration generated from the logging-operator CRDs:

The Fluent Bit configuration generated from the logging-operator CRDs:

# Install jq
rpm -ivh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install jq -y
kubectl get secrets -n logging fluent-logging-fluentbit -o json | jq '.data."fluent-bit.conf"' | xargs echo | base64 --decode

The Secret contents after base64 decoding:
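The jq + base64 pipeline can be tried locally without a cluster. The JSON below is a made-up stand-in for the real Secret (W0lOUFVUXQ== is base64 for "[INPUT]"); note that jq -r prints the raw string, which makes the xargs echo step unnecessary:

```shell
# Mock Secret JSON; the data value is illustrative, not real operator output
echo '{"data":{"fluent-bit.conf":"W0lOUFVUXQ=="}}' \
  | jq -r '.data."fluent-bit.conf"' \
  | base64 --decode
# -> [INPUT]
```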

  • Event Tailer
    https://banzaicloud.com/docs/one-eye/logging-operator/configuration/extensions/kubernetes-event-tailer/
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: EventTailer
metadata:
  name: event
  namespace: logging
spec:
  controlNamespace: logging
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: event-flow
  namespace: logging
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        parse:
          type: json
  match:
    - select:
        labels:
          app.kubernetes.io/name: event-tailer
  globalOutputRefs:
    - global-event-output
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: global-event-output
  namespace: logging
spec:
  enabledNamespaces: ["default","logging"]
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local
    port: 9200
    scheme: http
    #ssl_verify: false
    #ssl_version: TLSv1_2
    user: elastic
    logstash_format: true
    logstash_prefix: event     # index name prefix
    password:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_PASSWORD
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true

  • Host Tailer
    https://banzaicloud.com/docs/one-eye/logging-operator/configuration/extensions/kubernetes-host-tailer/
apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: HostTailer
metadata:
  name: systemd-hosttailer
  namespace: logging
spec:
  systemdTailers:
    - name: systemd-tailer
      disabled: false
      maxEntries: 100
      path: "/run/log/journal"  # must be set (either /var/log/journal or /run/log/journal), otherwise startup fails
      #systemdFilter: kubelet.service  # to ingest a single unit only, name its systemd service; the default is all units
      containerOverrides:
        image: fluent/fluent-bit:1.8.15-debug
  fileTailers:
    - name: message-tail
      path: /var/log/messages
      buffer_max_size: 64k   # must be changed, otherwise startup fails
      disabled: false
      skip_long_lines: "true"
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: hosttailer-flow
  namespace: logging
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        parse:
          type: json
  match:
    - select:
        labels:
          app.kubernetes.io/name: host-tailer
  globalOutputRefs:
    - global-host-output
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: global-host-output
  namespace: logging
spec:
  enabledNamespaces: ["default","logging"]
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local
    port: 9200
    scheme: http
    #ssl_verify: false
    #ssl_version: TLSv1_2
    user: elastic
    logstash_format: true
    logstash_prefix: hosttailer
    password:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_PASSWORD
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true


How to configure a ClusterOutput

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: global-es-output
  namespace: logging           # the ClusterOutput lives in the logging namespace
spec:
  enabledNamespaces: ["default","logging"]     # it takes effect only in the listed namespaces
  elasticsearch:
    host: elasticsearch.logging.svc.cluster.local
    port: 9200
    scheme: http
    #ssl_verify: false
    #ssl_version: TLSv1_2
    user: elastic
    logstash_format: true
    password:
      valueFrom:
        secretKeyRef:
          name: elasticsearch-password
          key: ES_PASSWORD
    buffer:
      timekey: 1m
      timekey_wait: 30s
      timekey_use_utc: true
  • Referencing a ClusterOutput from a Flow or ClusterFlow
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: flow
  namespace: default
spec:
  filters:
    - tag_normaliser: {}
    - parser:
        remove_key_name_field: true
        reserve_data: true
        parse:
          type: nginx
  localOutputRefs:    # references an Output
    - es-output
  globalOutputRefs:    # references a ClusterOutput
    - global-es-output
  match:
    - select:
        labels:
          app: nginx

Troubleshooting:
1. Check whether Fluentd has permission to access the Kubernetes API

kubectl exec -it -n logging fluent-logging-fluentd-0 sh
# Inside the pod, obtain the service account token
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Call the API server
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN" -s https://kubernetes.default.svc:443/api/v1/namespaces/default/pods/ -k
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods is forbidden: User \"system:serviceaccount:logging:fluent-logging-fluentd\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "kind": "pods"
  },
  "code": 403
}
The 403 response means the service account has no access.
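If the operator-managed RBAC is missing or disabled, one way to grant the permission from the error message is a ClusterRole bound to the Fluentd service account. This is an illustrative sketch rather than the operator's own generated RBAC; the subject name is taken from the 403 message above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd-pod-reader        # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd-pod-reader        # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd-pod-reader
subjects:
  - kind: ServiceAccount
    name: fluent-logging-fluentd  # from the error message above
    namespace: logging
```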

2. Verify that the Output and Flow resources are working properly

[root@host-192-168-11-100 logging-operator]# kubectl get outputs.logging.banzaicloud.io
NAME        ACTIVE   PROBLEMS
es-output   true
[root@host-192-168-11-100 logging-operator]# kubectl get flows.logging.banzaicloud.io
NAME         ACTIVE   PROBLEMS
event-flow   true
flow         true
