Building an ELK Log Search System

  • Architecture diagram
  • Component configuration
    • 1. Filebeat
    • 2. Logstash
    • 3. Elasticsearch
    • 4. Kibana
  • Example application

Architecture diagram

(Diagram omitted; the data flow, as configured below, is Filebeat -> Logstash -> Elasticsearch -> Kibana.)

Component configuration

All components must run the same version; mismatched versions will not work together.
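If you script the installation, a simple way to honor this constraint is to pin one version variable and reuse it everywhere (a minimal sketch; the ELK_VERSION name is my own convention):

ELK_VERSION=7.16.1
docker pull elasticsearch:${ELK_VERSION}
docker pull logstash:${ELK_VERSION}
docker pull kibana:${ELK_VERSION}
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${ELK_VERSION}-linux-x86_64.tar.gz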

1. Filebeat

Download, extract, and rename with the following commands:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.16.1-linux-x86_64.tar.gz
tar xf filebeat-7.16.1-linux-x86_64.tar.gz -C /usr/local/
mv /usr/local/filebeat-7.16.1-linux-x86_64/ /usr/local/filebeat
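To sanity-check the unpacked binary, Filebeat's standard version subcommand prints the build it will run:

/usr/local/filebeat/filebeat version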

To run Filebeat in the background, configure a systemd unit file to manage it:

# vim /usr/lib/systemd/system/filebeat.service

[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target

Reload systemd so it picks up the new unit:

# systemctl daemon-reload

Start:

# systemctl start filebeat

Or start it directly with a specified configuration file:

# /usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml

Stop:

# systemctl stop filebeat
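Optionally, standard systemctl/journalctl usage also lets you start Filebeat at boot and follow its service log:

# systemctl enable filebeat
# systemctl status filebeat
# journalctl -u filebeat -f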

Configuration file: filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that
  # are matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that
  # are matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files
  # that are matching any regular expression from the list. By default, no files
  # are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

# Configuration for the built-in log collection modules
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by
# output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Input

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

Filebeat can only have one output enabled at a time.
To output to Logstash instead of Elasticsearch:

output.logstash:
  hosts: ["localhost:5044"]

Filebeat's own runtime logs are written under:

/usr/local/filebeat/logs
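Before going live, Filebeat's built-in test subcommands can verify both the configuration file and the connection to whichever output is enabled:

/usr/local/filebeat/filebeat test config -c /usr/local/filebeat/filebeat.yml
/usr/local/filebeat/filebeat test output -c /usr/local/filebeat/filebeat.yml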

Note: if you append content to an existing line of a file that is already being harvested, a character will be lost (presumably because Filebeat resumes from its recorded offset rather than re-reading the line).

2. Logstash

Docker setup
Pull the image:

docker pull logstash:7.16.1

Run:

docker run -d --name=logstash7 -p 5044:5044 -e LS_JAVA_OPTS="-Xms256m -Xmx256m" logstash:7.16.1

(-p 5044:5044 is added so that Filebeat on the host can reach the Beats input port configured below.)

Configure the input and output:

input {
  beats {
    port => 5044
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

Restart the service once the configuration is in place; one way to do this is sketched below.
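A sketch of loading the pipeline into the running container, assuming the configuration above is saved locally as first-pipeline.conf (/usr/share/logstash/pipeline/ is the image's default pipeline directory):

docker cp first-pipeline.conf logstash7:/usr/share/logstash/pipeline/logstash.conf
docker restart logstash7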

Tarball installation

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.16.1-linux-x86_64.tar.gz
tar -xf logstash-7.16.1-linux-x86_64.tar.gz -C /usr/local/
mv /usr/local/logstash-7.16.1/ /usr/local/logstash

Add a pipeline configuration file:

vim /usr/local/logstash/config/first-pipeline.conf
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    # When a line arrives, the %{COMBINEDAPACHELOG} pattern parses the
    # message field into structured fields.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
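For illustration, a combined-format Apache access line such as this made-up example:

127.0.0.1 - frank [13/Dec/2021:10:55:36 +0800] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0"

would be parsed by %{COMBINEDAPACHELOG} into structured fields such as clientip, auth, timestamp, verb, request, response, bytes, referrer, and agent, which can then be searched and filtered in Kibana.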

Run Logstash:

bin/logstash -f config/first-pipeline.conf
Additional flags can be appended:
-t: check the configuration file for errors.
--config.test_and_exit: test the configuration and exit.
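For example, to validate the pipeline file without actually starting it:

bin/logstash -f config/first-pipeline.conf --config.test_and_exit

While iterating, --config.reload.automatic is also handy: it reloads the pipeline whenever the file changes.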

3. Elasticsearch

Docker setup
Pull the image:

docker pull elasticsearch:7.16.1

Run:

docker run -it --name elasticsearch7 -d -e ES_JAVA_OPTS="-Xms256m -Xmx256m" -e "discovery.type=single-node" -p 9200:9200 -p 9300:9300 -p 5601:5601 elasticsearch:7.16.1

Note: Elasticsearch's default heap is 2 GB, which this virtual machine cannot accommodate, so it is lowered to 256 MB here. Port 5601 is also published on this container because the Kibana container below will share its network stack.
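Once the container is up, a quick check against the standard REST endpoints confirms Elasticsearch is answering:

curl http://localhost:9200
curl http://localhost:9200/_cluster/health?pretty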

4. Kibana

Docker setup
Pull the image:

docker pull kibana:7.16.1

Run:

docker run -it -d -e ELASTICSEARCH_URL=http://127.0.0.1:9200 --name kibana7 --network=container:elasticsearch7 kibana:7.16.1

--network makes this container share the elasticsearch7 container's network stack (once --network is used in container mode, -p can no longer be used to publish ports, which is why 5601 was published on the Elasticsearch container instead). Note that ELASTICSEARCH_URL is the 6.x-era variable name; on 7.x images the equivalent setting is ELASTICSEARCH_HOSTS, though with the shared network the default http://localhost:9200 works either way.
After startup, check the configuration file (/usr/share/kibana/config/kibana.yml inside the container):

server.host: "0.0.0.0"
server.shutdownTimeout: "5s"
elasticsearch.hosts: [ "http://localhost:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
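To confirm Kibana is up, open http://<host>:5601 in a browser or query its status API:

curl -s http://localhost:5601/api/status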

Example application

# Pull the Java 8 base image
FROM java:8
MAINTAINER zhangshan-makepakege
# Add the local jar into the image root as web.jar
ADD *.jar web.jar
# Expose the two ports the application listens on
EXPOSE 8080/tcp
EXPOSE 16081/tcp
# Launch command; either of the two forms below works
#ENTRYPOINT ["java","-Duser.timezone=GMT+08","-jar","/web.jar"]
ENTRYPOINT ["sh","-c","java -Duser.timezone=GMT+08 -jar /web.jar"]

Build the image:

docker build -t demo:1.0.0 .
# Do not drop the trailing . (it is the build context).

Run it:

docker run -itd --name demo -p 8080:8080 demo:1.0.0

View the container logs (substitute your own container ID or name):

docker logs -f --tail=100 4a40fe45a990

Get a shell inside the container (again, substitute your container ID):

docker exec -it 6144abb13a1b /bin/bash
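To verify the whole pipeline end to end (a sketch; the index name follows the %{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd} pattern set in the Logstash output above, so expect something like filebeat-7.16.1-2021.12.13):

# list indices and look for the filebeat-* entries
curl http://localhost:9200/_cat/indices?v

Then, in Kibana (Stack Management -> Index Patterns), create an index pattern such as filebeat-* and browse the application's log lines in Discover.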
