Building an EFK log collection system with Spring Boot
EFK setup (Elasticsearch + Filebeat + Kibana)

1. Filebeat collects logs (it can collect various log types)

3.1 Install and start Elasticsearch

Verify that Elasticsearch is up; the tail of the response to curl http://localhost:9200 looks like this:

    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You know, for search"
}
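Beyond that banner response, you can also check cluster status (assuming Elasticsearch is reachable locally):

curl http://localhost:9200/_cluster/health?pretty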

3.2 Install and start Kibana

3.2.1 Decompress Kibana

[root@ecs7 efk]# su elasticsearch

[elasticsearch@ecs7 efk]$ tar -zxvf kibana-7.12.0-linux-x86_64.tar.gz

3.2.2 Configure Kibana

[elasticsearch@ecs7 efk]$ cd kibana-7.12.0-linux-x86_64

[elasticsearch@ecs7 kibana-7.12.0-linux-x86_64]$ cd config/

[elasticsearch@ecs7 config]$ cp kibana.yml kibana.yml.org

Back up the original configuration file


Full text of kibana.yml

# port
server.port: 5601

# host
server.host: "0.0.0.0"

# name
server.name: "master"

# es cluster address (assumed to be the same host as in the filebeat config below)
elasticsearch.hosts: ["http://192.168.0.107:9200"]
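3.2.3 Start Kibana

A typical way to start it in the background, assuming the same user and directory as above:

[elasticsearch@ecs7 kibana-7.12.0-linux-x86_64]$ nohup ./bin/kibana &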

3.3 Install and configure filebeat

Relevant part of filebeat.yml:

#index.codec: best_compression
#_source.enabled: false

setup.kibana:

#-------------------------- Elasticsearch output --------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  # es address
  hosts: ["192.168.0.107:9200"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  # parse the log timestamp from the JSON-decoded field
  - timestamp:
      field: json.@timestamp
      timezone: Asia/Shanghai
      layouts:
        - '2006-01-02T15:04:05+08:00'
        - '2006-01-02T15:04:05.999+08:00'
      test:
        - '2019-06-22T16:33:51+08:00'
        - '2019-11-18T04:59:51.123+08:00'
  # drop fields we no longer need
  - drop_fields:
      fields: [json.@version, json.level_value, json.@timestamp]
  # rename fields
  - rename:
      fields:
        - from: "json.logName"
          to: "json.appName"
      ignore_missing: false
      fail_on_error: true
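Note that the filebeat input section has to decode each log line as JSON for the json.* fields referenced above to exist. A minimal sketch, with a placeholder path for wherever the Spring Boot logs land:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - D:\logs\*\*.log            # placeholder: the Spring Boot log directory
    # keep decoded fields under the "json." prefix (matches json.@timestamp etc. above)
    json.keys_under_root: false
    json.add_error_key: true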

3.3.3 Start filebeat

Run filebeat.exe from a cmd window.
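For example, from the filebeat install directory (the path is a placeholder; -e logs to the console, -c names the config file):

cd C:\efk\filebeat-7.12.0-windows-x86_64
filebeat.exe -e -c filebeat.yml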

3.4 Spring Boot logback configuration

Add the logstash-logback-encoder dependency to pom.xml. It writes each log event as a single JSON line, so we do not have to deal with multi-line records (such as stack traces) separately.

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- assumption: ${logName} is referenced below; binding it to
         spring.application.name is one common way to define it -->
    <springProperty scope="context" name="logName" source="spring.application.name"/>
    <!-- console appender with a human-readable pattern -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <!-- rolling file appender that writes JSON lines for filebeat -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${logName}/${logName}.log</file>
        <append>true</append>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/${logName}/${logName}-%d{yyyy-MM-dd}.log.%i</fileNamePattern>
            <maxFileSize>64MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>Asia/Shanghai</timeZone>
                </timestamp>
                <!-- assumption: these emit the @version and level_value fields
                     that the filebeat drop_fields processor above removes -->
                <version/>
                <logLevelValue/>
                <pattern>
                    <!-- "logName" is an assumption: the filebeat rename processor above expects a json.logName field -->
                    <pattern>{"logName": "${logName}", "level": "%level", "class": "%logger{40}", "message": "%message", "stack_trace": "%exception"}</pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>

Start the Spring Boot service; the logs it generates will be collected by Filebeat automatically and pushed to Elasticsearch.
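To confirm that data is arriving, list the indices on the Elasticsearch host (filebeat writes to filebeat-* indices by default; the address matches the config above):

curl http://192.168.0.107:9200/_cat/indices?v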