AKA ES Nginx Logs
Created by 神翔, 青空, and 睿希
1. Introduction
- Produced by the QQ group "IT信息文案策划中心"
- https://www.akiraka.net
2. Update Logs
2020-04-23
- Added charts for today's PV, today's UV, and 7-day PV
- Tested OK against Kubernetes ingress-nginx logs
- Kubernetes deployment docs are in progress
2020-09-30
- Users reported that importing the dashboard required Prometheus; the Prometheus dependency has been removed
3. ELK Versions
Name | 7.3.1 | 7.6.1 | 7.9.1
---|---|---|---
Kibana | ok | ok | ok
Filebeat | ok | ok | ok
Logstash | ok | ok | ok
Elasticsearch | ok | ok | ok
4. Errors
- Field errors
  - The Logstash index name must start with `logstash-` (so it matches the `logstash-*` index pattern); otherwise the Logstash output config has to be changed before the dashboard works.

Nginx fields
- Make sure nginx logs exactly the fields below; if any field name is changed, the Grafana template must be adjusted to match.
```nginx
log_format aka_logs
    '{"@timestamp":"$time_iso8601",'
    '"host":"$hostname",'
    '"server_ip":"$server_addr",'
    '"client_ip":"$remote_addr",'
    '"xff":"$http_x_forwarded_for",'
    '"domain":"$host",'
    '"url":"$uri",'
    '"referer":"$http_referer",'
    '"args":"$args",'
    '"upstreamtime":"$upstream_response_time",'
    '"responsetime":"$request_time",'
    '"request_method":"$request_method",'
    '"status":"$status",'
    '"size":"$body_bytes_sent",'
    '"request_body":"$request_body",'
    '"request_length":"$request_length",'
    '"protocol":"$server_protocol",'
    '"upstreamhost":"$upstream_addr",'
    '"file_dir":"$request_filename",'
    '"http_user_agent":"$http_user_agent"'
    '}';
```
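Defining `log_format` alone produces no output; nginx also needs an `access_log` directive that references the format. A minimal sketch (the log path is an assumption chosen to match the Filebeat glob `/data/wwwlogs/*_nginx.log` below):

```nginx
server {
    listen 80;
    server_name example.com;

    # Write JSON access logs where Filebeat picks them up;
    # the path is an example matching /data/wwwlogs/*_nginx.log
    access_log /data/wwwlogs/example.com_nginx.log aka_logs;
}
```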
Filebeat configuration

```yaml
#=========================== Filebeat inputs =============================
filebeat.inputs:
  # Collect nginx logs
  - type: log
    enabled: true
    paths:
      - /data/wwwlogs/*_nginx.log
    # Enable these options when the log lines are JSON
    json.keys_under_root: true
    json.overwrite_keys: true
    json.add_error_key: true

#-------------------------- Redis output ------------------------------
output.redis:
  hosts: ["host"]     # Redis server to ship logs to
  password: "password"
  key: "nginx_logs"   # Redis key the log entries are pushed to
  db: 0
  timeout: 5
```
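Because `json.keys_under_root` makes Filebeat decode every line as JSON, a single malformed line gets an error key instead of real fields. A standalone sketch for sanity-checking a log file before shipping it (the sample line mirrors a few `aka_logs` fields; the helper name is our own):

```python
import json

# A sample access-log line in the aka_logs format (truncated to a few fields)
sample = ('{"@timestamp":"2020-09-30T12:00:00+08:00",'
          '"client_ip":"203.0.113.7","status":"200","size":"512"}')

def check_line(line: str) -> dict:
    """Parse one log line the way Filebeat's JSON decoder would."""
    doc = json.loads(line)      # raises ValueError on malformed lines
    assert "@timestamp" in doc  # the pipeline relies on this field
    return doc

doc = check_line(sample)
# Note: nginx logs every value as a string; the Logstash mutate/convert
# filter below is what turns status/size into numbers.
print(type(doc["status"]).__name__)  # -> str
```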
Logstash configuration

```ruby
input {
  # Read nginx log entries from the Redis list
  redis {
    data_type => "list"
    key       => "nginx_logs"
    host      => "redis"
    port      => 6379
    password  => "password"
    db        => 0
  }
}

filter {
  geoip {
    #multiLang => "zh-CN"
    target   => "geoip"
    source   => "client_ip"
    database => "/usr/share/logstash/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    # Drop the extra geoip fields the dashboard does not display
    remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]"]
  }
  mutate {
    convert => [ "size", "integer" ]
    convert => [ "status", "integer" ]
    convert => [ "responsetime", "float" ]
    convert => [ "upstreamtime", "float" ]
    convert => [ "[geoip][coordinates]", "float" ]
    # Drop Filebeat fields we do not need. Be careful: anything removed
    # here is no longer available downstream in Elasticsearch.
    remove_field => [ "ecs","agent","host","cloud","@version","input","logs_type" ]
  }
  # Parse http_user_agent into client OS / browser and their versions
  useragent {
    source => "http_user_agent"
    target => "ua"
    # Drop useragent fields we do not need
    remove_field => [ "[ua][minor]","[ua][major]","[ua][build]","[ua][patch]","[ua][os_minor]","[ua][os_major]" ]
  }
}

output {
  elasticsearch {
    hosts    => "es-master"
    user     => "elastic"
    password => "password"
    index    => "logstash-nginx-%{+YYYY.MM.dd}"
  }
}
```
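The `%{+YYYY.MM.dd}` sprintf in the output creates one index per day, which is what the `logstash-*` index-pattern requirement above relies on. A small sketch of the resulting names (Python is used only for illustration; Logstash's Joda-style `YYYY.MM.dd` corresponds to strftime `%Y.%m.%d`):

```python
from datetime import date

def daily_index(day: date) -> str:
    # Mirrors Logstash's index => "logstash-nginx-%{+YYYY.MM.dd}"
    return f"logstash-nginx-{day:%Y.%m.%d}"

print(daily_index(date(2020, 9, 30)))  # -> logstash-nginx-2020.09.30
```

Every name this produces starts with `logstash-`, so the default `logstash-*` pattern picks up each day's index automatically.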