Collecting Nginx Logs with ELK

1. Configure Nginx logging

Edit nginx.conf (vim /etc/nginx/nginx.conf) and add the following under the http block:

log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"client":"$remote_addr",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"referer": "$http_referer",'
                '"ua": "$http_user_agent"'
                '}';
access_log /data/nginx/logs/access_json.log json;

This stores the Nginx access log on disk as JSON, which makes it easy for Elasticsearch to ingest.
Send a few requests to Nginx and watch the log with tail -f /data/nginx/logs/access_json.log; if new log entries appear, the configuration is working.
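Note that $body_bytes_sent and $request_time are emitted without quotes, so each line is standalone JSON with numeric size and responsetime fields. A quick sanity check in Python (the sample line below is made up, shaped like the log_format above):

```python
import json

# Hypothetical sample line in the shape produced by the log_format above
sample = ('{"@timestamp":"2023-01-01T12:00:00+08:00","@version":"1",'
          '"client":"203.0.113.7","url":"/index.html","status":"200",'
          '"domain":"example.com","host":"198.51.100.1","size":512,'
          '"responsetime":0.003,"referer": "-","ua": "curl/7.68.0"}')

entry = json.loads(sample)  # raises ValueError if the line is not valid JSON
print(entry["status"], entry["size"])  # size/responsetime parse as numbers
```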

2. Install ELK

Installed via docker-compose:

version: "3"
services:
  elasticsearch:
    image: "elasticsearch:7.1.1"
    container_name: "elasticsearch"
    restart: "always"
    volumes:
      - "elasticsearch:/usr/share/elasticsearch"
    # vim /etc/sysctl.conf
    # vm.max_map_count=262144
    # sysctl -w vm.max_map_count=262144
    # sysctl -p
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    networks:
      - "elk"
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: "kibana:7.1.1"
    container_name: "kibana"
    restart: "always"
    depends_on:
      - elasticsearch
    volumes:
      - "kibana:/usr/share/kibana"
    networks:
      - "elk"
    ports:
      - "5601:5601"
  logstash:
    image: "logstash:7.1.1"
    container_name: "logstash"
    restart: "always"
    networks:
      - "elk"
    ports:
      - "5044:5044"
      - "9600:9600"
    volumes:
      - "logstash:/usr/share/logstash"
      - "/data/nginx/logs:/data/nginx/logs"

networks:
  elk:

volumes:
  elasticsearch:
  logstash:
  kibana:

  • Configure logstash.yml
    Append the following to config/logstash.yml:

    path.config: /usr/share/logstash/conf.d/*.conf
  • Configure the Logstash pipeline
    Create conf.d/logstash.conf with the following contents:

    input {
      file {
        type => "nginx-access-log"
        path => "/data/nginx/logs/access_json.log"
        start_position => "beginning"
        stat_interval => "2"
        codec => json
      }
    }
    filter {}
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        index => "logstash-nginx-access-log-%{+YYYY.MM.dd}"
        #user => "elastic"
        #password => "changeme"
      }
      stdout {
        codec => json_lines
      }
    }
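Conceptually, the file input above behaves like the following Python sketch (a simplification: real Logstash also tracks read offsets in a sincedb so restarts don't re-read the file):

```python
import json
import time

# Rough sketch of Logstash's file input with start_position => "beginning",
# stat_interval => 2 and codec => json: read the file from the start, then
# poll for newly appended lines every 2 seconds, parsing each line as JSON.
def follow(path, interval=2.0):
    with open(path, "r") as f:           # start_position => "beginning"
        while True:
            line = f.readline()
            if line:
                yield json.loads(line)   # codec => json
            else:
                time.sleep(interval)     # stat_interval => 2
```

Usage would look like `for event in follow("/data/nginx/logs/access_json.log"): ...`, with each event being one parsed access-log entry.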

    Notes:

  • start_position: read the file from the beginning

  • stat_interval: check the file for new content every 2 seconds

  • index: the name of the Elasticsearch index to write to

  • user | password: the X-Pack plugin is not installed here, so no username/password is needed; configure them yourself if your setup requires it
  • Start the stack:
    docker-compose up -d --build
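The %{+YYYY.MM.dd} part of the index name is a Logstash date pattern expanded from each event's @timestamp (in UTC), so events land in one index per day. The equivalent expansion as a Python sketch (YYYY.MM.dd maps to strftime's %Y.%m.%d):

```python
from datetime import datetime, timezone

# Expand "logstash-nginx-access-log-%{+YYYY.MM.dd}" the way Logstash would,
# from the event's @timestamp in UTC.
def index_for(ts):
    return "logstash-nginx-access-log-" + ts.strftime("%Y.%m.%d")

print(index_for(datetime(2023, 1, 1, tzinfo=timezone.utc)))
# → logstash-nginx-access-log-2023.01.01
```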

Afterwards, open the elasticsearch-head plugin and you should see that a new index has appeared.

Open http://{host}:5601 and add an index pattern for the Nginx logs in Kibana.


Management –> Index Patterns –> Create index pattern
Enter logstash-nginx-*, which matches the index name prefix configured in Logstash.
Then set @timestamp as the time field, so Kibana can display entries in reverse chronological order by this field.
Once configured, the corresponding log index can be browsed under Discover in the left sidebar.

You can also filter which fields are displayed.