ELFK: Capturing HAProxy Logs in Real Time

1. Preface

This post records the complete workflow of collecting HAProxy logs, structuring the data with Logstash grok patterns, shipping it to Elasticsearch, and finally visualizing it in Kibana. For installing and configuring Elasticsearch and Kibana (the E and K), see the earlier post: ELK Stack Introduction and Installation Guide.

1.1 Introduction to Logstash

Logstash dynamically collects, transforms, and ships data regardless of its format or complexity. It can derive structure from unstructured data with grok, decode geographic coordinates from IP addresses, anonymize or drop sensitive fields, and simplify processing overall. Data tends to live in many systems, in many forms, sometimes scattered and sometimes centralized. Logstash supports a wide range of inputs and can capture events from many common sources at the same time, continuously streaming data in from logs, metrics, web applications, data stores, and various AWS services. See the figure below:

  • Figure 1: Logstash data flow (image_1dod7098hm0q1u501sr91lvhe7jp.png)
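The collect → transform → ship flow described above maps directly onto Logstash's three pipeline stages. A minimal skeleton (the plugin choices here are illustrative, not the configuration used later in this post):

```conf
input {          # where events come from (files, beats, syslog, ...)
  stdin { }
}
filter {         # how events are parsed and enriched (grok, date, mutate, ...)
  mutate { add_tag => ["demo"] }
}
output {         # where events go (elasticsearch, stdout, ...)
  stdout { codec => rubydebug }
}
```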

2. Deployment Environment

Platform          | IP            | Role              | E version | L version | K version
CentOS 6.7 64-bit | 192.168.1.241 | ES+Cerebro+Kibana | 6.7.0     | -         | 6.7.0
CentOS 6.7 64-bit | 192.168.1.43  | Logstash          | -         | 6.7.0     | -

3. Installing and Configuring Logstash

3.1 Configure the EPEL and Elastic Repositories (all_logstash)

[root@localhost ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/Packages/e/epel-release-6-8.noarch.rpm
[root@localhost ~]# yum install vim telnet wget nethogs htop glances dstat traceroute lrzsz goaccess ntpdate dos2unix openssl-devel tcpdump fio nss curl -y
[root@localhost ~]# yum groupinstall "Development Tools" -y
[root@localhost ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

3.2 Install the JDK and Configure the System Environment (all_logstash)

[root@test1 ~]# echo "* - nofile 65536" >> /etc/security/limits.conf
[root@test1 ~]# sed -i "s/1024/65536/g" /etc/security/limits.d/90-nproc.conf
[root@test1 ~]# echo "fs.file-max = 65536" >> /etc/sysctl.conf
[root@test1 ~]# wget http://192.168.1.231/soft/jdk-8u131-linux-x64.tar.gz
[root@test1 ~]# mkdir -pv /usr/java/
[root@test1 ~]# tar xzvf jdk-8u131-linux-x64.tar.gz -C /usr/java/
[root@test1 ~]# ln -s /usr/java/jdk1.8.0_131/bin/java /usr/sbin/
[root@test1 ~]# vim /etc/profile
JAVA_HOME=/usr/java/jdk1.8.0_131
export JAVA_HOME
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export CLASSPATH
PATH=$JAVA_HOME/bin:$PATH:$HOME/bin
export PATH
export LANG=zh_CN.UTF-8
[root@test1 ~]# reboot
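The sed command above rewrites CentOS 6's default per-user process limit from 1024 to 65536. Its effect can be seen on a throwaway copy of the file (the sample path and contents below are illustrative):

```shell
# Make a scratch copy mimicking /etc/security/limits.d/90-nproc.conf
printf '*          soft    nproc     1024\nroot       soft    nproc     unlimited\n' > /tmp/90-nproc.sample

# Same substitution as in the step above, applied to the copy
sed -i 's/1024/65536/g' /tmp/90-nproc.sample

# The wildcard entry now carries the raised limit; root's line is untouched
nproc_limit=$(awk '$1=="*" {print $4}' /tmp/90-nproc.sample)
echo "$nproc_limit"    # 65536
```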

3.3 Install Logstash (all_logstash)

[root@test1 ~]# yum install logstash-6.7.0
[root@test1 ~]# chkconfig logstash on
[root@test1 ~]# rpm -ql logstash
# configuration files
/etc/logstash/conf.d
/etc/logstash/jvm.options
/etc/logstash/log4j2.properties
/etc/logstash/logstash-sample.conf
/etc/logstash/logstash.yml
/etc/logstash/pipelines.yml
/etc/logstash/startup.options
# data / cache directory
/var/lib/logstash/
# main program directory
/usr/share/logstash
# log directory
/var/log/logstash
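Dropping pipeline files under /etc/logstash/conf.d/ works because the packaged pipelines.yml (listed above) wires that directory into the default pipeline. Assuming an unmodified install, its default contents are:

```conf
# /etc/logstash/pipelines.yml: every *.conf under conf.d/ is merged
# into the single "main" pipeline
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
```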

3.4 Import the haproxy_patterns Pattern Library

This pattern library comes from the official GitHub repository; for details see: ELK Grok Study Notes.

[root@atmpos patterns]# pwd
/usr/share/logstash/patterns
[root@atmpos patterns]# cat haproxy_patterns
HAPROXYTIME (?!<[0-9])%{HOUR:haproxy_hour}:%{MINUTE:haproxy_minute}(?::%{SECOND:haproxy_second})(?![0-9])
HAPROXYDATE %{MONTHDAY:haproxy_monthday}/%{MONTH:haproxy_month}/%{YEAR:haproxy_year}:%{HAPROXYTIME:haproxy_time}.%{INT:haproxy_milliseconds}

HAPROXYCAPTUREDREQUESTHEADERS %{DATA:captured_request_headers}
HAPROXYCAPTUREDRESPONSEHEADERS %{DATA:captured_response_headers}

# parse a haproxy 'httplog' line
HAPROXYHTTPBASE %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_request}/%{INT:time_queue}/%{INT:time_backend_connect}/%{INT:time_backend_response}/%{NOTSPACE:time_duration} %{INT:http_status_code} %{NOTSPACE:bytes_read} %{DATA:captured_request_cookie} %{DATA:captured_response_cookie} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue} (\{%{HAPROXYCAPTUREDREQUESTHEADERS}\})?( )?(\{%{HAPROXYCAPTUREDRESPONSEHEADERS}\})?( )?"(<BADREQ>|(%{WORD:http_verb} (%{URIPROTO:http_proto}://)?(?:%{USER:http_user}(?::[^@]*)?@)?(?:%{URIHOST:http_host})?(?:%{URIPATHPARAM:http_request})?( HTTP/%{NUMBER:http_version})?))?"?

HAPROXYHTTP (?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{HAPROXYHTTPBASE}

# parse a haproxy 'tcplog' line
HAPROXYTCP (?:%{SYSLOGTIMESTAMP:syslog_timestamp}|%{TIMESTAMP_ISO8601:timestamp8601}) %{IPORHOST:syslog_server} %{SYSLOGPROG}: %{IP:client_ip}:%{INT:client_port} \[%{HAPROXYDATE:accept_date}\] %{NOTSPACE:frontend_name} %{NOTSPACE:backend_name}/%{NOTSPACE:server_name} %{INT:time_queue}/%{INT:time_backend_connect}/%{NOTSPACE:time_duration} %{NOTSPACE:bytes_read} %{NOTSPACE:termination_state} %{INT:actconn}/%{INT:feconn}/%{INT:beconn}/%{INT:srvconn}/%{NOTSPACE:retries} %{INT:srv_queue}/%{INT:backend_queue}
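To see what HAPROXYTCP pulls out of a real line, here is a hand-made tcplog entry with a few fields extracted using plain awk (the log values are invented for illustration; the field positions follow the pattern above):

```shell
# A sample HAProxy 'tcplog' line: client_ip:port [accept_date]
# frontend backend/server queue/connect/duration bytes state conns queues
line='Oct 10 12:00:01 lb1 haproxy[1234]: 192.168.1.50:40332 [10/Oct/2019:12:00:00.123] fe_main be_app/srv1 0/0/1234 512 -- 10/8/5/3/0 0/0'

client_ip=$(echo "$line" | awk '{print $6}' | cut -d: -f1)   # -> client_ip field
frontend=$(echo "$line" | awk '{print $8}')                  # -> frontend_name field
backend=$(echo "$line" | awk '{print $9}' | cut -d/ -f1)     # -> backend_name field
echo "$client_ip $frontend $backend"    # 192.168.1.50 fe_main be_app
```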

3.5 Create the haproxy.conf Pipeline Configuration

Tip: choose the pattern that matches how your HAProxy instance logs; the official library currently covers the httplog and tcplog formats.

[root@atmpos conf.d]# pwd
/etc/logstash/conf.d
[root@atmpos conf.d]# cat haproxy.conf
# Tail the local HAProxy log, structure it with the HAPROXYTCP
# grok pattern, and ship it to Elasticsearch.
input {
  file {
    path => "/var/log/haproxy.log"
    type => "haproxy-access-log"
    exclude => "*.gz"
    start_position => "beginning"
    stat_interval => "1"
  }
}

filter {
  if [type] == "haproxy-access-log" {
    grok {
      match => { "message" => "%{HAPROXYTCP}" }
    }
    date {
      match => ["accept_date", "dd/MMM/yyyy:HH:mm:ss.SSS"]
    }
    mutate {
      # drop the original host field (renamed, then removed)
      rename => { "host" => "host.name" }
      remove_field => ["message"]
      remove_field => ["host.name"]
      add_field => { "server_ip" => "192.168.1.43" }
      remove_field => ["accept_date"]
    }
  }
}

output {
  if [type] == "haproxy-access-log" {
    elasticsearch {
      hosts => ["192.168.1.241:9200"]
      index => "haproxy-43-log-%{+YYYY.MM.dd}"
    }
  }
}
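While tuning the grok pattern, it helps to temporarily swap the elasticsearch output for stdout so each parsed event prints to the console. A debugging variant of the output section above:

```conf
output {
  if [type] == "haproxy-access-log" {
    # print each parsed event as a structured map;
    # revert to the elasticsearch output when the fields look right
    stdout { codec => rubydebug }
  }
}
```

Run Logstash in the foreground with `/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/haproxy.conf` to watch events as they are parsed.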

3.6 Restart Logstash

Tip: if the log shows no ERROR-level entries, data has started flowing into Elasticsearch.

[root@atmpos logstash]# service logstash restart
[root@atmpos logstash]# tail -f /var/log/logstash/logstash-plain.log

3.7 Check the ES Index and Data in Cerebro

Cerebro is an Elasticsearch administration tool (requires JDK 1.8); download it from: https://github.com/lmenezes/cerebro


3.8 Check the ES Index and Data in Kibana

