一、ELK Overview
The ELK stack consists of three components: Elasticsearch, Logstash, and Kibana.
- Elasticsearch: a distributed search and analytics engine that provides near-real-time storage, search, and analysis of large volumes of data.
- Logstash: a data-collection engine that ingests data dynamically from a variety of sources, filters and enriches it into a unified format, and ships it to a destination; here it sends the data to Elasticsearch. Logstash has a rich plugin ecosystem for log processing.
- Kibana: a visualization tool built on Node.js that provides a graphical web UI for analyzing the logs stored in Elasticsearch (fed by Logstash), letting you aggregate, analyze, and search important log data.
- Filebeat: a lightweight open-source log shipper. Install Filebeat on each client you want to collect from, point it at the log directories and formats, and it quickly ships the data to Logstash for parsing, or directly to Elasticsearch for storage. It uses far fewer resources than Logstash and commonly replaces it on the collection side.
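To make the Filebeat option concrete, here is a minimal config sketch for shipping the nginx access log used later in this guide to Logstash. The Logstash beats listener on port 5044 is an assumption; this document does not actually deploy Filebeat or configure a beats input.

```yaml
# Minimal Filebeat sketch (assumption: a Logstash beats input listening on 5044)
filebeat.inputs:
  - type: log
    paths:
      - /usr/local/nginx/logs/access.log
output.logstash:
  hosts: ["192.168.169.20:5044"]
```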
二、Deployment Environment
This setup is only suitable for clusters with modest data volumes; at larger scale, put a Kafka or RabbitMQ message queue in front.
OS | IP | Hostname | Deployment Plan |
---|---|---|---|
CentOS Linux release 7.5.1804 (Core) | 192.168.169.10 | els-node | elasticsearch |
CentOS Linux release 7.5.1804 (Core) | 192.168.169.20 | logstash-node | elasticsearch, logstash |
CentOS Linux release 7.6.1810 (Core) | 192.168.169.30 | kibana | elasticsearch, kibana |
1、On all nodes, disable the firewall and SELinux, and configure /etc/hosts
[root@els-node ~]# setenforce 0
[root@els-node ~]# sed -ri 's/^(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@els-node ~]# systemctl stop firewalld
[root@els-node ~]# systemctl disable firewalld
[root@els-node ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.169.10 els-node
192.168.169.20 logstash-node
192.168.169.30 kibana
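Since the same three entries must exist on every node, they can be generated from parallel arrays and the output appended to each machine's /etc/hosts — a small sketch using the IPs and hostnames from the table above:

```shell
# Generate the three cluster host entries (append the output to /etc/hosts on every node)
ips=(192.168.169.10 192.168.169.20 192.168.169.30)
names=(els-node logstash-node kibana)
for i in 0 1 2; do
  printf '%s %s\n' "${ips[$i]}" "${names[$i]}"
done
```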
2、Download the Elasticsearch package from the official site, upload it to the server, and extract it (Elasticsearch download page)
[root@els-node home]# tar xvf elasticsearch-7.10.1-linux-x86_64.tar.gz
[root@els-node home]# ls
elasticsearch-7.10.1 elasticsearch-7.10.1-linux-x86_64.tar.gz
3、Elasticsearch requires a JDK; check whether one is installed (JDK 1.8.0_131 or later is recommended)
[root@els-node home]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
4、On all nodes, adjust the JVM heap. The default is 1 GB; Elastic recommends setting it to no more than 50% of physical RAM (and below ~32 GB so compressed object pointers stay enabled)
[root@els-node ~]# cat /home/elasticsearch-7.10.1/config/jvm.options
-Xms1g
-Xmx1g
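As a quick sanity check, a suggested heap value can be derived from total RAM — a minimal sketch (`heap_mb` is a hypothetical helper, not part of Elasticsearch; the 31744 MB cap approximates the compressed-oops threshold):

```shell
# heap_mb TOTAL_RAM_MB -> suggested -Xms/-Xmx value in MB:
# half of physical RAM, capped at 31744 MB (~31 GB) to keep compressed oops
heap_mb() {
  local half=$(( $1 / 2 ))
  if [ "$half" -gt 31744 ]; then
    echo 31744
  else
    echo "$half"
  fi
}
heap_mb 8192    # a host with 8 GB RAM -> 4096
```

Set -Xms and -Xmx in jvm.options to the same value so the heap never resizes at runtime.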
5、Raise the resource limits
[root@els-node ~]# tail -6 /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 32000
* hard nproc 32000
* hard memlock unlimited
* soft memlock unlimited
6、Raise the maximum virtual memory map count
[root@els-node ~]# vim /etc/sysctl.conf
vm.max_map_count=655360
[root@els-node ~]# sysctl -p
vm.max_map_count = 655360
7、Edit the main Elasticsearch config file on each node
[root@els-node config]# cat elasticsearch.yml |egrep -v "^$"
#Cluster name
cluster.name: elast-cluster
#Node name
node.name: els-node
#Data path
path.data: /home/elasticsearch-7.10.1/data
#Log path
path.logs: /home/elasticsearch-7.10.1/logs
#Lock physical memory so ES never uses swap, which would drive IOPS up
bootstrap.memory_lock: true
#Address this ES node binds to
network.host: 192.168.169.10
#HTTP port ES serves external clients on
http.port: 9200
#Cluster member list; may be written as IP+port (transport port, default 9300, used for inter-node communication)
discovery.seed_hosts: ["192.168.169.10", "192.168.169.20","192.168.169.30"]
#Nodes eligible to be elected master when the cluster first bootstraps
cluster.initial_master_nodes: ["els-node", "logstash-node","kibana"]
8、Create the es user and grant it ownership
[root@els-node ~]# useradd es
[root@els-node ~]# chown -R es.es /home/elasticsearch-7.10.1
9、Start Elasticsearch on every node
[root@els-node home]# su - es
[es@els-node ~]$ cd /home/elasticsearch-7.10.1/bin/
[es@els-node bin]$ ./elasticsearch -d
10、Verify in a browser: http://192.168.169.10:9200/_cat/nodes?v
In the output, a * in the master column (after the node.role string, e.g. cdhilmrstw) marks the current cluster master node
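The same check can be scripted: capture the `_cat/nodes` output and pick the row whose master column is `*`. A sketch against sample output shaped like the API's (node names are this environment's; columns trimmed for readability):

```shell
# Sample `_cat/nodes?v` output saved to a file
cat > /tmp/nodes.txt <<'EOF'
ip             node.role  master name
192.168.169.10 cdhilmrstw *      els-node
192.168.169.20 cdhilmrstw -      logstash-node
192.168.169.30 cdhilmrstw -      kibana
EOF
# Skip the header row and print the name of the node marked master
awk 'NR > 1 && $3 == "*" { print $4 }' /tmp/nodes.txt
```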
三、Deploy Logstash
1、Logstash requires a JDK; check whether one is installed
[root@logstash-node logs]# java -version
java version "11.0.16.1" 2022-08-18 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.16.1+1-LTS-1)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.16.1+1-LTS-1, mixed mode)
2、Download the Logstash package from the official site, upload it to the server, extract it, and grant ownership to the unprivileged user (Logstash download page)
[root@logstash-node home]# tar xvf logstash-7.3.0.tar.gz
[root@logstash-node home]# ls
logstash-7.3.0 logstash-7.3.0.tar.gz
[root@logstash-node ~]# chown -R es.es /home/logstash-7.3.0
3、Create the pipeline config file for the log to monitor
[root@logstash-node logstash-7.3.0]# mkdir pip ##a dedicated directory for pipeline config files
[root@logstash-node pip]# cat nginx_pip.conf
input {
file{
#Log file to monitor
path => ['/usr/local/nginx/logs/access.log']
#Tag these events with a type
type => "nginx"
#Check the monitored file for updates every 2 s
stat_interval => "2"
}
}
#Filter stage
filter {
date {
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
}
}
output {
#type matches the tag defined in the input block
if [type] == "nginx" {
elasticsearch {
#Send the output to the Elasticsearch cluster
hosts => ["192.168.169.10:9200","192.168.169.20:9200","192.168.169.30:9200"]
#Index name pattern for the logs
index => "nginx-%{+YYYY.MM.dd}"
}
}
}
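The `%{+YYYY.MM.dd}` sprintf in the index name rolls over to a new index each day; the name Logstash produces for today has the same shape as what `date` prints:

```shell
# The index pattern nginx-%{+YYYY.MM.dd} yields one index per day, e.g. nginx-2021.01.15
date +nginx-%Y.%m.%d
```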
4、Start Logstash
[root@logstash-node ~]# su - es
[es@logstash-node ~]$ cd /home/logstash-7.3.0/bin/
[es@logstash-node bin]$ ./logstash -f ../pip/nginx_pip.conf &
四、Enable security between the ES nodes
1、Generate the SSL certificates for inter-node communication; run this on the master node
[es@logstash-node elasticsearch-7.10.1]$ ./bin/elasticsearch-certutil ca
Please enter the desired output file [elastic-stack-ca.p12]: #press Enter
Enter password for elastic-stack-ca.p12 #CA password; press Enter for none
[es@logstash-node elasticsearch-7.10.1]$ ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Enter password for CA (elastic-stack-ca.p12) : #CA password; press Enter
Please enter the desired output file [elastic-certificates.p12]: #accept the default
Enter password for elastic-certificates.p12 #certificate password; press Enter for none
2、Copy the generated elastic-stack-ca.p12 and elastic-certificates.p12 files to the config/certs directory on every node and set ownership to the unprivileged user
#certs must be created by hand; the path is up to you
[es@logstash-node certs]$ scp elastic-*.p12 root@192.168.169.10:/home/elasticsearch-7.10.1/config/certs/
[es@logstash-node certs]$ scp elastic-*.p12 root@192.168.169.30:/home/elasticsearch-7.10.1/config/certs/
[es@logstash-node certs]$ chown -R es.es elastic-*.p12 #do this on every ES node
3、Edit the main elasticsearch config file; do this on all nodes
[es@logstash-node config]$ cat elasticsearch.yml |egrep -v "^#|^$"
cluster.name: elast-cluster
node.name: logstash-node
path.data: /home/elasticsearch-7.10.1/data
path.logs: /home/elasticsearch-7.10.1/logs
bootstrap.memory_lock: true
network.host: 192.168.169.20
http.port: 9200
discovery.seed_hosts: ["192.168.169.10", "192.168.169.20","192.168.169.30"]
cluster.initial_master_nodes: ["els-node", "logstash-node","kibana"]
#Add the following five TLS/security lines
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
4、Restart all ES nodes
[es@logstash-node config]$ ps -ef|grep elast
[es@logstash-node config]$ kill 6948    #replace 6948 with the PID found above; plain kill (SIGTERM) lets ES shut down cleanly
[es@logstash-node bin]$ ./elasticsearch -d
5、In the ES bin directory, run the command that sets the initial passwords for the six built-in accounts
Here every password is set to 123456
[es@logstash-node bin]$ ./elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
.....remaining prompts omitted
6、Update the Logstash config
[es@logstash-node bin]$ cd /home/logstash-7.3.0/pip/
[es@logstash-node pip]$ cat nginx_pip.conf
input {
file{
path => ['/usr/local/nginx/logs/access.log']
type => "nginx"
stat_interval => "2"
}
}
filter {
date {
match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
}
}
output {
if [type] == "nginx" {
elasticsearch {
hosts => ["192.168.169.10:9200","192.168.169.20:9200","192.168.169.30:9200"]
index => "nginx-%{+YYYY.MM.dd}"
user => "elastic" #add the auth user
password => "123456" #add the auth password
}
}
}
7、Restart Logstash
[es@logstash-node pip]$ ps -ef|grep logstash
[es@logstash-node pip]$ kill 7263    #replace 7263 with the PID found above
[es@logstash-node ~]$ /home/logstash-7.3.0/bin/logstash -f /home/logstash-7.3.0/pip/nginx_pip.conf &
8、Verify that authentication is enforced: open http://192.168.169.10:9200/_cluster/state in a browser; it should now prompt for credentials
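The same check works from the command line with `curl -u`. The browser prompt simply produces an HTTP Basic `Authorization` header, whose value is the base64 of `user:password` — a sketch:

```shell
# curl -u elastic:123456 http://192.168.169.10:9200/_cluster/health?pretty
# sends "Authorization: Basic <token>" where the token is base64 of user:password
printf 'elastic:123456' | base64   # -> ZWxhc3RpYzoxMjM0NTY=
```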
五、Deploy Kibana
1、Download the Kibana package, upload it to the server, and extract it
#Note: the Kibana version must match the Elasticsearch version, or Kibana will fail to start — learned the hard way!
[root@kibana home]# wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.2-linux-x86_64.tar.gz
[es@kibana home]$ tar xvf kibana-7.10.2-linux-x86_64.tar.gz
[es@kibana home]$ ls
elasticsearch-7.10.1 kibana-7.10.2-linux-x86_64.tar.gz
kibana-7.10.2-linux-x86_64
2、Configure Kibana
[es@kibana config]$ cat kibana.yml |egrep -v "^#|^$"
#Port Kibana serves on
server.port: 5601
#Local IP address Kibana listens on
server.host: "192.168.169.30"
#Elasticsearch cluster addresses
elasticsearch.hosts: ["http://192.168.169.10:9200","http://192.168.169.20:9200","http://192.168.169.30:9200"]
#Use the Chinese locale for the Kibana UI
i18n.locale: "zh-CN"
#User Kibana authenticates to Elasticsearch with
elasticsearch.username: "kibana_system"
#Password for that user
elasticsearch.password: "123456"
3、Grant ownership to the unprivileged user and start Kibana
[root@kibana ~]# chown -R es.es /home/kibana-7.10.2-linux-x86_64
[root@kibana ~]# su - es -c "/home/kibana-7.10.2-linux-x86_64/bin/kibana &"
4、Log in from a browser
Username: elastic
Password: 123456