[ELK] Building an ElasticSearch Cluster (Test Environment)
xixuefeng
ElasticSearch, ELK
2018-11-06 15:46:45
1: Basic information for the three test VMs
- OS: CentOS 7.3
- JDK: java version "1.8.0_151"
- ES: elasticsearch-6.1.1
- IPs: 192.168.31.11, 192.168.31.12, 192.168.31.13
2: System preparation
- Disable the firewall and SELinux; see: www.xixuefeng.top/archives/826
- Adjust the system parameters; see: http://www.xixuefeng.top/archives/889
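The linked article covers the system parameters in detail; for reference, the settings that Elasticsearch 6.x's bootstrap checks typically require look like the fragment below (these are the documented minimums, not values taken from this post):

```
# /etc/sysctl.conf  (apply with `sysctl -p`)
vm.max_map_count = 262144

# /etc/security/limits.conf  (for the user that runs ES; oracle in this setup)
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft nproc  4096
oracle hard nproc  4096
```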
3: Create the user and directories (run on all three test VMs)

```
## The group and user names are up to you; I habitually use an oracle user, so this test does too
[root@node1 ~]# groupadd -g 1000 oinstall
[root@node1 ~]# groupadd -g 1001 dba
[root@node1 ~]# groupadd -g 1002 oper
[root@node1 ~]# useradd -u 1001 -d /home/oracle -g oinstall -G dba,oper oracle
## Set the oracle user's password
[root@node1 ~]# passwd oracle
## Create the data and log directories
[root@node1 ~]# mkdir -p /es/data
[root@node1 ~]# mkdir -p /es/logs
[root@node1 ~]# chown -R oracle:oinstall /es
```
4: Upload the ES tarball and JDK 1.8 to a staging directory (here /soft; upload to every node). Steps omitted.

```
[root@node1 soft]# ll
total 213088
-rw-r--r--. 1 root root  28462503 Nov  6 13:55 elasticsearch-6.1.1.tar.gz
-rw-r--r--. 1 root root 189736377 Nov  6 13:55 jdk-8u151-linux-x64.tar.gz
[root@node1 soft]#
```
5: Install the JDK (on every node)
- See: http://www.xixuefeng.top/archives/856
- Configure the JDK environment variables for the oracle user
```
[oracle@node3 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH

export JAVA_HOME=/usr/local/jdk1.8.0_151
export PATH=$JAVA_HOME/bin:$PATH
[oracle@node3 ~]$
```
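After sourcing the profile, it is worth confirming that the JDK on the PATH is really 1.8. A small sketch of how to pull the major version out of the `java -version` banner (the sample string below is what this JDK prints; in practice you would capture it with `java -version 2>&1 | head -1`):

```shell
# Sample banner line, as printed by this JDK on the test nodes
ver_line='java version "1.8.0_151"'
# Extract the version string between the quotes -> 1.8.0_151
ver=$(echo "$ver_line" | sed 's/.*"\(.*\)".*/\1/')
# Keep only the major.minor part -> 1.8
major=$(echo "$ver" | cut -d. -f1-2)
echo "$major"
```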
6: Install ES (on every node)

```
## As root, extract Elasticsearch into /usr/local/ and change its owner and group
## Extract the tarball
[root@node1 ~]# cd /soft/
[root@node1 soft]# tar -xzvf elasticsearch-6.1.1.tar.gz -C /usr/local/
......
[root@node1 soft]#
## Change the owner and group of elasticsearch-6.1.1
[root@node1 soft]# cd /usr/local/
[root@node1 local]# ls -l |grep el
drwxr-xr-x. 9 root root 155 Jan  3 23:36 elasticsearch-6.1.1
[root@node1 local]#
[root@node1 local]# chown -R oracle:oinstall elasticsearch-6.1.1/
[root@node1 local]#
[root@node1 local]# ls -l |grep el
drwxr-xr-x. 9 oracle oinstall 155 Jan  3 23:36 elasticsearch-6.1.1
[root@node1 local]#
```
7: Edit the configuration file

```
[root@node1 ~]# vi /usr/local/elasticsearch-6.1.1/config/elasticsearch.yml

cluster.name: my-application    # Cluster name; must be identical on every node
node.name: node-1               # Node name; must be unique per node
path.data: /es/data             # Data directory
path.logs: /es/logs             # Log directory
network.host: 0.0.0.0           # Bind address; 0.0.0.0 accepts connections from any IP
http.port: 9200                 # HTTP port the instance serves clients on
transport.tcp.port: 9300        # Port for inter-node cluster communication

# List of cluster nodes to ping for discovery
discovery.zen.ping.unicast.hosts: ["192.168.31.12:9300", "192.168.31.13:9300"]

# Minimum number of master-eligible nodes a node must see before a master can
# be elected. Defaults to 1; to avoid split brain, set it to a quorum of the
# master-eligible nodes, i.e. (N / 2) + 1 -- 2 for this three-node cluster.
discovery.zen.minimum_master_nodes: 2
```
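The quorum rule in the comment above can be written out as a one-liner, handy when scripting config generation for a larger cluster (shell arithmetic is integer division, which is exactly what the formula needs):

```shell
# Split-brain guard: minimum_master_nodes = (master-eligible nodes / 2) + 1
master_eligible=3                          # all three nodes are master-eligible here
quorum=$(( master_eligible / 2 + 1 ))
echo "discovery.zen.minimum_master_nodes: $quorum"
```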
8: Start Elasticsearch (on every node)
- Start it as the oracle user created earlier
- To run in the foreground, with logs printed to the terminal: ./bin/elasticsearch
- To run in the background as a daemon: ./bin/elasticsearch -d
```
[root@node1 ~]# su - oracle
Last login: Thu Jan  4 01:32:33 CST 2018 on pts/0
[oracle@node1 ~]$
[oracle@node1 ~]$ cd /usr/local/elasticsearch-6.1.1/
[oracle@node1 elasticsearch-6.1.1]$
[oracle@node1 elasticsearch-6.1.1]$ ./bin/elasticsearch
```
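Starting from an interactive shell is fine for a test cluster; if you would rather have each node come up on boot, a minimal systemd unit along these lines could be placed at /etc/systemd/system/elasticsearch.service (a sketch only, not from the original setup -- the paths and user match this test environment, and the limits mirror the bootstrap-check requirements):

```
[Unit]
Description=Elasticsearch 6.1.1 (tarball install)
After=network.target

[Service]
User=oracle
Group=oinstall
Environment=JAVA_HOME=/usr/local/jdk1.8.0_151
ExecStart=/usr/local/elasticsearch-6.1.1/bin/elasticsearch
LimitNOFILE=65536
LimitMEMLOCK=infinity
Restart=on-failure

[Install]
WantedBy=multi-user.target
```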
Taking node 1's log as an example: the line near the end reading "detected_master {node-3}" shows that node-3 was elected master.

```
[oracle@node1 ~]$ cd /usr/local/elasticsearch-6.1.1/
[oracle@node1 elasticsearch-6.1.1]$ ./bin/elasticsearch
[2018-11-06T14:55:48,497][INFO ][o.e.n.Node               ] [node-1] initializing ...
[2018-11-06T14:55:49,638][INFO ][o.e.e.NodeEnvironment    ] [node-1] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [35.2gb], net total_space [36.9gb], types [rootfs]
[2018-11-06T14:55:49,645][INFO ][o.e.e.NodeEnvironment    ] [node-1] heap size [1015.6mb], compressed ordinary object pointers [true]
[2018-11-06T14:55:49,659][INFO ][o.e.n.Node               ] [node-1] node name [node-1], node ID [i-nt6PHrROCsHAYwnqXaxQ]
[2018-11-06T14:55:49,662][INFO ][o.e.n.Node               ] [node-1] version[6.1.1], pid[9859], build[bd92e7f/2017-12-17T20:23:25.338Z], OS[Linux/3.10.0-514.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_151/25.151-b12]
[2018-11-06T14:55:49,665][INFO ][o.e.n.Node               ] [node-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/local/elasticsearch-6.1.1, -Des.path.conf=/usr/local/elasticsearch-6.1.1/config]
[2018-11-06T14:56:06,167][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [aggs-matrix-stats]
[2018-11-06T14:56:06,169][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [analysis-common]
[2018-11-06T14:56:06,169][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [ingest-common]
[2018-11-06T14:56:06,170][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-expression]
[2018-11-06T14:56:06,171][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-mustache]
[2018-11-06T14:56:06,172][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [lang-painless]
[2018-11-06T14:56:06,173][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [mapper-extras]
[2018-11-06T14:56:06,173][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [parent-join]
[2018-11-06T14:56:06,174][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [percolator]
[2018-11-06T14:56:06,175][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [reindex]
[2018-11-06T14:56:06,176][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [repository-url]
[2018-11-06T14:56:06,189][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [transport-netty4]
[2018-11-06T14:56:06,193][INFO ][o.e.p.PluginsService     ] [node-1] loaded module [tribe]
[2018-11-06T14:56:06,196][INFO ][o.e.p.PluginsService     ] [node-1] no plugins loaded
[2018-11-06T14:56:32,089][INFO ][o.e.d.DiscoveryModule    ] [node-1] using discovery type [zen]
[2018-11-06T14:56:40,693][INFO ][o.e.n.Node               ] [node-1] initialized
[2018-11-06T14:56:40,694][INFO ][o.e.n.Node               ] [node-1] starting ...
[2018-11-06T14:56:43,190][INFO ][o.e.t.TransportService   ] [node-1] publish_address {192.168.31.11:9300}, bound_addresses {[::]:9300}
[2018-11-06T14:56:43,430][INFO ][o.e.b.BootstrapChecks    ] [node-1] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2018-11-06T14:56:47,998][WARN ][o.e.d.z.ZenDiscovery     ] [node-1] not enough master nodes discovered during pinging (found [[Candidate{node={node-1}{i-nt6PHrROCsHAYwnqXaxQ}{ek0nqJM-T3WjYwPg25G2iA}{192.168.31.11}{192.168.31.11:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2018-11-06T14:56:48,037][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][7][9] duration [799ms], collections [1]/[1.1s], total [799ms]/[4.9s], memory [61.2mb]->[53.8mb]/[1015.6mb], all_pools {[young] [38.9mb]->[16.9mb]/[66.5mb]}{[survivor] [8.3mb]->[5.1mb]/[8.3mb]}{[old] [13.9mb]->[32mb]/[940.8mb]}
[2018-11-06T14:56:48,079][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][7] overhead, spent [799ms] collecting in the last [1.1s]
[2018-11-06T14:56:52,801][INFO ][o.e.c.s.ClusterApplierService] [node-1] detected_master {node-3}{GWHHSw3JTeyJWaxvpkJ0lA}{hmdfgaO-Q2qgRmY4jScz6A}{192.168.31.13}{192.168.31.13:9300}, added {{node-2}{z5TPseyNS0SJn25BqdNTkA}{E7Ii9T5bSsuWiuLndxYCNA}{192.168.31.12}{192.168.31.12:9300},{node-3}{GWHHSw3JTeyJWaxvpkJ0lA}{hmdfgaO-Q2qgRmY4jScz6A}{192.168.31.13}{192.168.31.13:9300},}, reason: apply cluster state (from master [master {node-3}{GWHHSw3JTeyJWaxvpkJ0lA}{hmdfgaO-Q2qgRmY4jScz6A}{192.168.31.13}{192.168.31.13:9300} committed version [2]])
[2018-11-06T14:56:53,453][INFO ][o.e.h.n.Netty4HttpServerTransport] [node-1] publish_address {192.168.31.11:9200}, bound_addresses {[::]:9200}
[2018-11-06T14:56:53,454][INFO ][o.e.n.Node               ] [node-1] started
[2018-11-06T15:07:21,999][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][640] overhead, spent [498ms] collecting in the last [1s]
```
9: Check node status

```
## The node marked with * is the current master
[root@msp ~]# curl -XGET 'http://192.168.31.11:9200/_cat/nodes?pretty'
192.168.31.12 9 94 1 0.00 0.16 0.35 mdi - node-2
192.168.31.11 9 92 3 0.00 0.16 0.37 mdi - node-1
192.168.31.13 7 92 2 0.02 0.20 0.41 mdi * node-3
[root@msp ~]#
```
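In the default `_cat/nodes` column layout shown above, the ninth field is the master marker (`*` = elected master, `-` = not master) and the tenth is the node name, so the master can be pulled out with awk. A sketch using the sample output from this cluster in place of a live curl call:

```shell
# Sample `_cat/nodes` output (in a real check, pipe `curl -s` into awk instead)
nodes='192.168.31.12 9 94 1 0.00 0.16 0.35 mdi - node-2
192.168.31.11 9 92 3 0.00 0.16 0.37 mdi - node-1
192.168.31.13 7 92 2 0.02 0.20 0.41 mdi * node-3'
# Field 9 is the master marker, field 10 the node name
master=$(echo "$nodes" | awk '$9 == "*" { print $10 }')
echo "$master"
```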