Setting Up an Elasticsearch Cluster [with Kibana Visualization]
Published: 2019-06-21


I. Introduction

  Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine (similar to Solr) exposed through a RESTful web interface. Elasticsearch is written in Java and released as open source under the Apache License, and it is currently one of the most popular enterprise search engines. It is designed for cloud environments and aims to deliver real-time search while being stable, reliable, fast, and easy to install and use.

II. Key Concepts

  cluster
  Represents a cluster. A cluster contains multiple nodes, one of which is the master node; the master is chosen by election, and the master/worker distinction only exists inside the cluster. One of ES's core ideas is decentralization, which, viewed from outside the cluster, means there is no central node: the cluster behaves as a single logical whole, and communicating with any one node is equivalent to communicating with the entire cluster.
  shards
  Represents index shards. ES can split a complete index into multiple shards, which makes it possible to break a large index into pieces distributed across different nodes, forming a distributed search. The number of shards can only be specified at index-creation time and cannot be changed afterwards (see the example after this list).
  replicas
  Represents index replicas. ES can keep multiple replicas of an index. Replicas serve two purposes: they improve fault tolerance, since a shard that is damaged or lost on one node can be recovered from a replica, and they improve query throughput, because ES automatically load-balances search requests across copies.
  recovery
  Represents data recovery, or redistribution of data: when nodes join or leave, ES redistributes index shards according to machine load, and recovery also runs when a failed node comes back up.
  river
  Represents a data source for ES and is one way of syncing data from another store (such as a database) into ES. A river is an ES service that runs as a plugin; it reads data from the river and indexes it into ES. Official rivers exist for CouchDB, RabbitMQ, Twitter, and Wikipedia.
  gateway
  Represents how ES stores index snapshots. By default ES keeps the index in memory first and persists it to local disk when memory fills up. The gateway stores index snapshots, and when the cluster is shut down and restarted, the index backup data is read back from the gateway. ES supports several gateway types: the local file system (the default), distributed file systems, Hadoop HDFS, and Amazon S3.
  discovery.zen
  Represents ES's automatic node-discovery mechanism. ES is a p2p-style system: it first looks for existing nodes via broadcast, then uses the multicast protocol for communication between nodes, and it also supports point-to-point (unicast) interaction.
  Transport
  Represents how ES nodes talk to each other internally and how the cluster talks to clients. Internally the default is the TCP protocol; it also supports HTTP (JSON), Thrift, servlet, memcached, ZeroMQ and other transport protocols (integrated via plugins).
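
  As referenced in the shards and replicas entries above, both counts are fixed when an index is created. A minimal sketch of setting them explicitly (the index name test_index and the address are illustrative, matching the cluster built later in this post):

    # create an index with 3 primary shards, each with 1 replica
    curl -XPUT 'http://192.168.245.133:9200/test_index' \
         -H 'Content-Type: application/json' \
         -d '{"settings": {"number_of_shards": 3, "number_of_replicas": 1}}'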

III. Upload
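
  The original post documented this step with screenshots. A minimal sketch of the equivalent commands, assuming the elasticsearch-6.6.0 archive was downloaded locally and /usr/local/soft is the install directory used later in this post:

    # copy the archive to the master node and unpack it there
    scp elasticsearch-6.6.0.tar.gz root@192.168.245.133:/usr/local/soft/
    ssh root@192.168.245.133 'cd /usr/local/soft && tar -zxvf elasticsearch-6.6.0.tar.gz'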

   

  

IV. Create a User
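
  Elasticsearch refuses to start as root, so a dedicated user is created (shown in the original as screenshots). A minimal sketch, assuming the user is named es as in the later su es step:

    # create the es user and give it ownership of the unpacked directory
    useradd es
    chown -R es:es /usr/local/soft/elasticsearch-6.6.0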

  

V. Configuration

  Edit the configuration file: elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: zhen-es # cluster name
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1 # node name
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.245.133 # this node's IP
#
# Set a custom port for HTTP:
#
http.port: 9200 # exposed HTTP port
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

# prevent split brain
# discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.245.130","192.168.245.131", "192.168.245.133"]
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
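
  The template's split-brain comment gives the rule majority = master-eligible nodes / 2 + 1, which for this three-node cluster works out to 2. The post leaves discovery.zen.minimum_master_nodes commented out; a hedged sketch of adding it (run from the elasticsearch-6.6.0 directory) would be:

    # optional: with 3 master-eligible nodes, majority = 3 / 2 + 1 = 2
    echo 'discovery.zen.minimum_master_nodes: 2' >> config/elasticsearch.yml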

VI. Distribute to the Other Nodes (adjust the configuration for each node)

  Run: scp -r ./elasticsearch-6.6.0/ root@worker1:/usr/local/soft/

  Run: scp -r ./elasticsearch-6.6.0/ root@worker2:/usr/local/soft/

  

  Fix ownership and permissions
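
  The permission screenshot is not preserved; a minimal sketch, assuming the same es user exists on worker1 and worker2:

    # on worker1 and worker2: give the es user ownership of the copied directory
    chown -R es:es /usr/local/soft/elasticsearch-6.6.0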

  

  Adjust the per-node configuration

  Node worker1:

    node.name: node-2

    network.host: 192.168.245.130

  Node worker2:

    node.name: node-3

    network.host: 192.168.245.131
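
  These edits can be made by hand in each copy of elasticsearch.yml; if you prefer to script them, a sed sketch for worker1 (assumed to be run from /usr/local/soft/elasticsearch-6.6.0, with the worker2 values swapped in accordingly):

    # worker1: rename the node and bind to its own IP
    sed -i 's/^node.name: node-1/node.name: node-2/' config/elasticsearch.yml
    sed -i 's/^network.host: 192.168.245.133/network.host: 192.168.245.130/' config/elasticsearch.yml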

VII. Test ES

  1. Switch user

    su es

  2. Run

    cd ..

    cd bin

    ./elasticsearch
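
    Running ./elasticsearch in the foreground prints the log to the console. An optional variant is to start it as a daemon and follow the log file instead (the path below assumes the default log location and the cluster name zhen-es set earlier):

    # optional: run as a daemon, then watch the cluster log
    ./elasticsearch -d
    tail -f ../logs/zhen-es.log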

  3. Results

    Error one (screenshot not preserved): max file descriptors for the elasticsearch process is too low.

    

    Fix one

    Edit /etc/security/limits.conf and add:

    * soft nofile 65536

    * hard nofile 131072

    * soft nproc 2048

    * hard nproc 4096
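
    These limits only apply to new login sessions. A quick check after logging in again as es (a sketch, not part of the original post):

    # verify the new open-file limits
    ulimit -Sn   # soft limit, should report 65536
    ulimit -Hn   # hard limit, should report 131072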

    On the master node:

     

    Error two (screenshot not preserved): max number of threads for the es user is too low.

    

    The second error is new, appearing only after the first fix. Fix two:

    Edit /etc/security/limits.d/20-nproc.conf

    Change 4096 to 5120 (the default soft nproc value in that file).

    On worker1:

    

    On worker2:
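
    After applying the change on each node and re-logging in as es, the thread limit can be confirmed as well (again a sketch, not from the original post):

    # max user processes; the ES 6.x bootstrap check requires at least 4096
    ulimit -u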

    

VIII. Configure the Visualization Platform: Kibana

  1. Overview

    Kibana is an open-source data analytics and visualization platform. It is part of the Elastic Stack and is designed to work with Elasticsearch. You can use Kibana to search, view, and interact with data stored in Elasticsearch indices, and to analyze and present that data through charts, tables, and maps.

    Kibana makes large volumes of data easy to understand. Its simple, browser-based interface lets you quickly build and share dynamic dashboards that track changes in Elasticsearch data in real time.

  2. Configuration

    Edit the configuration file: kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.245.133" # set to this node's IP

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.245.130:9200","http://192.168.245.131:9200","http://192.168.245.133:9200"] # list every node in the cluster

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"

  3. Start

    Run: ./kibana
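
    To keep Kibana running after the shell session closes, a common option (a sketch, not part of the original post) is:

    # start Kibana in the background and keep it alive after logout
    nohup ./kibana > kibana.out 2>&1 &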

  4. Check the ES cluster status

  And the node details:

  At this point the ES cluster has been configured and started successfully!
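
  The original screenshots of the cluster overview are not preserved, but the same information can be pulled from any node over the REST API, for example via the master's address:

    # cluster health: expect "status": "green" and "number_of_nodes": 3
    curl 'http://192.168.245.133:9200/_cluster/health?pretty'
    # one line per node, including which one was elected master
    curl 'http://192.168.245.133:9200/_cat/nodes?v'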

Reposted from: https://www.cnblogs.com/yszd/p/10352325.html
