This article uses Docker containers (orchestrated with docker-compose) to quickly deploy an Elasticsearch cluster, suitable for development (multiple instances on a single machine) or production deployment.
Note that since 6.x you can no longer use the -Epath.conf parameter to specify where the configuration files are loaded from. The documentation says:
For the archive distributions, the config directory location defaults to $ES_HOME/config. The location of the config directory can be changed via the ES_PATH_CONF environment variable as follows:
ES_PATH_CONF=/path/to/my/config ./bin/elasticsearch
Alternatively, you can export the ES_PATH_CONF environment variable via the command line or via your shell profile.
In other words, the config location is now handed over to the ES_PATH_CONF environment variable (see the official docs). Pay special attention to this if you run multiple instances on one machine without containers.
Preparation
Install docker & docker-compose. Using the daocloud mirror to speed up the installation is recommended:
# docker
curl -sSL https://get.daocloud.io/docker | sh
# docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# verify the installation
docker-compose -v

Data directories
# Create data/log directories; we will deploy 3 nodes
mkdir -p /opt/elasticsearch/data/{node0,node1,node2}
mkdir -p /opt/elasticsearch/logs/{node0,node1,node2}
cd /opt/elasticsearch
# Permissions are tricky here -- even with privileged containers they fail, so just use 0777
chmod -R 0777 data/* && chmod -R 0777 logs/*
# Prevent the JVM bootstrap check from failing
echo vm.max_map_count=262144 >> /etc/sysctl.conf
sysctl -p

Orchestrating the services with docker-compose
Create the compose file:
vim docker-compose.yml
- cluster.name=elasticsearch-cluster
Cluster name.
- node.name=node0
- node.master=true
- node.data=true
Node name, whether the node is master-eligible, and whether it stores data.
- bootstrap.memory_lock=true
Lock the process's physical memory to prevent it from being swapped out, improving performance.
- http.cors.enabled=true
- http.cors.allow-origin=*
Enable CORS so the Head plugin can connect.
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
JVM heap size.
- "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
- "discovery.zen.minimum_master_nodes=2"
Versions after 5.2.1 no longer support multicast discovery, so you must explicitly list each cluster node's TCP transport address, used for node discovery and failover. The default transport port is 9300; if you use a different port it must be specified here. In this setup we simply rely on container networking, but you can also map each node's 9300 to the host and communicate over host ports.
Set the failover election quorum = nodes / 2 + 1.
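The quorum rule above can be sketched as a tiny shell helper (the `quorum` function is just for illustration here, not an Elasticsearch tool):

```shell
# minimum_master_nodes for N master-eligible nodes: floor(N / 2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2 -- matches discovery.zen.minimum_master_nodes=2 for this 3-node cluster
quorum 5   # prints 3
```

This guards against split-brain: any elected master must be seen by a strict majority of master-eligible nodes.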
Of course, you can also mount your own configuration file. In the ES image the config file is /usr/share/elasticsearch/config/elasticsearch.yml; mount it like this:
volumes:
  - path/to/local/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
version: "3"
services:
  elasticsearch_n0:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n0
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node0
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node0:/usr/share/elasticsearch/data
      - ./logs/node0:/usr/share/elasticsearch/logs
    ports:
      - 9200:9200
  elasticsearch_n1:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n1
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node1
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node1:/usr/share/elasticsearch/data
      - ./logs/node1:/usr/share/elasticsearch/logs
    ports:
      - 9201:9200
  elasticsearch_n2:
    image: elasticsearch:6.6.2
    container_name: elasticsearch_n2
    privileged: true
    environment:
      - cluster.name=elasticsearch-cluster
      - node.name=node2
      - node.master=true
      - node.data=true
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "discovery.zen.ping.unicast.hosts=elasticsearch_n0,elasticsearch_n1,elasticsearch_n2"
      - "discovery.zen.minimum_master_nodes=2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./data/node2:/usr/share/elasticsearch/data
      - ./logs/node2:/usr/share/elasticsearch/logs
    ports:
      - 9202:9200
Here we expose host ports 9200/9201/9202 as the HTTP endpoints for node0/node1/node2 respectively; TCP transport between the instances uses the default port 9300 over the container network.
For multi-host deployment, map each node's transport.tcp.port: 9300 to a port on its host, and list the host addresses in discovery.zen.ping.unicast.hosts:
# Suppose one of the hosts is 192.168.1.100
...
- "discovery.zen.ping.unicast.hosts=192.168.1.100:9300,192.168.1.101:9300,192.168.1.102:9300"
...
ports:
  ...
  - 9300:9300

Create and start the services
[root@localhost elasticsearch]# docker-compose up -d
[root@localhost elasticsearch]# docker-compose ps
      Name                    Command               State                Ports
--------------------------------------------------------------------------------------------
elasticsearch_n0   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
elasticsearch_n1   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9201->9200/tcp, 9300/tcp
elasticsearch_n2   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9202->9200/tcp, 9300/tcp

# If startup fails, check the logs
[root@localhost elasticsearch]# docker-compose logs
# Usually a file-permission or vm.max_map_count issue

Check the cluster status
192.168.20.6 is my server's address.
Visit http://192.168.20.6:9200/_cat/nodes?v to view the cluster state:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.3           36          98  79    3.43    0.88     0.54 mdi       *      node0
172.25.0.2           48          98  79    3.43    0.88     0.54 mdi       -      node2
172.25.0.4           42          98  51    3.43    0.88     0.54 mdi       -      node1

Verifying failover
Check the state through the cluster API.
Simulate the master node going offline: the cluster elects a new master, migrates data, and re-allocates shards.
[root@localhost elasticsearch]# docker-compose stop elasticsearch_n0
Stopping elasticsearch_n0 ... done
Cluster state (note: query a different HTTP port, since the original master is offline). The downed node is still listed in the cluster; if it does not recover within a timeout it is evicted.
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           57          84   5    0.46    0.65     0.50 mdi       -      node2
172.25.0.4           49          84   5    0.46    0.65     0.50 mdi       *      node1
172.25.0.3                                                       mdi       -      node0
After a while:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           44          84   1    0.10    0.33     0.40 mdi       -      node2
172.25.0.4           34          84   1    0.10    0.33     0.40 mdi       *      node1
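For scripted checks, the elected master can be picked out of a _cat/nodes listing mechanically. A small sketch, using the two-node sample above inlined as a string (the awk filter matches the 9th column, the `master` marker):

```shell
# Sample _cat/nodes output copied from the listing above (node0 still offline).
nodes='172.25.0.2 44 84 1 0.10 0.33 0.40 mdi - node2
172.25.0.4 34 84 1 0.10 0.33 0.40 mdi * node1'

# Column 9 is the "master" marker: "*" on the elected master, "-" elsewhere;
# column 10 is the node name.
echo "$nodes" | awk '$9 == "*" { print $10 }'   # prints: node1
```

In practice you would feed it from `curl -s http://<host>:9201/_cat/nodes` instead of a literal string.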
Restore node node0:
[root@localhost elasticsearch]# docker-compose start elasticsearch_n0
Starting elasticsearch_n0 ... done
After a while:
ip         heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
172.25.0.2           52          98  25    0.67    0.43     0.43 mdi       -      node2
172.25.0.4           43          98  25    0.67    0.43     0.43 mdi       *      node1
172.25.0.3           40          98  46    0.67    0.43     0.43 mdi       -      node0

Observing with the Head plugin
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
The cluster overview in Head makes the automatic data-migration process easier to follow.
1. Cluster healthy: data is safely distributed across the 3 nodes.
2. Take the master node node1 offline: the cluster starts migrating data.
Migration in progress.
Migration complete.
3. Restore node node1.
Installing the IK analyzer
analysis-ik: https://github.com/medcl/elas... Make sure the plugin version matches; the ES we deployed is 6.6.2.
Run the installation on every node of the cluster:
docker exec -it elasticsearch_n0 bash
# install with elasticsearch-plugin
elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.6.2/elasticsearch-analysis-ik-6.6.2.zip
Restart the services:
docker-compose restart

Verifying the analyzer
The default standard analyzer only handles English well; Chinese text is split into individual characters with no semantics.
GET /_analyze
{
  "text": "我愛祖國"
}

# response
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "<IDEOGRAPHIC>", "position": 0 },
    { "token": "愛", "start_offset": 1, "end_offset": 2, "type": "<IDEOGRAPHIC>", "position": 1 },
    { "token": "祖", "start_offset": 2, "end_offset": 3, "type": "<IDEOGRAPHIC>", "position": 2 },
    { "token": "國", "start_offset": 3, "end_offset": 4, "type": "<IDEOGRAPHIC>", "position": 3 }
  ]
}
Using the ik analyzer:
GET /_analyze
{
  "analyzer": "ik_smart",
  "text": "我愛祖國"
}

# response
{
  "tokens": [
    { "token": "我", "start_offset": 0, "end_offset": 1, "type": "CN_CHAR", "position": 0 },
    { "token": "愛祖國", "start_offset": 1, "end_offset": 4, "type": "CN_WORD", "position": 1 }
  ]
}
Two analysis modes are available, ik_smart and ik_max_word; choose whichever fits your business needs.
Setting the default analyzer
If many of your fields are Chinese, specifying an analyzer on every field definition is tedious. We can set the index's default analyzer to ik so both Chinese and English are analyzed properly.
PUT /index
{
  "settings": {
    "index": {
      "analysis.analyzer.default.type": "ik_smart"
    }
  }
}
Note: since 5.x, analyzer settings can no longer be placed in elasticsearch.yml; they can only be set through the REST API:
*************************************************************************************
Found index level settings on node level configuration.

Since elasticsearch 5.x index level settings can NOT be set on the nodes
configuration like the elasticsearch.yaml, in system properties or command line
arguments. In order to upgrade all indices the settings must be updated via the
/${index}/_settings API. Unless all settings are dynamic all indices must be closed
in order to apply the upgrade. Indices created in the future should use index
templates to set default values.

Please ensure all required values are updated on all indices by executing:

curl -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{
  "index.analysis.analyzer.default.type" : "ik_smart"
}'
*************************************************************************************
index.analysis.analyzer.default.type: ik_smart          # default analyzer for indexing and search
index.analysis.analyzer.default_index.type: ik_smart    # default indexing analyzer
index.analysis.analyzer.default_search.type: ik_smart   # default search analyzer

Troubleshooting notes
elasticsearch watermark
After deployment, some shards of a newly created index stayed Unassigned. This is caused by the elasticsearch watermark settings (low, high, flood_stage): by default, allocation is restricted once disk usage exceeds 85%. For development I simply disabled the check, after which shards spread across the nodes; make your own call in production.
curl -X PUT http://192.168.20.6:9201/_cluster/settings \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"cluster.routing.allocation.disk.threshold_enabled": false}}'
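As a toy illustration of the watermark logic described above (the `over_low_watermark` helper is our own, not Elasticsearch code; 85 is the default low-watermark percentage):

```shell
# True (exit 0) when disk usage (in percent) exceeds the default low watermark of 85%.
over_low_watermark() { [ "$1" -gt 85 ]; }

over_low_watermark 90 && echo "new shards may not allocate to this node"
over_low_watermark 40 || echo "below watermark, allocation unaffected"
```

The real check is per-node and also honors the high and flood_stage thresholds; this sketch only shows the low-watermark comparison.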
Done.
摘要:摘要本篇文章介紹了如何通過數(shù)人云部署一套標(biāo)準(zhǔn)的日志收集系統(tǒng)。主機(jī)添加完成后,檢查主機(jī)運(yùn)行是否正常,如圖第二步,發(fā)布實(shí)例我們將通過數(shù)人云將的鏡像以模式部署到我們規(guī)劃的主機(jī)和上。 摘要:本篇文章介紹了如何通過數(shù)人云部署一套標(biāo)準(zhǔn)的 ELK 日志收集系統(tǒng)。第一步,將主機(jī)組織成集群;第二步,發(fā)布 ElasticSearch 實(shí)例;第三步,發(fā)布 Kibana 實(shí)例;第四步,發(fā)布 Logstash ...
摘要:我已經(jīng)為你做了這些,并放在上部署到一個多節(jié)點(diǎn)集群使用工作有兩個配置文件和。我們需要部署這些容器到多個主機(jī)上。使用,這會變得非常容易。我希望這篇文章對你部署和遷移有用。除了之外,我們還有部署和管理,和的例子。 本文的作者是 Luke Marsden ,本文的原文地是 Deploying and migrating a multi-node ElasticSearch-Logstas...
閱讀 3947·2021-11-17 09:33
閱讀 3290·2021-10-08 10:05
閱讀 3119·2021-09-22 15:36
閱讀 1145·2021-09-06 15:02
閱讀 2776·2019-08-29 12:45
閱讀 1595·2019-08-26 13:40
閱讀 3406·2019-08-26 13:37
閱讀 428·2019-08-26 13:37