The elasticsearch.yml below follows the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
All three services attach to an external Docker network named logging, which has to exist before compose is run (docker network create logging).
docker-compose.yml for the Elasticsearch service:
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.4.2
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
    networks:
      - logging
    volumes:
      - esdata1:/usr/share/elasticsearch/data
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
  esdata1:
    driver: local
networks:
  logging:
    external:
      name: logging
Two problems came up while installing this newer version.
1. the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Create an elasticsearch.yml (based on the template at https://github.com/elastic/elasticsearch/blob/master/distribution/src/config/elasticsearch.yml) and make node.name match the value listed in cluster.initial_master_nodes:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: "es-master"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#${path.data}
#
# Path to log files:
#
#${path.logs}
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["es-master"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: /.*/
2. the vm.max_map_count bootstrap check fails (the kernel default of 65530 is too low)
This one is fixed on the Docker host rather than in elasticsearch.yml: run sysctl -w vm.max_map_count=262144, and add vm.max_map_count=262144 to /etc/sysctl.conf to keep it after a reboot.
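As an aside on problem 1: for a quick single-node test, a simpler alternative (a sketch based on the official Docker guide linked above, not what this post uses) is to skip the custom elasticsearch.yml and enable single-node discovery through the environment:

version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.4.2
    environment:
      # single-node discovery removes the seed_hosts/initial_master_nodes requirement
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"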
Filebeat tails the Docker container log files on the host and ships them to Elasticsearch. Its docker-compose.yml:
version: '3'
services:
  filebeat:
    image: elastic/filebeat:7.4.2
    container_name: filebeat
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      # mount the host's container log directory so Filebeat can read the paths listed in filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    restart: always
    networks:
      - logging
    deploy:
      replicas: 1
networks:
  logging:
    external:
      name: logging
filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*.log

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
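The log input above ships every raw line of the Docker JSON log files as-is. If you want Filebeat to decode that JSON and tag each event with container metadata, a hedged variant would be the following (it assumes /var/run/docker.sock is also mounted into the Filebeat container):

filebeat.inputs:
  - type: container          # parses the Docker JSON log format (log/stream/time)
    paths:
      - /var/lib/docker/containers/*/*.log

processors:
  - add_docker_metadata: ~   # attaches container name, image and labels to each event

output.elasticsearch:
  hosts: ["elasticsearch:9200"]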
Kibana needs no elaborate configuration; pointing ELASTICSEARCH_HOSTS at Elasticsearch is enough.
docker-compose.yml is as follows:
version: '3'
services:
  kibana:
    image: kibana:7.4.2
    ports:
      - 5601:5601
    networks:
      - logging
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
networks:
  logging:
    external:
      name: logging
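ELASTICSEARCH_HOSTS is simply the Kibana Docker image's environment-variable form of the elasticsearch.hosts setting; mounting a kibana.yml with the equivalent entries would be an alternative (a sketch, not what this post uses):

# kibana.yml equivalent of the environment variable above
server.host: "0.0.0.0"            # listen on all interfaces inside the container
elasticsearch.hosts: ["http://elasticsearch:9200"]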