Elastic Database - dependency
(This guide is not updated for every new Elastic release, so we recommend always checking the Elastic website for up-to-date information: https://www.elastic.co/start )
We also recommend consulting the official Elasticsearch reference when deploying the Elastic Stack: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
Below is an extract from the Elastic website on setting up Elasticsearch with Docker. It is useful as a quick guide; however, if you are unfamiliar with the system, read the Elastic documentation linked above first.
Pulling the image
Obtaining Elasticsearch for Docker is as simple as issuing a docker pull command against the Elastic Docker registry:
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.9.3
Alternatively, you can download other Docker images that contain only features available under the Apache 2.0 license. To download the images, go to www.docker.elastic.co.
To start a single-node Elasticsearch cluster for development or testing, specify single-node discovery to bypass the bootstrap checks:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.9.3
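Once the container is running, the node answers on port 9200 with a JSON banner that includes its version. A minimal Python sketch of checking that banner is below; the sample response is illustrative only (your node name and build details will differ), and fetching it over HTTP from localhost:9200 is left to your HTTP client of choice.

```python
import json

# Illustrative copy of the JSON banner Elasticsearch returns on
# GET http://localhost:9200 (assumed values; real fields will differ).
sample_banner = """
{
  "name": "es-node",
  "cluster_name": "docker-cluster",
  "version": {"number": "7.9.3"},
  "tagline": "You Know, for Search"
}
"""

def node_version(banner_json: str) -> str:
    """Extract the Elasticsearch version number from the root-endpoint response."""
    return json.loads(banner_json)["version"]["number"]

print(node_version(sample_banner))  # -> 7.9.3
```

Checking the version against the image tag you pulled (7.9.3 here) is a quick way to confirm you are talking to the container you just started.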
To get a three-node Elasticsearch cluster up and running in Docker, you can use Docker Compose:
- Create a docker-compose.yml file:
version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: es01
    environment:
      - node.name=es01
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: es02
    environment:
      - node.name=es02
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data02:/usr/share/elasticsearch/data
    networks:
      - elastic
  es03:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.3
    container_name: es03
    environment:
      - node.name=es03
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data03:/usr/share/elasticsearch/data
    networks:
      - elastic

volumes:
  data01:
    driver: local
  data02:
    driver: local
  data03:
    driver: local

networks:
  elastic:
    driver: bridge
This sample Docker Compose file brings up a three-node Elasticsearch cluster. Node es01 listens on localhost:9200, and es02 and es03 talk to es01 over a Docker network.
Please note that this configuration exposes port 9200 on all network interfaces, and given how Docker manipulates iptables on Linux, this means that your Elasticsearch cluster is publicly accessible, potentially ignoring any firewall settings. If you don't want to expose port 9200 and instead use a reverse proxy, replace 9200:9200 with 127.0.0.1:9200:9200 in the docker-compose.yml file. Elasticsearch will then only be accessible from the host machine itself.
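As a sketch, only the ports entry under es01 changes; the rest of the service definition stays as shown above:

```yaml
services:
  es01:
    # ... image, environment, ulimits, volumes as before ...
    ports:
      - 127.0.0.1:9200:9200   # bind to loopback only: host-only access
```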
The Docker named volumes data01, data02, and data03 store the node data directories so the data persists across restarts. If they don't already exist, docker-compose creates them when you bring up the cluster.
- Make sure Docker Engine is allotted at least 4GiB of memory. In Docker Desktop, you configure resource usage on the Advanced tab in Preferences (macOS) or Settings (Windows). Note that Docker Compose is not pre-installed with Docker on Linux; see docs.docker.com for installation instructions: Install Compose on Linux
- Run docker-compose to bring up the cluster:
docker-compose up
- Submit a _cat/nodes request to see that the nodes are up and running:
curl -X GET "localhost:9200/_cat/nodes?v&pretty"
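The ?v flag adds a header row to the tabular _cat/nodes response, which makes it straightforward to parse programmatically. A minimal Python sketch is below; the sample output is illustrative for a three-node cluster (real column values, and possibly the exact set of columns, will differ).

```python
# Parse the whitespace-delimited table that _cat/nodes?v returns.
# The sample below is assumed/illustrative output; real values differ.
sample = """\
ip         heap.percent ram.percent cpu load_1m node.role master name
172.18.0.4 23           85          3   0.41    dilmrt    -      es02
172.18.0.2 14           85          3   0.41    dilmrt    *      es01
172.18.0.3 19           85          3   0.41    dilmrt    -      es03
"""

def parse_cat_nodes(text: str):
    """Turn the _cat/nodes?v table into a list of dicts keyed by header."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    header = lines[0].split()
    return [dict(zip(header, row.split())) for row in lines[1:]]

nodes = parse_cat_nodes(sample)
# The elected master is flagged with "*" in the master column.
master = next(n["name"] for n in nodes if n["master"] == "*")
print(len(nodes), master)  # -> 3 es01
```

Seeing all three node names (es01, es02, es03) in the output, with one flagged as master, confirms the cluster formed correctly.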
Log messages go to the console and are handled by the configured Docker logging driver. By default you can access logs with docker logs.
To stop the cluster, run docker-compose down. The data in the Docker volumes is preserved and loaded when you restart the cluster with docker-compose up. To delete the data volumes when you bring down the cluster, specify the -v option: docker-compose down -v.