How to deploy an Elasticsearch cluster in a containerized environment with Docker Swarm

In this guide I use my previous Ceph cluster as central storage for the Elasticsearch cluster, with Docker installed in Swarm mode. I created 3 virtual machines in VirtualBox, each with the following specifications:
• 2 CPUs
• 4 GB of memory
• 64 GB of system storage
• 8 GB of additional storage for the Ceph cluster
• CentOS 7 latest
• Ceph Nautilus latest
• Docker latest with Swarm mode enabled
• Docker Compose latest
Network topology: (diagram omitted)

In this scenario I will create an Elasticsearch cluster with 3 Elasticsearch nodes (role string "cdhimstw") and 3 Elasticsearch coordinating-only nodes (acting as load balancers).

Let’s begin.
The following steps apply to all nodes (loganalytics1, loganalytics2, and loganalytics3).
1. Set Docker and Elasticsearch requirements in sysctl.conf
vi /etc/sysctl.conf
insert the following configuration at the bottom
# Docker requirements
net.ipv4.ip_forward=1
# Elasticsearch requirements
vm.max_map_count=262144
then reload sysctl.
/sbin/sysctl -p
2. Set Elasticsearch requirements in docker.service
vi /lib/systemd/system/docker.service
insert the following configuration inside the [Service] section
# Elasticsearch requirements
LimitMEMLOCK=infinity
then reload systemctl
systemctl daemon-reload
and restart Docker.
systemctl restart docker
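As an aside (not part of the original steps), the same limit can also be applied through a systemd drop-in file instead of editing the vendor unit, which survives Docker package upgrades; a minimal sketch (the file name memlock.conf is arbitrary):
# /etc/systemd/system/docker.service.d/memlock.conf
# Drop-in override for the Docker unit; can also be created with `systemctl edit docker`.
# Only the added directive needs to appear here.
[Service]
LimitMEMLOCK=infinity
After creating it, run systemctl daemon-reload and systemctl restart docker as above.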
3. Verify the new configurations
more /proc/sys/vm/max_map_count
and also check the Docker daemon's locked-memory limit.
grep locked /proc/$(ps --no-headers -o pid -C dockerd | tr -d ' ')/limits
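If both settings took effect, the two commands should print values along these lines (exact spacing may differ):
262144
Max locked memory         unlimited            unlimited            bytes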

4. Create Ceph file system directory
mkdir /cephfs
and then mount it.
mount -t ceph 172.20.20.101,172.20.20.102,172.20.20.103:/ /cephfs -o name=admin,secretfile=cephclusterfs.secret
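Optionally (not part of the original steps), the mount can be made persistent across reboots with an /etc/fstab entry along these lines, assuming the same admin user and secret file; the /root/cephclusterfs.secret path below is only a placeholder for wherever the secret file actually lives:
172.20.20.101,172.20.20.102,172.20.20.103:/  /cephfs  ceph  name=admin,secretfile=/root/cephclusterfs.secret,_netdev,noatime  0 0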

The following steps apply only to the loganalytics1 node.
5. Create new directories on the mounted Ceph file system
mkdir /cephfs/docker-compose
mkdir /cephfs/docker-compose/elasticsearch
mkdir /cephfs/loganalytics1
mkdir /cephfs/loganalytics1/data
mkdir /cephfs/loganalytics1/data-escon
mkdir /cephfs/loganalytics1/logs
mkdir /cephfs/loganalytics1/logs-escon
mkdir /cephfs/loganalytics2
mkdir /cephfs/loganalytics2/data
mkdir /cephfs/loganalytics2/data-escon
mkdir /cephfs/loganalytics2/logs
mkdir /cephfs/loganalytics2/logs-escon
mkdir /cephfs/loganalytics3
mkdir /cephfs/loganalytics3/data
mkdir /cephfs/loganalytics3/data-escon
mkdir /cephfs/loganalytics3/logs
mkdir /cephfs/loganalytics3/logs-escon
mkdir /cephfs/loganalyticsnexus
mkdir /cephfs/loganalyticsnexus/snapshots
then set ownership and permission.
chown -R root:root /cephfs/*; chmod -R g+rwx /cephfs/*; chgrp -R 0 /cephfs/*
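For reference, the same directory tree from step 5 can be created in a single command with bash brace expansion, equivalent to the individual mkdir calls above:
mkdir -p /cephfs/docker-compose/elasticsearch \
         /cephfs/loganalytics{1,2,3}/{data,data-escon,logs,logs-escon} \
         /cephfs/loganalyticsnexus/snapshots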
6. Change directory.
cd /cephfs/docker-compose/elasticsearch
7. Create the Docker Compose file for the Elasticsearch cluster
vi docker-compose.yml
with the following configuration.
version: "3.9"
services:
  loganalyticselasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    hostname: "elasticsearch.{{.Node.Hostname}}"
    deploy:
      mode: global
    env_file:
      - elasticsearch.yml
      - jvm.options
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
      - data:/usr/share/elasticsearch/data
      - logs:/usr/share/elasticsearch/logs
      - snapshots:/usr/share/elasticsearch/snapshots
    dns:
      - 1.1.1.1
      - 1.0.0.1

  loganalyticselasticsearch-escon:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    hostname: "elasticsearch-escon.{{.Node.Hostname}}"
    deploy:
      mode: global
    env_file:
      - elasticsearch-escon.yml
      - jvm-escon.options
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
      - data-escon:/usr/share/elasticsearch/data
      - logs-escon:/usr/share/elasticsearch/logs
      - snapshots:/usr/share/elasticsearch/snapshots
    ports:
      - 9200:9200
    dns:
      - 1.1.1.1
      - 1.0.0.1

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/data
  data-escon:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/data-escon
  logs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/logs
  logs-escon:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/logs-escon
  snapshots:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /cephfs/loganalyticsnexus/snapshots
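Optionally (not part of the original steps), the file can be syntax-checked before deploying with Docker Compose, which is already installed as a prerequisite; it prints the resolved configuration or an error:
docker-compose -f docker-compose.yml config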
8. Create the Elasticsearch configuration for the 3 Elasticsearch nodes. Note that this file is consumed via env_file, so each line is an environment variable; the official Elasticsearch image maps variables named after settings to the corresponding configuration options, and ELASTIC_PASSWORD sets the bootstrap password for the built-in elastic user.
vi elasticsearch.yml
with the following configuration.
cluster.name=loganalyticselasticsearch
node.name=elasticsearch.{{.Node.Hostname}}
node.remote_cluster_client=false
path.repo=/usr/share/elasticsearch/snapshots
bootstrap.memory_lock=true
http.port=9200
transport.host=_eth0_
transport.tcp.port=9300
transport.tcp.no_delay=true
discovery.seed_hosts=elasticsearch.loganalytics1,elasticsearch.loganalytics2,elasticsearch.loganalytics3
cluster.initial_master_nodes=elasticsearch.loganalytics1,elasticsearch.loganalytics2,elasticsearch.loganalytics3
xpack.ml.enabled=false
xpack.watcher.enabled=false
xpack.security.enabled=true
xpack.monitoring.collection.enabled=true
xpack.monitoring.collection.indices=syslog_*,*monitoring*
ELASTIC_PASSWORD=elastic
9. Create the Elasticsearch configuration for the 3 coordinating-only nodes
vi elasticsearch-escon.yml
with the following configuration.
cluster.name=loganalyticselasticsearch
node.name=elasticsearch-escon.{{.Node.Hostname}}
node.data=false
node.ingest=false
node.master=false
node.remote_cluster_client=false
path.repo=/usr/share/elasticsearch/snapshots
bootstrap.memory_lock=true
http.port=9200
transport.host=_eth1_
transport.tcp.port=9300
transport.tcp.no_delay=true
discovery.seed_hosts=elasticsearch.loganalytics1,elasticsearch.loganalytics2,elasticsearch.loganalytics3
cluster.initial_master_nodes=elasticsearch.loganalytics1,elasticsearch.loganalytics2,elasticsearch.loganalytics3
xpack.ml.enabled=false
xpack.watcher.enabled=false
xpack.security.enabled=true
xpack.monitoring.collection.enabled=true
xpack.monitoring.collection.indices=syslog_*,*monitoring*
ELASTIC_PASSWORD=elastic
10. Create the Java configuration for the 3 Elasticsearch nodes
vi jvm.options
with the following configuration.
ES_JAVA_OPTS=-Xms768m -Xmx768m
MAX_LOCKED_MEMORY=unlimited
11. Create the Java configuration for the 3 coordinating-only nodes
vi jvm-escon.options
with the following configuration.
ES_JAVA_OPTS=-Xms256m -Xmx256m
MAX_LOCKED_MEMORY=unlimited
12. Deploy the Elasticsearch cluster Docker Compose stack
docker stack deploy --compose-file docker-compose.yml loganalyticsswarm
then check the containers' state
docker stack ps loganalyticsswarm
and wait until all tasks are in the Running state.
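You can also list the stack's services and how many of their tasks are running (not part of the original steps):
docker stack services loganalyticsswarm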

13. Verify the Elasticsearch cluster
curl -XGET 'localhost:9200/?pretty' -u elastic:elastic

then verify that the coordinating-only nodes act as a load balancer by repeating the same request; port 9200 is published through Swarm's routing mesh, so the name field in the responses should alternate between the elasticsearch-escon containers.
curl -XGET 'localhost:9200/?pretty' -u elastic:elastic

curl -XGET 'localhost:9200/?pretty' -u elastic:elastic

14. Verify the Elasticsearch node roles.
curl -XGET 'localhost:9200/_cat/nodes?v' -u elastic:elastic
The 3 Elasticsearch nodes should report the role string cdhimstw, while the coordinating-only nodes show - in the node.role column.

15. Verify the Elasticsearch cluster health status.
curl -XGET 'localhost:9200/_cat/health?v' -u elastic:elastic
With all 6 nodes joined, the status should be green.

16. Set the snapshot repository configuration for the Elasticsearch cluster
curl -XPUT 'localhost:9200/_snapshot/log_analytics?pretty' -u elastic:elastic -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "/usr/share/elasticsearch/snapshots"
  }
}
'

then verify it.
curl -XGET 'localhost:9200/_snapshot/log_analytics?pretty' -u elastic:elastic
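Not part of the original steps, but as a quick smoke test of the repository you can take a first snapshot (the name snapshot_1 is arbitrary):
curl -XPUT 'localhost:9200/_snapshot/log_analytics/snapshot_1?wait_for_completion=true&pretty' -u elastic:elastic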

17. Set up the passwords for the Elasticsearch built-in users. Find the Elasticsearch node container ID
docker ps
then run the interactive setup inside that container (replace d6a9c7b5a980 with your container ID)
docker exec -it d6a9c7b5a980 /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
and set the bootstrap password again.
docker exec -it d6a9c7b5a980 /usr/share/elasticsearch/bin/elasticsearch-keystore add "bootstrap.password"
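Not part of the original steps, but the new credentials can be confirmed by authenticating against the cluster with the password you just set:
curl -XGET 'localhost:9200/_security/_authenticate?pretty' -u elastic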


18. The Elasticsearch cluster is now ready to use.
