How to deploy a Kibana cluster and a Logstash cluster in a containerized environment with Docker Swarm

In this guide I reuse the Elasticsearch cluster from my previous post and add a Kibana cluster and a Logstash cluster, with Traefik as the load balancer for both. The network topology:

In this scenario I will create a Kibana cluster with 3 Kibana instances, a Logstash cluster with 3 Logstash nodes, Traefik (acting as the load balancer), and Tecnativa’s Docker Socket Proxy (providing secure access to the Docker socket).

Let’s begin.
The following steps are performed on the loganalytics1 node only.
1. Create new directories on mounted Ceph file system directory
mkdir /cephfs/docker-compose/kibana
mkdir /cephfs/docker-compose/logstash
mkdir /cephfs/docker-compose/logstash/pipeline
mkdir /cephfs/docker-compose/traefik
then set ownership and permission.
chown -R root:root /cephfs/docker-compose/*; chmod -R g+rwx /cephfs/docker-compose/*; chgrp -R 0 /cephfs/docker-compose/*
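To double-check the result, list the directories (an optional sanity check, not part of the original steps):
ls -ld /cephfs/docker-compose/*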
2. Change directory.
cd /cephfs/docker-compose/traefik
3. Create a Docker Compose file
vi docker-compose.yml
with the following configuration.
version: "3.9"
services:
  socket:
    image: tecnativa/docker-socket-proxy:latest
    hostname: "socket.{{.Node.Hostname}}"
    deploy:
      placement:
        constraints:
          - node.role == manager
    environment:
      SERVICES: 1
      TASKS: 1
      NETWORKS: 1
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
        read_only: true
    networks:
      - socket
    dns:
      - 1.1.1.1
      - 1.0.0.1

  traefik:
    image: traefik:2.4.8
    hostname: "traefik.{{.Node.Hostname}}"
    deploy:
      placement:
        constraints:
          - node.role == manager
    environment:
      - TRAEFIK_API_INSECURE=true
      - TRAEFIK_PROVIDERS_DOCKER=true
      - TRAEFIK_PROVIDERS_DOCKER_SWARMMODE=true
      - TRAEFIK_PROVIDERS_DOCKER_ENDPOINT=tcp://socket.{{.Node.Hostname}}:2375
      - TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT=false
      - TRAEFIK_PROVIDERS_DOCKER_NETWORK=loganalyticsswarm_default
      - TRAEFIK_ENTRYPOINTS_kibana_ADDRESS=:80
      - TRAEFIK_ENTRYPOINTS_logstash_ADDRESS=:514/udp
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
    networks:
      - socket
      - loganalyticsswarm_default
    ports:
      - 80:80
      - 514:514/udp
      - 8080:8080
    dns:
      - 1.1.1.1
      - 1.0.0.1

  loganalyticskibana:
    image: docker.elastic.co/kibana/kibana:7.12.1
    hostname: "kibana.{{.Node.Hostname}}"
    deploy:
      mode: global
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=loganalyticsswarm_default"
        - "traefik.http.routers.kibana.entrypoints=kibana"
        - "traefik.http.routers.kibana.rule=PathPrefix(`/`)"
        - "traefik.http.services.kibana.loadbalancer.server.port=5601"
    environment:
      - HOSTNAME={{.Node.Hostname}}
      - NODE_OPTIONS="--max-old-space-size=256"
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
      - type: bind
        source: /cephfs/docker-compose/kibana/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    networks:
      - loganalyticsswarm_default
    dns:
      - 1.1.1.1
      - 1.0.0.1

  loganalyticslogstash:
    image: docker.elastic.co/logstash/logstash:7.12.1
    hostname: "logstash.{{.Node.Hostname}}"
    deploy:
      mode: global
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=loganalyticsswarm_default"
        - "traefik.udp.routers.logstash.entrypoints=logstash"
        - "traefik.udp.services.logstash.loadbalancer.server.port=5514"
    environment:
      - HOSTNAME={{.Node.Hostname}}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - type: bind
        source: /etc/localtime
        target: /etc/localtime
        read_only: true
      - type: bind
        source: /cephfs/docker-compose/logstash/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: /cephfs/docker-compose/logstash/jvm.options
        target: /usr/share/logstash/config/jvm.options
        read_only: true
      - pipeline:/usr/share/logstash/pipeline
    networks:
      - loganalyticsswarm_default
    dns:
      - 1.1.1.1
      - 1.0.0.1

volumes:
  pipeline:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /cephfs/docker-compose/logstash/pipeline

networks:
  socket:
    external: false
    name: socket
  loganalyticsswarm_default:
    external: true
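Since loganalyticsswarm_default is declared external, it must already exist from the previous Elasticsearch deployment. If in doubt, confirm it before deploying (an optional check):
docker network ls --filter name=loganalyticsswarm_default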
4. Create the Kibana configuration for the 3 Kibana instances
vi /cephfs/docker-compose/kibana/kibana.yml
with the following configuration.
server.name: "kibana.${HOSTNAME}"
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://elasticsearch-escon.${HOSTNAME}:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "elastic"
xpack.apm.enabled: false
xpack.security.enabled: true
xpack.security.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
xpack.encryptedSavedObjects.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
xpack.reporting.enabled: true
xpack.reporting.encryptionKey: "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
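Each xpack encryption key must be at least 32 characters long; the value above is a 32-character placeholder. To generate a random key instead (my suggestion, not part of the original guide):
openssl rand -hex 16
which prints 32 hexadecimal characters.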
5. Create the Logstash configuration for the 3 Logstash nodes
vi /cephfs/docker-compose/logstash/logstash.yml
with the following configuration.
node.name: logstash.${HOSTNAME}
http.host: "0.0.0.0"
http.port: 9600
path.data: /usr/share/logstash/data
path.logs: /usr/share/logstash/logs
config.reload.automatic: true
config.reload.interval: 60s
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch-escon.${HOSTNAME}:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: elastic
xpack.monitoring.collection.interval: 10s
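Once the stack is running (step 11), each Logstash node serves its monitoring API on the http.port configured above. Port 9600 is not published to the host, so one way to query it is from inside a running task (a sketch; it assumes the official image's bundled curl and picks the first matching container):
docker exec $(docker ps -q -f name=loganalyticslogstash | head -1) curl -s http://localhost:9600/?pretty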
6. Create the Java (JVM) configuration for the 3 Logstash nodes
vi /cephfs/docker-compose/logstash/jvm.options
with the following configuration.
-Xms256m
-Xmx256m
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djruby.compile.invokedynamic=true
-Djruby.jit.threshold=0
-Djruby.regexp.interruptible=true
-XX:+HeapDumpOnOutOfMemoryError
-Djava.security.egd=file:/dev/urandom
-Dlog4j2.isThreadContextMapInheritable=true
7. Create a Syslog input configuration for the 3 Logstash nodes
vi /cephfs/docker-compose/logstash/pipeline/logstash-ingress-from-appliance.conf
with the following configuration. Logstash listens on UDP 5514 rather than 514 because the container runs as a non-root user and cannot bind privileged ports; Traefik publishes UDP 514 and forwards it to 5514 via the udp router labels in the Docker Compose file.
input {
  udp {
    host => "0.0.0.0"
    port => 5514
    type => syslog
  }
}
8. Create a Syslog filter configuration for the 3 Logstash nodes
vi /cephfs/docker-compose/logstash/pipeline/logstash-filter-for-juniper-ssg-syslog.conf
with the following configuration.
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
  grok {
    match => { "message" => "%{GREEDYDATA}" }
    add_tag => "log_ssg_general"
  }
}
Note: This is just a simple filter. I use my own filter to parse the Juniper SSG firewall’s logs; a more structured sketch follows.
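For reference, a slightly more structured variant could split the standard syslog header from the body using Grok's built-in patterns (an illustrative sketch only; the field names are mine, and real Juniper SSG parsing needs device-specific patterns):
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{GREEDYDATA:syslog_message}" }
    add_tag => ["log_ssg_general"]
  }
}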
9. Create a Syslog output configuration for the 3 Logstash nodes
vi /cephfs/docker-compose/logstash/pipeline/logstash-egress-to-elasticsearch.conf
with the following configuration.
output {
  elasticsearch {
    hosts => "elasticsearch-escon.${HOSTNAME}:9200"
    user => "logstash_syslog_egress"
    password => "elastic"
    manage_template => false
    document_id => "%{[@metadata][fingerprint]}"
    index => "syslog_firewall_%{+YYYY.MM.dd}"
  }
}
10. Create a role and a user for Logstash.
curl -XPOST "http://localhost:9200/_xpack/security/role/logstash_syslog_egress?pretty" -u elastic:elastic -H 'Content-Type: application/json' -d'
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
  "indices": [
    {
      "names": [ "syslog_*" ],
      "privileges": ["write", "create", "delete", "create_index", "manage", "manage_ilm"]
    }
  ]
}'

curl -XPOST "http://localhost:9200/_xpack/security/user/logstash_syslog_egress?pretty" -u elastic:elastic -H 'Content-Type: application/json' -d'
{
  "password" : "elastic",
  "roles" : [ "logstash_syslog_egress" ],
  "full_name" : "logstash_syslog_egress"
}'
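To confirm the role and user exist, read them back (an optional check):
curl -XGET "http://localhost:9200/_xpack/security/role/logstash_syslog_egress?pretty" -u elastic:elastic
curl -XGET "http://localhost:9200/_xpack/security/user/logstash_syslog_egress?pretty" -u elastic:elastic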


11. Deploy the Docker Compose file
docker stack deploy --compose-file docker-compose.yml loganalyticsswarm2
then check the containers' state
docker stack ps loganalyticsswarm2
and wait until every task reaches the Running state.
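If a task stays in Preparing or keeps restarting, the service logs usually explain why (a generic troubleshooting step; substitute any of the four service names):
docker service logs loganalyticsswarm2_loganalyticskibana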

12. Access the Traefik dashboard from a browser (in this scenario, http://172.20.20.101:8080) and verify that the Kibana and Logstash services are discoverable.
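The same information is available from Traefik's API, which is convenient on a headless host (exposed here because TRAEFIK_API_INSECURE=true; the IP is from this scenario):
curl -s http://172.20.20.101:8080/api/http/routers
curl -s http://172.20.20.101:8080/api/udp/routers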


13. Set the Syslog server on the device (in this scenario, a Juniper SSG firewall).
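To test the ingestion path before touching the firewall, any Linux host can send a test message to the published UDP 514 entrypoint with util-linux logger (a sketch; -d selects UDP datagrams and the IP is from this scenario):
logger -d -n 172.20.20.101 -P 514 "test message for the Logstash pipeline"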


14. Access Kibana from a browser and log in with the elastic user (in this scenario, http://172.20.20.101), then verify that the Kibana cluster and Logstash cluster are running and that all instances and nodes are up.
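The Kibana status API offers the same check from the command line; the request goes through Traefik's kibana entrypoint on port 80 (an optional check):
curl -s -u elastic:elastic http://172.20.20.101/api/status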



15. Verify the snapshot repository is configured.
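From the command line, the repositories can be listed directly from Elasticsearch (the repository itself was configured in the previous Elasticsearch guide):
curl -XGET "http://localhost:9200/_snapshot/_all?pretty" -u elastic:elastic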

16. Create a Syslog index pattern in Kibana, then verify that Syslog data is discovered and stored in Elasticsearch.
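A direct way to confirm documents are arriving is to count them in the daily indices written by the output stage from step 9 (an optional check):
curl -XGET "http://localhost:9200/syslog_firewall_*/_count?pretty" -u elastic:elastic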
