Logstash integration

Can I find an example of a docker-compose.yml configuration with opendistro-for-elasticsearch, opendistro-for-elasticsearch-kibana, and logstash?


Which version of Logstash should be used to work with this distro, or is the latest OK?

The latest is OK… I set it up like this:

version: '3'  
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
      #- ./config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
      #- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.ping.unicast.hosts=odfe-node1
      - opendistro_security.ssl.http.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - odfe-data2:/usr/share/elasticsearch/data
      #- ./config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
      #- ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.8.0
    container_name: odfe-kibana
    ports:
      - 80:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://odfe-node1:9200
    networks:
      - odfe-net

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.0.0
    container_name: logstash
    volumes:
    - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    networks:
      - odfe-net
    depends_on:
      - odfe-node1

volumes:
  odfe-data1:
  odfe-data2:

networks:
  odfe-net:

Logstash.conf

input { 
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.36.64:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "logstash"
    password => "logstash"
    ssl => false
    ssl_certificate_verification => false
  }
}

I run Open Distro (Kibana, Elasticsearch) and Logstash simultaneously from docker-compose, and in my case it works well. Did you check the credentials in logstash.conf for Elasticsearch? And did you use the Logstash OSS (open source) version?

user => "logstash"
password => "logstash"
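
As a quick sanity check of those credentials (a minimal sketch, assuming HTTP is enabled on port 9200 as in the compose file above, i.e. opendistro_security.ssl.http.enabled=false), a request like this should return cluster info rather than a 401:

# run from the Docker host; replace localhost with your node's address if needed
curl -u logstash:logstash http://localhost:9200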

Please show your Beats config.

You need to specify the IP of the host where your Docker containers are running. I think the problem is that you left it as localhost; because Logstash is a separate container, it's trying to send logs to itself, not to Elasticsearch.
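
For reference, a minimal Metricbeat output section following that advice could look like the sketch below; <docker-host-ip> is only a placeholder for the address of the machine where port 5044 is published, not a value from this thread:

# metricbeat.yml (sketch)
output.logstash:
  # point at the Docker host that publishes the Logstash beats port, not localhost
  hosts: ["<docker-host-ip>:5044"]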

I'm trying to do everything in the docker-compose file.

I've also not had any luck getting Metricbeat to attach. The curl to Elasticsearch works fine.

This is the docker-compose file:

version: '3'
services:
  aj-node1:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    container_name: aj-node1
    environment:
      - cluster.name=aj-cluster
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - aj-data1:/ABJ/docker/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - aj-net
  aj-node2:
    image: amazon/opendistro-for-elasticsearch:0.8.0
    container_name: aj-node2
    environment:
      - cluster.name=aj-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.zen.ping.unicast.hosts=aj-node1
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - aj-data2:/ABJ/docker/elasticsearch/data
    networks:
      - aj-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:0.8.0
    container_name: aj-kibana
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: https://aj-node1:9200
    networks:
      - aj-net

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.0.0
    container_name: aj-logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - 5044:5044
    networks:
      - aj-net
    depends_on:
      - aj-node1

volumes:
  aj-data1:
  aj-data2:
networks:
  aj-net:

The following is the "logstash.conf" file:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "192.168.8.25:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    manage_template => false
    user => "logstash"
    password => "logstash"
    ssl => false
    ssl_certificate_verification => false
  }
}

This is the WARN/error:

aj-logstash | [2019-04-29T13:36:55,973][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"httpq://logstash:xxxxxx@192.168.8.25:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [httpq://logstash:xxxxxx@192.168.8.25:9200/][Manticore::ClientProtocolException] 192.168.8.25:9200 failed to respond"}

I tried a different version of Logstash (thinking that 7 might be too much).

Same error
[2019-04-29T16:02:32,154][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash:xxxxxx@:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash:xxxxxx@192.168.8.25:9200/][Manticore::ClientProtocolException] :9200 failed to respond"}

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.6.2
    container_name: aj-logstash
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    networks:
      - aj-net
    depends_on:
      - aj-node1

In your logstash.conf you are trying to reach the 192.168.8.25 ES host, which the Logstash container doesn't know about.
Change it to:
aj-node1:9200

If this won't work, add the steps below.
In docker-compose.yml, change the network mode for the logstash container:
network_mode: "host"
This means the Docker container will try to resolve DNS using your host machine's network.
Then add this row to your /etc/hosts:
127.0.0.1 aj-node1

which will resolve aj-node1 to 127.0.0.1.
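
A sketch of how the logstash service block could look with that change (with host networking, the ports and networks entries no longer apply, so they are dropped here):

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.0.0
    container_name: aj-logstash
    network_mode: "host" # share the host's network stack and DNS resolution
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - aj-node1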

Let me know if this helped.

Is there a way to do it without going into /etc/hosts?

Is this perhaps why Metricbeat is having issues sending to Elasticsearch as well?

As I said at first, in your logstash.conf file you should use aj-node1:9200 instead of 192.168.8.25:9200,
so your logstash.conf should look like this:

output {
  elasticsearch {
    hosts => "aj-node1:9200"
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    manage_template => false
    user => "logstash"
    password => "logstash"
    ssl => false
    ssl_certificate_verification => false
  }
}

If this won't work, you can edit /etc/hosts.

Thank you all for your help. I found a typo in my compose file, and once it was fixed, everything began to work better.

The only thing I'll still need to do is create users so that Metricbeat and Filebeat can send data.
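
For anyone who gets to that step: one option is to add an internal user in the security plugin's internal_users.yml and reload the configuration with securityadmin.sh. A rough sketch, assuming a hypothetical "beats" user and a bcrypt hash generated with the plugin's hash.sh tool (the exact keys depend on your security plugin version):

# internal_users.yml (sketch; "beats" is a hypothetical user name)
beats:
  # generate with plugins/opendistro_security/tools/hash.sh -p <password>
  hash: "$2a$12$REPLACE_WITH_GENERATED_HASH"
  roles:
    - logstash # reuse a role that can write to your beat/logstash indices, or map your own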

Thanks

Where have you created the username for Beats?
In Kibana or at the server level?
What version of Logstash and the logstash-output-elasticsearch plugin are you using?
Please let me know, I am also running into this error.
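
If it helps, one way to check those versions in a running container (assuming the official Logstash image layout, where the working directory is /usr/share/logstash, and the aj-logstash container name used in this thread) is:

# print the Logstash version inside the container
docker exec aj-logstash bin/logstash --version

# list the installed elasticsearch output plugin and its version
docker exec aj-logstash bin/logstash-plugin list --verbose logstash-output-elasticsearch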