Logstash Problem With Open Distro Docker

My docker-compose file (taken from the Open Distro documentation: Docker - Open Distro Documentation):

version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.6.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:1.6.0
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node2
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - odfe-data2:/usr/share/elasticsearch/data
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.6.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:7.3.2
    container_name: logstash
    volumes:
      - ./logstash.docker.conf:/usr/share/logstash/pipeline/logstash.conf
      - .:/usr/share/logstash/logs
    ports:
      - "5011:5011"
      - "5012:5012"
      - "5013:5013"
    networks:
      - odfe-net
    depends_on:
      - odfe-node1

volumes:
  odfe-data1:
  odfe-data2:

networks:
  odfe-net:

My Logstash pipeline conf file:

input {
  http {
    port => 5011
  }
  udp {
    port => 5012
    codec => "json"
  }
  tcp {
    port => 5013
    codec => "json_lines"
  }
}
filter {
  json {
    source => "message"
  }
  
  elasticsearch {
    hosts => ["odfe-node1:9200", "odfe-node2:9200"]
    query => "name:%{name}"
    fields => { "spend" => "current_spend" }
    index => "logstash-etl"
    user => "logstash"
    password => "logstash"
    enable_sort => false 
    ssl => false 
  }

  ruby {
    code => "event.set('spend', (event.get('current_spend') + event.get('spend')))"
    remove_field => ["current_spend"]
  }

  fingerprint {
    source => "name"
    target => "[@metadata][fingerprint]"
    method => "MURMUR3"
  }
}

output {
  elasticsearch {
    hosts => ["https://odfe-node1:9200","https://odfe-node2:9200" ]
    ssl => false
    ssl_certificate_verification => false
    user => "logstash"
    password => "logstash"
    ilm_enabled => false
    index => "logstash-etl"
  }
  stdout {
  }
}

My error log in Logstash (which I don’t see if I remove the elasticsearch query filter):

logstash | [2020-04-23T02:21:31,017][ERROR][logstash.javapipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Manticore::SocketException: Connection refused (Connection refused)>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:274:in `call_once'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:158:in `code'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/base.rb:262:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-transport-5.0.5/lib/elasticsearch/transport/client.rb:131:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/elasticsearch-api-5.0.5/lib/elasticsearch/api/actions/ping.rb:20:in `ping'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.6.0/lib/logstash/filters/elasticsearch.rb:192:in `test_connection!'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-elasticsearch-3.6.0/lib/logstash/filters/elasticsearch.rb:74:in `register'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:56:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:192:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:191:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:463:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:204:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:146:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:105:in `block in start'"], :thread=>"#<Thread:0x5d027f32 run>"}

Error from one of the Elasticsearch nodes:

odfe-node1    | [2020-04-23T02:20:16,456][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [odfe-node1] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a417574686f72697a6174696f6e3a204261736963205957527461573436595752746157343d0d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206f6466652d6e6f6465313a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e340d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a0d0a
odfe-node1    | io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a417574686f72697a6174696f6e3a204261736963205957527461573436595752746157343d0d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206f6466652d6e6f6465313a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e340d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a0d0a
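(For what it’s worth, the hex payload in that message is just the plaintext HTTP request Logstash sent: it decodes to a "HEAD / HTTP/1.1" probe with the headers "Connection: Keep-Alive", "Authorization: Basic YWRtaW46YWRtaW4=" (admin:admin), "Content-Type: application/json", "Host: odfe-node1:9200", "User-Agent: Manticore 0.6.4" and "Accept-Encoding: gzip,deflate". In other words, the node received plain HTTP on a port where it expects TLS.)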

Even though ssl is set to false in the elasticsearch filter plugin, it still throws this error. Anyone know why this could be happening?

This problem persists even if I use the latest Logstash image, logstash:7.6.2.

You should also specify http in the URI:
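For example, in the elasticsearch filter’s hosts (a sketch reusing the node names from the pipeline above):

elasticsearch {
    hosts => ["http://odfe-node1:9200", "http://odfe-node2:9200"]
    # ...remaining filter settings unchanged
}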

@ogulman Thanks. I changed it to http but still get the error below. As I said, the only way this problem goes away is if I remove the elasticsearch filter plugin from the Logstash pipeline. I don’t get why this happens even though I set ssl => false in the plugin settings.

logstash      | Apr 23, 2020 10:40:58 AM org.apache.http.impl.execchain.RetryExec execute
logstash      | INFO: I/O exception (org.apache.http.NoHttpResponseException) caught when processing request to {}->http://odfe-node1:9200: The target server failed to respond
logstash      | Apr 23, 2020 10:40:58 AM org.apache.http.impl.execchain.RetryExec execute
logstash      | INFO: Retrying request to {}->http://odfe-node1:9200
odfe-node1    | [2020-04-23T10:40:58,372][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [odfe-node1] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a417574686f72697a6174696f6e3a204261736963206247396e63335268633267366247396e633352686332673d0d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206f6466652d6e6f6465313a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e340d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a0d0a
odfe-node1    | io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 48454144202f20485454502f312e310d0a436f6e6e656374696f6e3a204b6565702d416c6976650d0a417574686f72697a6174696f6e3a204261736963206247396e63335268633267366247396e633352686332673d0d0a436f6e74656e742d547970653a206170706c69636174696f6e2f6a736f6e0d0a486f73743a206f6466652d6e6f6465313a393230300d0a557365722d4167656e743a204d616e7469636f726520302e362e340d0a4163636570742d456e636f64696e673a20677a69702c6465666c6174650d0a0d0a

Try this:

elasticsearch {
    hosts => "elasticsearch"
    index => "logstash-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    ilm_enabled => false
    user => "user"
    password => "passwd"
}
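The same likely applies to the elasticsearch filter plugin, since the backtrace above shows the connection test failing while that filter registers. The filter plugin does not take ssl_certificate_verification, but it does accept a ca_file; a sketch (the CA path is hypothetical; mount your cluster’s root CA into the Logstash container at that location):

elasticsearch {
    hosts => ["odfe-node1:9200", "odfe-node2:9200"]
    ssl => true
    ca_file => "/usr/share/logstash/config/root-ca.pem" # hypothetical mount point for the cluster root CA
    user => "logstash"
    password => "logstash"
    query => "name:%{name}"
    fields => { "spend" => "current_spend" }
    index => "logstash-etl"
    enable_sort => false
}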