Elastic cluster displays only a single node

I have set up a 3-node cluster in Elasticsearch, but it displays only a single node in the cluster.

I have specified:

cluster.name
node.name
network.host
discovery.seed_hosts

When I run curl -XGET https://localhost:9200/_cat/nodes?v -u admin:admin --insecure,
it displays only a single node instead of 3.

Can anyone confirm what's wrong here?

Hello @sourabhsharma, can you please provide the data from your config, like the example below:

cluster.name: test-cluster
node.name: test
network.host: ["_local_", "_site_"]
http.host: ["_local_", "_site_"]
http.port: 9200
transport.host: ["_local_", "_site_"]
transport.port: 9300
path.data: /opt/elk-data
path.logs: /var/log/elasticsearch

cluster.initial_master_nodes: ["test"]
discovery.seed_hosts: ["192.168.1.10:9300"]

cluster.name: odfe-cluster
node.name: odfe-d2
#node.master: true
#node.data: true
#node.ingest: true

path.data: /var/lib/elasticsearch

path.logs: /var/log/elasticsearch

network.host: "0.0.0.0"

#discovery.seed_hosts: ["xxx", "xxx"]
#discovery.seed_hosts: ["x.x.x.x", "x.x.x.x"]
discovery.zen.ping.unicast.hosts: ["xxxx", "xxxx"]
#discovery.zen.minimum_master_nodes: 1
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########

WARNING: revise all the lines below before you go into production

opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:

  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Similarly, I have defined the configuration on the two other nodes.

root@data3:/home/ubuntu# curl -XGET https://localhost:9200/_cat/nodes?v -u admin:admin --insecure
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
x.x.x.x 26 97 1 0.06 0.03 0.02 dimr * odfe-d2
root@data3:/home/ubuntu#

Ideally, it should return 3 nodes in the cluster.

[quote="stmx38, post:2, topic:3933"]

cluster.initial_master_nodes: ["test"]
discovery.seed_hosts: ["192.168.1.10:9300"]

[/quote]

[WARN ][o.e.d.HandshakingTransportAddressConnector] [odfe-d1] [connectToRemoteMasterNode[34.197.1.61:9300]] completed handshake with [{odfe-d2}{N-nYV4IYQgW_SgNOq49KYw}{AwJVJGP5SniZIfAP3bP6iw}{172.31.11.196}{172.31.11.196:9300}{dimr}] but followup connection failed
org.elasticsearch.transport.ConnectTransportException: [odfe-d2][172.31.11.196:9300] connect_timeout[30s]
at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onTimeout(TcpTransport.java:972) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:651) ~[elasticsearch-7.9.1.jar:7.9.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
[2020-10-14T16:53:45,881][INFO ][stats_log ] [odfe-d1] ------------------------------------------------------------------------
Program=PerformanceAnalyzerPlugin
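The connect_timeout[30s] in the log above usually means the transport port (9300 by default) is blocked between the hosts, for example by a firewall or AWS security group. A minimal sketch to check reachability, assuming bash and the seed-host IPs mentioned later in this thread (replace them with your own):

```shell
# Hedged sketch: verify each seed host's transport port is reachable
# from this node. Uses bash's /dev/tcp pseudo-device; the IPs below
# are the seed hosts from this thread -- replace with yours.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} UNREACHABLE"
  fi
}

for h in 172.31.12.233 172.31.5.28 172.31.11.196; do
  check_port "$h" 9300
done
```

If any host prints UNREACHABLE, open TCP 9300 between all the nodes before retrying cluster formation.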

@sourabhsharma, can you please check and post these 3 configuration keys from each host here:
host 1

node.name: odfe-d1
cluster.initial_master_nodes: ["odfe-d1", "odfe-d2", "odfe-d3"]
discovery.seed_hosts: ["192.168.1.11:9300", "192.168.1.12:9300", "192.168.1.13:9300"]

host 2

node.name: odfe-d2
cluster.initial_master_nodes: ["odfe-d1", "odfe-d2", "odfe-d3"]
discovery.seed_hosts: ["192.168.1.11:9300", "192.168.1.12:9300", "192.168.1.13:9300"]

host 3

node.name: odfe-d3
cluster.initial_master_nodes: ["odfe-d1", "odfe-d2", "odfe-d3"]
discovery.seed_hosts: ["192.168.1.11:9300", "192.168.1.12:9300", "192.168.1.13:9300"]
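A quick way to confirm that the three keys above were actually picked up on each host is to grep them out of the config file. A small sketch; the path assumes the default DEB/RPM location /etc/elasticsearch/elasticsearch.yml, and show_discovery_keys is just an illustrative helper name:

```shell
# Hedged sketch: print the three discovery-related keys from a config
# file so they can be compared across hosts. The keys must match the
# grep pattern exactly at the start of the line (no leading spaces).
show_discovery_keys() {
  grep -E '^(node\.name|cluster\.initial_master_nodes|discovery\.seed_hosts)' "$1"
}

# Usage, on each of the three hosts:
#   show_discovery_keys /etc/elasticsearch/elasticsearch.yml
```

cluster.initial_master_nodes and discovery.seed_hosts should be identical on every host; only node.name should differ.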

node.name: odfe-master
network.host: "0.0.0.0"
discovery.seed_hosts: ["172.31.12.233", "172.31.5.28", "172.31.11.196"]

node.name: odfe-d1
network.host: "0.0.0.0"
discovery.seed_hosts: ["172.31.12.233", "172.31.5.28", "172.31.11.196"]

node.name: odfe-d2
network.host: "0.0.0.0"
discovery.seed_hosts: ["172.31.12.233", "172.31.5.28", "172.31.11.196"]

Do I need to create any certificates to use in my cluster?

completed handshake with [{odfe-d2}{N-nYV4IYQgW_SgNOq49KYw}{AwJVJGP5SniZIfAP3bP6iw}{172.31.11.196}{172.31.11.196:9300}{dimr}] but followup connection failed
org.elasticsearch.transport.ConnectTransportException: [odfe-d2][172.31.11.196:9300] connect_timeout[30s]
at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onTimeout(TcpTransport.java:972) ~[elasticsearch-7.9.1.jar:7.9.1]

Can you send me a link to any blog that I can follow to set this up on my VM?

@sourabhsharma, please add keys to your config as in my example above:

node.name: odfe-master/odfe-d1/odfe-d2
cluster.initial_master_nodes: ["odfe-master", "odfe-d1", "odfe-d2"]
discovery.seed_hosts: ["172.31.12.233","172.31.5.28","172.31.11.196" ]

All 3 keys should be placed on every node, and the Elasticsearch service should be restarted.

For more information, please refer to the documentation: Important discovery and cluster formation settings
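Once all three nodes are restarted with those keys, _cat/nodes from any host should list three rows. A small sketch that counts the nodes from the command's output, assuming the demo admin:admin credentials used earlier in this thread (count_nodes is an illustrative helper, not part of Elasticsearch):

```shell
# Hedged sketch: count the nodes reported by the _cat/nodes API.
count_nodes() {
  # Expects the output of _cat/nodes?v on stdin; the ?v flag adds a
  # header row, so subtract 1 from the line count.
  echo "$(( $(wc -l) - 1 ))"
}

# Usage, run on any of the three hosts (curl line matches the thread):
#   curl -s -XGET https://localhost:9200/_cat/nodes?v -u admin:admin --insecure | count_nodes
# A healthy 3-node cluster should print 3.
```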

I have already added these 3 keys on every node, but the 3 nodes still don't form a cluster. Are you available for a quick Zoom call?

@sourabhsharma, yes, let's try. I don't have much experience with Zoom; I just created an account. I'm in the UTC+3 time zone.

My Skype ID is saurabh.find1; can you please ping me?