About setting up high availability Elasticsearch cluster

Hi,
I am trying to set up a high availability cluster with 3 nodes.
I want all 3 of them to be master-eligible nodes and data nodes. I am using the following configuration in the .yml file for node 1:
cluster.name: test-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: <IP_of_node1>
discovery.seed_hosts: ["<Private-ip-node1>", "<Private-ip-node2>", "<Private-ip-node3>"]
cluster.initial_master_nodes: ["node-1", "node-2","node-3"]

The .yml for node 2 and node 3 is similar to this; I am only changing node.name and network.host.
The multi-node cluster is not forming. Every node is behaving like a single-node cluster.
Do I need to add anything else to this configuration?
discovery.zen.minimum_master_nodes: 2 avoids the split-brain problem, but it is not helping me form a multi-node cluster.
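To make that concrete, here is what the node-2 file looks like under that scheme (node 3 is the same with its own name and IP; the placeholders match the ones above):

```yaml
cluster.name: test-cluster
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: <IP_of_node2>
# the discovery settings are identical on all three nodes
discovery.seed_hosts: ["<Private-ip-node1>", "<Private-ip-node2>", "<Private-ip-node3>"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
```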

Try setting cluster.initial_master_nodes: ["node-1"] on all nodes.

@mmamaenko
No, that setting is not working.
It is still forming 3 standalone single-node clusters with the same name.

Are you running in Kubernetes or docker-compose? What is your ES version?

I am using the RPM version: opendistroforelasticsearch-1.12.0

Do you start the first node, wait until it is up and running, and then start the second one? Do you have logs from the second node?

I am waiting after starting the first node. Yes, let me share the logs from my second node.

[2021-02-04T20:28:51,273][INFO ][c.a.o.s.a.i.AuditLogImpl ] [node-2] Message routing enabled: true
[2021-02-04T20:28:51,324][INFO ][c.a.o.s.f.OpenDistroSecurityFilter] [node-2] <NONE> indices are made immutable.
[2021-02-04T20:28:51,514][INFO ][c.a.o.a.b.ADCircuitBreakerService] [node-2] Registered memory breaker.
[2021-02-04T20:28:51,828][INFO ][o.e.t.NettyAllocator ] [node-2] creating NettyAllocator with the following configs: [name=unpooled, suggested_max_allocation_size=256kb, factors={es.unsafe.use_unpooled_allocator=null, g1gc_enabled=true, g1gc_region_size=1mb, heap_size=1gb}]
[2021-02-04T20:28:51,915][INFO ][o.e.d.DiscoveryModule ] [node-2] using discovery type [zen] and seed hosts providers [settings]
[2021-02-04T20:28:52,288][WARN ][o.e.g.DanglingIndicesState] [node-2] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
[2021-02-04T20:28:52,626][INFO ][c.a.o.e.p.h.c.PerformanceAnalyzerConfigAction] [node-2] PerformanceAnalyzer Enabled: true
[2021-02-04T20:28:52,778][INFO ][o.e.n.Node ] [node-2] initialized
[2021-02-04T20:28:52,779][INFO ][o.e.n.Node ] [node-2] starting ...
[2021-02-04T20:28:52,894][INFO ][o.e.t.TransportService ] [node-2] publish_address {10.x.x.x.x:9300}, bound_addresses {10.x.x.x:9300}
[2021-02-04T20:28:53,083][INFO ][o.e.b.BootstrapChecks ] [node-2] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2021-02-04T20:28:53,085][INFO ][o.e.c.c.Coordinator ] [node-2] cluster UUID [xHFuihZORxanHx4xvGKg0g]
[2021-02-04T20:28:53,214][INFO ][o.e.c.s.MasterService ] [node-2] elected-as-master ([1] nodes joined)[{node-2}{N0hefjY8Rz-aZd075_mpMg}{lP8EIHeATnyTbc9DopgMlw}{10.X.X.X}{10.X.X.X:9300}{dimr} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 14, delta: master node changed {previous [], current [{node-2}{N0hefjY8Rz-aZd075_mpMg}{lP8EIHeATnyTbc9DopgMlw}{x.x.x.x}{x.x.x.x:9300}{dimr}]}
[2021-02-04T20:28:53,250][INFO ][o.e.c.s.ClusterApplierService] [node-2] master node changed {previous [], current [{node-2}{N0hefjY8Rz-aZd075_mpMg}{lP8EIHeATnyTbc9DopgMlw}{10.x.x.x}{10.x.x.x:9300}{dimr}]}, term: 2, version: 14, reason: Publication{term=2, version=14}
[2021-02-04T20:28:53,261][WARN ][c.a.o.e.p.c.s.h.ConfigOverridesClusterSettingHandler] [node-2] Config override setting update called with empty string. Ignoring.
[2021-02-04T20:28:53,263][INFO ][c.a.o.a.c.ADClusterEventListener] [node-2] CLuster is not recovered yet.
[2021-02-04T20:28:53,309][INFO ][o.e.h.AbstractHttpServerTransport] [node-2] publish_address {10.x.x.x:9200}, bound_addresses {10.x.x.x:9200}
[2021-02-04T20:28:53,310][INFO ][o.e.n.Node ] [node-2] started
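The telling lines are `cluster UUID [xHFuihZORxanHx4xvGKg0g]` and `elected-as-master ([1] nodes joined)`: node-2 already has a cluster UUID on disk, so it bootstraps its own one-node cluster instead of joining node-1. One way to confirm a split is to compare the cluster_uuid each node reports on port 9200; a sketch (the sed parsing is my assumption, jq would also work):

```shell
#!/bin/sh
# Sketch: extract cluster_uuid from the JSON that `curl -s http://<node-ip>:9200/`
# returns. If the three nodes print different UUIDs, they have bootstrapped as
# three separate clusters even though they share a cluster.name.
get_uuid() {
  printf '%s' "$1" | sed -n 's/.*"cluster_uuid"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# usage, once per node:
#   get_uuid "$(curl -s http://<Private-ip-node1>:9200/)"
```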

Did you destroy the data in /var/lib/elasticsearch before restarting the nodes with a different configuration? ES stores cluster state on disk and reuses it when restarting…


No, I have not destroyed that. This is what I see:
batch_metrics_enabled.conf logging_enabled.conf nodes performance_analyzer_enabled.conf rca_enabled.conf
Do I need to destroy all of them before restarting?

I was talking about the ES data, not the config files.

Sorry, this is what I found in /var/lib/elasticsearch; these must be inside the nodes directory:
indices node.lock _state

Just delete everything in this directory and restart the nodes.
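Something like this on each node; a sketch only (the systemd unit name is assumed from the RPM install, and note this deletes all index data on the node):

```shell
#!/bin/sh
# Sketch: wipe stale cluster state on one node before re-forming the cluster.
# Stop the service first: a running node holds node.lock inside this directory.
wipe_es_state() {
  data_dir="$1"               # e.g. /var/lib/elasticsearch (the path.data setting)
  rm -rf "$data_dir/nodes"    # removes indices, node.lock and _state
}

# on each node:
#   sudo systemctl stop elasticsearch
#   wipe_es_state /var/lib/elasticsearch
#   sudo systemctl start elasticsearch
```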

It is not working after deleting the contents of this directory on each node. Every node is still working as a single-node cluster.

A few questions:
- Are you running all nodes on separate computers?
- Do you have firewall ports 9200 and 9300 open?
- Can the nodes reach each other?
- Do you have multiple networks configured on the nodes?
- Can you set logging to DEBUG on all nodes?
- Can you remove discovery.zen.minimum_master_nodes: 2 from the config? (It is deprecated and ignored in ES 7.x anyway.)
- Can all nodes resolve each other's names to IPs?
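For the reachability question, a quick probe; a sketch only, and note that with the Open Distro security plugin enabled, 9200 usually speaks HTTPS, so you may need `https://` plus `-k`:

```shell
#!/bin/sh
# Sketch: check whether a peer's HTTP port (9200) answers at all. For the
# transport port (9300, which is not HTTP) use `nc -z <host> 9300` instead.
http_reachable() {
  host="$1"
  if curl -s -m 2 -o /dev/null "http://$host:9200/"; then
    echo "$host:9200 reachable"
  else
    echo "$host:9200 NOT reachable"
  fi
}
```

Run it from each node against the other two; a "NOT reachable" from any direction points at routing or firewall rather than Elasticsearch configuration.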

I am using 3 virtual machines for the 3 nodes.
They are all on the same network.
The firewall is not running on any of the nodes.
I am not currently using discovery.zen.minimum_master_nodes: 2 in my config file.
The nodes can reach each other.
The same setup works for plain Elasticsearch but not for the Open Distro version.
I am going to set logging to DEBUG.
Thank you very much for your efforts @mmamaenko

Also, I am using network.host: 0.0.0.0 in my Kubernetes cluster. I saw in the Elasticsearch forum that this helped somebody form the cluster.
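In .yml terms that is just the following (it binds on all interfaces, which is fine on a private network but should not be exposed publicly; the publish_host line is my assumption for machines with more than one interface):

```yaml
network.host: 0.0.0.0
# optional, if the VM has multiple interfaces:
# network.publish_host: <Private-ip-of-this-node>
```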

Resolved this issue. Thank you @mmamaenko for your constant support. I rebooted all my virtual machines, deleted the /var/lib/elasticsearch/nodes folder again, and the nodes started connecting and formed a happy 3-node cluster. Now I think I can add that setting to resolve the split-brain problem too.
I also tried scaling and it worked for me.
Thank you!