Securityadmin.sh FAIL: Expected 4 nodes to return response, but got 6

Hi, I'm using Open Distro v0.10.0.0 with Elasticsearch v6.8.1 in a cluster running on EC2 instances in AWS.

When the EC2 instances in the cluster are first created, they each run an Ansible playbook to install and configure Elasticsearch.

When a master node first initializes, it checks whether the .opendistro_security index exists. If it doesn't, it runs securityadmin.sh, but that is failing with return code 255 and this output:

sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /etc/elasticsearch/certs/CA.pem -cert /etc/elasticsearch/certs/elkdaddy.pem -key /etc/elasticsearch/certs/elkdaddy_pk8.key -keypass XXXXXXXXX -h ip-10-249-199-224.ec2.internal
WARNING: JAVA_HOME not set, will use /usr/bin/java
Open Distro Security Admin v6
Will connect to ip-10-249-199-224.ec2.internal:9300 … done
Elasticsearch Version: 6.8.1
Open Distro Security Version: 0.10.0.0
Connected as C=US,ST=Wisconsin,L=Milwaukee,OU=EMS,O=Northwestern Mutual,CN=ELKDADDY.nm.nmfco.com
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: dev
Clusterstate: GREEN
Number of nodes: 4
Number of data nodes: 0
.opendistro_security index does not exists, attempt to create it … done (0-all replicas)
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/
Will update 'security/config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
SUCC: Configuration for 'config' created or updated
Will update 'security/roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
SUCC: Configuration for 'roles' created or updated
Will update 'security/rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
SUCC: Configuration for 'rolesmapping' created or updated
Will update 'security/internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
SUCC: Configuration for 'internalusers' created or updated
Will update 'security/actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
SUCC: Configuration for 'actiongroups' created or updated
FAIL: Expected 4 nodes to return response, but got 6

Any idea how to work around this?

I think the node count may be changing because all the nodes in the cluster (seven, in this case) are created more or less at once in AWS via an EC2 auto-scaling group. When the master node is created, not all of the nodes may exist yet, but by the time securityadmin.sh finishes its work, more nodes may have joined (which would explain the 'expected 4 ... but got 6' message).
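If that's what's happening, one idea I had is to wait until all the expected nodes have joined before running securityadmin.sh at all. Here's an untested sketch: it assumes client-certificate authentication works against the REST layer with the same admin cert (it may not, depending on config.yml), and `EXPECTED_NODES` would have to come from the auto-scaling group's desired capacity (a value I'm making up here):

```shell
#!/bin/sh
# Sketch only: block until the cluster has at least EXPECTED_NODES members.
# Assumes the REST layer accepts the admin client cert; paths are the same
# ones used in the securityadmin.sh invocation above.

count_nodes() {
  # _cat/nodes prints one line per node that has joined the cluster
  curl -s \
    --cacert /etc/elasticsearch/certs/CA.pem \
    --cert /etc/elasticsearch/certs/elkdaddy.pem \
    --key /etc/elasticsearch/certs/elkdaddy_pk8.key \
    "https://ip-10-249-199-224.ec2.internal:9200/_cat/nodes" | wc -l
}

wait_for_nodes() {
  expected=$1
  until [ "$(count_nodes)" -ge "$expected" ]; do
    echo "waiting for ${expected} nodes to join the cluster..." >&2
    sleep 15
  done
}

# e.g. wait_for_nodes "$EXPECTED_NODES" before invoking securityadmin.sh
```

If the key is passphrase-protected (as the -keypass flag above suggests), curl would also need its `--pass` option.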

Again, any idea how to work around this?
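The other workaround I'm considering is just retrying securityadmin.sh until it succeeds, on the assumption that the failure is transient and goes away once the cluster membership stops changing. A rough sketch (the retry/sleep counts are arbitrary):

```shell
#!/bin/sh
# Sketch only: retry a command until it exits 0, assuming failures are
# transient (nodes still joining the cluster during bootstrap).

retry() {
  max_tries=$1
  sleep_secs=$2
  shift 2
  i=1
  while [ "$i" -le "$max_tries" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$max_tries failed; retrying in ${sleep_secs}s" >&2
    sleep "$sleep_secs"
    i=$((i + 1))
  done
  return 1
}

# Usage, with the same flags as the failing run above:
# retry 10 30 sh /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
#   -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ \
#   -icl -nhnv \
#   -cacert /etc/elasticsearch/certs/CA.pem \
#   -cert /etc/elasticsearch/certs/elkdaddy.pem \
#   -key /etc/elasticsearch/certs/elkdaddy_pk8.key \
#   -keypass XXXXXXXXX \
#   -h ip-10-249-199-224.ec2.internal
```

But that feels like papering over the race rather than fixing it, so I'd still like to know if there's a better approach.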

AtDhVaAnNkCsE