OpenDistro not initialized after recreating security index. Nodes report initialized. Cluster still RED

So I’m a bit confused by this one: due to a certificate renewal I had to reset some security settings and subsequently rebuild the security index.

That’s done, all successfully, using securityadmin.

All nodes report “initialized”, but the cluster state is still red, and curling the API still shows the following:

[2019-07-30T03:24:47,561][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [elasticsearch-opendistro-es-master-5ccdf48f59-fsm6s] Node 'elasticsearch-opendistro-es-master-5ccdf48f59-fsm6s' initialized
[2019-07-30T03:24:51,016][ERROR][c.a.o.s.a.BackendRegistry] [elasticsearch-opendistro-es-master-5ccdf48f59-fsm6s] Not yet initialized (you may need to run securityadmin)
[2019-07-30T03:24:52,583][ERROR][c.a.o.s.a.BackendRegistry] [elasticsearch-opendistro-es-master-5ccdf48f59-fsm6s] Not yet initialized (you may need to run securityadmin)

The index exists and is in a green state (I disabled security temporarily to check). I ran securityadmin manually and it was successful, then reloaded the configuration.
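For reference, the checks I ran looked roughly like the following (with security temporarily disabled; `localhost:9200` stands in for one of our nodes):

```shell
# Overall cluster health -- this is where the status still shows "red"
curl -s "http://localhost:9200/_cluster/health?pretty"

# Confirm the security index itself exists and is green
curl -s "http://localhost:9200/_cat/indices/.opendistro_security?v"

# With security re-enabled, the same calls need the admin client certs, e.g.:
# curl -s --cacert elk-admin-root-ca.pem --cert elk-admin-crt.pem \
#   --key elk-admin-key.pem "https://localhost:9200/_cluster/health?pretty"
```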

[root@elasticsearch-opendistro-es-master-5ccdf48f59-4kdkk tools]# sh securityadmin.sh -cd ../securityconfig/ -cert /usr/share/elasticsearch/config/admin-certs/elk-admin-crt.pem -cacert /usr/share/elasticsearch/config/admin-certs/elk-admin-root-ca.pem  -key /usr/share/elasticsearch/config/admin-certs/elk-admin-key.pem -icl -arc
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as <blah>
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: RED
Number of nodes: 19
Number of data nodes: 6
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
Will update 'security/config' with ../securityconfig/config.yml 
   SUCC: Configuration for 'config' created or updated
Will update 'security/roles' with ../securityconfig/roles.yml 
   SUCC: Configuration for 'roles' created or updated
Will update 'security/rolesmapping' with ../securityconfig/roles_mapping.yml 
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'security/internalusers' with ../securityconfig/internal_users.yml 
   SUCC: Configuration for 'internalusers' created or updated
Will update 'security/actiongroups' with ../securityconfig/action_groups.yml 
   SUCC: Configuration for 'actiongroups' created or updated
Done with success
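The reload I mentioned was just securityadmin again with its reload flag, roughly like this (same certs and paths as the update run above; I’m assuming `-rl` is still the right flag for telling the nodes to reload the config):

```shell
# Ask all nodes to reload the security configuration from the index,
# using the same admin certs as the update run above
sh securityadmin.sh -rl \
  -cert /usr/share/elasticsearch/config/admin-certs/elk-admin-crt.pem \
  -cacert /usr/share/elasticsearch/config/admin-certs/elk-admin-root-ca.pem \
  -key /usr/share/elasticsearch/config/admin-certs/elk-admin-key.pem \
  -icl
```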

Is there any way I can get past this? It’s a super weird scenario and I’d rather not lose our data.