Securityadmin.sh fails to update cluster config when using kirk certificates

Hi.

I am attempting to stand up a cluster of Elasticsearch nodes on Docker. The nodes all run the same AMI: Ubuntu on Intel in AWS. Every node spins up and the cluster assembles properly. The amazing Cerebro tool can talk to a master using the default admin/admin credentials and works as I expect. The same is true of Kibana: admin/admin gets me in!

My problem comes when I try to configure anything security related. I am able to log in to Kibana, but as soon as I try to manage anything in the security section, I am immediately logged out and redirected to the URL /login?type=basicauthLogout#?_g=().

The only change I am attempting to make right now is to the admin/admin credentials, but when doing so I get stuck with this error:

FAIL: Configuration for 'config' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]

I am using the demo certs that come with ODES.

When I try to run securityadmin.sh from within one of the master node containers:

[root@com config]# pwd
/usr/share/elasticsearch/config
[root@com config]# ls -lah
total 60K
drwxrwxr-x 1 elasticsearch root 4.0K May 29 23:38 .
drwxrwxr-x 1 root          root 4.0K May 29 22:41 ..
drwxr-x--- 2 elasticsearch root 4.0K May 29 22:41 discovery-ec2
-rw-rw---- 1 elasticsearch root  207 May 29 22:41 elasticsearch.keystore
-r--r----- 1 root          root 5.0K May 29 22:26 elasticsearch.yml
-r--r----- 1 root          root 1.7K May 29 22:26 esnode-key.pem
-r--r----- 1 root          root 1.7K May 29 22:26 esnode.pem
-rw-rw---- 1 elasticsearch root 3.6K Apr  2 15:56 jvm.options
-r--r----- 1 root          root 1.7K May 29 22:26 kirk-key.pem
-r--r----- 1 root          root 1.6K May 29 22:26 kirk.pem
-rw-rw-r-- 1 elasticsearch root  285 Apr 15 21:30 log4j2.properties
drwxr-x--- 2 elasticsearch root 4.0K May 29 22:41 repository-s3
-r--r----- 1 root          root 1.5K May 29 22:26 root-ca.pem
[root@com config]# /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert ./root-ca.pem -cert ./kirk.pem -key ./kirk-key.pem
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: MyClusterHere
Clusterstate: GREEN
Number of nodes: 7
Number of data nodes: 4
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
Will update 'security/config' with ../plugins/opendistro_security/securityconfig/config.yml
   FAIL: Configuration for 'config' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/roles' with ../plugins/opendistro_security/securityconfig/roles.yml
   FAIL: Configuration for 'roles' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/rolesmapping' with ../plugins/opendistro_security/securityconfig/roles_mapping.yml
   FAIL: Configuration for 'rolesmapping' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/internalusers' with ../plugins/opendistro_security/securityconfig/internal_users.yml
   FAIL: Configuration for 'internalusers' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
Will update 'security/actiongroups' with ../plugins/opendistro_security/securityconfig/action_groups.yml
   FAIL: Configuration for 'actiongroups' failed because of ElasticsearchSecurityException[no permissions for [] and User [name=CN=kirk,OU=client,O=client,L=test,C=de, roles=[], requestedTenant=null]]
FAIL: Expected 7 nodes to return response, but got only 0
Done with failures
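A "no permissions" failure with roles=[] on a connection that otherwise authenticates is the classic symptom of some node not listing the DN under opendistro_security.authcz.admin_dn. A minimal sketch for checking that on each node (container name is a placeholder; paths assumed from the compose file below):

```shell
# Sketch: verify a node's elasticsearch.yml actually lists the kirk DN
# as an admin DN. Run against the config file inside each container.
has_admin_dn() {
  grep -q 'CN=kirk,OU=client,O=client,L=test,C=de' "$1"
}

# Usage on each node (container name is a placeholder):
#   docker exec <container> grep -A3 'authcz.admin_dn' \
#     /usr/share/elasticsearch/config/elasticsearch.yml
```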

Whoami:

[root@com config]# /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert ./root-ca.pem -cert ./kirk.pem -key ./kirk-key.pem -w
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as CN=kirk,OU=client,O=client,L=test,C=de
{
  "whoami" : {
    "dn" : "CN=kirk,OU=client,O=client,L=test,C=de",
    "is_admin" : true,
    "is_authenticated" : true,
    "is_node_certificate_request" : false
  }
}

This would seem to indicate that CN=kirk,OU=client,O=client,L=test,C=de is an admin user.
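One thing worth double-checking is that the DN string in elasticsearch.yml matches the certificate subject byte-for-byte in RFC2253 form, since admin_dn matching is exact. A small sketch of extracting it (cert path assumed from the volume mounts below):

```shell
# Print a PEM cert's subject DN in the RFC2253 form that
# opendistro_security.authcz.admin_dn must match exactly.
# (sed strips the "subject=" prefix, with or without a space.)
cert_dn() {
  openssl x509 -subject -nameopt RFC2253 -noout -in "$1" | sed 's/^subject= *//'
}

# Usage (path assumed):
#   cert_dn /usr/share/elasticsearch/config/kirk.pem
```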

The docker-compose.yaml file that configures Docker on each node in the cluster:

version: "3.5"
services:
  elasticsearch:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        ODES_VERSION: "0.9.0"

    ####
    ## Configuration for Elasticsearch is split across several places depending on what needs to change and when:
    ##  - settings that rarely change are defined in elasticsearch.yml
    ##  - the `environment` directive is for settings that are handy to change without baking a new Docker image
    ##    (though a new AMI would be a good idea)
    ##  - the `env_file` entries are for settings that change per-cluster at run time.
    ##  A cloud-init shell script updates these env files as directed by Terraform.
    ####

    env_file:
     - ./cluster-id.env
     - ./node-role.env
     - ./jvm-opts.env
     - ./network.env

    ulimits:
      # See: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536

    
    volumes:
      # We have a few TLS things to take care of...
      # See: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker-security/#sample-docker-compose-file
      ##
      - ./tls/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
      - ./tls/esnode.pem:/usr/share/elasticsearch/config/esnode.pem
      - ./tls/esnode-key.pem:/usr/share/elasticsearch/config/esnode-key.pem
      - ./tls/kirk.pem:/usr/share/elasticsearch/config/kirk.pem
      - ./tls/kirk-key.pem:/usr/share/elasticsearch/config/kirk-key.pem

      # And don't forget the config file
      - ./elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml

      # And the security config files
      ##
      # Users/password hashes
      - ./security/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml

      # Where we validate user creds against
      - ./security/config.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml

      - type: volume
        source: odes-data
        target: /usr/share/elasticsearch/data
        volume:
          nocopy: true
      - type: volume
        source: odes-logs
        target: /usr/share/elasticsearch/logs
        volume:
          nocopy: true

    ports:
      # ES API is over 9200
      - target: 9200
        published: 9200
        protocol: tcp
        mode: host
      # Because we are not in a overlay swarm network, 9300 must also be exposed for inter-cluster comms
      - target: 9300
        published: 9300
        protocol: tcp
        mode: host
      # ODES Performance Analyzer is 9600
      - target: 9600
        published: 9600
        protocol: tcp
        mode: host


volumes:
  # Useful way to get bind behavior with volume features
  # See: https://stackoverflow.com/questions/39496564/docker-volume-custom-mount-point
  odes-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /mnt/odes/data
  odes-logs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /var/log/elasticsearch
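One thing the bind-backed volumes above assume is that the host directories already exist and are writable by the in-container elasticsearch user (uid 1000 in the ODES image, an assumption worth verifying with `id` inside the container). A minimal provisioning sketch:

```shell
# Create a host directory for a bind-backed named volume if it is missing.
# When running as root, also chown it to the in-container uid, e.g.:
#   chown 1000:1000 /mnt/odes/data
ensure_dir() {
  [ -d "$1" ] || mkdir -p "$1"
}

# Usage (host paths from the driver_opts above):
#   ensure_dir /mnt/odes/data
#   ensure_dir /var/log/elasticsearch
```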

The Dockerfile:

# See: https://docs.docker.com/engine/reference/builder/
##
# We need to inject a few ES plugins into the ODES image
#   discovery-ec2, repository-s3
#
# We do this here.
# See: https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker/#run-with-custom-plugins
##
ARG ODES_VERSION=latest
FROM amazon/opendistro-for-elasticsearch:${ODES_VERSION:-latest} AS base

# Plugins
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch discovery-ec2
RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install --batch repository-s3

The *.env files mentioned in docker-compose.yaml:

root@com:/opt/odes/docker# cat *.env

cluster.name=<myClusterNameHere>
discovery.ec2.tag.es-cluster=<myClusterNameHere>
discovery.ec2.availability_zones=us-west-1b,us-west-1c
discovery.ec2.groups=<some-sg-id-here>

discovery.zen.minimum_master_nodes=3

# Due to gaps in the documentation / ES code, this must be set explicitly for all regions other than us-east-1
# See: https://discuss.elastic.co/t/discovery-ec2-plugin-always-tries-to-ping-localhost-never-finds-the-nodes-that-it-should/160433/9
# This will be necessary until this is merged: https://github.com/elastic/elasticsearch/pull/27925
discovery.ec2.endpoint=ec2.us-west-1.amazonaws.com


ES_JAVA_OPTS=-Xms3988m -Xmx3988m

network.publish_host=<internalIpHere>
network.bind_host=0.0.0.0
node.master=true
node.data=false
node.ingest=false

And the elasticsearch.yml (this file is identical on all nodes in the cluster):


action.destructive_requires_name: true
indices.fielddata.cache.size: 1% # default is unbounded


discovery.ec2.host_type: private_dns


plugin.mandatory: discovery-ec2
discovery.zen.hosts_provider: ec2

cloud.node.auto_attributes: true
cluster.routing.allocation.awareness.attributes: aws_availability_zone

discovery.ec2.protocol: https

bootstrap.memory_lock: true

opendistro_security.ssl.http.enabled_ciphers:
  - "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"
  - "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384"
  - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
  - "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"
  - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256"
  - "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384"
  - "TLS_DHE_RSA_WITH_AES_128_CBC_SHA256"
  - "TLS_DHE_RSA_WITH_AES_256_CBC_SHA256"

opendistro_security.ssl.http.enabled_protocols:
  # The gold standard, aim for this, first!
  - "TLSv1.3"

  # 1.2 is needed for consul-agent doing the health-checks....
  - "TLSv1.2"

# For now, we use the bundled certs that ship in the ODES images
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem

# Currently, using the same ODES demo cert for every node, so this needs to stay off
opendistro_security.ssl.transport.enforce_hostname_verification: false

opendistro_security.ssl.http.enabled: true

# For now, we use the bundled certs that ship in the ODES images
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem

# For PoC, we allow the demo certs
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true

# the securityadmin.sh script is how we interface with the cluster to change security related things
# We will need to present a certificate with this set of properties...
##
opendistro_security.authcz.admin_dn:
# root@com:/opt/odes/docker/tls# openssl x509 -subject -nameopt RFC2253 -noout -in kirk.pem
#   subject=CN=kirk,OU=client,O=client,L=test,C=de
  - "CN=kirk,OU=client,O=client,L=test,C=de"


opendistro_security.nodes_dn:
  - 'CN=node-0.example.com,OU=node,O=node,L=test,DC=de'


I have made no changes to securityconfig/config.yml and have only updated the hashes in securityconfig/internal_users.yml. I was following the documentation here:

https://opendistro.github.io/for-elasticsearch-docs/docs/install/docker-security/#change-passwords-for-read-only-users
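For reference, the security plugin ships a hash.sh next to securityadmin.sh for producing those bcrypt hashes. A hedged sketch, with a small sanity check that the output looks like bcrypt before pasting it into internal_users.yml (the password below is a placeholder):

```shell
# Sanity-check that a string looks like a bcrypt hash ($2a$/$2b$/$2y$ prefix
# followed by a two-digit cost factor).
is_bcrypt() {
  printf '%s' "$1" | grep -Eq '^\$2[aby]\$[0-9]{2}\$'
}

# Usage inside the container:
#   NEW_HASH=$(/usr/share/elasticsearch/plugins/opendistro_security/tools/hash.sh -p 'myNewPassword')
#   is_bcrypt "$NEW_HASH" && echo "hash looks sane"
```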

Docker info:

root@com:/opt/odes/docker/security# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77
 Built:             Sat May  4 02:35:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 01:59:36 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Environment info:

root@com:/opt/odes/docker/security# lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.2 LTS
Release:	18.04
Codename:	bionic

When the cluster is brand new and I run:

[root@com config]# /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cd ../plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert ./root-ca.pem -cert ./kirk.pem -key ./kirk-key.pem -r
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.7.1
Open Distro Security Version: 0.9.0.0
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: myClusterIdHere
Clusterstate: GREEN
Number of nodes: 7
Number of data nodes: 4
.opendistro_security index already exists, so we do not need to create one.
Will retrieve 'security/config' into ../plugins/opendistro_security/securityconfig/config_2019-May-30_00-34-40.yml
   SUCC: Configuration for 'config' stored in ../plugins/opendistro_security/securityconfig/config_2019-May-30_00-34-40.yml
Will retrieve 'security/roles' into ../plugins/opendistro_security/securityconfig/roles_2019-May-30_00-34-40.yml
   SUCC: Configuration for 'roles' stored in ../plugins/opendistro_security/securityconfig/roles_2019-May-30_00-34-40.yml
Will retrieve 'security/rolesmapping' into ../plugins/opendistro_security/securityconfig/roles_mapping_2019-May-30_00-34-40.yml
   SUCC: Configuration for 'rolesmapping' stored in ../plugins/opendistro_security/securityconfig/roles_mapping_2019-May-30_00-34-40.yml
Will retrieve 'security/internalusers' into ../plugins/opendistro_security/securityconfig/internal_users_2019-May-30_00-34-40.yml
   SUCC: Configuration for 'internalusers' stored in ../plugins/opendistro_security/securityconfig/internal_users_2019-May-30_00-34-40.yml
Will retrieve 'security/actiongroups' into ../plugins/opendistro_security/securityconfig/action_groups_2019-May-30_00-34-40.yml
   SUCC: Configuration for 'actiongroups' stored in ../plugins/opendistro_security/securityconfig/action_groups_2019-May-30_00-34-40.yml

I get:

[root@com securityconfig]# cat *_2019*.yml
---
UNLIMITED:
  readonly: true
  permissions:
  - "*"
INDICES_ALL:
  readonly: true
  permissions:
  - "indices:*"
ALL:
  readonly: true
  permissions:
  - "INDICES_ALL"
MANAGE:
  readonly: true
  permissions:
  - "indices:monitor/*"
  - "indices:admin/*"
CREATE_INDEX:
  readonly: true
  permissions:
  - "indices:admin/create"
  - "indices:admin/mapping/put"
MANAGE_ALIASES:
  readonly: true
  permissions:
  - "indices:admin/aliases*"
MONITOR:
  readonly: true
  permissions:
  - "INDICES_MONITOR"
INDICES_MONITOR:
  readonly: true
  permissions:
  - "indices:monitor/*"
DATA_ACCESS:
  readonly: true
  permissions:
  - "indices:data/*"
  - "CRUD"
WRITE:
  readonly: true
  permissions:
  - "indices:data/write*"
  - "indices:admin/mapping/put"
READ:
  readonly: true
  permissions:
  - "indices:data/read*"
  - "indices:admin/mappings/fields/get*"
DELETE:
  readonly: true
  permissions:
  - "indices:data/write/delete*"
CRUD:
  readonly: true
  permissions:
  - "READ"
  - "WRITE"
SEARCH:
  readonly: true
  permissions:
  - "indices:data/read/search*"
  - "indices:data/read/msearch*"
  - "SUGGEST"
SUGGEST:
  readonly: true
  permissions:
  - "indices:data/read/suggest*"
INDEX:
  readonly: true
  permissions:
  - "indices:data/write/index*"
  - "indices:data/write/update*"
  - "indices:admin/mapping/put"
  - "indices:data/write/bulk*"
GET:
  readonly: true
  permissions:
  - "indices:data/read/get*"
  - "indices:data/read/mget*"
CLUSTER_ALL:
  readonly: true
  permissions:
  - "cluster:*"
CLUSTER_MONITOR:
  readonly: true
  permissions:
  - "cluster:monitor/*"
CLUSTER_COMPOSITE_OPS_RO:
  readonly: true
  permissions:
  - "indices:data/read/mget"
  - "indices:data/read/msearch"
  - "indices:data/read/mtv"
  - "indices:admin/aliases/exists*"
  - "indices:admin/aliases/get*"
  - "indices:data/read/scroll"
CLUSTER_COMPOSITE_OPS:
  readonly: true
  permissions:
  - "indices:data/write/bulk"
  - "indices:admin/aliases*"
  - "indices:data/write/reindex"
  - "CLUSTER_COMPOSITE_OPS_RO"
MANAGE_SNAPSHOTS:
  readonly: true
  permissions:
  - "cluster:admin/snapshot/*"
  - "cluster:admin/repository/*"
---
opendistro_security:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: "192\\.168\\.0\\.10|192\\.168\\.0\\.11"
        remoteIpHeader: "x-forwarded-for"
        proxiesHeader: "x-forwarded-by"
    authc:
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          type: "kerberos"
          challenge: true
          config:
            krb_debug: false
            strip_realm_from_principal: true
        authentication_backend:
          type: "noop"
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 4
        http_authenticator:
          type: "basic"
          challenge: true
        authentication_backend:
          type: "intern"
      proxy_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          type: "proxy"
          challenge: false
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: "noop"
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          type: "jwt"
          challenge: false
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
            jwt_url_parameter: null
            roles_key: null
            subject_key: null
        authentication_backend:
          type: "noop"
      clientcert_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          type: "clientcert"
          config:
            username_attribute: "cn"
          challenge: false
        authentication_backend:
          type: "noop"
      ldap:
        http_enabled: false
        transport_enabled: false
        order: 5
        http_authenticator:
          type: "basic"
          challenge: false
        authentication_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            bind_dn: null
            password: null
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(sAMAccountName={0})"
            username_attribute: null
    authz:
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            bind_dn: null
            password: null
            rolebase: "ou=groups,dc=example,dc=com"
            rolesearch: "(member={0})"
            userroleattribute: null
            userrolename: "disabled"
            rolename: "cn"
            resolve_nested_roles: true
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(uid={0})"
      roles_from_another_ldap:
        enabled: false
        authorization_backend:
          type: "ldap"
---
admin:
  readonly: true
  hash: "$2a$12$VcCDgh2NDk07JGN0rjGbM.Ad41qVR/YFJcgHp0UGns5JDymv..TOG"
  roles:
  - "admin"
  attributes:
    attribute1: "value1"
    attribute2: "value2"
    attribute3: "value3"
logstash:
  hash: "$2a$12$u1ShR4l4uBS3Uv59Pa2y5.1uQuZBrZtmNfqB3iM/.jL0XoV9sghS2"
  roles:
  - "logstash"
kibanaserver:
  readonly: true
  hash: "$2y$12$fiB1irSXQvaH52Qt5Z1jyukO0S01gFYiE.SWEjY8dJMm4bPYN8hde"
kibanaro:
  hash: "$2a$12$JJSXNfTowz7Uu5ttXfeYpeYE0arACvcwlPBStB1F.MI7f0U9Z4DGC"
  roles:
  - "kibanauser"
  - "readall"
readall:
  hash: "$2a$12$ae4ycwzwvLtZxwZ82RmiEunBbIPiAmGZduBAjKN0TXdwQFtCwARz2"
  roles:
  - "readall"
snapshotrestore:
  hash: "$2y$12$DpwmetHKwgYnorbgdvORCenv4NAK8cPUg8AI6pxLCuWf/ALc0.v7W"
  roles:
  - "snapshotrestore"
---
all_access:
  readonly: true
  cluster:
  - "UNLIMITED"
  indices:
    '*':
      '*':
      - "UNLIMITED"
  tenants:
    admin_tenant: "RW"
readall:
  readonly: true
  cluster:
  - "CLUSTER_COMPOSITE_OPS_RO"
  indices:
    '*':
      '*':
      - "READ"
readall_and_monitor:
  cluster:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS_RO"
  indices:
    '*':
      '*':
      - "READ"
kibana_user:
  readonly: true
  cluster:
  - "INDICES_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  indices:
    ?kibana:
      '*':
      - "MANAGE"
      - "INDEX"
      - "READ"
      - "DELETE"
    ?kibana-6:
      '*':
      - "MANAGE"
      - "INDEX"
      - "READ"
      - "DELETE"
    ?kibana_*:
      '*':
      - "MANAGE"
      - "INDEX"
      - "READ"
      - "DELETE"
    ?tasks:
      '*':
      - "INDICES_ALL"
    ?management-beats:
      '*':
      - "INDICES_ALL"
    '*':
      '*':
      - "indices:data/read/field_caps*"
      - "indices:data/read/xpack/rollup*"
      - "indices:admin/mappings/get*"
      - "indices:admin/get"
kibana_server:
  readonly: true
  cluster:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  - "cluster:admin/xpack/monitoring*"
  - "indices:admin/template*"
  - "indices:data/read/scroll*"
  indices:
    ?kibana:
      '*':
      - "INDICES_ALL"
    ?kibana-6:
      '*':
      - "INDICES_ALL"
    ?kibana_*:
      '*':
      - "INDICES_ALL"
    ?reporting*:
      '*':
      - "INDICES_ALL"
    ?monitoring*:
      '*':
      - "INDICES_ALL"
    ?tasks:
      '*':
      - "INDICES_ALL"
    ?management-beats*:
      '*':
      - "INDICES_ALL"
    '*':
      '*':
      - "indices:admin/aliases*"
logstash:
  cluster:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  - "indices:admin/template/get"
  - "indices:admin/template/put"
  indices:
    logstash-*:
      '*':
      - "CRUD"
      - "CREATE_INDEX"
    '*beat*':
      '*':
      - "CRUD"
      - "CREATE_INDEX"
manage_snapshots:
  cluster:
  - "MANAGE_SNAPSHOTS"
  indices:
    '*':
      '*':
      - "indices:data/write/index"
      - "indices:admin/create"
security_rest_api_access:
  readonly: true
kibana_read_only:
  readonly: true
own_index:
  cluster:
  - "CLUSTER_COMPOSITE_OPS"
  indices:
    ${user_name}:
      '*':
      - "INDICES_ALL"
---
all_access:
  readonly: true
  backendroles:
  - "admin"
logstash:
  backendroles:
  - "logstash"
kibana_server:
  readonly: true
  users:
  - "kibanaserver"
kibana_user:
  backendroles:
  - "kibanauser"
readall:
  readonly: true
  backendroles:
  - "readall"
manage_snapshots:
  readonly: true
  backendroles:
  - "snapshotrestore"
own_index:
  users:
  - "*"
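Since -r writes timestamped backups next to the local files (as in the output above), a quick drift check before re-running the upload can be sketched as follows (filenames follow the pattern from the -r run; adjust to taste):

```shell
# Show what differs between the local file to be uploaded and the copy
# retrieved from the live cluster; exits 0 when they are identical.
config_drift() {
  diff -u "$1" "$2"
}

# Usage, from the securityconfig directory:
#   config_drift internal_users.yml internal_users_2019-May-30_00-34-40.yml
```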

But I am still getting a mostly blank page when I attempt to load the app/security-configuration URL.