Troubleshooting Security Plugin Issues

Hi, I’m trying to get set up with OIDC, but something is apparently misconfigured, and I’m getting no information from either the Kibana or the Elasticsearch logs. I’m running Open Distro on Amazon Linux 2.

I’m able to start ES and Kibana just fine with basic auth settings on, but as soon as I try to enable OIDC, it breaks.

What I’m seeing on the Kibana page is:
Red plugin:opendistro_security@7.1.1 An error occurred during initialisation, please check the logs.

The Kibana logs look like this:
{"type":"log","@timestamp":"2019-07-30T13:18:53Z","tags":["status","plugin:opendistro_security@7.1.1","error"],"pid":7039,"state":"red","message":"Status changed from yellow to red - An error occurred during initialisation, please check the logs.","prevState":"yellow","prevMsg":"'' is set to false, cookies are transmitted over unsecure HTTP connection. Consider using HTTPS and set this key to 'true'"}
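When Kibana logs as a JSON stream like the above, the security-plugin status errors can be filtered out with grep. This is just a sketch; the sample line below is abridged from the log entry above, and the log path is an assumption — point it at wherever your Kibana output actually goes (stdout, journald, or a log file).

```shell
# Sample line from Kibana's JSON log stream (abridged from the entry above).
# /tmp/kibana-sample.log is a stand-in path for this sketch.
cat <<'EOF' > /tmp/kibana-sample.log
{"type":"log","@timestamp":"2019-07-30T13:18:53Z","tags":["status","plugin:opendistro_security@7.1.1","error"],"pid":7039,"state":"red","message":"Status changed from yellow to red"}
EOF
# Keep only lines tagged as errors that mention the security plugin.
grep '"error"' /tmp/kibana-sample.log | grep 'opendistro_security'
```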

I’ve tried to increase the log4j log levels for the security plugin in Elasticsearch, but it’s not giving me any details:

logger.opendistro_security.level = trace
logger.opendistro_security.appenderRef.rolling.ref = rolling
logger.opendistro_security.rolling_old.ref = rolling_old
logger.opendistro_security.additivity = false

The elasticsearch logs:
[2019-07-30T13:32:18,716][INFO ][o.e.c.r.a.AllocationService] [i-0c13e256e02e7f817] updating number_of_replicas to [1] for indices [.opendistro_security, .kibana_1]
[2019-07-30T13:32:18,749][INFO ][o.e.c.s.MasterService ] [i-0c13e256e02e7f817] node-join[{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{}{}{rack=eu-central-1b} join existing leader], term: 12, version: 140, reason: added {{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{}{}{rack=eu-central-1b},}
[2019-07-30T13:32:18,909][INFO ][o.e.c.s.ClusterApplierService] [i-0c13e256e02e7f817] added {{i-06862dd041ed4261f}{Drx_0QEjSpa0D7f-kCt22Q}{SBa4JhzZRviEuiuSkq3dpQ}{}{}{rack=eu-central-1b},}, term: 12, version: 140, reason: Publication{term=12, version=140}
[2019-07-30T13:32:20,322][INFO ][o.e.c.r.a.AllocationService] [i-0c13e256e02e7f817] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.opendistro_security][0]] ...]).

I suspect that something is wrong in my config, but I’ve made sure to double- and triple-check against what others have configured in both kibana.yml and the security configuration. Also, since I’m not seeing any logs with respect to the failure, I won’t bother you with config details.

My question is this: where are the logs I am asked to look into by Kibana?


I encountered the same issue. Looking at the source code of the Kibana plugin, the error “Red plugin:opendistro_security@7.1.1 An error occurred during initialisation, please check the logs.” is related to a wrong or unreachable OIDC discovery URL, or to trust issues with the certificate used by your OIDC provider.

Thanks, yes, I suspected as much, but wasn’t able to verify it without any logs. I now have (a lot of) logs, since I fixed my log4j config (the appenderRef line for rolling_old was wrong):

logger.opendistro_security.level = trace
logger.opendistro_security.appenderRef.rolling.ref = rolling
logger.opendistro_security.appenderRef.rolling_old.ref = rolling_old
logger.opendistro_security.additivity = false

By the way, there is not much logging on the Kibana side (Node.js) when the setup of the SSO plugin occurs and fails at the stage you mentioned.

I figured as much, sadly. I’ve now tried several configuration changes, but all to no avail. Here is the config I am trying. The kibana.yml is:
elasticsearch.hosts: https://localhost:9200
elasticsearch.ssl.verificationMode: none # to exclude hostname verification issues for now.
elasticsearch.username: ...
elasticsearch.password: ...
elasticsearch.requestHeadersWhitelist: ["Authorization", "security_tenant", "securitytenant"] # having both whitelisted for now.

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]

opendistro_security.auth.type: "openid"
# Kibana is running in a private VPC that can access that endpoint and
# would also accept the certificate - it's properly signed.
opendistro_security.openid.connect_url: ""
opendistro_security.openid.client_id: "..."
opendistro_security.openid.client_secret: "..."
opendistro_security.openid.scope: "openid profile email"
opendistro_security.openid.base_redirect_url: "..."
opendistro_security.cookie.password: "..."

the security configuration is:


_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # left as-is for now.
        remoteIpHeader: 'x-forwarded-for'
    authc:
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: internal
      openid_auth_domain:
        description: "Authenticate via OpenID"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: openid
          challenge: false
          config:
            openid_connect_idp:
              enable_ssl: true
              verify_hostnames: false # to exclude hostname verification issues for now.
        authentication_backend:
          type: noop

I’m using the demo security setup from ODES (Open Distro for Elasticsearch), so the opendistro-relevant section of my elasticsearch.yml looks like this:

opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.nodes_dn:
  - ",OU=node,O=node,L=test,DC=de"
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]

cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3

Hi, the way to check connectivity between Kibana and your IdP is to run a curl command from the Kibana host / container.
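A minimal sketch of that check, assuming a placeholder discovery URL (substitute your IdP’s actual .well-known endpoint, and your CA file path):

```shell
# Placeholder IdP discovery URL -- replace with your own.
IDP_URL="https://idp.example.com/.well-known/openid-configuration"

# Basic reachability and TLS handshake check; -v shows the certificate chain.
curl -v "$IDP_URL"

# If the IdP uses a private CA, pass it explicitly so curl can verify the chain.
curl -v --cacert /path/to/root-ca.pem "$IDP_URL"
```

A successful check returns the JSON discovery document (issuer, authorization_endpoint, token_endpoint, …); a certificate error here points at the trust issue mentioned above.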


If the CA is not trusted by default on your Kibana host / container, you can set the property opendistro_security.openid.root_ca: to point at your CA PEM file, or add the CA to /etc/pki/ca-trust/source/anchors/ and execute an update-ca-trust so that Node.js trusts it.
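The second option sketched out, assuming an RPM-based host such as Amazon Linux 2 (the certificate filename is a placeholder):

```shell
# Copy the IdP's CA certificate into the system trust anchors
# (path is the standard location on RPM-based distros).
sudo cp root-ca.pem /etc/pki/ca-trust/source/anchors/

# Rebuild the consolidated system trust store.
sudo update-ca-trust extract
```

Restart Kibana afterwards so the Node.js process picks up the updated trust store.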