Kibana + Kerberos auth is broken in 1.4.0?

Hi,
I’ve upgraded my OpenDistro ES + Kibana setup from 1.1.0 to 1.4.0 and noticed that querying data in Kibana no longer works:

  • https://elastic-server/_opendistro/_security/authinfo - WORKS
  • https://kibana-server/app/security-accountinfo - WORKS, shows my username and roles
  • https://kibana-server/app/kibana#/discover - NOT WORKING

Digging into HTTP packets between Kibana and Elasticsearch shows:

Authentication Exception :: {"path":"/_msearch","query":{"rest_total_hits_as_int":"true","ignore_throttled":"true"},"body":"{\"index\":\"jira\",\"ignore_unavailable\":true,\"preference\":1581456941567}\n{\"timeout\":\"30000ms\",\"version\":true,\"size\":500,\"sort\":[{\"date\":{\"order\":\"desc\",\"unmapped_type\":\"boolean\"}}],\"_source\":{\"excludes\":[]},\"aggs\":{\"2\":{\"date_histogram\":{\"field\":\"date\",\"fixed_interval\":\"30m\",\"time_zone\":\"America/Chicago\",\"min_doc_count\":1}}},\"stored_fields\":[\"*\"],\"script_fields\":{},\"docvalue_fields\":[{\"field\":\"Created\",\"format\":\"date_time\"},{\"field\":\"Resolved\",\"format\":\"date_time\"},{\"field\":\"Updated\",\"format\":\"date_time\"},{\"field\":\"created date\",\"format\":\"date_time\"},{\"field\":\"date\",\"format\":\"date_time\"},{\"field\":\"resolution date\",\"format\":\"date_time\"},{\"field\":\"updated date\",\"format\":\"date_time\"}],\"query\":{\"bool\":{\"must\":[],\"filter\":[{\"match_all\":{}},{\"range\":{\"date\":{\"format\":\"strict_date_optional_time\",\"gte\":\"2020-02-10T21:37:37.460Z\",\"lte\":\"2020-02-11T21:37:37.460Z\"}}}],\"should\":[],\"must_not\":[]}},\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"fragment_size\":2147483647}}\n","statusCode":401,"response":"{\"error\":{\"header\":{\"WWW-Authenticate\":\"Negotiate\"}}}","wwwAuthenticateDirective":"Negotiate"}
        at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)
        at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)
        at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
        at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)
        at IncomingMessage.emit (events.js:194:15)
        at endReadableNT (_stream_readable.js:1103:12)
        at process._tickCallback (internal/process/next_tick.js:63:19)

So the important part here is:

"statusCode":401,"response":"{\"error\":{\"header\":{\"WWW-Authenticate\":\"Negotiate\"}}}","wwwAuthenticateDirective":"Negotiate"}

Is there any solution for this? It worked fine before the upgrade!

We will look into this. Can you please provide your Kerberos setup instructions, along with your kibana.yml and securityconfig config.yml?

Thanks, Hardik!

So here is my setup. I’m deploying via Docker containers on two VMs in an Active Directory environment:

  • metrics-elastic (amazon/opendistro-for-elasticsearch:1.4.0)
  • metrics-kibana (amazon/opendistro-for-elasticsearch-kibana:1.4.0)

Java options: -Xmx4g -Xms4g -Dsun.security.krb5.rcache=none
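For completeness, in a Docker deployment those Java options are passed through the standard ES_JAVA_OPTS environment variable. A minimal docker-compose sketch (the image and options come from the setup above; service and volume paths are illustrative):

```yaml
services:
  metrics-elastic:
    image: amazon/opendistro-for-elasticsearch:1.4.0
    environment:
      # -Dsun.security.krb5.rcache=none disables the JDK Kerberos replay
      # cache; without it, the rapid request stream proxied by Kibana can
      # trip replay detection.
      - "ES_JAVA_OPTS=-Xms4g -Xmx4g -Dsun.security.krb5.rcache=none"
    volumes:
      # krb5.conf and the HTTP/ keytab, referenced from elasticsearch.yml
      - ./kerberos:/usr/share/elasticsearch/config/kerberos:ro
      - ./certs:/usr/share/elasticsearch/config/certs:ro
```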

elasticsearch.yml

######## OpenDistro for Elasticsearch Security Configuration ########
opendistro_security.ssl.transport.pemcert_filepath: certs/esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: certs/esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: certs/root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: certs/esnode.pem
opendistro_security.ssl.http.pemkey_filepath: certs/esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: certs/root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.kerberos.krb5_filepath: kerberos/krb5.conf
opendistro_security.kerberos.acceptor_keytab_filepath: kerberos/elastic.keytab
opendistro_security.kerberos.acceptor_principal: HTTP/metrics-elastic.corp.com

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.audit.config.index: "'security-auditlog-'YYYY"
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access", "super"]
opendistro_security.unsupported.restapi.allow_securityconfig_modification: true
opendistro_security.authcz.rest_impersonation_user.admin: ["*"]

cluster.routing.allocation.disk.threshold_enabled: false

http.max_header_size: 16kb
http.max_content_length: 100mb

Directory /usr/share/elasticsearch/config/kerberos contains proper krb5.conf and keytab files. No issues with Kerberos on Elasticsearch side - works just as expected.
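A quick way to double-check that from the command line (assuming curl built with SPNEGO support and a valid ticket from kinit). The snippet only assembles and prints the two commands so the hostname and CA path are easy to swap for your own:

```shell
# Hostname and CA file are taken from the setup above; adjust as needed.
ES_URL="https://metrics-elastic.corp.com:9200"

# '--negotiate -u :' tells curl to authenticate via SPNEGO with the current ticket.
AUTHINFO_CMD="curl --negotiate -u : --cacert root-ca.pem ${ES_URL}/_opendistro/_security/authinfo"

# Replay the _msearch call that fails when it goes through Kibana:
MSEARCH_CMD="curl --negotiate -u : --cacert root-ca.pem -H 'Content-Type: application/x-ndjson' ${ES_URL}/_msearch --data-binary \$'{\"index\":\"jira\"}\n{\"size\":1,\"query\":{\"match_all\":{}}}\n'"

printf '%s\n' "$AUTHINFO_CMD" "$MSEARCH_CMD"
```

If the first command succeeds but the second fails with the same 401 + Negotiate, the problem is on the Elasticsearch side; if both succeed, the token is being lost between Kibana and Elasticsearch.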

kibana.yml

server.name: kibana
server.ssl.enabled: false
server.ssl.key: config/certs/esnode-key.pem
server.ssl.certificate: config/certs/esnode.pem
elasticsearch.ssl.certificateAuthorities: config/certs/root-ca.pem
elasticsearch.ssl.verificationMode: none
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
opendistro_security.auth.type: kerberos
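One line in there deserves a call-out: Kibana only forwards the request headers named in elasticsearch.requestHeadersWhitelist, so the Authorization entry is what carries the browser’s Negotiate token through to Elasticsearch. It is present here, so it isn’t the culprit in this setup, but anyone hitting the same 401 should verify it first:

```yaml
# Without "Authorization" in this list, Kibana strips the SPNEGO token and
# every proxied request (e.g. /_msearch) gets 401 + WWW-Authenticate: Negotiate.
elasticsearch.requestHeadersWhitelist: ["securitytenant", "Authorization"]
```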

securityconfig/config.yml

---
_meta:
  type: "config"
  config_version: 2
config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    kibana:
      multitenancy_enabled: true
      server_username: "kibanaserver"
      index: ".kibana"
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: "10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|192\\.168\\.\\d{1,3}\\\
          .\\d{1,3}|169\\.254\\.\\d{1,3}\\.\\d{1,3}|127\\.\\d{1,3}\\.\\d{1,3}\\.\\\
          d{1,3}|172\\.1[6-9]{1}\\.\\d{1,3}\\.\\d{1,3}|172\\.2[0-9]{1}\\.\\d{1,3}\\\
          .\\d{1,3}|172\\.3[0-1]{1}\\.\\d{1,3}\\.\\d{1,3}"
        remoteIpHeader: "X-Forwarded-For"
    authc:
      ldap:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          challenge: false
          type: "basic"
          config: {}
        authentication_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: false
            hosts:
            - "corp.com:3268"
            bind_dn: "CN=account,OU=Account,DC=corp,DC=com"
            password: "*********"
            userbase: "DC=corp,DC=com"
            usersearch: "(sAMAccountName={0})"
            username_attribute: "sAMAccountName"
        description: "Migrated from v6"
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          challenge: false
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        description: "Migrated from v6"
      kerberos_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          challenge: true
          type: "kerberos"
          config:
            krb_debug: false
            strip_realm_from_principal: true
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
    authz:
      roles_from_myldap:
        http_enabled: true
        transport_enabled: true
        authorization_backend:
          type: "ldap"
          config:
            enable_ssl: false
            verify_hostnames: false
            hosts:
            - "corp.com:3268"
            bind_dn: "CN=account,OU=Account,DC=corp,DC=com"
            password: "*******"
            rolebase: "OU=Groups,DC=corp,DC=com"
            rolesearch: "(member={0})"
            userrolename: "disabled"
            rolename: "sAMAccountName"
            resolve_nested_roles: false
            userbase: "DC=corp,DC=com"
            usersearch: "(sAMAccountName={0})"
            custom_attr_maxval_len: 0
            skip_users:
            - "admin"
            - "kibanaserver"
        description: "Migrated from v6"
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: false
    multi_rolespan_enabled: true
    hosts_resolver_mode: "ip-only"
    do_not_fail_on_forbidden_empty: false
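If it helps the investigation: to see why Elasticsearch rejects the Negotiate token that Kibana forwards, the kerberos auth domain’s krb_debug flag can be flipped on (same file as above, one value changed) and the config reloaded with securityadmin.sh:

```yaml
      kerberos_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          challenge: true
          type: "kerberos"
          config:
            krb_debug: true          # print GSS/SPNEGO details to the ES log
            strip_realm_from_principal: true
        authentication_backend:
          type: "noop"
          config: {}
```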