Using PKI certificate for kibanaserver user: 'Tenant global_tenant is not allowed for this user'

I have been making progress configuring security in ODFE (1.13.2). I have successfully set up LDAP and mapped a user (testadmin) in a specific group (odfe-admins) to the built-in "all_access" role. By successful, I mean that I can reach the kibana webpage, log in, and see the UI, as well as use the security admin tool to apply the security settings.

My next step is to move the kibana connection to elasticsearch over to PKI. For the moment, I have kibana and elasticsearch (both installed from the ODFE repos on CentOS 8) on the same machine. That seems to have gone pretty well after a few easily fixed issues. (NOTE: I have replaced my actual domain with testdomain.com below, but I am using the real domain in my configs.) In kibana.yml, I have changed:

elasticsearch.username: kibanaserver
elasticsearch.password: kibanaserver

to

elasticsearch.ssl.certificate: /etc/kibana/ssl/kibana.crt
elasticsearch.ssl.key: /etc/kibana/ssl/kibana.key
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/ssl/ca.crt"]
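For client certificate auth to work at all, the REST layer on the elasticsearch side also has to accept client certs. A minimal sketch of the relevant elasticsearch.yml settings, assuming the standard ODFE PEM setup (file names are placeholders for my actual certs):

opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: node.pem
opendistro_security.ssl.http.pemkey_filepath: node-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
# OPTIONAL accepts both password and client-cert connections; REQUIRE forces certs
opendistro_security.ssl.http.clientauth_mode: OPTIONAL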

The CN of the certificate used above is testodfe.testdomain.com. In roles_mapping.yml, I have changed the kibana_server mapping to the following:

kibana_server:
  reserved: true
  hosts: ['testodfe.testdomain.com']
  users:
  - "kibanaserver"
  - "testodfe.testdomain.com"

When I restart kibana with the above settings in place (and, of course, after running securityadmin.sh to push the security config), I immediately start getting this in the ES cluster log:

[2021-05-29T15:16:25,938][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com
[2021-05-29T15:16:28,439][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com
[2021-05-29T15:16:30,942][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [testodfe.testdomain.com] Tenant global_tenant is not allowed for user testodfe.testdomain.com

It seems like it is finding the right user, since the log correctly identifies 'testodfe.testdomain.com'. As a test, I checked what error I get with simple authentication and a bad user (I get 'Authentication finally failed for kibanaserver2' instead). If I make the same kind of intentional misconfiguration with PKI, changing the roles_mapping entry to some other host (testodfe2.testdomain.com, for example), I get:

No cluster-level perm match for User [name=testodfe2.testdomain.com, backend_roles=[], requestedTenant=null] Resolved [aliases=[*], allIndices=[*], types=[*], originalRequested=[*], remoteIndices=[]] [Action [cluster:monitor/nodes/info]] [RolesChecked []]. No permissions for [cluster:monitor/nodes/info]

So it seems like it is correctly reading the CN from the cert and matching the role in roles_mapping. However, I still get the 'Tenant global_tenant is not allowed for user' error, even though I believe it is matching the exact same mapping that the simple-authentication kibanaserver user matches.

For reference, my authc in config.yml looks like this:

      clientcert_auth_domain:
        description: "Authenticate via SSL client certificates"
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          type: clientcert
          config:
            username_attribute: cn # optional; if omitted, the DN becomes the username
          challenge: false
        authentication_backend:
          type: noop
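For reference, that block sits under the usual config.dynamic.authc tree in config.yml; a sketch of the surrounding structure, assuming the v2 config format:

_meta:
  type: "config"
  config_version: 2
config:
  dynamic:
    authc:
      clientcert_auth_domain:
        # ... the block shown above ...
      # ... my other auth domains (basic, LDAP) ...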

I am running out of ideas as to what is causing the issue. Does anyone have any suggestions? Thanks!

Hi @justme, it seems some usernames are perhaps hardcoded somewhere, because I tried your use case and hit the same error. The workaround was to create a role and map user testodfe2.testdomain.com to it. See below:

test:
  tenant_permissions:
    - tenant_patterns:
      - '*'
      allowed_actions:
        - 'kibana_all_write'
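And then map the user to it in roles_mapping.yml, something along these lines (substituting your cert's CN):

test:
  users:
  - "testodfe2.testdomain.com"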

Hope this helps


Thanks for the help! I tried that, but I am still having issues. To see if I could get it to work at all, I first went overboard and added user 'testodfe.testdomain.com' to all_access in roles_mapping.yml, and that appeared to work (or at least the error dropped from the log). However, when I added your suggested role to roles.yml (with a matching 'test' entry in roles_mapping), I still get a repetitive error on restart of kibana:

[2021-06-08T15:52:21,277][INFO ][c.a.o.s.p.PrivilegesEvaluator] [testodfe.testdomain.com] No cluster-level perm match for User [name=testodfe.testdomain.com, backend_roles=[], requestedTenant=null] Resolved [aliases=[*], allIndices=[*], types=[*], originalRequested=[*], remoteIndices=[]] [Action [cluster:monitor/nodes/info]] [RolesChecked [test]]. No permissions for [cluster:monitor/nodes/info]

Is there a chance that the test role you gave needs extra permissions? Since the kibana_server role is hard-coded and isn't in roles.yml, I can't find the exact permissions it was originally loaded with.

Did you map testodfe.testdomain.com to both kibana_server and the test role I sent? The additional tenant permissions should be added on top of the kibana_server access.
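In other words, keep the kibana_server mapping and add the test mapping alongside it, something like:

kibana_server:
  reserved: true
  users:
  - "kibanaserver"
  - "testodfe.testdomain.com"

test:
  users:
  - "testodfe.testdomain.com"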


That did it! Thanks for the help. I had replaced the roles_mapping entry rather than augmenting it.