Proxy_auth login failure

Hi all,
I’m evaluating OpenSearch and trying to get proxy authentication working.

My Setup:

  • OpenSearch version: 1.2.0
  • Deployment method: kubernetes
  • 1 dedicated Coordinator node
  • 1 dedicated Master node
  • 1 dedicated Data node

I have the following config.yml for the security plugin:

---
_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: true
        internalProxies: '.*' # regex pattern
        remoteIpHeader:  'x-forwarded-for'
    authc:
      proxy_auth_domain:
        description: "Authenticate via proxy"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: proxy
          challenge: false
          config:
            user_header: proxy-uid
            roles_header: proxy-roles
        authentication_backend:
          type: noop
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern
    authz:
      <unmodified>
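As an aside on the xff block above: internalProxies: '.*' trusts every client that sends an x-forwarded-for header. A hedged sketch of a tighter pattern for production (the 10.0.0.0/8 range here is hypothetical, not from the original setup):

```yaml
xff:
  enabled: true
  # Hypothetical: only trust proxies whose remote address falls in 10.0.0.0/8
  internalProxies: '10\.\d{1,3}\.\d{1,3}\.\d{1,3}'
  remoteIpHeader: 'x-forwarded-for'
```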

The following commands fail:

$ curl -k https://opensearch -H 'proxy-uid: myUser' -H 'proxy-roles: admin'
Unauthorized
$ curl -k https://opensearch -H 'proxy-uid: myUser' -H 'proxy-roles: all_access'
Unauthorized

But this command succeeds:

$ curl -k https://opensearch -u admin:admin
{
  "name" : "opensearch-coordinator-0",
  "cluster_name" : "my-cluster",
  "cluster_uuid" : "...",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.2.0",
    "build_type" : "tar",
    "build_hash" : "c459282fd67ddb17dcc545ec9bcdc805880bcbec",
    "build_date" : "2021-11-22T16:57:18.360386Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}

When I check the audit log via the Dashboard, I see entries for my failed requests with audit_category set to FAILED_LOGIN. The audit log also lists the proxy-uid and proxy-roles headers. None of the coordinator, master, or data nodes logs anything to its own log.

Using the Dashboard I mapped myUser as a backend role on the all_access role, but that didn’t seem to help either.

Any help as to what might be wrong or how I can debug this would be appreciated.

@ngolpaye Could you share your opensearch.yml config file?

@pablo the opensearch.yml for all nodes is the default that comes with the 1.2.0 Docker images. I’ve included a copy of the file below for convenience.

The three node types are configured differently via environment variables:

Master Node

Variable                      Value
cluster.name                  my-cluster
node.name                     <value from kubernetes metadata.name, i.e. the pod name>
node.data                     false
node.ingest                   false
node.master                   true
discovery.seed_hosts          opensearch-discovery
cluster.initial_master_nodes  opensearch-master-0 (name of master pod)
bootstrap.memory_lock         false
ES_JAVA_OPTS                  -Xms4g -Xmx4g

Data Node

Variable                      Value
cluster.name                  my-cluster
node.name                     <value from kubernetes metadata.name, i.e. the pod name>
node.data                     true
node.ingest                   true
node.master                   false
discovery.seed_hosts          opensearch-discovery
cluster.initial_master_nodes  opensearch-master-0 (name of master pod)
bootstrap.memory_lock         false
ES_JAVA_OPTS                  -Xms4g -Xmx4g

Coordinator Node

Variable                      Value
cluster.name                  my-cluster
node.name                     <value from kubernetes metadata.name, i.e. the pod name>
node.data                     false
node.ingest                   false
node.master                   false
discovery.seed_hosts          opensearch-discovery
cluster.initial_master_nodes  opensearch-master-0 (name of master pod)
bootstrap.memory_lock         false
ES_JAVA_OPTS                  -Xms4g -Xmx4g
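The node.name value above comes from the Kubernetes downward API. A sketch of how the coordinator's variables might be declared in the pod spec (the exact manifest is not in the original post; field names follow standard Kubernetes conventions, and the OpenSearch Docker image maps dotted environment variables onto settings):

```yaml
env:
  - name: cluster.name
    value: "my-cluster"
  - name: node.name
    valueFrom:
      fieldRef:
        fieldPath: metadata.name   # the pod name
  - name: node.data
    value: "false"
  - name: node.ingest
    value: "false"
  - name: node.master
    value: "false"
  - name: discovery.seed_hosts
    value: "opensearch-discovery"
  - name: cluster.initial_master_nodes
    value: "opensearch-master-0"
  - name: ES_JAVA_OPTS
    value: "-Xms4g -Xmx4g"
```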

opensearch.yml

cluster.name: docker-cluster

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0

# # minimum_master_nodes need to be explicitly set when bound on a public IP
# # set to 1 to allow single node clusters
# discovery.zen.minimum_master_nodes: 1

# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
#discovery.type: single-node

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opensearch-observability", ".opendistro-asynchronous-search-response*", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
######## End OpenSearch Security Demo Configuration ########

@ngolpaye Your curl command is missing the x-forwarded-for header.

Can you try the below?

curl -k https://opensearch -H 'proxy-uid: myUser' -H 'proxy-roles: all_access' -H "x-forwarded-for: 1.1.1.1"

Thanks! That got me further. I modified your query slightly for pretty output:

curl -k 'https://opensearch?pretty' -H 'proxy-uid: myUser' -H 'proxy-roles: all_access' -H 'x-forwarded-for: 1.1.1.1'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "no permissions for [cluster:monitor/main] and User [name=myUser, backend_roles=[all_access], requestedTenant=null]"
      }
    ],
    "type" : "security_exception",
    "reason" : "no permissions for [cluster:monitor/main] and User [name=myUser, backend_roles=[all_access], requestedTenant=null]"
  },
  "status" : 403
}

I would have thought all_access would give me full access, just like the admin user?

I also tried adding -H 'securitytenant: Global', -H 'securitytenant: global_tenant', and -H 'securitytenant: admin_tenant' to the curl call but still got a 403 response as above:

{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "no permissions for [cluster:monitor/main] and User [name=myUser, backend_roles=[all_access], requestedTenant=admin_tenant]"
      }
    ],
    "type" : "security_exception",
    "reason" : "no permissions for [cluster:monitor/main] and User [name=myUser, backend_roles=[all_access], requestedTenant=admin_tenant]"
  },
  "status" : 403
}
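For what it's worth, one hedged guess at making proxy-roles: all_access behave like admin would be to map that backend role explicitly in roles_mapping.yml (or via the security REST API). A sketch:

```yaml
all_access:
  reserved: false
  backend_roles:
    - "admin"
    - "all_access"   # hypothetical addition: lets the proxy-roles header value match this role
```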

I followed the documented steps in the Dashboard to add myUser as a backend role on the all_access role.

Thank you for all the help so far!

@ngolpaye I got the same result. However, if I use proxy-roles: admin then I get a response.
Still testing.

Thank you. I think I now understand where my confusion was, and it makes perfect sense.

I had assumed backend_roles was a mapping of users onto the role all_access. However, I think the values sent in the proxy-roles header are backend roles, and the role mapping matches those backend roles (such as admin) to OpenSearch roles (such as all_access), so proxy-roles: all_access matched nothing while proxy-roles: admin did.

If that thinking is correct then everything is making sense and it’s all working as expected now. Thank you very much for all your help!
If that is correct thinking then everything is making sense and it’s all working as expected now. Thank you very much for all your help!