How to set up SAML with Jumpcloud - Solved

Hi team,

I decided to create a topic describing how I fixed a SAML authentication issue with Jumpcloud as the IdP and Kibana as the SP, with a practical example, so it can help people trying to set up SAML on their own.
I previously had a topic about the issue I was trying to solve with @clsa : SAML/Okta login to Kibana not working with roles

My configuration consists of 3 Elasticsearch nodes, 1 Kibana instance, Open Distro version 1.4.
An important note: use the OSS Elasticsearch build and the Elasticsearch version that matches your Open Distro version. The compatibility matrix can be found here:

There are three parts to the configuration: Jumpcloud (or Okta or something similar) as the IdP on one side, and Elasticsearch/Kibana on the other.

Jumpcloud config should look like this:

  • You need to create 2 groups, which will later be mapped to Open Distro Security roles. One group will be for admins, the other for read-only users. Let us call them kibana_admin and kibana_user (mine are named differently; I picked these names for the sake of this tutorial. Consider changing the group names, because I am not sure whether kibana_user is a reserved name). Add the users that should have Kibana access to the proper groups.
  • Create the SAML application.
    Pick "Custom SAML App" from the menu.
    A configuration example follows (the screenshots are not included here, so a sketch of the key values is given after this list):




    Configure it properly, attach the groups to it, save the configuration, and export the metadata file once the configuration is saved.
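
Since I could not include the screenshots, here is a rough sketch of the values my custom SAML app uses. The exact field labels may differ slightly in the Jumpcloud UI, but the values have to match the Elasticsearch/Kibana configuration shown further below:

  • IdP Entity ID: https://sso.jumpcloud.com/ (must match idp.entity_id in config.yml)
  • SP Entity ID: kibana-saml (must match sp.entity_id in config.yml)
  • ACS URL: your public Kibana URL plus /_opendistro/_security/saml/acs (the kibana_url from config.yml plus the ACS path)
  • A group attribute named roles, with group attributes included in the assertion (must match roles_key in config.yml)
  • The certificate uploaded for the app should match the one Kibana serves over HTTPS (see the note in the Kibana section below)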

Elasticsearch configuration:
On the Elasticsearch side, you need to touch 4 files: elasticsearch.yml, the metadata file from Jumpcloud, config.yml, and roles_mapping.yml.
IMPORTANT NOTE: make sure the configuration is identical on all 3 Elasticsearch nodes; this cost me serious time. The security script does update your security config index, but for some reason you still need the configuration present on all nodes (a quick checksum comparison is shown after the elasticsearch.yml example below).
This is an elasticsearch.yml example that uses self-signed certs:

# ======================== Elasticsearch Configuration =========================
# ---------------------------------- Cluster -----------------------------------
cluster.name: {{ es_cluster_name }}
# ------------------------------------ Node ------------------------------------
node.name: {{ hostname }}
node.master: {{ es_master_node }}
node.data: {{ es_data_node }}
node.attr.rack: {{ es_node_rack }}
# ----------------------------------- Paths ------------------------------------
path.data: {{ es_data_path }}
path.logs: {{ es_logs_path }}
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: {{ es_memory_lock }}
# ---------------------------------- Network -----------------------------------
#network.host: {{ node_ip }}
network.host: 0.0.0.0
http.port: 9200
# --------------------------------- Discovery ----------------------------------
cluster.initial_master_nodes: ["{{ hostname_01 }}", "{{ hostname_02 }}"]
discovery.zen.ping.unicast.hosts: ["{{ hostname_01 }}","{{ hostname_02 }}","{{ hostname_03 }}"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Gateway -----------------------------------
# ---------------------------------- Various -----------------------------------
######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
opendistro_security.ssl.transport.pemcert_filepath: node.pem
opendistro_security.ssl.transport.pemkey_filepath: node-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: node.pem
opendistro_security.ssl.http.pemkey_filepath: node-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - 'CN=ADMIN,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
#  - CN=kirk,OU=client,O=client,L=test, C=de
opendistro_security.nodes_dn:
  - 'CN={{ hostname_01 }},OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN={{ hostname_02 }},OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
  - 'CN={{ hostname_03 }},OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########
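
Because the configuration has to be identical on all 3 nodes, a quick sanity check before running the security script is to compare checksums of the relevant files across the nodes. This is only a sketch; the hostnames are placeholders for your own nodes:

# Placeholder hostnames; compare the security config and the IdP metadata file on every node.
for host in es-node-01 es-node-02 es-node-03; do
  echo "== $host =="
  ssh "$host" md5sum \
    /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml \
    /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml \
    /etc/elasticsearch/elastic-metadata-jumpcloud.xml
done

The checksums should be identical on every node.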

Next is the config.yml file, located at /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml. I will give you the minimal example that worked in my case:

---

# This is the main Open Distro Security configuration file where authentication
# and authorization is defined.
#
# You need to configure at least one authentication domain in the authc of this file.
# An authentication domain is responsible for extracting the user credentials from
# the request and for validating them against an authentication backend like Active Directory for example.
#
# If more than one authentication domain is configured the first one which succeeds wins.
# If all authentication domains fail then the request is unauthenticated.
# In this case an exception is thrown and/or the HTTP status is set to 401.
#
# After authentication authorization (authz) will be applied. There can be zero or more authorizers which collect
# the roles from a given backend for the authenticated user.
#
# Both authc and authz can be enabled/disabled separately for the REST and TRANSPORT layer. Default is true for both.
#        http_enabled: true
#        transport_enabled: true
#
# For HTTP it is possible to allow anonymous authentication. If that is the case then the HTTP authenticators try to
# find user credentials in the HTTP request. If credentials are found then the user gets regularly authenticated.
# If none can be found the user will be authenticated as an "anonymous" user. This user has always the username "anonymous"
# and one role named "anonymous_backendrole".
# If you enable anonymous authentication all HTTP authenticators will not challenge.
#
#
# Note: If you define more than one HTTP authenticators make sure to put non-challenging authenticators like "proxy" or "clientcert"
# first and the challenging one last.
# Because it's not possible to challenge a client with two different authentication methods (for example
# Kerberos and Basic) only one can have the challenge flag set to true. You can cope with this situation
# by using pre-authentication, e.g. sending a HTTP Basic authentication header in the request.
#
# Default value of the challenge flag is true.
#
#
# HTTP
#   basic (challenging)
#   proxy (not challenging, needs xff)
#   kerberos (challenging)
#   clientcert (not challenging, needs https)
#   jwt (not challenging)
#   host (not challenging) #DEPRECATED, will be removed in a future version.
#                          host based authentication is configurable in roles_mapping

# Authc
#   internal
#   noop
#   ldap

# Authz
#   ldap
#   noop



_meta:
  type: "config"
  config_version: 2

config:
  dynamic:
    # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
    # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
    # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
    #filtered_alias_mode: warn
    #do_not_fail_on_forbidden: false
    #kibana:
    # Kibana multitenancy
    #multitenancy_enabled: true
    #server_username: kibanaserver
    #index: '.kibana'
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
        #internalProxies: '.*' # trust all internal proxies, regex pattern
        #remoteIpHeader:  'x-forwarded-for'
        ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
        ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
        ###### and here https://tools.ietf.org/html/rfc7239
        ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
    authc:
      saml_auth_domain:
        http_enabled: true
        transport_enabled: false
        # SAML must be order 1 and basic_internal_auth_domain must be order 0 so that internal users can still connect to Elasticsearch
        order: 1
        http_authenticator:
          type: saml
          challenge: true
          config:
            idp:
              # metadata_file: /etc/elasticsearch/elastic-metadata-jumpcloud.xml
              metadata_file: elastic-metadata-jumpcloud.xml
              # entity_id must match the IdP entity ID configured on Jumpcloud (see the Jumpcloud settings above)
              entity_id: https://sso.jumpcloud.com/
            sp:
              # entity_id must match the SP entity ID configured on Jumpcloud (see the Jumpcloud settings above)
              entity_id: kibana-saml
            # kibana_url is the public Kibana URL that Jumpcloud redirects back to; make sure it is in the format https://someipaddress.com/
            kibana_url: {{ kibana_pub_url }}
            # roles_key must match the group attribute name defined in the Jumpcloud configuration (see the Jumpcloud settings above)
            roles_key: roles
            # exchange_key must be at least 32 characters long, with an even number of characters (32, 34, 36, etc.), in a format like 'Foo123bar..' (one way to generate such a key is shown after this file)
            exchange_key: {{ exchange_key }}
        authentication_backend:
          type: noop
      basic_internal_auth_domain:
        description: "Authenticate via HTTP Basic against internal users database"
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          type: basic
          challenge: false
        authentication_backend:
          type: intern
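
If you need to generate an exchange_key that satisfies the length requirement from the comment above, one simple option (not the only one) is:

# Prints a 64-character hex string, well above the 32-character minimum for exchange_key.
openssl rand -hex 32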

Role mappings are really important, as they are the core feature that separates privileged users from read-only ones. Here is my example of how I connected the Jumpcloud groups to Open Distro roles; roles_mapping.yml is in the same directory as config.yml (a quick way to verify the mapping once it is applied is shown after the file):

---
# In this file users, backendroles and hosts can be mapped to Open Distro Security roles.
# Permissions for Opendistro roles are configured in roles.yml

_meta:
  type: "rolesmapping"
  config_version: 2

# Define your roles mapping here

## Demo roles mapping

all_access:
  reserved: false
  backend_roles:
  - "admin"
  - "kibana_admin" # name of Jumpcloud group
  description: "Maps admin to all_access"

own_index:
  reserved: false
  users:
  - "*"
  description: "Allow full access to an index named like the username"

logstash:
  reserved: false
  backend_roles:
  - "logstash"

# most basic permissions go here
#
kibana_user:
  reserved: false
  backend_roles:
  - "kibanauser"
  - "kibana_user" # name of Jumpcloud group
  description: "Maps kibanauser to kibana_user"

# read only permissions
readall:
  reserved: false
  backend_roles:
  - "readall"
  - "kibana_user" # name of Jumpcloud group

manage_snapshots:
  reserved: false
  backend_roles:
  - "snapshotrestore"

kibana_server:
  reserved: true
  users:
  - "kibanaserver"

The fourth file is elastic-metadata-jumpcloud.xml, which is the metadata file you downloaded from Jumpcloud. Place it at /etc/elasticsearch/elastic-metadata-jumpcloud.xml and make sure it is readable by the elasticsearch user.
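
To make sure the file is actually readable by the elasticsearch user, something along these lines should do (the ownership and mode are only a suggestion):

sudo chown root:elasticsearch /etc/elasticsearch/elastic-metadata-jumpcloud.xml
sudo chmod 640 /etc/elasticsearch/elastic-metadata-jumpcloud.xml
# Quick check that the elasticsearch user can read it:
sudo -u elasticsearch head -n 1 /etc/elasticsearch/elastic-metadata-jumpcloud.xml
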
After setting the configuration, run the security script on one of the master nodes:

/usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh \
  -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ \
  -icl -nhnv \
  -cacert /etc/elasticsearch/root-ca.pem \
  -cert /etc/elasticsearch/admin.pem \
  -key /etc/elasticsearch/admin-key.pem
Maybe it is not necessary, but I also restarted all 3 nodes.
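
For reference, the restart on each node was just the usual service restart (assuming a systemd-based install):

# Run on each Elasticsearch node, one at a time.
sudo systemctl restart elasticsearch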

Kibana configuration
After the Elasticsearch configuration, set up kibana.yml on the Kibana server and make sure the certificate Kibana serves is the same one you uploaded to Jumpcloud (a fingerprint check is shown after the example). Here is a kibana.yml example:

# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

# Description:
# Default Kibana configuration for Open Distro.
server.host: "{{ node_hostname }}"

elasticsearch.hosts: ["https://{{ es_node_01 }}:9200","https://{{ es_node_02 }}:9200","https://{{ es_node_03 }}:9200"]
elasticsearch.username: {{ kibana_user }}
elasticsearch.password: {{ kibana_password }}
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

elasticsearch.preserveHost: true
kibana.index: ".kibana"

elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.shardTimeout: 30000
elasticsearch.startupTimeout: 5000

logging.dest: /var/log/kibana/kibana.log

opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]

# opendistro_security.auth.type: "basicauth"
opendistro_security.auth.type: "saml"
server.xsrf.whitelist: ["/_opendistro/_security/saml/acs/idpinitiated", "/_opendistro/_security/saml/acs", "/_opendistro/_security/saml/logout"]

#SSL
elasticsearch.ssl.verificationMode: full

server.ssl.enabled: true
server.ssl.key: /etc/kibana/kibana-key.pem
server.ssl.certificate: /etc/kibana/kibana.pem
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/root-ca.pem"]
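
To double-check that the certificate Kibana serves is the one uploaded to Jumpcloud, you can compare fingerprints, and then restart Kibana so it picks up the new kibana.yml (systemd assumed):

# Fingerprint of the certificate Kibana serves; compare it with the certificate in the Jumpcloud app.
openssl x509 -in /etc/kibana/kibana.pem -noout -fingerprint -sha256
# Restart Kibana after changing kibana.yml.
sudo systemctl restart kibana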

I hope someone will find this useful and that it helps them solve their issues.
I did not cover 100% of the explanations, just the ones I think are most important, regarding the actual configuration and possible common issues. There are a couple more articles and pages that can be very helpful and that can help you understand SAML and the configuration more deeply:

Good luck everyone!
