Seems to be already configured for Security. Quit

Hi

We are getting the below log message when deploying in Kubernetes.

```
OpenDistro for Elasticsearch Security Demo Installer
** Warning: Do not use on production or public reachable systems **
Basedir: /usr/share/elasticsearch
Elasticsearch install type: rpm/deb on CentOS Linux release 7.9.2009 (Core)
Elasticsearch config dir: /usr/share/elasticsearch/config
Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
Elasticsearch bin dir: /usr/share/elasticsearch/bin
Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
Elasticsearch lib dir: /usr/share/elasticsearch/lib
Detected Elasticsearch Version: x-content-7.10.2
Detected Open Distro Security Version: 1.13.1.0
/usr/share/elasticsearch/config/elasticsearch.yml seems to be already configured for Security. Quit.
```

Thanks
Sharath

values.yaml:

```yaml
# Copyright 2019 Viasat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# or in the "license" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

kibana:
  enabled: true
  image: cnadatabase_prod/amazon/opendistro-for-elasticsearch-kibana
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""
  replicas: 1
  port: 5601
  externalPort: 443
  resources: {}
  #  limits:
  #    cpu: 2500m
  #    memory: 2Gi
  #  requests:
  #    cpu: 500m
  #    memory: 512Mi
  readinessProbe:
  livenessProbe:
  startupProbe:

  elasticsearchAccount:
    secret: ""
    keyPassphrase:
      enabled: false

  extraEnvs:

  extraVolumes:
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts:
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  extraInitContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  ssl:
    kibana:
      enabled: true
      existingCertSecret: kibana-certs
      existingCertSecretCertSubPath: kibana-crt.pem
      existingCertSecretKeySubPath: kibana-key.pem
      existingCertSecretRootCASubPath: kibana-root-ca.pem
    elasticsearch:
      enabled: true
      existingCertSecret: rest-certs
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem

  configDirectory: "/usr/share/kibana/config"
  certsDirectory: "/usr/share/kibana/certs"

  ingress:
    ## Set to true to enable ingress record generation
    enabled: false
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    labels: {}
    path: /
    hosts:
      - chart-example.local
    tls:
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

  service:
    type: LoadBalancer
    annotations: {}

  config:
    ## Default Kibana configuration from kibana-docker.
    # server.name: kibana
    server.host: "0.0.0.0"

    ## Replace with Elasticsearch DNS name picked during Service deployment
    # elasticsearch.requestTimeout: 360000
    elasticsearch.hosts: "https://k8s-es-opendistro-es-client-service:9200"
    elasticsearch.username: kibanaserver
    elasticsearch.password: kibanaserver

    ## Kibana TLS Config
    # server.ssl.enabled: true
    # server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    # server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    # elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem

    # opendistro_security.cookie.secure: true
    # opendistro_security.cookie.password: ${COOKIE_PASS}

    server.ssl.enabled: true
    server.ssl.key: /usr/share/kibana/certs/kibana-key.pem
    server.ssl.certificate: /usr/share/kibana/certs/kibana-crt.pem
    elasticsearch.ssl.certificateAuthorities: /usr/share/kibana/certs/kibana-root-ca.pem
    elasticsearch.requestHeadersWhitelist: ["securitytenant", "Authorization"]
    elasticsearch.ssl.verificationMode: none

    opendistro_security.multitenancy.enabled: true
    opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
    opendistro_security.readonly_mode.roles: ["kibana_read_only"]

    newsfeed.enabled: false
    telemetry.optIn: false
    telemetry.enabled: false
    security.showInsecureClusterWarning: false

  ## Node labels for pod assignment
  ## ref: Assigning Pods to Nodes | Kubernetes
  nodeSelector: {}

  ## Tolerations for pod assignment
  ## ref: Taints and Tolerations | Kubernetes
  tolerations:

  affinity: {}

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

  podAnnotations: {}

global:
  clusterName: "k8s-es"

  psp:
    create: false

  rbac:
    enabled: false

  ## Optionally override the docker registry to use for images
  imageRegistry: harbor-repo.xxxx.com

  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  imagePullSecrets:
    - regsecret

elasticsearch:
  ## Used when deploying hot/warm architecture. Allows second aliased deployment to find cluster.
  ## Default {{ template opendistro-es.fullname }}-discovery.
  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    actionGroupsSecret:
    configSecret: "security-config"
    internalUsersSecret:
    rolesSecret:
    rolesMappingSecret:
    tenantsSecret:
    # The following option simplifies securityConfig by using a single secret and
    # specifying the respective secrets in the corresponding files instead of creating
    # different secrets for config, internal users, roles, roles mapping and tenants.
    # Note that this is an alternative to the above secrets and shouldn't be used if the above secrets are used.
    config:
      securityConfigSecret:
      data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # rolesMapping.yml: |-
      # tenants.yml: |-

  ## securityContext to apply to the pod. Allows for running as non-root
  securityContextCustom:
    fsGroup: 1000
    runAsUser: 1000
    runAsGroup: 1000

  extraEnvs:

  extraInitContainers:
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

  extraVolumes:
  # - name: extras
  #   emptyDir: {}

  extraVolumeMounts:
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

  initContainer:
    image: busybox
    imageTag: 1.27.2

  ## Set optimal sysctls. This requires privilege. Can be disabled if
  ## the system has already been preconfigured.
  sysctl:
    enabled: false

  ## Give the SYS_CHROOT capability to ES pods. This might not be necessary.
  sys_chroot:
    enabled: true

  ## Init container to chown the mount volume. Not necessary if setting a
  ## fsGroup in the securityContext.
  fixmount:
    enabled: false

  ssl:
    ## TLS is mandatory for the transport layer and can not be disabled
    transport:
      existingCertSecret: transport-certs
      existingCertSecretCertSubPath: elk-transport-crt.pem
      existingCertSecretKeySubPath: elk-transport-key.pem
      existingCertSecretRootCASubPath: elk-transport-root-ca.pem
    rest:
      enabled: true
      existingCertSecret: rest-certs
      existingCertSecretCertSubPath: elk-rest-crt.pem
      existingCertSecretKeySubPath: elk-rest-key.pem
      existingCertSecretRootCASubPath: elk-rest-root-ca.pem
    admin:
      enabled: true
      existingCertSecret: admin-certs
      existingCertSecretCertSubPath: admin-crt.pem
      existingCertSecretKeySubPath: admin-key.pem
      existingCertSecretRootCASubPath: admin-root-ca.pem

  master:
    enabled: true
    replicas: 3
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro master Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "default"
      accessModes:
        - ReadWriteOnce
      size: 2Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying client and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      - topologyKey: "kubernetes.io/hostname"
    #        labelSelector:
    #          matchLabels:
    #            role: master
    podAnnotations: {}

    extraInitContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

    extraContainers: []
    # - name: do-something
    #   image: busybox
    #   command: ['do', 'something']

  data:
    enabled: true
    ## Enables dedicated statefulset for data. Otherwise master nodes as data storage
    dedicatedPod:
      enabled: true
    replicas: 3
    updateStrategy: "RollingUpdate"

    ## Enable persistence using Persistent Volume Claims
    ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    persistence:
      enabled: true
      ## A manually managed Persistent Volume and Claim
      ## Requires persistence.enabled: true
      ## If defined, PVC must be created manually before volume will be bound
      ##
      # existingClaim:

      ## The subdirectory of the volume to mount to, useful in dev environments
      ## and one PV for multiple services.
      ##
      subPath: ""

      ## Open Distro data Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
      ##   GKE, AWS & OpenStack)
      ##
      storageClass: "default"
      accessModes:
        - ReadWriteOnce
      size: 4Gi
      annotations: {}

    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    javaOpts: "-Xms512m -Xmx512m"
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Anti-affinity to disallow deploying data and master nodes on the same worker node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: data
    podAnnotations: {}

  client:
    enabled: true
    ## Enables dedicated deployment for client/ingest. Otherwise master nodes as client/ingest
    dedicatedPod:
      enabled: true
    service:
      type: LoadBalancer
      annotations: {}
      # # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API
      # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

      # # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
      # # ACM certificate should be issued to the DNS hostname defined earlier (elk.sec.example.com)
      # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
      # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"

      # service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
      # service.beta.kubernetes.io/aws-load-balancer-connection-draining-timeout: "60"
      # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

      # # Annotation to create internal only ELB
      # service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    replicas: 2
    javaOpts: "-Xms512m -Xmx512m"
    ingress:
      ## Set to true to enable ingress record generation
      enabled: false
      annotations: {}
      #  kubernetes.io/ingress.class: nginx
      #  kubernetes.io/tls-acme: "true"
      #  # Depending on your Ingress Controller you may need to set one of the two below annotations to have NGINX call the backend using HTTPS
      #  nginx.org/ssl-services: "{{ template "opendistro-es.fullname" . }}-client-service"
      #  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      labels: {}
      path: /
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local
    resources: {}
    #  limits:
    #    cpu: 1
    #    memory: 1024Mi
    #  requests:
    #    cpu: 200m
    #    memory: 1024Mi
    podDisruptionBudget:
      enabled: false
      minAvailable: 1
    readinessProbe: []
    livenessProbe:
      tcpSocket:
        port: transport
      initialDelaySeconds: 60
      periodSeconds: 10
    startupProbe: []
    nodeSelector: {}
    tolerations: []
    ## Weighted anti-affinity to disallow deploying client node to the same worker node as master node
    affinity: {}
    #  podAntiAffinity:
    #    preferredDuringSchedulingIgnoredDuringExecution:
    #      - weight: 1
    #        podAffinityTerm:
    #          topologyKey: "kubernetes.io/hostname"
    #          labelSelector:
    #            matchLabels:
    #              role: client
    podAnnotations: {}

  config:
    # opendistro_security.disabled: true
    opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    opendistro_security.ssl.transport.enforce_hostname_verification: false

    opendistro_security.ssl.http.enabled: true
    opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

    opendistro_security.allow_default_init_securityindex: false
    opendistro_security.authcz.admin_dn:
      - 'CN=admin,OU=IT,O=xxxx,L=PA,ST=CA,C=US'
    opendistro_security.nodes_dn:
      - 'CN=k8s-es'
      - 'CN=k8s-es,OU=IT,O=xxxx,L=PA,ST=CA,C=US'

    opendistro_security.advanced_modules_enabled: true
    opendistro_security.roles_mapping_resolution: BOTH
    opendistro_security.audit.ignore_users: ['kibanaserver']
    opendistro_security.audit.type: internal_elasticsearch
    opendistro_security.enable_snapshot_restore_privilege: true
    opendistro_security.check_snapshot_restore_write_privileges: true
    opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
    cluster.routing.allocation.disk.threshold_enabled: false
    opendistro_security.allow_unsafe_democertificates: false

    ## Example Config
    # opendistro_security.allow_default_init_securityindex: true
    # opendistro_security.audit.type: internal_elasticsearch
    # opendistro_security.enable_snapshot_restore_privilege: true
    # opendistro_security.check_snapshot_restore_write_privileges: true
    # cluster.routing.allocation.disk.threshold_enabled: false
    # opendistro_security.audit.config.disabled_rest_categories: NONE
    # opendistro_security.audit.config.disabled_transport_categories: NONE
    # cluster:
    #   name: ${CLUSTER_NAME}
    # node:
    #   master: ${NODE_MASTER}
    #   data: ${NODE_DATA}
    #   name: ${NODE_NAME}
    #   ingest: ${NODE_INGEST}
    #   max_local_storage_nodes: 1
    #   attr.box_type: hot

    # processors: ${PROCESSORS:1}

    # network.host: ${NETWORK_HOST}

    # thread_pool.bulk.queue_size: 800

    # path:
    #   data: /usr/share/elasticsearch/data
    #   logs: /usr/share/elasticsearch/logs

    # http:
    #   enabled: ${HTTP_ENABLE}
    #   compression: true

    # discovery:
    #   zen:
    #     ping.unicast.hosts: ${DISCOVERY_SERVICE}
    #     minimum_master_nodes: ${NUMBER_OF_MASTERS}

    # # TLS Configuration Transport Layer
    # opendistro_security.ssl.transport.pemcert_filepath: elk-transport-crt.pem
    # opendistro_security.ssl.transport.pemkey_filepath: elk-transport-key.pem
    # opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-transport-root-ca.pem
    # opendistro_security.ssl.transport.enforce_hostname_verification: false

    # # TLS Configuration REST Layer
    # opendistro_security.ssl.http.enabled: true
    # opendistro_security.ssl.http.pemcert_filepath: elk-rest-crt.pem
    # opendistro_security.ssl.http.pemkey_filepath: elk-rest-key.pem
    # opendistro_security.ssl.http.pemtrustedcas_filepath: elk-rest-root-ca.pem

  log4jConfig: ""

  loggingConfig:
    ## Default config
    ## you can override this by setting a system property, for example -Des.logger.level=DEBUG
    es.logger.level: INFO
    rootLogger: ${es.logger.level}, console
    logger:
      ## log action execution errors for easier debugging
      action: DEBUG
      ## reduce the logging for aws, too much is logged under the default INFO
      com.amazonaws: WARN
    appender:
      console:
        type: console
        layout:
          type: consolePattern
          conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  transportKeyPassphrase:
    enabled: false
    passPhrase:

  sslKeyPassphrase:
    enabled: false
    passPhrase:

  maxMapCount: 262144

  image: cnadatabase_prod/amazon/opendistro-for-elasticsearch
  imageTag: 1.13.2
  ## Specifies the image pull policy. Can be "Always" or "IfNotPresent" or "Never".
  ## Default to "Always".
  imagePullPolicy: ""

  configDirectory: /usr/share/elasticsearch/config

  serviceAccount:
    ## Specifies whether a ServiceAccount should be created
    create: true
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    name:

nameOverride: ""
fullnameOverride: ""
```


@sharath
Would you be able to paste the config in Preformatted text format? It is difficult to read as plain text.
What exactly is the issue you are having? The demo installer checks whether the elasticsearch.yml file already contains a reference to "opendistro_security", and if it does, it doesn't add any entries, which is why you are seeing that message.
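
For reference, the installer's guard is essentially a grep over elasticsearch.yml. A minimal sketch of that logic, assuming the paths from your log (the real install_demo_configuration.sh differs in its details):

```sh
#!/bin/sh
# Sketch: skip the demo setup when elasticsearch.yml already references the security plugin.
ES_CONF_FILE="/usr/share/elasticsearch/config/elasticsearch.yml"

if grep -qi opendistro_security "$ES_CONF_FILE"; then
  echo "$ES_CONF_FILE seems to be already configured for Security. Quit."
  exit 0  # the real script may use a different exit code here
fi

# ...otherwise the demo certificates and settings would be appended to the file.
```

Since your chart already writes opendistro_security.* settings into elasticsearch.yml, that message is expected rather than an error.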

Can you check if the cluster forms and whether you are able to query Elasticsearch from within one of the nodes with the command below:

```sh
curl -u admin:admin --insecure -X GET "https://localhost:9200"
```
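
If it's easier, you can run the same check through kubectl from outside the container (the pod name and namespace below are placeholders; substitute one of your Elasticsearch pods):

```sh
# Replace <namespace> and <es-pod> with your namespace and one of the ES pods.
kubectl exec -n <namespace> <es-pod> -- \
  curl -u admin:admin --insecure -X GET "https://localhost:9200"
```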