SSL/TLS connection error - Elasticsearch

Hi,

I have created an Open Distro cluster for Elasticsearch in a Kubernetes environment. The cluster is up and I have an ingress to connect to it; the cluster's external address looks like https://app.example.com/clusterid. I want to connect to the cluster through this URL for any HTTP REST APIs.
I have a ClusterIP-type service which exposes port 9200.
When I try to connect to the cluster using the https://app.example.com/clusterid address, I get the logs below in the Elasticsearch pods.

I am able to connect to the cluster using port-forward with admin/admin credentials, but when I try to connect through the ingress I get the error. I tried disabling the opendistro_security plugin and all the security-related config in elasticsearch.yml, but it still did not work.

Any help would be much appreciated.

[2021-07-27T10:31:00,461][ERROR][c.a.o.s.s.h.n.OpenDistroSecuritySSLNettyHttpServerTransport] [elasticsearch] Exception during establishing a SSL connection: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1246) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1314) ~[netty-handler-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:501) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:440) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
at java.lang.Thread.run(Thread.java:832) [?:?]
[2021-07-27T10:31:00,463][WARN ][o.e.h.AbstractHttpServerTransport] [elasticsearch] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/11.11.11.11:9200, remoteAddress=/12.12.12.12:58840}
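
For context, this exception generally means the node's HTTP layer received plaintext on a port where TLS is enabled, which is what happens when a proxy or ingress forwards plain HTTP to 9200. Assuming the demo credentials and a shell inside the pod (or a port-forward), the two cases can be told apart with curl:

# Plain HTTP against the TLS-enabled port reproduces "not an SSL/TLS record" on the node
curl http://localhost:9200

# HTTPS with the demo credentials should return the cluster banner
curl -k https://localhost:9200 -u admin:admin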

Moved this to the security category; that will get it in front of the right people.

@nk2812 Are you using Helm? Can you please provide your configuration files (redact any sensitive details)?

No, I am not using Helm. Below are my configuration files.

This is my elasticsearch.yml:

cluster.name: "docker-cluster"
network.host: 0.0.0.0

I am creating the cluster using the file below:

apiVersion: abc.example.com/v1alpha1
kind: OpenDistroCluster
metadata:
  name: opendistro-sample
spec:
  nodeSets:
    - name: master
      count: 3
      podTemplate:
        metadata:
            name: opendistro-sample
            labels:
                opendistro-cluster: "true"
        spec:
            initContainers:
              - name: sysctl
                image: busybox
                command: ['sh', '-c', "sysctl -w vm.max_map_count=262144"]
            containers:
              - name: opendistro
                image: amazon/opendistro-for-elasticsearch:1.13.2
                env:
                    - name: cluster.name
                      value: "opendistro-cluster"
                    - name: bootstrap.memory_lock
                      value: "true"
                ports:
                    - containerPort: 9200
                    - containerPort: 9600 # required for Performance Analyzer
                resources:
                    requests:
                        memory: 4Gi
                        cpu: 1
                    limits:
                        memory: 4Gi
                        cpu: 1

Also, can you guide me on how to provide the fields below? My pods will be created dynamically, so I do not have the host names before the cluster starts.

     - discovery.seed_hosts=odfe-node1,odfe-node2
     - cluster.initial_master_nodes=odfe-node1,odfe-node2
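
For what it's worth, one common approach on Kubernetes is to lean on the stable names a StatefulSet gives its pods and point discovery at a headless service. A minimal sketch, assuming a three-node StatefulSet named opendistro-sample with a matching headless service (both names are placeholders):

env:
  - name: discovery.seed_hosts
    value: "opendistro-sample-headless"   # the headless service resolves to every pod IP
  - name: cluster.initial_master_nodes
    value: "opendistro-sample-0,opendistro-sample-1,opendistro-sample-2"   # StatefulSet pod names are predictable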

Thanks.

@Anthony, I am facing the same issue (io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record:) when I try to access the IP and port from a browser outside the cluster; Elasticsearch does not connect. Inside the container I am able to access it via the IP and localhost:9200, and I get the same error when I use a DNS name instead of localhost.

I'm using the Helm chart. Please find the configuration below:

cluster.name: opensearch-cluster

# Bind to all interfaces because we don't know what IP address Docker will assign to us.
network.host: 0.0.0.0

# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
discovery.zen.minimum_master_nodes: 1

# Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
discovery.type: single-node

######## Start OpenSearch Security Demo Configuration ########
# WARNING: revise all the lines below before you go into production
plugins:
  security:
    ssl:
      transport:
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
        enforce_hostname_verification: false
      http:
        enabled: true
        pemcert_filepath: esnode.pem
        pemkey_filepath: esnode-key.pem
        pemtrustedcas_filepath: root-ca.pem
    allow_unsafe_democertificates: true
    allow_default_init_securityindex: true
    authcz:
      admin_dn:
        - CN=kirk,OU=client,O=client,L=test,C=de
    audit.type: internal_opensearch
    enable_snapshot_restore_privilege: true
    check_snapshot_restore_write_privileges: true
    restapi:
      roles_enabled: ["all_access", "security_rest_api_access"]
    system_indices:
      enabled: true
      indices:
        [
          ".opendistro-alerting-config",
          ".opendistro-alerting-alert*",
          ".opendistro-anomaly-results*",
          ".opendistro-anomaly-detector*",
          ".opendistro-anomaly-checkpoints",
          ".opendistro-anomaly-detection-state",
          ".opendistro-reports-*",
          ".opendistro-notifications-*",
          ".opendistro-notebooks",
          ".opendistro-asynchronous-search-response*",
        ]
######## End OpenSearch Security Demo Configuration ########

@skopen
Can you confirm which Helm chart you are using? Is it the one from the docs?

Did you try the defaults mentioned in the docs?
Also, have you set up ingress to access it remotely, or are you using port-forward? (Are you running Kubernetes on minikube locally?)

@Anthony, yes, the same Helm charts from the docs you shared.
Yes, I tried with the default config. I created two ingress files, one for OpenSearch Dashboards and one for OpenSearch.
I am able to access the OpenSearch Dashboards DNS URL, but the OpenSearch DNS URL is not accessible.

I am using ingress; please find the config for OpenSearch below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: xxxx
    alb.ingress.kubernetes.io/group.name: elasticsearch
    alb.ingress.kubernetes.io/inbound-cidrs: 10.0.0.0/8
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/ssl-policy: xxxx
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/rewrite-target: /
    external-dns.alpha.kubernetes.io/hostname: elasticsearch.xxxx.com
    kubernetes.io/ingress.class: alb
  finalizers:
  - group.ingress.k8s.aws/xxx
  generation: 2
  name: elasticsearch
  namespace: logging
spec:
  rules:
  - host: elasticsearch.xxxtest.com
    http:
      paths:
      - backend:
          service:
            name: opensearch-cluster-master
            port:
              number: 9200
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
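
One thing worth checking in this config: by default the ALB forwards to the targets over plain HTTP, while the security plugin terminates TLS on port 9200, which would produce exactly the "not an SSL/TLS record" error above. Assuming the AWS Load Balancer Controller, the backend protocol can be switched with an annotation:

    alb.ingress.kubernetes.io/backend-protocol: HTTPS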

@Anthony: Do we need to change or modify anything in the default configuration when we are using ingress for OpenSearch and OpenSearch Dashboards?

Can you provide your values.yaml (redact any sensitive details)?

Also how are you testing access to opensearch? Are you providing username and password?

@Anthony 
 curl -XGET https://elastic.xxx.com -u 'admin:admin' --insecure

Please find the values.yml for OpenSearch:
---
clusterName: "opensearch-cluster"
nodeGroup: "master"

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data
  - remote_cluster_client

replicas: 2
minimumMasterNodes: 1

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch
# such as opensearch.yml and log4j2.properties
config:
  # Values must be YAML literal style scalar / YAML multiline string.
  # <filename>: |
  #   <formatted-value(s)>
  # log4j2.properties: |
  #   status = error
  #
  #   appender.console.type = Console
  #   appender.console.name = console
  #   appender.console.layout.type = PatternLayout
  #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  #
  #   rootLogger.level = info
  #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # # minimum_master_nodes need to be explicitly set when bound on a public IP
    # # set to 1 to allow single node clusters
    # discovery.zen.minimum_master_nodes: 1

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here

# Allows you to load environment variables from a Kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image: "xxxxxxxx/opensearchproject/opensearch"
# override image tag, which is .Chart.AppVersion by default
imageTag: "1.1.0"
imagePullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# additional labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  limits:
    cpu: "1000m"
    memory: "2Gi"
  requests:
    cpu: "250m"
    memory: "200Mi"

initResources: {}
  # limits:
  #   cpu: "25m"
  #   # memory: "128Mi"
  # requests:
  #   cpu: "25m"
  #   memory: "128Mi"

sidecarResources: {}
  # limits:
  #   cpu: "25m"
  #   # memory: "128Mi"
  # requests:
  #   cpu: "25m"
  #   memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner.  (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes: []
  # - name: extras
  #   emptyDir: {}

extraVolumeMounts: []
  # - name: extras
  #   mountPath: /usr/share/extras
  #   readOnly: true

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers: []
  # - name: do-somethings
  #   image: busybox
  #   command: ['do', 'something']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort"
antiAffinity: "soft"

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: http
httpPort: 9200
transportPort: 9300

service:
  labels: {}
  labelsHeadless: {}
  type: ClusterIP
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    data: {}
      # config.yml: |-
      # internal_users.yml: |-
      # roles.yml: |-
      # roles_mapping.yml: |-
      # action_groups.yml: |-
      # tenants.yml: |-

# How long to wait for opensearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 2000

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
tolerations: []

# Enabling this will publicly expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []

networkPolicy:
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"

  http:
    enabled: true

# Deprecated
# please use the above podSecurityContext.fsGroup instead
fsGroup: ""

## Set optimal sysctl's. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false

## Enable to add 3rd Party / Custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin

Can you edit your reply to enclose the config file in script tags, please?

@Anthony: updated the script.
Do we need to modify anything specific in the OpenSearch values.yml?

@skopen
Finally got it working using the config below. Bear in mind I'm no expert in Kubernetes.

I used the demo chart mentioned earlier and the default values.yml file.
After everything was up and running, I created the ingress below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  rules:
  - host: demo.local
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: opensearch-cluster-master
            port:
              number: 9200

I was then able to query OpenSearch using the command below:

curl -XGET "https://demo.local" -uadmin:admin --insecure

(Tested using minikube running locally)

@Anthony

NAME                              READY   STATUS    RESTARTS   AGE
pod/opensearch-cluster-master-0   1/1     Running   0          3d6h
pod/opensearch-cluster-master-1   1/1     Running   0          3d6h
pod/opensearch-cluster-master-2   1/1     Running   0          3d6h

NAME                                          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)             AGE
service/kubernetes                            ClusterIP   172.20.0.1    <none>        443/TCP             69d
service/opensearch-cluster-master             ClusterIP   172.20.38.5   <none>        9200/TCP,9300/TCP   3d6h
service/opensearch-cluster-master-headless    ClusterIP   None          <none>        9200/TCP,9300/TCP   3d6h

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-state-metrics-67c64bdc66   1         0         0       67d

NAME                                         READY   AGE
statefulset.apps/opensearch-cluster-master   3/3     3d6h

Above is the status of my OpenSearch cluster. When I expose the service, I only get a response from opensearch-cluster-master-0:

{
  "name" : "opensearch-cluster-master-0",
  "cluster_name" : "opensearch-cluster",
  "cluster_uuid" : "tTS4BwyLQlLPzBxipyAQ",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.2.3",
    "build_type" : "tar",
    "build_hash" : "8a529d77c7432bc45b005ac13b2741b57d4a",
    "build_date" : "2021-12-21T01:36:21.407473Z",
    "build_snapshot" : false,
    "lucene_version" : "8.10.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}

How can I expose the OpenSearch service? When I try with an ALB I get a 503 bad gateway.
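
For what it's worth, a 503 from an ALB usually means it has no healthy targets, which fits a health check probing plain HTTP against a backend that only answers HTTPS. A minimal, untested sketch adapting the working nginx example above to the AWS Load Balancer Controller (hostname, certificate ARN, and service name are taken from the earlier configs and are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opensearch
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: xxxx
    # The security plugin terminates TLS on 9200, so the ALB must speak HTTPS
    # to the targets and health-check them over HTTPS as well.
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTPS
    # Unauthenticated health probes get a 401 from the security plugin, so
    # count that as healthy (assumption; adjust to your setup).
    alb.ingress.kubernetes.io/success-codes: '200,401'
spec:
  rules:
  - host: elasticsearch.xxxtest.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: opensearch-cluster-master
            port:
              number: 9200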