Logs from network devices not shown in Kibana?

Hi,

we just recently deployed Open Distro (or is it now called OpenSearch?) + Logstash via Docker. Awesome tool. We used to have the Elastic Stack deployed a long time ago, but never really got to the point of actually configuring and using it in production. Now we would like to collect the logs from our Ubiquiti switches. Unfortunately, there is currently no Filebeat module to handle this directly. There is one for Cisco, but I assume the log formats differ, so we tried doing this the old school way:

Syslog client → Central Rsyslog → Logstash → Elasticsearch.

For the switches, we have configured the UniFi controller (the central system that manages all the Ubiquiti switches) to send all the logs to a central rsyslog server, and now we are struggling to find the best way to ship the logs to Elasticsearch. We basically followed this article to ship the logs from rsyslog to Logstash and from there to Elasticsearch:
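Roughly, the forwarding rule on the central rsyslog server looks like this (a sketch; the target hostname and port here are placeholders, not the actual values from our setup):

```conf
# /etc/rsyslog.d/50-forward.conf (sketch; target and port are placeholders)
# Forward everything to Logstash over UDP in the traditional (RFC 3164) framing
action(type="omfwd"
       target="logstash.example.corp"
       port="5014"
       protocol="udp"
       template="RSYSLOG_TraditionalForwardFormat")
```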

For testing I set up a Linux host as an rsyslog client, in addition to the UniFi system. I can see both systems sending their logs when looking at /var/log/messages on the central rsyslog server:

Aug 12 06:55:07 katello qdrouterd: 2021-08-12 06:55:07.722479 +0000 ROUTER_CORE (info) [C80731][L176131] Link attached: dir=out source={pulp.agent.bdef005b-da2d-48f7-b6b5-58a4ad88a3cc expire:sess} target={<none> expire:sess}
Aug 12 08:57:34 ubisw01.local 00aabbddeeff,US-48-500W-1.00.00+12698: switch: DOT1X: Radius authenticated in unauthenticated VLAN on interface 0/36.

I first tried to set up the central rsyslog server to write to a unique directory under /var/log/ for each rsyslog client, but there seems to be a problem with my rsyslog.conf syntax, so for now all logs simply go into /var/log/messages on the central rsyslog server.
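For what it's worth, the per-client layout I was aiming for would look something like this in rsyslog (a sketch based on the rsyslog template documentation; I have not verified this is exactly what my broken config was missing):

```conf
# Sort incoming remote logs into /var/log/clients/<hostname>/messages
template(name="PerHostFile" type="string"
         string="/var/log/clients/%HOSTNAME%/messages")

# Only apply this to remote clients, then stop further processing
if ($fromhost-ip != "127.0.0.1") then {
    action(type="omfile" dynaFile="PerHostFile")
    stop
}
```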

I can also see that the logs from both the Linux client and the switches are forwarded from the central rsyslog server when tailing the Logstash container:

docker logs -f logstash

{
         "procid" => "7494",
       "facility" => "daemon",
       "severity" => "notice",
     "@timestamp" => 2021-08-12T06:50:47.155Z,
           "host" => "172.28.7.221",
    "programname" => "puppet-agent",
        "message" => "Applied catalog in 4.88 seconds",
     "sysloghost" => "gedasvl401",
       "@version" => "1"
}

{
         "procid" => "-",
       "facility" => "daemon",
       "severity" => "notice",
     "@timestamp" => 2021-08-12T09:01:40.000Z,
           "host" => "172.28.7.221",
    "programname" => "18e829ac1ac1,US-48-500W-5.43.35+12698",
        "message" => " switch: TRAPMGR: Link Down: 0/5",
     "sysloghost" => "ubi48-08.a.space.corp",
       "@version" => "1"
}

But I can only see the logs from the Linux client in Kibana. I assume that the switch logs do not follow the same syslog RFC as the usual messages, and that is why Elasticsearch seems to refuse them? But this is just a guess. I have not yet found a way to actually see why Logstash is not submitting the switch logs to Elasticsearch.
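One thing that might help here is a debug output in logstash.conf, so every event, including any failure tags, is printed to the container log (a minimal sketch; `rubydebug` is the standard pretty-print codec of the stdout output plugin):

```conf
output {
  # Print every event, including tags like _grokparsefailure,
  # so "docker logs -f logstash" shows what is actually emitted
  stdout { codec => rubydebug }
}
```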

Basically we are completely open on how to ship the logs. There is not even a requirement to ship them to rsyslog first, but there are just so many ways to do it that we are absolutely lost as to what would be the best and most efficient approach. I assume that shipping them through Logstash before Elasticsearch could be useful, since we need to grok the logs.

Best Regards,
Oliver

Ok, I dug a bit further and I can confirm that we are missing logs in Elasticsearch and in Kibana. Some logs appear in Elasticsearch but not in Kibana, and some don’t even appear in Elasticsearch…

Let me address a couple of things:

OpenSearch vs Open Distro:

Open Distro and OpenSearch are two different, but related, things.

Open Distro is open source Elasticsearch and open source Kibana + open source plugins and tools, all packaged together.

In January 2021, Elastic (the company) ceased producing an open source version of Elasticsearch and Kibana.

OpenSearch is a fork of open source Elasticsearch, and OpenSearch Dashboards is a fork of Kibana. The OpenSearch package includes the open source plugins and tools originating from Open Distro. OpenSearch is wire compatible with Elasticsearch 7.10.2 (from which it was forked), differing only in things like version reporting.

Hope that clears that up! Now - which one are you using?

Your logs:

Let me get this straight: you can see the logs from the linux client but not the logs from your switches? Are both of these going through logstash?

There is some confusing logstash information out there and multiple versions of logstash which are hard to match up with the right ES or OpenSearch version. For either Open Distro or OpenSearch, I’d suggest using this one: Logstash OSS with OpenSearch Output Plugin.

Hi,

yes, that pretty much clears it up for me, thanks. In the meantime I also read some more about Open Distro and OpenSearch. We are using Open Distro right now:

[root@gedasvl400 ict]# grep image docker-compose.yml
image: amazon/opendistro-for-elasticsearch:1.13.2
image: amazon/opendistro-for-elasticsearch:1.13.2
image: amazon/opendistro-for-elasticsearch-kibana:1.13.2
image: docker.elastic.co/logstash/logstash-oss:7.10.2

I have now reconfigured our setup to be:

network device (syslog UDP) → Logstash (syslog input, UDP port 9001) → Elasticsearch

This seems to be working pretty well. At least I get the logs from the Linux client just fine. The logs from the network devices still need grok tweaking, since I get a _grokparsefailure_sysloginput tag.

Best Regards,
Oliver

Ok, I finally managed to get around the grokparse error by using this logstash.conf:

input {
#  http {
#    port => 5011
#    codec => "line"
#  }
#  udp {
#    port => 5012
#    codec => "json"
#  }
#  tcp {
#    port => 5013
#    codec => "json_lines"
#  }
  syslog {
    port => 9001
    type => syslog
  }
#  udp {
#    port => 9001
#    type => syslog
#  }
}

filter {
  if ([type] == "syslog" and "_grokparsefailure_sysloginput" in [tags]) {
    grok {
      match => [ "message", "%{SYSLOG5424PRI}(\s+)?%{CISCOTIMESTAMP:timestamp}\s+%{GREEDYDATA:message}" ]
      remove_tag => ["_grokparsefailure_sysloginput"]
      break_on_match => true
      add_tag => [ "unifi_syslog", "grokked" ]
    }
    syslog_pri {
      syslog_pri_field_name => "syslog5424_pri"
    }
    dns {
      reverse => [ "host" ]
      action => "replace"
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://odfe-node1:9200"]
    ssl => true
    ssl_certificate_verification => false
    user => logstash
    password => logstash
    ilm_enabled => false
    #index => "logstash"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout {
  }
}

Not great, but it does its job. I guess the Ubiquiti switches use some sort of RFC 5424 syslog format, while Logstash's syslog input plugin only supports RFC 3164.
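For comparison, the two framings look roughly like this (made-up example lines, not actual captures from our switches):

```text
# RFC 3164 (what the syslog input expects):
<30>Aug 12 09:01:40 ubisw01 switch: TRAPMGR: Link Down: 0/5

# RFC 5424 (note the version digit after the priority and the ISO timestamp):
<30>1 2021-08-12T09:01:40Z ubisw01 switch - - - TRAPMGR: Link Down: 0/5
```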

The dns part rewrites the host field with a hostname, since the logsource field was not available.

Some of the fields that we see for a Linux syslog client, like facility, facility_label, priority and program, are not populated by my logstash.conf, but maybe I will try this again later.
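If I revisit this, my plan would be something along these lines (a sketch; the syslog_pri filter adds syslog_facility / syslog_severity fields when use_labels is on, which is the default, but the renames below to match the Linux client's field names are my own untested assumption):

```conf
filter {
  syslog_pri {
    syslog_pri_field_name => "syslog5424_pri"
    use_labels => true   # default: adds syslog_facility / syslog_severity labels
  }
  mutate {
    # Align field names with what the syslog input produces for the Linux client
    rename => {
      "syslog_facility" => "facility_label"
      "syslog_severity" => "severity_label"
    }
  }
}
```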

Cheers,
Oliver

Glad you made progress!