I have an index management policy defined that deletes indexes after 3 days. The policy appears to work in most cases. However, there are a smattering of indexes (maybe 10%) that keep re-appearing. At first, I assumed the Index Management plugin wasn't doing its job (sorry @dbbaughe!), but now I suspect the problem lies elsewhere.
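For reference, the policy looks roughly like this (the policy ID is a placeholder; the "doomed" state is the one that shows up in the history records below):

```
PUT _opendistro/_ism/policies/delete_after_3d
{
  "policy": {
    "description": "Delete indexes three days after creation",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "doomed", "conditions": { "min_index_age": "3d" } }
        ]
      },
      {
        "name": "doomed",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ]
  }
}
```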
Here is a screenshot showing the records in the .opendistro-ism-managed-index-history index for one of these re-appearing indexes (I hope it is readable).
Based on what I see, the Index Management plugin is actually doing its job and deleting the index every 3 days. But then, a few hours later, the index is re-created and the plugin (re-)initializes the policy for it. Is that a correct interpretation of these records? I'm confused by the cases where there are two instances of the "Successfully initialized policy…" message without a "Transitioning to doomed" message in between (for example, the messages on 4/22 and 4/23).
If so, any idea what’s going on here?
The processing flow: Fluent Bit collects records from across a Kubernetes cluster and sends them to Elasticsearch. I have an ingest pipeline that redirects the incoming messages based on their namespace and timestamp.
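The pipeline is essentially a date_index_name processor along these lines (the pipeline name and index prefix here are illustrative; kubernetes.namespace_name is the field added by Fluent Bit's Kubernetes filter):

```
PUT _ingest/pipeline/route_k8s_logs
{
  "description": "Route each record to a per-namespace, per-day index",
  "processors": [
    {
      "date_index_name": {
        "field": "@timestamp",
        "index_name_prefix": "logs-{{kubernetes.namespace_name}}-",
        "date_rounding": "d",
        "date_formats": ["ISO8601"]
      }
    }
  ]
}
```

Note that with this setup the destination index is derived from the record's own @timestamp, not from its arrival time.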
Two possibilities that have crossed my mind:
a) Fluent Bit is still sending messages with "old" timestamps many days after that day has passed;
b) ES is not completely deleting the index, later rediscovers some "old" data lying around, and recreates the shard.
Both theories seem far-fetched. Any other explanations? (A check I plan to run the next time one of these indexes comes back is sketched below.)
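For what it's worth, the check would compare the re-created index's creation time with the event timestamps it contains (the index name below is just an example):

```
# When the index re-appears, check when ES created it...
GET logs-myns-2021.04.22/_settings?filter_path=*.settings.index.creation_date

# ...and compare that with the oldest/newest event timestamps inside it.
GET logs-myns-2021.04.22/_search
{
  "size": 0,
  "aggs": {
    "oldest_event": { "min": { "field": "@timestamp" } },
    "newest_event": { "max": { "field": "@timestamp" } }
  }
}
```

If creation_date turns out to be days later than newest_event, the index was re-created by late-arriving records with old timestamps, which would support theory (a).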