Is there any option to limit the size of bulk requests from Logstash to OpenSearch Service in AWS?
On our dev environments we have quite small instances, and we get errors because the payload reaching OpenSearch is too large (see the Amazon OpenSearch Service limits documentation).
Maybe a split filter or something similar? So far I couldn't find any working solution.
Thanks in advance for any hints.
@grh I assume you mean limiting on the Logstash side?
Yes, the Logstash side - we don't have access to modify the OpenSearch Service configuration in AWS. For now we're using
`pipeline.batch.size: XX`, but this is only a workaround and not good for overall performance.
It would be perfect to be able to set a maximum payload size, e.g. 9 MB, so that even the smallest OpenSearch Service instances in AWS could cope with it.
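For reference, the batch-size workaround can be set per pipeline in `pipelines.yml` rather than globally. A minimal sketch - the pipeline id, config path, and numeric values here are illustrative assumptions, not tuned recommendations:

```yaml
# pipelines.yml - illustrative values only
- pipeline.id: dev-ingest                            # hypothetical pipeline name
  path.config: "/etc/logstash/conf.d/dev-ingest.conf" # hypothetical config path
  # Fewer events per batch keeps each bulk request smaller,
  # at the cost of throughput. Note this caps the event COUNT,
  # not the byte size - a few huge events can still exceed the limit.
  pipeline.batch.size: 50
  pipeline.batch.delay: 50
```

This is why it remains only a workaround: `pipeline.batch.size` bounds how many events go into a batch, not how many bytes.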
Seems fairly reasonable to me. I'm not an expert in the codebase, but I think this is the line that determines the payload size:
The comments are enlightening. I've found that a lot of the 'limits' in the forked code are pretty much… wild estimations, which kind of drives me nuts, personally. You can experiment with modifying the output plugin if you want - it's not a terribly complex codebase.
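For anyone wanting to experiment, the core of a byte-based limit is just greedy chunking of the serialized bulk lines. A rough sketch in Python for illustration (the actual output plugin is written in Ruby, and the function and parameter names here are made up, not part of any plugin API):

```python
def chunk_bulk(lines, max_bytes):
    """Group serialized bulk-API lines into batches whose total
    encoded size (including trailing newlines) stays under max_bytes.

    A single line larger than max_bytes still gets its own batch,
    since it cannot be split further at this layer.
    """
    batch, size = [], 0
    for line in lines:
        n = len(line.encode("utf-8")) + 1  # +1 for the newline the bulk API requires
        if batch and size + n > max_bytes:
            yield batch               # flush the current batch before it overflows
            batch, size = [], 0
        batch.append(line)
        size += n
    if batch:
        yield batch                   # flush the final partial batch


# Example: five 10-byte lines with a 25-byte budget split into
# batches of at most two lines each.
chunks = list(chunk_bulk(["a" * 10] * 5, 25))
```

Each resulting batch would then be sent as its own `_bulk` request, which is essentially what the linked PR proposes doing inside the plugin.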
That being said, maybe you want to put a feature request in for this:
Actually, it looks like there is an open PR:
Yep, thanks for this - hopefully they will approve and merge this PR.