Bug Report
Describe the bug
When using Fluent Bit with a systemd input, a rewrite_tag filter, and a logdna output, we are seeing very large chunks (~45 MB) being created by the rewrite_tag emitter. This causes flushes to LogDNA to fail, since their maximum payload size is 10 MB.
Perhaps this is a lack of understanding on our part about how chunks are handled and grow in size, but based on the Fluent Bit docs I wouldn't have expected chunks this large, given that they are usually around 2 MB.
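For context, the rewrite_tag docs describe a few options for its internal emitter; a sketch of how they might look in the YAML config below is shown here. The lowercase spellings and the 10M value are my assumptions, and as far as I can tell these bound the emitter's buffering rather than the size of the chunks it emits:

    - name: rewrite_tag
      match: "inputs.systemd"
      rule: $app .* apps.$app false
      # Assumed spellings of the documented emitter options
      emitter_name: re_emitted_systemd
      emitter_storage.type: filesystem
      emitter_mem_buf_limit: 10M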
To Reproduce
service:
  flush: 1
  log_level: info
  parsers_file: "/etc/fluent-bit/parsers.conf"
  http_server: on
  http_listen: 0.0.0.0
  http_port: 2020
  storage.path: "/data/flb/storage"
  storage.checksum: off
  storage.metrics: on
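  # Keep at most 128 chunks loaded in memory and cap the memory used when
  # loading backlog chunks back from the filesystem store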
  storage.max_chunks_up: 128
  storage.backlog.mem_limit: 512MB
pipeline:
  inputs:
    - name: systemd
      db: "/data/flb/systemd.db"
      storage.type: filesystem
      threaded: true
      tag: "inputs.systemd"
    # ... We have some other unrelated inputs here
  filters:
    - name: grep
      match: "inputs.systemd"
      exclude: CONTAINER_NAME ^(example_container)
    - name: lua
      match: "inputs.systemd"
      script: "app-name.lua"
      call: set_app_name
    - name: rewrite_tag
      match: "inputs.systemd"
      rule: $app .* apps.$app false
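      # Rule format is KEY REGEX NEW_TAG KEEP: match $app against .*,
      # re-emit the record under apps.<app>, and drop the original (keep=false)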
    # ... We have some other filters which apply to `apps.xxxxx`
  outputs:
    # Dynamically create an output of type `null` or `stdout`
    # based on whether SEND_TO_STDOUT is set to `true`
    - name: ${STDOUT_PROCESS}
      match: "apps.*"
      format: json
    - name: logdna
      # Explicitly set the log level to warn because all HTTP request information
      # is logged at info level, which is overly noisy
      log_level: warn
      match: "apps.*"
      tls: on
      workers: 1
      retry_limit: no_limits
      api_key: ${LOGDNA_API_KEY}
      hostname: ${HOSTNAME}
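The app-name.lua script referenced by the lua filter above isn't included in this report; purely for illustration, a hypothetical sketch of its set_app_name callback (the field it reads and the naming convention are assumptions) could look like:

-- Hypothetical sketch only; not the actual app-name.lua from our setup.
-- Sets the "app" key that the rewrite_tag rule above matches on.
function set_app_name(tag, timestamp, record)
    local unit = record["_SYSTEMD_UNIT"]
    if unit ~= nil then
        -- e.g. "nginx.service" -> "nginx" (assumed naming convention)
        record["app"] = string.gsub(unit, "%.service$", "")
        return 1, timestamp, record
    end
    -- Leave records without a unit untouched
    return 0, timestamp, record
end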
Expected behavior
Chunks should stay at a sensible size (~2 MB), or there should be some way to cap the maximum chunk size below 10 MB.
Your Environment
- Version used: 3.2.2
- Environment name and version: Docker
- Server type and version: Raspberry Pi 4
- Operating System and version: Debian 12
- Filters and plugins: systemd, rewrite_tag, logdna
Additional context
Because we have unlimited retries configured on our LogDNA output, this completely blocked our pipeline. I understand that having no limit on retries can cause issues; however, I wouldn't have expected the failures to be caused by Fluent Bit creating chunks of this size.
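For completeness, bounding the retries instead of using no_limits would look like the sketch below (the value 5 is only an example, not something we run):

    - name: logdna
      match: "apps.*"
      # Drop a chunk after 5 failed flush attempts instead of retrying forever
      retry_limit: 5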