Kafka Exporter #131
Conversation
Fantastic work on this, I love the breadth of feature support. Some comments throughout, but I think the big ones are mostly around handling the new multiple exporter cases. I'm happy to help on supporting that if need be.
Just needs the small doc fix I think, but 👍
README.md
| Option | Default | Options |
|---|---|---|
| --kafka-exporter-traces-topic | otlp_traces | |
| --kafka-exporter-metrics-topic | otlp_metrics | |
| --kafka-exporter-logs-topic | otlp_logs | |
| --kafka-exporter-format | json | json, protobuf |
Should be protobuf now?
Completes: STR-3448
Closes: #122
Adds support for the Kafka exporter. We're currently marking it experimental. On the configuration side, we have exposed all of the most important and highest-priority configuration parameters, as marked in https://docs.confluent.io/platform/current/clients/librdkafka/html/md_CONFIGURATION.html#autotoc_md93, and tried to maintain parity with the reference collector.
Additionally, the user can set and override any librdkafka option using the `kafka_exporter_custom_config` option, exposing all potential options.

There is a significant amount of test coverage. In addition to unit tests, this PR features integration tests, which are placed behind a feature flag. The integration tests cover advanced functionality, including message-key-based partitioning, to verify that the partition-by-resource-attributes features for metrics and logs spread records consistently and randomly across Kafka partitions.
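The custom-config override behavior described above can be sketched as a simple map merge, where user-supplied keys win over built-in defaults. This is an illustrative, std-only sketch, not the actual implementation; the function and option names here are hypothetical, and in the real exporter the merged map would feed an rdkafka `ClientConfig`.

```rust
use std::collections::HashMap;

// Hypothetical sketch: layer kafka_exporter_custom_config entries over
// built-in defaults so any librdkafka option can be set or overridden.
fn build_client_config(
    defaults: &[(&str, &str)],
    custom: &HashMap<String, String>,
) -> HashMap<String, String> {
    let mut cfg: HashMap<String, String> = defaults
        .iter()
        .map(|(k, v)| (k.to_string(), v.to_string()))
        .collect();
    // Custom options override any default with the same key.
    for (k, v) in custom {
        cfg.insert(k.clone(), v.clone());
    }
    cfg
}

fn main() {
    let defaults = [
        ("bootstrap.servers", "localhost:9092"),
        ("compression.type", "none"),
    ];
    let mut custom = HashMap::new();
    custom.insert("compression.type".to_string(), "lz4".to_string());
    custom.insert("linger.ms".to_string(), "5".to_string());

    let cfg = build_client_config(&defaults, &custom);
    assert_eq!(cfg["compression.type"], "lz4"); // user override wins
    assert_eq!(cfg["bootstrap.servers"], "localhost:9092"); // default kept
    println!("{} options", cfg.len());
}
```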
There are a few other things we'll want to follow up on:

- `topic_from_metadata_key` and `topic_from_attribute`: we should investigate how important these are before implementing them.
- A `poll_ready` API to signal backpressure to the pipeline: we need to spend further time investigating the best tokio patterns for working with rdkafka.

In terms of throughput and resource utilization, values are in line with current exporters, and we don't see unusually high resource usage or diminished throughput when testing with loadtester at high volumes (e.g. 400K spans/sec).