Caution
Grafana Agent reached End-of-Life (EOL) on November 1, 2025. Agent no longer receives vendor support, security fixes, or bug fixes. Current users of Agent Static mode, Agent Flow mode, and Agent Operator should proceed with migrating to Grafana Alloy. If you have already migrated to Alloy, no further action is required. Read more about why we recommend migrating to Grafana Alloy.
Important: This documentation is for an older version. It's relevant only to the release noted; many of the features and functions have since been updated or replaced. Please view the current version.
otelcol.receiver.kafka
otelcol.receiver.kafka accepts telemetry data from a Kafka broker and
forwards it to other otelcol.* components.
NOTE:
otelcol.receiver.kafka is a wrapper over the upstream OpenTelemetry Collector kafka receiver from the otelcol-contrib distribution. Bug reports or feature requests will be redirected to the upstream repository, if necessary.
Multiple otelcol.receiver.kafka components can be specified by giving them
different labels.
Usage
otelcol.receiver.kafka "LABEL" {
  brokers          = ["BROKER_ADDR"]
  protocol_version = "PROTOCOL_VERSION"

  output {
    metrics = [...]
    logs    = [...]
    traces  = [...]
  }
}

Arguments
The following arguments are supported:
The encoding argument determines how to decode messages read from Kafka.
encoding must be one of the following strings:
- "otlp_proto": Decode messages as OTLP protobuf.
- "jaeger_proto": Decode messages as a single Jaeger protobuf span.
- "jaeger_json": Decode messages as a single Jaeger JSON span.
- "zipkin_proto": Decode messages as a list of Zipkin protobuf spans.
- "zipkin_json": Decode messages as a list of Zipkin JSON spans.
- "zipkin_thrift": Decode messages as a list of Zipkin Thrift spans.
- "raw": Copy the message bytes into the body of a log record.
"otlp_proto" must be used to read all telemetry types from Kafka; other
encodings are signal-specific.
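As an illustration, a receiver dedicated to Jaeger-encoded spans could set the encoding explicitly. This is a sketch only; since "jaeger_proto" is signal-specific, only the traces output carries data:

```river
otelcol.receiver.kafka "jaeger" {
  brokers          = ["localhost:9092"]
  protocol_version = "2.0.0"
  encoding         = "jaeger_proto" // signal-specific: decodes traces only

  output {
    traces = [...]
  }
}
```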
Blocks
The following blocks are supported inside the definition of
otelcol.receiver.kafka:
The > symbol indicates deeper levels of nesting. For example,
authentication > tls refers to a tls block defined inside an
authentication block.
authentication block
The authentication block holds the definition of different authentication
mechanisms to use when connecting to Kafka brokers. It doesn’t support any
arguments and is configured fully through inner blocks.
plaintext block
The plaintext block configures PLAIN authentication against Kafka brokers.
The following arguments are supported:
sasl block
The sasl block configures SASL authentication against Kafka brokers.
The following arguments are supported:
The mechanism argument can be set to one of the following strings:
- "PLAIN"
- "AWS_MSK_IAM"
- "SCRAM-SHA-256"
- "SCRAM-SHA-512"
When mechanism is set to "AWS_MSK_IAM", the aws_msk child block must also be provided.
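For example, SASL authentication with the AWS_MSK_IAM mechanism might be sketched as follows. The region and broker_addr attribute names inside aws_msk are assumptions based on the upstream kafka receiver, not confirmed by the reference above:

```river
otelcol.receiver.kafka "msk" {
  brokers          = ["b-1.example.amazonaws.com:9098"]
  protocol_version = "2.0.0"

  authentication {
    sasl {
      mechanism = "AWS_MSK_IAM"

      aws_msk {
        region      = "us-east-1"                      // assumed attribute name
        broker_addr = "b-1.example.amazonaws.com:9098" // assumed attribute name
      }
    }
  }

  output {
    traces = [...]
  }
}
```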
aws_msk block
The aws_msk block configures extra parameters for SASL authentication when
using the AWS_MSK_IAM mechanism.
The following arguments are supported:
tls block
The tls block configures TLS settings used for connecting to the Kafka
brokers. If the tls block isn’t provided, TLS won’t be used for
communication.
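A hypothetical TLS configuration could look like the following sketch. The ca_file, cert_file, and key_file attribute names mirror the upstream component and are assumptions here:

```river
authentication {
  tls {
    ca_file   = "/etc/kafka/ca.pem"   // assumed attribute name
    cert_file = "/etc/kafka/cert.pem" // assumed attribute name
    key_file  = "/etc/kafka/key.pem"  // assumed attribute name
  }
}
```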
The following arguments are supported:
kerberos block
The kerberos block configures Kerberos authentication against the Kafka
broker.
The following arguments are supported:
When use_keytab is false, the password argument is required. When
use_keytab is true, the file pointed to by the keytab_file argument is
used for authentication instead. At most one of password or keytab_file
must be provided.
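As a sketch, keytab-based Kerberos authentication could be configured as follows. The use_keytab and keytab_file arguments are named above; service_name, realm, and username are assumed attribute names from the upstream receiver:

```river
authentication {
  kerberos {
    service_name = "kafka"       // assumed attribute name
    realm        = "EXAMPLE.COM" // assumed attribute name
    username     = "collector"   // assumed attribute name
    use_keytab   = true
    keytab_file  = "/etc/security/kafka.keytab"
  }
}
```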
metadata block
The metadata block configures how to retrieve and store metadata from the
Kafka broker.
The following arguments are supported:
If the include_all_topics argument is true, otelcol.receiver.kafka
maintains a full set of metadata for all topics rather than the minimal set
that has been necessary so far. Including the full set of metadata is more
convenient for users but can consume a substantial amount of memory if you have
many topics and partitions.
Retrieving metadata may fail if the Kafka broker is starting up at the same
time as the otelcol.receiver.kafka component. The retry child
block can be provided to customize retry behavior.
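Putting the two together, a metadata block that keeps memory usage low and tolerates a slow-starting broker might be sketched like this. The max_retries and backoff attribute names inside retry are assumptions, not confirmed above:

```river
metadata {
  include_all_topics = false // fetch only metadata for consumed topics

  retry {
    max_retries = 5       // assumed attribute name
    backoff     = "500ms" // assumed attribute name
  }
}
```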
retry block
The retry block configures how to retry retrieving metadata when retrieval
fails.
The following arguments are supported:
autocommit block
The autocommit block configures how to automatically commit updated topic
offsets back to the Kafka brokers.
The following arguments are supported:
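A minimal sketch of an autocommit configuration follows; the enable and interval attribute names are assumptions based on the upstream kafka receiver:

```river
autocommit {
  enable   = true // assumed attribute name
  interval = "1s" // assumed attribute name
}
```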
message_marking block
The message_marking block configures when Kafka messages are marked as read.
The following arguments are supported:
By default, a Kafka message is marked as read immediately after it is retrieved
from the Kafka broker. If the after_execution argument is true, messages are
only read after the telemetry data is forwarded to components specified in the
output block.
When after_execution is true, messages are only marked as read when they are
decoded successfully and components where the data was forwarded did not return
an error. If the include_unsuccessful argument is true, messages are marked
as read even if decoding or forwarding failed. Setting include_unsuccessful
has no effect if after_execution is false.
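For example, to mark messages as read only after forwarding, while still marking messages that fail permanently so the partition does not stall, a sketch could be:

```river
message_marking {
  after_execution      = true
  include_unsuccessful = true // avoid blocking the partition on permanent errors
}
```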
WARNING: Setting after_execution to true and include_unsuccessful to false can block the entire Kafka partition if message processing returns a permanent error, such as failing to decode.
output block
The output block configures a set of components to forward resulting
telemetry data to.
The following arguments are supported:
The output block must be specified, but all of its arguments are optional. By
default, telemetry data is dropped. To send telemetry data to other components,
configure the metrics, logs, and traces arguments accordingly.
Exported fields
otelcol.receiver.kafka does not export any fields.
Component health
otelcol.receiver.kafka is only reported as unhealthy if given an invalid
configuration.
Debug information
otelcol.receiver.kafka does not expose any component-specific debug
information.
Example
This example forwards read telemetry data through a batch processor before finally sending it to an OTLP-capable endpoint:
otelcol.receiver.kafka "default" {
  brokers          = ["localhost:9092"]
  protocol_version = "2.0.0"

  output {
    metrics = [otelcol.processor.batch.default.input]
    logs    = [otelcol.processor.batch.default.input]
    traces  = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    metrics = [otelcol.exporter.otlp.default.input]
    logs    = [otelcol.exporter.otlp.default.input]
    traces  = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = env("OTLP_ENDPOINT")
  }
}


