Description
What?
The `reroute` ingest pipeline processor has options for only 2 of the 3 parts of the Elastic Agent data stream naming convention: `dataset` and `namespace`. We need to add the third one, `type`, which populates `data_stream.type`.
Related documentation
- https://www.elastic.co/guide/en/elasticsearch/reference/current/reroute-processor.html
- https://www.elastic.co/guide/en/fleet/8.17/data-streams.html#data-streams-naming-scheme
- https://www.elastic.co/guide/en/ecs/current/ecs-data_stream.html
A new `reroute` processor option could look like this:
| Name | Required | Default | Description |
|---|---|---|---|
| `type` | no | `{{data_stream.type}}` | Field references or a static value for the type part of the data stream name. In addition to the criteria for index names, cannot contain `-` and .... Valid values are `logs`, `metrics`, .... |
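For illustration only, a pipeline using the proposed option might look like the sketch below. The `type` option is what this request asks for and does not exist today; the pipeline name is made up, and `dataset`/`namespace` are spelled out even though the values shown match their defaults.

```
PUT _ingest/pipeline/kafka-reroute
{
  "description": "Route documents using all three data stream naming parts",
  "processors": [
    {
      "reroute": {
        "type": "{{data_stream.type}}",
        "dataset": "{{data_stream.dataset}}",
        "namespace": "{{data_stream.namespace}}"
      }
    }
  ]
}
```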
Current workarounds include using the `destination` option (see the sketch below). Another option is to set the Kafka topic dynamically using Fleet topic settings, which is also documented in Deploying Elastic Agent with Confluent Cloud's Elasticsearch Connector.
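As a rough sketch of the `destination` workaround (the pipeline name, conditions, and target names are illustrative), one `reroute` processor per type can send documents to a fixed destination, since `destination` cannot be combined with `dataset` or `namespace`:

```
PUT _ingest/pipeline/kafka-split-by-type
{
  "description": "Workaround: route on data_stream.type with static destinations",
  "processors": [
    {
      "reroute": {
        "if": "ctx.data_stream?.type == 'metrics'",
        "destination": "metrics-generic-default"
      }
    },
    {
      "reroute": {
        "if": "ctx.data_stream?.type == 'logs'",
        "destination": "logs-generic-default"
      }
    }
  ]
}
```

The downside is that the static destinations drop the original `dataset` and `namespace`, which is part of why a `type` option would be preferable.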
Why?
In an ingest architecture such as

Elastic Agent -> Kafka -> Kafka Elasticsearch sink -> Elasticsearch

the easiest configuration option is to send both logs and metrics to a single Kafka topic, since there is only one output defined at the Agent policy and integration policy level. It would therefore be helpful if the `reroute` processor could be used to update the index name and subsequently use the new index's default ingest pipeline to properly handle the data source.
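To sketch how that could fit together (template, index, and pipeline names are assumptions), the index the Kafka Elasticsearch sink writes to could declare the routing pipeline as its default pipeline, so every incoming document gets rerouted and then handled by the target data stream's own default pipeline:

```
PUT _index_template/kafka-raw
{
  "index_patterns": ["kafka-raw*"],
  "template": {
    "settings": {
      "index.default_pipeline": "kafka-reroute"
    }
  }
}
```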