Create an Amazon Bedrock inference endpoint Generally available; Added in 8.12.0

PUT /_inference/{task_type}/{amazonbedrock_inference_id}

Create an inference endpoint to perform an inference task with the amazonbedrock service.


You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
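
Rotating credentials therefore means replacing the endpoint. A minimal sketch of the delete-and-recreate flow, assuming a hypothetical endpoint named amazon_bedrock_embeddings and placeholder keys:

DELETE _inference/text_embedding/amazon_bedrock_embeddings

PUT _inference/text_embedding/amazon_bedrock_embeddings
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "new-AWS-access-key",
        "secret_key": "new-AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}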

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform.

    Values are completion or text_embedding.

  • amazonbedrock_inference_id string Required

    The unique identifier of the inference endpoint.

Query parameters

  • timeout string

    Specifies the amount of time to wait for the inference endpoint to be created. It accepts a time duration; the special values -1 and 0 are also accepted.

    Default value is 30s.
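
For example, to wait up to 60 seconds for the endpoint to be created (the endpoint name is illustrative; the request body, omitted here, is the same as in the examples below):

PUT _inference/text_embedding/amazon_bedrock_embeddings?timeout=60s
{ ... }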

application/json

Body

  • chunking_settings object

    Chunking configuration object

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string Required

      This parameter is only applicable when using the recursive chunking strategy.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string] Required

      A list of strings used as possible split points when chunking text with the recursive strategy.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about the different chunking strategies in the external documentation. An illustrative recursive configuration appears after the body parameters below.

      Default value is sentence.
  • service string Required

    Value is amazonbedrock.

  • service_settings object Required
    • access_key string Required

      A valid AWS access key that has permissions to use Amazon Bedrock and access to models for inference requests.

    • model string Required

      The base model ID or the ARN of a custom model that is based on a foundational model. The base model IDs can be found in the Amazon Bedrock documentation. Note that the model ID must be available for the chosen provider and your IAM user must have access to the model.
    • provider string

      The model provider for your deployment. Note that some providers may support only certain task types. Supported providers include:

      • amazontitan - available for text_embedding and completion task types
      • anthropic - available for completion task type only
      • ai21labs - available for completion task type only
      • cohere - available for text_embedding and completion task types
      • meta - available for completion task type only
      • mistral - available for completion task type only
    • region string Required

      The region that your model or ARN is deployed in. The list of available regions per model can be found in the Amazon Bedrock documentation.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the service. The sketch after the body parameters shows how to set it.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • amazonbedrock service: 240
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • voyageai service: 2000
        • watsonxai service: 120
    • secret_key string Required

      A valid AWS secret key that is paired with the access_key. For information about creating and managing access and secret keys, refer to the AWS documentation.
  • task_settings object
    • max_new_tokens number

      For a completion task, it sets the maximum number of output tokens to generate.

      Default value is 64.

    • temperature number

      For a completion task, it is a number between 0.0 and 1.0 that controls the apparent creativity of the results. At temperature 0.0 the model is most deterministic; at 1.0 it is most random. It should not be used if top_p or top_k is specified.

    • top_k number

      For a completion task, it limits samples to the top-K most likely words, balancing coherence and variability. It is only available for anthropic, cohere, and mistral providers. It is an alternative to temperature; it should not be used if temperature is specified.

    • top_p number

      For a completion task, it is a number in the range of 0.0 to 1.0, to eliminate low-probability tokens. Top-p uses nucleus sampling to select top tokens whose sum of likelihoods does not exceed a certain value, ensuring both variety and coherence. It is an alternative to temperature; it should not be used if temperature is specified.
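
As noted for the strategy and rate_limit parameters above, the sketch below creates a text embedding endpoint that opts into the recursive chunking strategy and lowers the per-minute request budget. The endpoint name, keys, and values are illustrative, not recommendations:

PUT _inference/text_embedding/amazon_bedrock_embeddings
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0",
        "rate_limit": {
            "requests_per_minute": 120
        }
    },
    "chunking_settings": {
        "strategy": "recursive",
        "max_chunk_size": 200,
        "separator_group": "markdown"
    }
}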
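
For a completion endpoint, task_settings tunes generation. The sketch below uses the anthropic provider, one of the providers that supports top_k; the endpoint name, model ID, and parameter values are illustrative:

PUT _inference/completion/amazon_bedrock_completion
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "anthropic",
        "model": "anthropic.claude-3-5-sonnet-20240620-v1:0"
    },
    "task_settings": {
        "max_new_tokens": 256,
        "top_k": 40
    }
}

Because top_k is an alternative to temperature, the example sets only one of them.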

Responses

  • 200 application/json
    • chunking_settings object

      Chunking configuration object

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be higher than 300 or lower than 20 (for sentence strategy) or 10 (for word strategy).

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string Required

        This parameter is only applicable when using the recursive chunking strategy.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string] Required

        A list of strings used as possible split points when chunking text with the recursive strategy.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about the different chunking strategies in the external documentation.

        Default value is sentence.
    • service string Required

      The service type.

    • service_settings object Required
    • task_settings object
    • inference_id string Required

      The inference ID.

    • task_type string Required

      Values are text_embedding or completion.
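
A successful request returns the stored endpoint configuration. The access_key and secret_key are never echoed back, as noted at the top of this page. An illustrative sketch of a response (exact fields and defaults depend on your request):

{
  "inference_id": "amazon_bedrock_embeddings",
  "task_type": "text_embedding",
  "service": "amazonbedrock",
  "service_settings": {
    "region": "us-east-1",
    "provider": "amazontitan",
    "model": "amazon.titan-embed-text-v2:0",
    "rate_limit": {
      "requests_per_minute": 240
    }
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}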

Console
PUT _inference/text_embedding/amazon_bedrock_embeddings
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}

Python
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="amazon_bedrock_embeddings",
    inference_config={
        "service": "amazonbedrock",
        "service_settings": {
            "access_key": "AWS-access-key",
            "secret_key": "AWS-secret-key",
            "region": "us-east-1",
            "provider": "amazontitan",
            "model": "amazon.titan-embed-text-v2:0"
        }
    },
)

JavaScript
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  inference_config: {
    service: "amazonbedrock",
    service_settings: {
      access_key: "AWS-access-key",
      secret_key: "AWS-secret-key",
      region: "us-east-1",
      provider: "amazontitan",
      model: "amazon.titan-embed-text-v2:0",
    },
  },
});

Ruby
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "amazon_bedrock_embeddings",
  body: {
    "service": "amazonbedrock",
    "service_settings": {
      "access_key": "AWS-access-key",
      "secret_key": "AWS-secret-key",
      "region": "us-east-1",
      "provider": "amazontitan",
      "model": "amazon.titan-embed-text-v2:0"
    }
  }
)

PHP
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "amazon_bedrock_embeddings",
    "body" => [
        "service" => "amazonbedrock",
        "service_settings" => [
            "access_key" => "AWS-access-key",
            "secret_key" => "AWS-secret-key",
            "region" => "us-east-1",
            "provider" => "amazontitan",
            "model" => "amazon.titan-embed-text-v2:0",
        ],
    ],
]);

curl
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"amazonbedrock","service_settings":{"access_key":"AWS-access-key","secret_key":"AWS-secret-key","region":"us-east-1","provider":"amazontitan","model":"amazon.titan-embed-text-v2:0"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/amazon_bedrock_embeddings"

Java
client.inference().put(p -> p
    .inferenceId("amazon_bedrock_embeddings")
    .taskType(TaskType.TextEmbedding)
    .inferenceConfig(i -> i
        .service("amazonbedrock")
        .serviceSettings(JsonData.fromJson("{\"access_key\":\"AWS-access-key\",\"secret_key\":\"AWS-secret-key\",\"region\":\"us-east-1\",\"provider\":\"amazontitan\",\"model\":\"amazon.titan-embed-text-v2:0\"}"))
    )
);
Request examples
Run `PUT _inference/text_embedding/amazon_bedrock_embeddings` to create an inference endpoint that performs a text embedding task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-embed-text-v2:0"
    }
}
Run `PUT _inference/completion/amazon_bedrock_completion` to create an inference endpoint to perform a completion task.
{
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "AWS-access-key",
        "secret_key": "AWS-secret-key",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-premier-v1:0"
    }
}