Enable large language model (LLM) access

Stack | Serverless | Security

Elastic Security uses large language models (LLMs) for some of its advanced analytics features. To enable these features, you can connect a third-party LLM provider or a custom local LLM.

Important

Different LLMs perform differently depending on which features and use cases they power. For more information about how various models perform on different tasks in Elastic Security, refer to the Large language model performance matrix.

The Elastic Managed LLM is the default LLM connector available in the AI Assistant for eligible users. It provides immediate access to generative AI features without requiring any setup or external model integration.

The Elastic Managed LLM is available out of the box; no manual connector setup or API key management is required for initial use. However, if you prefer, you can configure and use a third-party LLM connector such as OpenAI, Azure OpenAI, or Amazon Bedrock.

To learn more about security and data privacy, refer to the connector documentation and download the model card.

Important

Using the Elastic Managed LLM incurs additional costs. Refer to Elastic Cloud pricing for more information.

Follow the provider-specific guides to connect to one or more third-party LLM providers, such as OpenAI, Azure OpenAI, or Amazon Bedrock.

You can also connect to LM Studio to use a custom LLM that you deploy and manage yourself.
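
Before creating the connector, it can help to confirm that the local server is reachable by querying its OpenAI-compatible API. This is an illustrative sketch, not part of the official setup guide; the address http://localhost:1234 is LM Studio's default and is an assumption here, so adjust the host and port to match your deployment.

```sh
# Sketch: list the models served by a local LM Studio instance.
# http://localhost:1234 is LM Studio's default server address (an assumption;
# adjust the host and port to match your deployment).
curl http://localhost:1234/v1/models
```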

Stack 9.0.0 | Serverless: Unavailable

Alternatively, you can set up a third-party LLM connector as a preconfigured connector.

If you use a preconfigured connector for your LLM, we recommend adding the exposeConfig: true parameter within the xpack.actions.preconfigured section of the kibana.yml config file. This parameter makes debugging easier by adding configuration information, including which large language model the connector uses, to the debug logs.
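
For illustration, here is a minimal kibana.yml sketch of a preconfigured OpenAI connector with exposeConfig enabled. The connector ID, name, model, and API key are placeholders, and the exact config and secrets fields depend on the connector type; refer to the preconfigured connectors documentation for the options your provider supports.

```yaml
xpack.actions.preconfigured:
  my-llm-connector:                  # placeholder connector ID
    name: Preconfigured OpenAI connector
    actionTypeId: .gen-ai            # connector type for OpenAI
    exposeConfig: true               # log connector configuration for easier debugging
    config:
      apiProvider: OpenAI
      apiUrl: https://api.openai.com/v1/chat/completions
      defaultModel: gpt-4o           # placeholder model name
    secrets:
      apiKey: <your-api-key>         # placeholder; keep secrets out of version control
```

With exposeConfig set, the Kibana debug logs include the connector's configuration (such as the model name), which helps when troubleshooting LLM-powered features.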