Delete an autoscaling policy
Generally available; Added in 7.11.0
NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
Query parameters
- master_timeout: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
- timeout: Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
DELETE /_autoscaling/policy/*
resp = client.autoscaling.delete_autoscaling_policy(
name="*",
)
const response = await client.autoscaling.deleteAutoscalingPolicy({
name: "*",
});
response = client.autoscaling.delete_autoscaling_policy(
name: "*"
)
$resp = $client->autoscaling()->deleteAutoscalingPolicy([
"name" => "*",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_autoscaling/policy/*"
client.autoscaling().deleteAutoscalingPolicy(d -> d
.name("*")
);
{
"acknowledged": true
}
Get behavioral analytics collections
Technical preview; Added in 8.8.0
GET _application/analytics/my*
resp = client.search_application.get_behavioral_analytics(
name="my*",
)
const response = await client.searchApplication.getBehavioralAnalytics({
name: "my*",
});
response = client.search_application.get_behavioral_analytics(
name: "my*"
)
$resp = $client->searchApplication()->getBehavioralAnalytics([
"name" => "my*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my*"
client.searchApplication().getBehavioralAnalytics(g -> g
.name("my*")
);
{
"my_analytics_collection": {
"event_data_stream": {
"name": "behavioral_analytics-events-my_analytics_collection"
}
},
"my_analytics_collection2": {
"event_data_stream": {
"name": "behavioral_analytics-events-my_analytics_collection2"
}
}
}
Create a behavioral analytics collection
Technical preview; Added in 8.8.0
PUT _application/analytics/my_analytics_collection
resp = client.search_application.put_behavioral_analytics(
name="my_analytics_collection",
)
const response = await client.searchApplication.putBehavioralAnalytics({
name: "my_analytics_collection",
});
response = client.search_application.put_behavioral_analytics(
name: "my_analytics_collection"
)
$resp = $client->searchApplication()->putBehavioralAnalytics([
"name" => "my_analytics_collection",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection"
client.searchApplication().putBehavioralAnalytics(p -> p
.name("my_analytics_collection")
);
Delete a behavioral analytics collection
Technical preview; Added in 8.8.0
DELETE _application/analytics/my_analytics_collection/
resp = client.search_application.delete_behavioral_analytics(
name="my_analytics_collection",
)
const response = await client.searchApplication.deleteBehavioralAnalytics({
name: "my_analytics_collection",
});
response = client.search_application.delete_behavioral_analytics(
name: "my_analytics_collection"
)
$resp = $client->searchApplication()->deleteBehavioralAnalytics([
"name" => "my_analytics_collection",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/analytics/my_analytics_collection/"
client.searchApplication().deleteBehavioralAnalytics(d -> d
.name("my_analytics_collection")
);
Compact and aligned text (CAT)
The compact and aligned text (CAT) APIs are intended only for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use a corresponding JSON API.
All the cat commands accept a help query string parameter to see all the headers and the information they provide, and the /_cat command alone lists all the available commands.
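For example, you can list the available commands or ask a specific command for its headers (a minimal Python sketch, assuming an already-configured client object; the console equivalents are GET /_cat and GET /_cat/aliases?help):
# List all available cat commands (GET /_cat)
print(client.cat.help())

# Show the headers and descriptions provided by one cat command
# (GET /_cat/aliases?help)
print(client.cat.aliases(help=True))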
Get aliases
Generally available
Get the cluster's index aliases, including filter and routing information. This API does not return data stream aliases.
IMPORTANT: CAT APIs are only intended for human consumption using the command line or the Kibana console. They are not intended for use by applications. For application consumption, use the aliases API.
Required authorization
- Index privileges:
view_index_metadata
Path parameters
- name: A comma-separated list of aliases to retrieve. Supports wildcards (*). To retrieve all aliases, omit this parameter or use * or _all.
Query parameters
- h: A comma-separated list of column names to display. It supports simple wildcards.
  Supported values include:
  - alias (or a): The name of the alias.
  - index (or i, idx): The name of the index the alias points to.
  - filter (or f, fi): The filter applied to the alias.
  - routing.index (or ri, routingIndex): Index routing value for the alias.
  - routing.search (or rs, routingSearch): Search routing value for the alias.
  - is_write_index (or w, isWriteIndex): Indicates if the index is the write index for the alias.
  Values are alias, a, index, i, idx, filter, f, fi, routing.index, ri, routingIndex, routing.search, rs, routingSearch, is_write_index, w, or isWriteIndex.
- s: A list of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- expand_wildcards: The type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. It supports comma-separated values, such as open,hidden.
  Supported values include:
  - all: Match any data stream or index, including hidden ones.
  - open: Match open, non-hidden indices. Also matches any non-hidden data stream.
  - closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
  - hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
  - none: Wildcard expressions are not accepted.
  Values are all, open, closed, hidden, or none.
- local: If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
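For instance, the h and s parameters described above can be combined to select and sort columns (a minimal Python sketch; the column names come from the supported values list):
# Show only the alias and index columns, sorted by alias name descending
resp = client.cat.aliases(
    h="alias,index",
    s="alias:desc",
    format="json",
)
print(resp)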
GET _cat/aliases?format=json&v=true
resp = client.cat.aliases(
format="json",
v=True,
)
const response = await client.cat.aliases({
format: "json",
v: "true",
});
response = client.cat.aliases(
format: "json",
v: "true"
)
$resp = $client->cat()->aliases([
"format" => "json",
"v" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/aliases?format=json&v=true"
client.cat().aliases();
[
{
"alias": "alias1",
"index": "test1",
"filter": "-",
"routing.index": "-",
"routing.search": "-",
"is_write_index": "true"
},
{
"alias": "alias1",
"index": "test1",
"filter": "*",
"routing.index": "-",
"routing.search": "-",
"is_write_index": "true"
},
{
"alias": "alias3",
"index": "test1",
"filter": "-",
"routing.index": "1",
"routing.search": "1",
"is_write_index": "true"
},
{
"alias": "alias4",
"index": "test1",
"filter": "-",
"routing.index": "2",
"routing.search": "1,2",
"is_write_index": "true"
}
]
Get datafeeds
Generally available; Added in 7.7.0
Get configuration and usage information about datafeeds.
This API returns a maximum of 10,000 datafeeds.
If the Elasticsearch security features are enabled, you must have monitor_ml, monitor, manage_ml, or manage
cluster privileges to use this API.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get datafeed statistics API.
Required authorization
- Cluster privileges:
monitor_ml
Query parameters
- allow_no_match: Specifies what to do when the request:
  - Contains wildcard expressions and there are no datafeeds that match.
  - Contains the _all string or no identifiers and there are no matches.
  - Contains wildcard expressions and there are only partial matches.
  If true, the API returns an empty datafeeds array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
- h: A comma-separated list of column names to display.
  Supported values include:
  - ae (or assignment_explanation): For started datafeeds only, contains messages relating to the selection of a node.
  - bc (or buckets.count, bucketsCount): The number of buckets processed.
  - id: A numerical character string that uniquely identifies the datafeed.
  - na (or node.address, nodeAddress): For started datafeeds only, the network address of the node where the datafeed is started.
  - ne (or node.ephemeral_id, nodeEphemeralId): For started datafeeds only, the ephemeral ID of the node where the datafeed is started.
  - ni (or node.id, nodeId): For started datafeeds only, the unique identifier of the node where the datafeed is started.
  - nn (or node.name, nodeName): For started datafeeds only, the name of the node where the datafeed is started.
  - sba (or search.bucket_avg, searchBucketAvg): The average search time per bucket, in milliseconds.
  - sc (or search.count, searchCount): The number of searches run by the datafeed.
  - seah (or search.exp_avg_hour, searchExpAvgHour): The exponential average search time per hour, in milliseconds.
  - st (or search.time, searchTime): The total time the datafeed spent searching, in milliseconds.
  - s (or state): The status of the datafeed: starting, started, stopping, or stopped. If starting, the datafeed has been requested to start but has not yet started. If started, the datafeed is actively receiving data. If stopping, the datafeed has been requested to stop gracefully and is completing its final action. If stopped, the datafeed is stopped and will not receive data until it is re-started.
- s: A comma-separated list of column names or column aliases used to sort the response. It supports the same values as the h parameter.
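As a hedged illustration of the h parameter above, this Python sketch requests only a few datafeed columns (column names taken from the supported values list):
# Show only the datafeed ID, state, and processed bucket count
resp = client.cat.ml_datafeeds(
    h="id,state,buckets.count",
    format="json",
)
print(resp)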
GET _cat/ml/datafeeds?v=true&format=json
resp = client.cat.ml_datafeeds(
v=True,
format="json",
)
const response = await client.cat.mlDatafeeds({
v: "true",
format: "json",
});
response = client.cat.ml_datafeeds(
v: "true",
format: "json"
)
$resp = $client->cat()->mlDatafeeds([
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/datafeeds?v=true&format=json"
client.cat().mlDatafeeds();
[
{
"id": "datafeed-high_sum_total_sales",
"state": "stopped",
"buckets.count": "743",
"search.count": "7"
},
{
"id": "datafeed-low_request_rate",
"state": "stopped",
"buckets.count": "1457",
"search.count": "3"
},
{
"id": "datafeed-response_code_rates",
"state": "stopped",
"buckets.count": "1460",
"search.count": "18"
},
{
"id": "datafeed-url_scanning",
"state": "stopped",
"buckets.count": "1460",
"search.count": "18"
}
]
Get trained models
Generally available; Added in 7.7.0
Get configuration and usage information about inference trained models.
IMPORTANT: CAT APIs are only intended for human consumption using the Kibana console or command line. They are not intended for use by applications. For application consumption, use the get trained models statistics API.
Required authorization
- Cluster privileges:
monitor_ml
Query parameters
- allow_no_match: Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; contains wildcard expressions and there are only partial matches. If true, the API returns an empty array when there are no matches and the subset of results when there are partial matches. If false, the API returns a 404 status code when there are no matches or only partial matches.
- h: A comma-separated list of column names to display.
  Supported values include:
  - create_time (or ct): The time when the trained model was created.
  - created_by (or c, createdBy): Information on the creator of the trained model.
  - data_frame_analytics_id (or df, dataFrameAnalytics, dfid): Identifier for the data frame analytics job that created the model. Only displayed if it is still available.
  - description (or d): The description of the trained model.
  - heap_size (or hs, modelHeapSize): The estimated heap size to keep the trained model in memory.
  - id: Identifier for the trained model.
  - ingest.count (or ic, ingestCount): The total number of documents that are processed by the model.
  - ingest.current (or icurr, ingestCurrent): The total number of documents that are currently being handled by the trained model.
  - ingest.failed (or if, ingestFailed): The total number of failed ingest attempts with the trained model.
  - ingest.pipelines (or ip, ingestPipelines): The total number of ingest pipelines that are referencing the trained model.
  - ingest.time (or it, ingestTime): The total time that is spent processing documents with the trained model.
  - license (or l): The license level of the trained model.
  - operations (or o, modelOperations): The estimated number of operations to use the trained model. This number helps to measure the computational complexity of the model.
  - version (or v): The Elasticsearch version number in which the trained model was created.
- s: A comma-separated list of column names or aliases used to sort the response. It supports the same values as the h parameter.
- from: Skips the specified number of trained models.
- size: The maximum number of trained models to display.
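A minimal sketch of paging with the from and size parameters described above (in the Python client, from is exposed as from_ because from is a reserved word):
# Fetch trained models 25 at a time, starting at the first model
resp = client.cat.ml_trained_models(
    from_=0,
    size=25,
    format="json",
)
print(resp)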
GET _cat/ml/trained_models?v=true&format=json
resp = client.cat.ml_trained_models(
v=True,
format="json",
)
const response = await client.cat.mlTrainedModels({
v: "true",
format: "json",
});
response = client.cat.ml_trained_models(
v: "true",
format: "json"
)
$resp = $client->cat()->mlTrainedModels([
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/ml/trained_models?v=true&format=json"
client.cat().mlTrainedModels();
[
{
"id": "ddddd-1580216177138",
"heap_size": "0b",
"operations": "196",
"create_time": "2025-03-25T00:01:38.662Z",
"type": "pytorch",
"ingest.pipelines": "0",
"data_frame.id": "__none__"
},
{
"id": "lang_ident_model_1",
"heap_size": "1mb",
"operations": "39629",
"create_time": "2019-12-05T12:28:34.594Z",
"type": "lang_ident",
"ingest.pipelines": "0",
"data_frame.id": "__none__"
}
]
Get node attribute information
Generally available
Get information about custom node attributes. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
Query parameters
- h: A list of columns to appear in the response. Supports simple wildcards.
- s: A list of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- local: If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
- master_timeout: Period to wait for a connection to the master node.
GET /_cat/nodeattrs?v=true&format=json
resp = client.cat.nodeattrs(
v=True,
format="json",
)
const response = await client.cat.nodeattrs({
v: "true",
format: "json",
});
response = client.cat.nodeattrs(
v: "true",
format: "json"
)
$resp = $client->cat()->nodeattrs([
"v" => "true",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/nodeattrs?v=true&format=json"
client.cat().nodeattrs();
[
{
"node": "node-0",
"host": "127.0.0.1",
"ip": "127.0.0.1",
"attr": "testattr",
"value": "test"
}
]
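The first response above shows the default columns; the second selects explicit columns. A request along the following lines (a hedged Python sketch) would produce the second form:
# Request explicit columns: node name, process ID, attribute, and value
resp = client.cat.nodeattrs(
    h="name,pid,attr,value",
    v=True,
    format="json",
)
print(resp)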
[
{
"name": "node-0",
"pid": "19566",
"attr": "testattr",
"value": "test"
}
]
Get plugin information
Generally available
Get a list of plugins running on each node of a cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the nodes info API.
Required authorization
- Cluster privileges:
monitor
Query parameters
- h: A list of columns to appear in the response. Supports simple wildcards.
- s: A list of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- include_bootstrap: Include bootstrap plugins in the response.
- local: If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
- master_timeout: Period to wait for a connection to the master node.
GET /_cat/plugins?v=true&s=component&h=name,component,version,description&format=json
resp = client.cat.plugins(
v=True,
s="component",
h="name,component,version,description",
format="json",
)
const response = await client.cat.plugins({
v: "true",
s: "component",
h: "name,component,version,description",
format: "json",
});
response = client.cat.plugins(
v: "true",
s: "component",
h: "name,component,version,description",
format: "json"
)
$resp = $client->cat()->plugins([
"v" => "true",
"s" => "component",
"h" => "name,component,version,description",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/plugins?v=true&s=component&h=name,component,version,description&format=json"
client.cat().plugins();
[
{ "name": "U7321H6", "component": "analysis-icu", "version": "8.17.0", "description": "The ICU Analysis plugin integrates the Lucene ICU module into Elasticsearch, adding ICU-related analysis components."},
{"name": "U7321H6", "component": "analysis-kuromoji", "verison": "8.17.0", description: "The Japanese (kuromoji) Analysis plugin integrates Lucene kuromoji analysis module into elasticsearch."},
{"name" "U7321H6", "component": "analysis-nori", "version": "8.17.0", "description": "The Korean (nori) Analysis plugin integrates Lucene nori analysis module into elasticsearch."},
{"name": "U7321H6", "component": "analysis-phonetic", "verison": "8.17.0", "description": "The Phonetic Analysis plugin integrates phonetic token filter analysis with elasticsearch."},
{"name": "U7321H6", "component": "analysis-smartcn", "verison": "8.17.0", "description": "Smart Chinese Analysis plugin integrates Lucene Smart Chinese analysis module into elasticsearch."},
{"name": "U7321H6", "component": "analysis-stempel", "verison": "8.17.0", "description": "The Stempel (Polish) Analysis plugin integrates Lucene stempel (polish) analysis module into elasticsearch."},
{"name": "U7321H6", "component": "analysis-ukrainian", "verison": "8.17.0", "description": "The Ukrainian Analysis plugin integrates the Lucene UkrainianMorfologikAnalyzer into elasticsearch."},
{"name": "U7321H6", "component": "discovery-azure-classic", "verison": "8.17.0", "description": "The Azure Classic Discovery plugin allows to use Azure Classic API for the unicast discovery mechanism"},
{"name": "U7321H6", "component": "discovery-ec2", "verison": "8.17.0", "description": "The EC2 discovery plugin allows to use AWS API for the unicast discovery mechanism."},
{"name": "U7321H6", "component": "discovery-gce", "verison": "8.17.0", "description": "The Google Compute Engine (GCE) Discovery plugin allows to use GCE API for the unicast discovery mechanism."},
{"name": "U7321H6", "component": "mapper-annotated-text", "verison": "8.17.0", "description": "The Mapper Annotated_text plugin adds support for text fields with markup used to inject annotation tokens into the index."},
{"name": "U7321H6", "component": "mapper-murmur3", "verison": "8.17.0", "description": "The Mapper Murmur3 plugin allows to compute hashes of a field's values at index-time and to store them in the index."},
{"name": "U7321H6", "component": "mapper-size", "verison": "8.17.0", "description": "The Mapper Size plugin allows document to record their uncompressed size at index time."},
{"name": "U7321H6", "component": "store-smb", "verison": "8.17.0", "description": "The Store SMB plugin adds support for SMB stores."}
]
Get snapshot information
Generally available; Added in 2.1.0
Get information about the snapshots stored in one or more repositories. A snapshot is a backup of an index or running Elasticsearch cluster. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get snapshot API.
Required authorization
- Cluster privileges:
monitor_snapshot
Path parameters
- repository: A comma-separated list of snapshot repositories used to limit the request. Accepts wildcard expressions. _all returns all repositories. If any repository fails during the request, Elasticsearch returns an error.
Query parameters
- h: A comma-separated list of column names to display. It supports simple wildcards.
  Supported values include:
  - id (or snapshot): The ID of the snapshot, such as 'snap1'.
  - repository (or re, repo): The name of the repository, such as 'repo1'.
  - status (or s): State of the snapshot process. Returned values are: 'FAILED': The snapshot process failed. 'INCOMPATIBLE': The snapshot process is incompatible with the current cluster version. 'IN_PROGRESS': The snapshot process started but has not completed. 'PARTIAL': The snapshot process completed with a partial success. 'SUCCESS': The snapshot process completed with a full success.
  - start_epoch (or ste, startEpoch): The Unix epoch time at which the snapshot process started.
  - start_time (or sti, startTime): 'HH:MM:SS' time at which the snapshot process started.
  - end_epoch (or ete, endEpoch): The Unix epoch time at which the snapshot process ended.
  - end_time (or eti, endTime): 'HH:MM:SS' time at which the snapshot process ended.
  - duration (or dur): The time it took the snapshot process to complete, in time units.
  - indices (or i): The number of indices in the snapshot.
  - successful_shards (or ss): The number of successful shards in the snapshot.
  - failed_shards (or fs): The number of failed shards in the snapshot.
  - total_shards (or ts): The total number of shards in the snapshot.
  - reason (or r): The reason for any snapshot failures.
  Values are id, snapshot, repository, re, repo, status, s, start_epoch, ste, startEpoch, start_time, sti, startTime, end_epoch, ete, endEpoch, end_time, eti, endTime, duration, dur, indices, i, successful_shards, ss, failed_shards, fs, total_shards, ts, reason, or r.
- s: A list of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- master_timeout: Period to wait for a connection to the master node.
GET /_cat/snapshots/repo1?v=true&s=id&format=json
resp = client.cat.snapshots(
repository="repo1",
v=True,
s="id",
format="json",
)
const response = await client.cat.snapshots({
repository: "repo1",
v: "true",
s: "id",
format: "json",
});
response = client.cat.snapshots(
repository: "repo1",
v: "true",
s: "id",
format: "json"
)
$resp = $client->cat()->snapshots([
"repository" => "repo1",
"v" => "true",
"s" => "id",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/snapshots/repo1?v=true&s=id&format=json"
client.cat().snapshots();
[
{
"id": "snap1",
"repository": "repo1",
"status": "FAILED",
"start_epoch": "1445616705",
"start_time": "18:11:45",
"end_epoch": "1445616978",
"end_time": "18:16:18",
"duration": "4.6m",
"indices": "1",
"successful_shards": "4",
"failed_shards": "1",
"total_shards": "5"
},
{
"id": "snap2",
"repository": "repo1",
"status": "SUCCESS",
"start_epoch": "1445634298",
"start_time": "23:04:58",
"end_epoch": "1445634672",
"end_time": "23:11:12",
"duration": "6.2m",
"indices": "2",
"successful_shards": "10",
"failed_shards": "0",
"total_shards": "10"
}
]
Get index template information
Generally available; Added in 5.2.0
Get information about the index templates in a cluster. You can use index templates to apply index settings and field mappings to new indices at creation. IMPORTANT: cat APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get index template API.
Required authorization
- Cluster privileges:
monitor
Path parameters
- name: The name of the template to return. Accepts wildcard expressions. If omitted, all templates are returned.
Query parameters
- h: A list of columns to appear in the response. Supports simple wildcards.
- s: A list of columns that determine how the table should be sorted. Sorting defaults to ascending and can be changed by setting :asc or :desc as a suffix to the column name.
- local: If true, the request computes the list of selected nodes from the local cluster state. If false, the list of selected nodes is computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node.
- master_timeout: Period to wait for a connection to the master node.
GET _cat/templates/my-template-*?v=true&s=name&format=json
resp = client.cat.templates(
name="my-template-*",
v=True,
s="name",
format="json",
)
const response = await client.cat.templates({
name: "my-template-*",
v: "true",
s: "name",
format: "json",
});
response = client.cat.templates(
name: "my-template-*",
v: "true",
s: "name",
format: "json"
)
$resp = $client->cat()->templates([
"name" => "my-template-*",
"v" => "true",
"s" => "name",
"format" => "json",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_cat/templates/my-template-*?v=true&s=name&format=json"
client.cat().templates();
[
{
"name": "my-template-0",
"index_patterns": "[te*]",
"order": "500",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-1",
"index_patterns": "[tea*]",
"order": "501",
"version": null,
"composed_of": "[]"
},
{
"name": "my-template-2",
"index_patterns": "[teak*]",
"order": "502",
"version": "7",
"composed_of": "[]"
}
]
Check in a connector
Technical preview; Added in 8.12.0
Update the last_seen field in the connector and set it to the current timestamp.
PUT _connector/my-connector/_check_in
resp = client.connector.check_in(
connector_id="my-connector",
)
const response = await client.connector.checkIn({
connector_id: "my-connector",
});
response = client.connector.check_in(
connector_id: "my-connector"
)
$resp = $client->connector()->checkIn([
"connector_id" => "my-connector",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/my-connector/_check_in"
client.connector().checkIn(c -> c
.connectorId("my-connector")
);
{
"result": "updated"
}
Get all connectors
Beta; Added in 8.12.0
Get information about all connectors.
Query parameters
- from: Starting offset.
- size: Specifies a maximum number of results to get.
- index_name: A comma-separated list of connector index names to fetch connector documents for.
- connector_name: A comma-separated list of connector names to fetch connector documents for.
- service_type: A comma-separated list of connector service types to fetch connector documents for.
- query: A wildcard query string that filters connectors with matching name, description or index name.
GET _connector
resp = client.connector.list()
const response = await client.connector.list();
response = client.connector.list
$resp = $client->connector()->list();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector"
client.connector().list(l -> l);
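The query parameters listed above can narrow the listing. A hedged Python sketch (the index name and service type values are illustrative only):
# List up to 10 connectors backed by a given index and service type
resp = client.connector.list(
    index_name="search-google-drive",
    service_type="google_drive",
    size=10,
)
print(resp)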
Cancel a connector sync job
Beta; Added in 8.12.0
Cancel a connector sync job, which sets the status to cancelling and updates cancellation_requested_at to the current time.
The connector service is then responsible for setting the status of connector sync jobs to cancelled.
PUT _connector/_sync_job/my-connector-sync-job-id/_cancel
resp = client.connector.sync_job_cancel(
connector_sync_job_id="my-connector-sync-job-id",
)
const response = await client.connector.syncJobCancel({
connector_sync_job_id: "my-connector-sync-job-id",
});
response = client.connector.sync_job_cancel(
connector_sync_job_id: "my-connector-sync-job-id"
)
$resp = $client->connector()->syncJobCancel([
"connector_sync_job_id" => "my-connector-sync-job-id",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job-id/_cancel"
client.connector().syncJobCancel(s -> s
.connectorSyncJobId("my-connector-sync-job-id")
);
Check in a connector sync job
Technical preview
Check in a connector sync job and set the last_seen field to the current time before updating it in the internal index.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
PUT _connector/_sync_job/my-connector-sync-job/_check_in
resp = client.connector.sync_job_check_in(
connector_sync_job_id="my-connector-sync-job",
)
const response = await client.connector.syncJobCheckIn({
connector_sync_job_id: "my-connector-sync-job",
});
response = client.connector.sync_job_check_in(
connector_sync_job_id: "my-connector-sync-job"
)
$resp = $client->connector()->syncJobCheckIn([
"connector_sync_job_id" => "my-connector-sync-job",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_check_in"
client.connector().syncJobCheckIn(s -> s
.connectorSyncJobId("my-connector-sync-job")
);
Set a connector sync job error
Technical preview
Set the error field for a connector sync job and set its status to error.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
PUT _connector/_sync_job/my-connector-sync-job/_error
{
"error": "some-error"
}
resp = client.connector.sync_job_error(
connector_sync_job_id="my-connector-sync-job",
error="some-error",
)
const response = await client.connector.syncJobError({
connector_sync_job_id: "my-connector-sync-job",
error: "some-error",
});
response = client.connector.sync_job_error(
connector_sync_job_id: "my-connector-sync-job",
body: {
"error": "some-error"
}
)
$resp = $client->connector()->syncJobError([
"connector_sync_job_id" => "my-connector-sync-job",
"body" => [
"error" => "some-error",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"error":"some-error"}' "$ELASTICSEARCH_URL/_connector/_sync_job/my-connector-sync-job/_error"
client.connector().syncJobError(s -> s
.connectorSyncJobId("my-connector-sync-job")
.error("some-error")
);
{
"error": "some-error"
}
Create a connector sync job
Beta; Added in 8.12.0
Create a connector sync job document in the internal index and initialize its counters and timestamps with default values.
POST _connector/_sync_job
{
"id": "connector-id",
"job_type": "full",
"trigger_method": "on_demand"
}
resp = client.connector.sync_job_post(
id="connector-id",
job_type="full",
trigger_method="on_demand",
)
const response = await client.connector.syncJobPost({
id: "connector-id",
job_type: "full",
trigger_method: "on_demand",
});
response = client.connector.sync_job_post(
body: {
"id": "connector-id",
"job_type": "full",
"trigger_method": "on_demand"
}
)
$resp = $client->connector()->syncJobPost([
"body" => [
"id" => "connector-id",
"job_type" => "full",
"trigger_method" => "on_demand",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"id":"connector-id","job_type":"full","trigger_method":"on_demand"}' "$ELASTICSEARCH_URL/_connector/_sync_job"
client.connector().syncJobPost(s -> s
.id("connector-id")
.jobType(SyncJobType.Full)
.triggerMethod(SyncJobTriggerMethod.OnDemand)
);
{
"id": "connector-id",
"job_type": "full",
"trigger_method": "on_demand"
}
Update the connector features
Technical preview
Update the connector features in the connector document. This API can be used to control the following aspects of a connector:
- document-level security
- incremental syncs
- advanced sync rules
- basic sync rules
Normally, the running connector service automatically manages these features. However, you can use this API to override the default behavior.
To sync data using self-managed connectors, you need to deploy the Elastic connector service on your own infrastructure. This service runs automatically on Elastic Cloud for Elastic managed connectors.
PUT _connector/my-connector/_features
{
"features": {
"document_level_security": {
"enabled": true
},
"incremental_sync": {
"enabled": true
},
"sync_rules": {
"advanced": {
"enabled": false
},
"basic": {
"enabled": true
}
}
}
}
resp = client.connector.update_features(
connector_id="my-connector",
features={
"document_level_security": {
"enabled": True
},
"incremental_sync": {
"enabled": True
},
"sync_rules": {
"advanced": {
"enabled": False
},
"basic": {
"enabled": True
}
}
},
)
const response = await client.connector.updateFeatures({
connector_id: "my-connector",
features: {
document_level_security: {
enabled: true,
},
incremental_sync: {
enabled: true,
},
sync_rules: {
advanced: {
enabled: false,
},
basic: {
enabled: true,
},
},
},
});
response = client.connector.update_features(
connector_id: "my-connector",
body: {
"features": {
"document_level_security": {
"enabled": true
},
"incremental_sync": {
"enabled": true
},
"sync_rules": {
"advanced": {
"enabled": false
},
"basic": {
"enabled": true
}
}
}
}
)
$resp = $client->connector()->updateFeatures([
"connector_id" => "my-connector",
"body" => [
"features" => [
"document_level_security" => [
"enabled" => true,
],
"incremental_sync" => [
"enabled" => true,
],
"sync_rules" => [
"advanced" => [
"enabled" => false,
],
"basic" => [
"enabled" => true,
],
],
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"features":{"document_level_security":{"enabled":true},"incremental_sync":{"enabled":true},"sync_rules":{"advanced":{"enabled":false},"basic":{"enabled":true}}}}' "$ELASTICSEARCH_URL/_connector/my-connector/_features"
client.connector().updateFeatures(u -> u
.connectorId("my-connector")
.features(f -> f
.documentLevelSecurity(d -> d
.enabled(true)
)
.incrementalSync(i -> i
.enabled(true)
)
.syncRules(s -> s
.advanced(a -> a
.enabled(false)
)
.basic(b -> b
.enabled(true)
)
)
)
);
{
"features": {
"document_level_security": {
"enabled": true
},
"incremental_sync": {
"enabled": true
},
"sync_rules": {
"advanced": {
"enabled": false
},
"basic": {
"enabled": true
}
}
}
}
{
"features": {
"document_level_security": {
"enabled": true
}
}
}
{
"result": "updated"
}
Update the connector filtering
Beta; Added in 8.12.0
Update the draft filtering configuration of a connector and mark the draft validation state as edited. The filtering draft is activated once validated by the running Elastic connector service. The filtering property is used to configure sync rules (both basic and advanced) for a connector.
PUT _connector/my-g-drive-connector/_filtering
{
"rules": [
{
"field": "file_extension",
"id": "exclude-txt-files",
"order": 0,
"policy": "exclude",
"rule": "equals",
"value": "txt"
},
{
"field": "_",
"id": "DEFAULT",
"order": 1,
"policy": "include",
"rule": "regex",
"value": ".*"
}
]
}
resp = client.connector.update_filtering(
connector_id="my-g-drive-connector",
rules=[
{
"field": "file_extension",
"id": "exclude-txt-files",
"order": 0,
"policy": "exclude",
"rule": "equals",
"value": "txt"
},
{
"field": "_",
"id": "DEFAULT",
"order": 1,
"policy": "include",
"rule": "regex",
"value": ".*"
}
],
)
const response = await client.connector.updateFiltering({
connector_id: "my-g-drive-connector",
rules: [
{
field: "file_extension",
id: "exclude-txt-files",
order: 0,
policy: "exclude",
rule: "equals",
value: "txt",
},
{
field: "_",
id: "DEFAULT",
order: 1,
policy: "include",
rule: "regex",
value: ".*",
},
],
});
response = client.connector.update_filtering(
connector_id: "my-g-drive-connector",
body: {
"rules": [
{
"field": "file_extension",
"id": "exclude-txt-files",
"order": 0,
"policy": "exclude",
"rule": "equals",
"value": "txt"
},
{
"field": "_",
"id": "DEFAULT",
"order": 1,
"policy": "include",
"rule": "regex",
"value": ".*"
}
]
}
)
$resp = $client->connector()->updateFiltering([
"connector_id" => "my-g-drive-connector",
"body" => [
"rules" => array(
[
"field" => "file_extension",
"id" => "exclude-txt-files",
"order" => 0,
"policy" => "exclude",
"rule" => "equals",
"value" => "txt",
],
[
"field" => "_",
"id" => "DEFAULT",
"order" => 1,
"policy" => "include",
"rule" => "regex",
"value" => ".*",
],
),
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"field":"file_extension","id":"exclude-txt-files","order":0,"policy":"exclude","rule":"equals","value":"txt"},{"field":"_","id":"DEFAULT","order":1,"policy":"include","rule":"regex","value":".*"}]}' "$ELASTICSEARCH_URL/_connector/my-g-drive-connector/_filtering"
client.connector().updateFiltering(u -> u
.connectorId("my-g-drive-connector")
.rules(List.of(FilteringRule.of(f -> f
.field("file_extension")
.id("exclude-txt-files")
.order(0)
.policy(FilteringPolicy.Exclude)
.rule(FilteringRuleRule.Equals)
.value("txt")),FilteringRule.of(f -> f
.field("_")
.id("DEFAULT")
.order(1)
.policy(FilteringPolicy.Include)
.rule(FilteringRuleRule.Regex)
.value(".*"))))
);
{
"rules": [
{
"field": "file_extension",
"id": "exclude-txt-files",
"order": 0,
"policy": "exclude",
"rule": "equals",
"value": "txt"
},
{
"field": "_",
"id": "DEFAULT",
"order": 1,
"policy": "include",
"rule": "regex",
"value": ".*"
}
]
}
{
"advanced_snippet": {
"value": [{
"tables": [
"users",
"orders"
],
"query": "SELECT users.id AS id, orders.order_id AS order_id FROM users JOIN orders ON users.id = orders.user_id"
}]
}
}
{
"result": "updated"
}
Update the connector index name
Beta; Added in 8.12.0
Update the index_name field of a connector, specifying the index where the data ingested by the connector is stored.
PUT _connector/my-connector/_index_name
{
"index_name": "data-from-my-google-drive"
}
resp = client.connector.update_index_name(
connector_id="my-connector",
index_name="data-from-my-google-drive",
)
const response = await client.connector.updateIndexName({
connector_id: "my-connector",
index_name: "data-from-my-google-drive",
});
response = client.connector.update_index_name(
connector_id: "my-connector",
body: {
"index_name": "data-from-my-google-drive"
}
)
$resp = $client->connector()->updateIndexName([
"connector_id" => "my-connector",
"body" => [
"index_name" => "data-from-my-google-drive",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index_name":"data-from-my-google-drive"}' "$ELASTICSEARCH_URL/_connector/my-connector/_index_name"
client.connector().updateIndexName(u -> u
.connectorId("my-connector")
.indexName("data-from-my-google-drive")
);
{
"index_name": "data-from-my-google-drive"
}
{
"result": "updated"
}
Update the connector is_native flag
Beta; Added in 8.12.0
Update the is_native flag of a connector.
curl \
--request PUT 'http://api.example.com/_connector/{connector_id}/_native' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"is_native":true}'
Update the connector pipeline
Beta; Added in 8.12.0
When you create a new connector, the configuration of an ingest pipeline is populated with default settings.
PUT _connector/my-connector/_pipeline
{
"pipeline": {
"extract_binary_content": true,
"name": "my-connector-pipeline",
"reduce_whitespace": true,
"run_ml_inference": true
}
}
resp = client.connector.update_pipeline(
connector_id="my-connector",
pipeline={
"extract_binary_content": True,
"name": "my-connector-pipeline",
"reduce_whitespace": True,
"run_ml_inference": True
},
)
const response = await client.connector.updatePipeline({
connector_id: "my-connector",
pipeline: {
extract_binary_content: true,
name: "my-connector-pipeline",
reduce_whitespace: true,
run_ml_inference: true,
},
});
response = client.connector.update_pipeline(
connector_id: "my-connector",
body: {
"pipeline": {
"extract_binary_content": true,
"name": "my-connector-pipeline",
"reduce_whitespace": true,
"run_ml_inference": true
}
}
)
$resp = $client->connector()->updatePipeline([
"connector_id" => "my-connector",
"body" => [
"pipeline" => [
"extract_binary_content" => true,
"name" => "my-connector-pipeline",
"reduce_whitespace" => true,
"run_ml_inference" => true,
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"pipeline":{"extract_binary_content":true,"name":"my-connector-pipeline","reduce_whitespace":true,"run_ml_inference":true}}' "$ELASTICSEARCH_URL/_connector/my-connector/_pipeline"
client.connector().updatePipeline(u -> u
.connectorId("my-connector")
.pipeline(p -> p
.extractBinaryContent(true)
.name("my-connector-pipeline")
.reduceWhitespace(true)
.runMlInference(true)
)
);
{
"pipeline": {
"extract_binary_content": true,
"name": "my-connector-pipeline",
"reduce_whitespace": true,
"run_ml_inference": true
}
}
{
"result": "updated"
}
Resume an auto-follow pattern
Generally available; Added in 7.5.0
Resume a cross-cluster replication auto-follow pattern that was paused. The auto-follow pattern will resume configuring following indices for newly created indices that match its patterns on the remote cluster. Remote indices created while the pattern was paused will also be followed unless they have been deleted or closed in the interim.
Required authorization
- Cluster privileges:
manage_ccr
Query parameters
- master_timeout: The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
POST /_ccr/auto_follow/my_auto_follow_pattern/resume
resp = client.ccr.resume_auto_follow_pattern(
name="my_auto_follow_pattern",
)
const response = await client.ccr.resumeAutoFollowPattern({
name: "my_auto_follow_pattern",
});
response = client.ccr.resume_auto_follow_pattern(
name: "my_auto_follow_pattern"
)
$resp = $client->ccr()->resumeAutoFollowPattern([
"name" => "my_auto_follow_pattern",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ccr/auto_follow/my_auto_follow_pattern/resume"
client.ccr().resumeAutoFollowPattern(r -> r
.name("my_auto_follow_pattern")
);
{
"acknowledged" : true
}
Downsample an index
Technical preview; Added in 8.5.0
Aggregate a time series (TSDS) index and store pre-computed statistical summaries (min, max, sum, value_count and avg) for each metric field grouped by a configured time interval.
For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index.
All documents within an hour interval are summarized and stored as a single document in the downsample index.
NOTE: Only indices in a time series data stream are supported.
Neither field nor document level security can be defined on the source index.
The source index must be read only (index.blocks.write: true).
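Because the source index must be read only, a write block is typically added first. A minimal Python sketch using the add index block API (the index name matches the example that follows):
# Make the source index read only before downsampling
resp = client.indices.add_block(
    index="my-time-series-index",
    block="write",
)
print(resp)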
POST /my-time-series-index/_downsample/my-downsampled-time-series-index
{
"fixed_interval": "1d"
}
resp = client.indices.downsample(
index="my-time-series-index",
target_index="my-downsampled-time-series-index",
config={
"fixed_interval": "1d"
},
)
const response = await client.indices.downsample({
index: "my-time-series-index",
target_index: "my-downsampled-time-series-index",
config: {
fixed_interval: "1d",
},
});
response = client.indices.downsample(
index: "my-time-series-index",
target_index: "my-downsampled-time-series-index",
body: {
"fixed_interval": "1d"
}
)
$resp = $client->indices()->downsample([
"index" => "my-time-series-index",
"target_index" => "my-downsampled-time-series-index",
"body" => [
"fixed_interval" => "1d",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"fixed_interval":"1d"}' "$ELASTICSEARCH_URL/my-time-series-index/_downsample/my-downsampled-time-series-index"
client.indices().downsample(d -> d
.index("my-time-series-index")
.targetIndex("my-downsampled-time-series-index")
.config(c -> c
.fixedInterval(f -> f
.time("1d")
)
)
);
{
"fixed_interval": "1d"
}
Get the status for a data stream lifecycle
Generally available; Added in 8.11.0
Get information about an index or data stream's current data stream lifecycle status, such as time since index creation, time since rollover, the lifecycle configuration managing the index, or any errors encountered during lifecycle execution.
Query parameters
- include_defaults: Indicates if the API should return the default values the system uses for the index's lifecycle.
- master_timeout: Specify the timeout for the connection to the master node.
GET .ds-metrics-2023.03.22-000001/_lifecycle/explain
resp = client.indices.explain_data_lifecycle(
index=".ds-metrics-2023.03.22-000001",
)
const response = await client.indices.explainDataLifecycle({
index: ".ds-metrics-2023.03.22-000001",
});
response = client.indices.explain_data_lifecycle(
index: ".ds-metrics-2023.03.22-000001"
)
$resp = $client->indices()->explainDataLifecycle([
"index" => ".ds-metrics-2023.03.22-000001",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-metrics-2023.03.22-000001/_lifecycle/explain"
client.indices().explainDataLifecycle(e -> e
.index(".ds-metrics-2023.03.22-000001")
);
{
"indices": {
".ds-metrics-2023.03.22-000001": {
"index" : ".ds-metrics-2023.03.22-000001",
"managed_by_lifecycle" : true,
"index_creation_date_millis" : 1679475563571,
"time_since_index_creation" : "843ms",
"rollover_date_millis" : 1679475564293,
"time_since_rollover" : "121ms",
"lifecycle" : { },
"generation_time" : "121ms"
}
}
}
{
"indices": {
".ds-metrics-2023.03.22-000001": {
"index" : ".ds-metrics-2023.03.22-000001",
"managed_by_lifecycle" : true,
"index_creation_date_millis" : 1679475563571,
"time_since_index_creation" : "843ms",
"lifecycle" : {
"enabled": true
},
"error": "{\"type\":\"validation_exception\",\"reason\":\"Validation Failed: 1: this action would add [2] shards, but this cluster
currently has [4]/[3] maximum normal shards open;\"}"
}
}
Update data stream settings
Generally available
This API can be used to override settings on specific data streams. These overrides will take precedence over what is specified in the template that the data stream matches. To prevent your data stream from getting into an invalid state, only certain settings are allowed. If possible, the setting change is applied to all backing indices. Otherwise, it will be applied when the data stream is next rolled over.
Required authorization
- Index privileges:
manage
Query parameters
- dry_run: If true, the request does not actually change the settings on any data streams or indices. Instead, it simulates changing the settings and reports back to the user what would have happened had these settings actually been applied.
- master_timeout: The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
- timeout: The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
Body (required)
PUT _data_stream/my-data-stream/_settings
{
  "index.lifecycle.name": "new-test-policy",
  "index.number_of_shards": 11
}
resp = client.indices.put_data_stream_settings(
name="my-data-stream",
settings={
"index.lifecycle.name": "new-test-policy",
"index.number_of_shards": 11
},
)
const response = await client.indices.putDataStreamSettings({
name: "my-data-stream",
settings: {
"index.lifecycle.name": "new-test-policy",
"index.number_of_shards": 11,
},
});
response = client.indices.put_data_stream_settings(
name: "my-data-stream",
body: {
"index.lifecycle.name": "new-test-policy",
"index.number_of_shards": 11
}
)
$resp = $client->indices()->putDataStreamSettings([
"name" => "my-data-stream",
"body" => [
"index.lifecycle.name" => "new-test-policy",
"index.number_of_shards" => 11,
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index.lifecycle.name":"new-test-policy","index.number_of_shards":11}' "$ELASTICSEARCH_URL/_data_stream/my-data-stream/_settings"
{
"index.lifecycle.name" : "new-test-policy",
"index.number_of_shards": 11
}
{
"data_streams": [
{
"name": "my-data-stream",
"applied_to_data_stream": true,
"settings": {
"index": {
"lifecycle": {
"name": "new-test-policy"
},
"number_of_shards": "11"
}
},
"effective_settings": {
"index": {
"lifecycle": {
"name": "new-test-policy"
},
"mode": "standard",
"number_of_shards": "11",
"number_of_replicas": "0"
}
},
"index_settings_results": {
"applied_to_data_stream_only": [
"index.number_of_shards"
],
"applied_to_data_stream_and_backing_indices": [
"index.lifecycle.name"
]
}
}
]
}
{
"data_streams": [
{
"name": "my-data-stream",
"applied_to_data_stream": true,
"settings": {
"index": {
"lifecycle": {
"name": "new-test-policy"
},
"number_of_shards": "11"
}
},
"effective_settings": {
"index": {
"lifecycle": {
"name": "new-test-policy"
},
"mode": "standard",
"number_of_shards": "11",
"number_of_replicas": "0"
}
},
"index_settings_results": {
"applied_to_data_stream_only": [
"index.number_of_shards"
],
"applied_to_data_stream_and_backing_indices": [
"index.lifecycle.name"
],
"errors": [
{
"index": ".ds-my-data-stream-2025.05.28-000001",
"error": "index [.ds-my-data-stream-2025.05.28-000001] blocked by: [FORBIDDEN/9/index metadata (api)];"
}
]
}
}
]
}
{
"data_streams": [
{
"name": "my-data-stream",
"applied_to_data_stream": false,
"error": "Cannot set the following settings on a data stream: [index.number_of_replicas]",
"settings": {},
"effective_settings": {},
"index_settings_results": {
"applied_to_data_stream_only": [],
"applied_to_data_stream_and_backing_indices": []
}
}
]
}
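As a hedged sketch, the dry_run parameter described above can preview a settings change without applying it (assuming your client version exposes dry_run as a keyword argument):
# Simulate the settings change and report what would have happened
resp = client.indices.put_data_stream_settings(
    name="my-data-stream",
    settings={
        "index.lifecycle.name": "new-test-policy"
    },
    dry_run=True,
)
print(resp)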
Get a document by its ID
Generally available
Get a document and its source or stored fields from an index.
By default, this API is realtime and is not affected by the refresh rate of the index (when data will become visible for search).
In the case where stored fields are requested with the stored_fields parameter and the document has been updated but is not yet refreshed, the API will have to parse and analyze the source to extract the stored fields.
To turn off realtime behavior, set the realtime parameter to false.
Source filtering
By default, the API returns the contents of the _source field unless you have used the stored_fields parameter or the _source field is turned off.
You can turn off _source retrieval by using the _source parameter:
GET my-index-000001/_doc/0?_source=false
If you only need one or two fields from the _source, use the _source_includes or _source_excludes parameters to include or filter out particular fields.
This can be helpful with large documents where partial retrieval can save on network overhead.
Both parameters take a comma-separated list of fields or wildcard expressions.
For example:
GET my-index-000001/_doc/0?_source_includes=*.id&_source_excludes=entities
If you only want to specify includes, you can use a shorter notation:
GET my-index-000001/_doc/0?_source=*.id
Routing
If routing is used during indexing, the routing value also needs to be specified to retrieve a document. For example:
GET my-index-000001/_doc/2?routing=user1
This request gets the document with ID 2, but it is routed based on the user. The document is not fetched if the correct routing is not specified.
Distributed
The GET operation is hashed into a specific shard ID. It is then redirected to one of the replicas within that shard ID and returns the result. The replicas are the primary shard and its replicas within that shard ID group. This means that the more replicas you have, the better your GET scaling will be.
Versioning support
You can use the version parameter to retrieve the document only if its current version is equal to the specified one.
Internally, Elasticsearch has marked the old document as deleted and added an entirely new document. The old version of the document doesn't disappear immediately, although you won't be able to access it. Elasticsearch cleans up deleted documents in the background as you continue to index more data.
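A minimal sketch of the version parameter described above (assuming the document exists and its current version is 1; a version conflict error is returned otherwise):
# Retrieve the document only if its current version is 1
resp = client.get(
    index="my-index-000001",
    id="1",
    version=1,
)
print(resp)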
Required authorization
- Index privileges:
read
Query parameters
-
The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
If it is set to _local, the operation will prefer to be run on a locally allocated shard when possible. If it is set to a custom value, the value is used to guarantee that the same shards will be used for the same custom value. This can help with "jumping values" when hitting different shards in different refresh states. A sample value can be something like the web session ID or the user name (see the sketch after this list).
-
If true, the request is real-time as opposed to near-real-time.
-
If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
-
A custom value used to route operations to a specific shard.
-
Indicates whether to return the _source field (true or false) or lists the fields to return.
-
A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the _source_includes query parameter. If the _source parameter is false, this parameter is ignored.
-
A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the _source_excludes query parameter. If the _source parameter is false, this parameter is ignored.
-
A comma-separated list of stored fields to return as part of a hit. If no fields are specified, no stored fields are included in the response. If this field is specified, the _source parameter defaults to false. Only leaf fields can be retrieved with the stored_fields option. Object fields can't be returned; if specified, the request fails.
-
The version number for concurrency control. It must match the current version of the document for the request to succeed.
-
The version type.
Supported values include:
internal: Use internal versioning that starts at 1 and increments with each update or delete.
external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
Values are internal, external, or external_gte.
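As a sketch of the preference parameter described above, the following pins replica selection to a custom value so that repeated reads hit the same shard copies (the session ID is illustrative):
GET my-index-000001/_doc/1?preference=my-session-id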
GET my-index-000001/_doc/1?stored_fields=tags,counter
resp = client.get(
index="my-index-000001",
id="1",
stored_fields="tags,counter",
)
const response = await client.get({
index: "my-index-000001",
id: 1,
stored_fields: "tags,counter",
});
response = client.get(
index: "my-index-000001",
id: "1",
stored_fields: "tags,counter"
)
$resp = $client->get([
"index" => "my-index-000001",
"id" => "1",
"stored_fields" => "tags,counter",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_doc/1?stored_fields=tags,counter"
{
"_index": "my-index-000001",
"_id": "0",
"_version": 1,
"_seq_no": 0,
"_primary_term": 1,
"found": true,
"_source": {
"@timestamp": "2099-11-15T14:12:12",
"http": {
"request": {
"method": "get"
},
"response": {
"status_code": 200,
"bytes": 1070000
},
"version": "1.1"
},
"source": {
"ip": "127.0.0.1"
},
"message": "GET /search HTTP/1.1 200 1070000",
"user": {
"id": "kimchy"
}
}
}
{
"_index": "my-index-000001",
"_id": "1",
"_version": 1,
"_seq_no" : 22,
"_primary_term" : 1,
"found": true,
"fields": {
"tags": [
"production"
]
}
}
{
"_index": "my-index-000001",
"_id": "2",
"_version": 1,
"_seq_no" : 13,
"_primary_term" : 1,
"_routing": "user1",
"found": true,
"fields": {
"tags": [
"env2"
]
}
}
Check for a document source
Generally available; Added in 5.4.0
Check whether a document source exists in an index. For example:
HEAD my-index-000001/_source/1
A document's source is not available if it is disabled in the mapping.
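For example, source retrieval is not possible for documents in an index whose mapping disables the _source field, as in this sketch (the index name is illustrative):
PUT my-index-000002
{
  "mappings": {
    "_source": {
      "enabled": false
    }
  }
}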
Required authorization
- Index privileges:
read
Path parameters
-
A comma-separated list of data streams, indices, and aliases. It supports wildcards (*).
-
A unique identifier for the document.
Query parameters
-
The node or shard the operation should be performed on. By default, the operation is randomized between the shard replicas.
-
If true, the request is real-time as opposed to near-real-time.
-
If true, the request refreshes the relevant shards before retrieving the document. Setting it to true should be done after careful thought and verification that this does not cause a heavy load on the system (and slow down indexing).
-
A custom value used to route operations to a specific shard.
-
Indicates whether to return the _source field (true or false) or lists the fields to return.
-
A comma-separated list of source fields to exclude from the response.
-
A comma-separated list of source fields to include in the response.
-
The version number for concurrency control. It must match the current version of the document for the request to succeed.
-
The version type.
Supported values include:
internal: Use internal versioning that starts at 1 and increments with each update or delete.
external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
Values are internal, external, or external_gte.
HEAD my-index-000001/_source/1
resp = client.exists_source(
index="my-index-000001",
id="1",
)
const response = await client.existsSource({
index: "my-index-000001",
id: 1,
});
response = client.exists_source(
index: "my-index-000001",
id: "1"
)
$resp = $client->existsSource([
"index" => "my-index-000001",
"id" => "1",
]);
curl --head -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index-000001/_source/1"
client.existsSource(e -> e
.id("1")
.index("my-index-000001")
);
Get multiple term vectors
Generally available
All methods and paths for this operation:
Get multiple term vectors with a single request.
You can specify existing documents by index and ID or provide artificial documents in the body of the request.
You can specify the index in the request body or request URI.
The response contains a docs array with all the fetched termvectors.
Each element has the structure provided by the termvectors API.
Artificial documents
You can also use mtermvectors to generate term vectors for artificial documents provided in the body of the request.
The mapping used is determined by the specified _index.
Required authorization
- Index privileges:
read
Query parameters
-
A comma-separated list of document IDs. You must define IDs as a parameter or set "ids" or "docs" in the request body.
-
A comma-separated list or wildcard expressions of fields to include in the statistics. It is used as the default list unless a specific field list is provided in the completion_fields or fielddata_fields parameters.
-
If true, the response includes the document count, sum of document frequencies, and sum of total term frequencies.
-
If true, the response includes term offsets.
-
If true, the response includes term payloads.
-
If true, the response includes term positions.
-
The node or shard the operation should be performed on. It is random by default.
-
If true, the request is real-time as opposed to near-real-time.
-
A custom value used to route operations to a specific shard.
-
If true, the response includes term frequency and document frequency.
-
If true, returns the document version as part of a hit.
-
The version type.
Supported values include:
internal: Use internal versioning that starts at 1 and increments with each update or delete.
external: Only index the document if the specified version is strictly higher than the version of the stored document or if there is no existing document.
external_gte: Only index the document if the specified version is equal or higher than the version of the stored document or if there is no existing document. NOTE: The external_gte version type is meant for special use cases and should be used with care. If used incorrectly, it can result in loss of data.
Values are internal, external, or external_gte.
POST /my-index-000001/_mtermvectors
{
"docs": [
{
"_id": "2",
"fields": [
"message"
],
"term_statistics": true
},
{
"_id": "1"
}
]
}
resp = client.mtermvectors(
index="my-index-000001",
docs=[
{
"_id": "2",
"fields": [
"message"
],
"term_statistics": True
},
{
"_id": "1"
}
],
)
const response = await client.mtermvectors({
index: "my-index-000001",
docs: [
{
_id: "2",
fields: ["message"],
term_statistics: true,
},
{
_id: "1",
},
],
});
response = client.mtermvectors(
index: "my-index-000001",
body: {
"docs": [
{
"_id": "2",
"fields": [
"message"
],
"term_statistics": true
},
{
"_id": "1"
}
]
}
)
$resp = $client->mtermvectors([
"index" => "my-index-000001",
"body" => [
"docs" => array(
[
"_id" => "2",
"fields" => array(
"message",
),
"term_statistics" => true,
],
[
"_id" => "1",
],
),
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"docs":[{"_id":"2","fields":["message"],"term_statistics":true},{"_id":"1"}]}' "$ELASTICSEARCH_URL/my-index-000001/_mtermvectors"
client.mtermvectors(m -> m
.docs(List.of(MultiTermVectorsOperation.of(mu -> mu
.id("2")
.fields("message")
.termStatistics(true)),MultiTermVectorsOperation.of(mu -> mu
.id("1"))))
.index("my-index-000001")
);
{
"docs": [
{
"_id": "2",
"fields": [
"message"
],
"term_statistics": true
},
{
"_id": "1"
}
]
}
{
"ids": [ "1", "2" ],
"fields": [
"message"
],
"term_statistics": true
}
{
"docs": [
{
"_index": "my-index-000001",
"doc" : {
"message" : "test test test"
}
},
{
"_index": "my-index-000001",
"doc" : {
"message" : "Another test ..."
}
}
]
}
Delete an enrich policy
Generally available; Added in 7.5.0
Deletes an existing enrich policy and its enrich index.
Query parameters
-
Period to wait for a connection to the master node.
External documentation
DELETE /_enrich/policy/my-policy
resp = client.enrich.delete_policy(
name="my-policy",
)
const response = await client.enrich.deletePolicy({
name: "my-policy",
});
response = client.enrich.delete_policy(
name: "my-policy"
)
$resp = $client->enrich()->deletePolicy([
"name" => "my-policy",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy"
client.enrich().deletePolicy(d -> d
.name("my-policy")
);
Run an enrich policy
Generally available; Added in 7.5.0
Create the enrich index for an existing enrich policy.
Query parameters
-
Period to wait for a connection to the master node.
External documentation
-
If true, the request blocks other enrich policy execution requests until complete.
PUT /_enrich/policy/my-policy/_execute?wait_for_completion=false
resp = client.enrich.execute_policy(
name="my-policy",
wait_for_completion=False,
)
const response = await client.enrich.executePolicy({
name: "my-policy",
wait_for_completion: "false",
});
response = client.enrich.execute_policy(
name: "my-policy",
wait_for_completion: "false"
)
$resp = $client->enrich()->executePolicy([
"name" => "my-policy",
"wait_for_completion" => "false",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/policy/my-policy/_execute?wait_for_completion=false"
client.enrich().executePolicy(e -> e
.name("my-policy")
.waitForCompletion(false)
);
Get enrich stats
Generally available; Added in 7.5.0
Returns enrich coordinator statistics and information about enrich policies that are currently executing.
Query parameters
-
Period to wait for a connection to the master node.
External documentation
GET /_enrich/_stats
resp = client.enrich.stats()
const response = await client.enrich.stats();
response = client.enrich.stats
$resp = $client->enrich()->stats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_enrich/_stats"
client.enrich().stats(s -> s);
ES|QL
The Elasticsearch Query Language (ES|QL) provides a powerful way to filter, transform, and analyze data stored in Elasticsearch, and in the future in other runtimes.
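As an illustrative sketch, an ES|QL query is submitted through the _query endpoint (the index name here is hypothetical):
POST /_query
{
  "query": "FROM my-index-000001 | LIMIT 10"
}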
Get the dangling indices
Generally available; Added in 7.9.0
If Elasticsearch encounters index data that is absent from the current cluster state, those indices are considered to be dangling.
For example, this can happen if you delete more than cluster.indices.tombstones.size indices while an Elasticsearch node is offline.
Use this API to list dangling indices, which you can then import or delete.
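For example, using the index UUID shown in the response below, a dangling index can be imported or deleted with requests like these (both require the accept_data_loss acknowledgement; this is a sketch):
POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
DELETE /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true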
Required authorization
- Cluster privileges:
manage
GET /_dangling
resp = client.dangling_indices.list_dangling_indices()
const response = await client.danglingIndices.listDanglingIndices();
response = client.dangling_indices.list_dangling_indices
$resp = $client->danglingIndices()->listDanglingIndices();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_dangling"
client.danglingIndices().listDanglingIndices();
{
"dangling_indices": [
{
"index_name": "my-index-000001",
"index_uuid": "zmM4e0JtBkeUjiHD-MihPQ",
"creation_date_millis": 1589414451372,
"node_ids": [
"pL47UN3dAb2d5RCWP6lQ3e"
]
}
]
}
Get tokens from text analysis
Generally available
All methods and paths for this operation:
The analyze API performs analysis on a text string and returns the resulting tokens.
Generating an excessive amount of tokens may cause a node to run out of memory.
The index.analyze.max_token_count setting enables you to limit the number of tokens that can be produced.
If more tokens than this limit are generated, an error occurs.
The _analyze endpoint without a specified index will always use 10000 as its limit.
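As a sketch, the limit can be raised for a specific index at creation time (the index name and value are illustrative):
PUT analyze_sample
{
  "settings": {
    "index.analyze.max_token_count": 20000
  }
}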
Required authorization
- Index privileges:
index
Path parameters
-
Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.
Query parameters
-
Index used to derive the analyzer. If specified, the analyzer or field parameter overrides this value. If no index is specified or the index does not have a default analyzer, the analyze API uses the standard analyzer.
Body
Required
-
The name of the analyzer that should be applied to the provided text. This could be a built-in analyzer, or an analyzer that's been configured in the index.
-
Array of token attributes used to filter the output of the explain parameter.
-
Array of character filters used to preprocess characters before the tokenizer.
External documentation
-
If true, the response includes token attributes and additional details.
Default value is false.
-
Field used to derive the analyzer. To use this parameter, you must specify an index. If specified, the analyzer parameter overrides this value.
-
Array of token filters to apply after the tokenizer.
External documentation
-
Normalizer to use to convert text into a single token.
-
Tokenizer to use to convert text into tokens.
External documentation
GET /_analyze
{
"analyzer": "standard",
"text": "this is a test"
}
resp = client.indices.analyze(
analyzer="standard",
text="this is a test",
)
const response = await client.indices.analyze({
analyzer: "standard",
text: "this is a test",
});
response = client.indices.analyze(
body: {
"analyzer": "standard",
"text": "this is a test"
}
)
$resp = $client->indices()->analyze([
"body" => [
"analyzer" => "standard",
"text" => "this is a test",
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"analyzer":"standard","text":"this is a test"}' "$ELASTICSEARCH_URL/_analyze"
client.indices().analyze(a -> a
.analyzer("standard")
.text("this is a test")
);
{
"analyzer": "standard",
"text": "this is a test"
}
{
"analyzer": "standard",
"text": [
"this is a test",
"the second text"
]
}
{
"tokenizer": "keyword",
"filter": [
"lowercase"
],
"char_filter": [
"html_strip"
],
"text": "this is a <b>test</b>"
}
{
"tokenizer": "whitespace",
"filter": [
"lowercase",
{
"type": "stop",
"stopwords": [
"a",
"is",
"this"
]
}
],
"text": "this is a test"
}
{
"field": "obj1.field1",
"text": "this is a test"
}
{
"normalizer": "my_normalizer",
"text": "BaR"
}
{
"tokenizer": "standard",
"filter": [
"snowball"
],
"text": "detailed output",
"explain": true,
"attributes": [
"keyword"
]
}
{
"detail": {
"custom_analyzer": true,
"charfilters": [],
"tokenizer": {
"name": "standard",
"tokens": [
{
"token": "detailed",
"start_offset": 0,
"end_offset": 8,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "output",
"start_offset": 9,
"end_offset": 15,
"type": "<ALPHANUM>",
"position": 1
}
]
},
"tokenfilters": [
{
"name": "snowball",
"tokens": [
{
"token": "detail",
"start_offset": 0,
"end_offset": 8,
"type": "<ALPHANUM>",
"position": 0,
"keyword": false
},
{
"token": "output",
"start_offset": 9,
"end_offset": 15,
"type": "<ALPHANUM>",
"position": 1,
"keyword": false
}
]
}
]
}
}
Update index settings
Generally available
All methods and paths for this operation:
Changes dynamic index settings in real time. For data streams, index setting changes are applied to all backing indices by default.
To revert a setting to the default value, use a null value.
The list of per-index settings that can be updated dynamically on live indices can be found in index settings documentation.
To preserve existing settings from being updated, set the preserve_existing parameter to true.
There are multiple valid ways to represent index settings in the request body. You can specify only the setting, for example:
{
"number_of_replicas": 1
}
Or you can use an index setting object:
{
"index": {
"number_of_replicas": 1
}
}
Or you can use dot notation:
{
"index.number_of_replicas": 1
}
Or you can embed any of the aforementioned options in a settings object. For example:
{
"settings": {
"index": {
"number_of_replicas": 1
}
}
}
NOTE: You can only define new analyzers on closed indices. To add an analyzer, you must close the index, define the analyzer, and reopen the index. You cannot close the write index of a data stream. To update the analyzer for a data stream's write index and future backing indices, update the analyzer in the index template used by the stream. Then roll over the data stream to apply the new analyzer to the stream's write index and future backing indices. This affects searches and any new data added to the stream after the rollover. However, it does not affect the data stream's backing indices or their existing data. To change the analyzer for existing backing indices, you must create a new data stream and reindex your data into it.
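A minimal sketch of that close, define, reopen sequence for a plain index (the analyzer definition is illustrative):
POST /my-index-000001/_close

PUT /my-index-000001/_settings
{
  "analysis": {
    "analyzer": {
      "content": {
        "type": "custom",
        "tokenizer": "whitespace"
      }
    }
  }
}

POST /my-index-000001/_open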
Required authorization
- Index privileges:
manage
Path parameters
-
Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
-
Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.
Supported values include:
all: Match any data stream or index, including hidden ones.
open: Match open, non-hidden indices. Also matches any non-hidden data stream.
closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
none: Wildcard expressions are not accepted.
Values are all, open, closed, hidden, or none.
-
If true, returns settings in flat format.
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
-
If true, existing index settings remain unchanged.
-
Whether to close and reopen the index to apply non-dynamic settings. If set to true, the indices to which the settings are being applied will be closed temporarily and then reopened in order to apply the changes.
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
Body
Required
PUT /my-index-000001/_settings
{
"index" : {
"number_of_replicas" : 2
}
}
resp = client.indices.put_settings(
index="my-index-000001",
settings={
"index": {
"number_of_replicas": 2
}
},
)
const response = await client.indices.putSettings({
index: "my-index-000001",
settings: {
index: {
number_of_replicas: 2,
},
},
});
response = client.indices.put_settings(
index: "my-index-000001",
body: {
"index": {
"number_of_replicas": 2
}
}
)
$resp = $client->indices()->putSettings([
"index" => "my-index-000001",
"body" => [
"index" => [
"number_of_replicas" => 2,
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index":{"number_of_replicas":2}}' "$ELASTICSEARCH_URL/my-index-000001/_settings"
client.indices().putSettings(p -> p
.index("my-index-000001")
.settings(s -> s
.index(i -> i
.numberOfReplicas("2")
)
)
);
{
"index" : {
"number_of_replicas" : 2
}
}
{
"index" : {
"refresh_interval" : null
}
}
{
"analysis": {
"analyzer": {
"content": {
"type": "custom",
"tokenizer": "whitespace"
}
}
}
}
Refresh an index
Generally available
All methods and paths for this operation:
A refresh makes recent operations performed on one or more indices available for search. For data streams, the API runs the refresh operation on the stream’s backing indices.
By default, Elasticsearch periodically refreshes indices every second, but only on indices that have received one search request or more in the last 30 seconds.
You can change this default interval with the index.refresh_interval setting.
In Elastic Cloud Serverless, the default refresh interval is 5 seconds across all indices.
Refresh requests are synchronous and do not return a response until the refresh operation completes.
Refreshes are resource-intensive. To ensure good cluster performance, it's recommended to wait for Elasticsearch's periodic refresh rather than performing an explicit refresh when possible.
If your application workflow indexes documents and then runs a search to retrieve the indexed document, it's recommended to use the index API's refresh=wait_for query parameter option.
This option ensures the indexing operation waits for a periodic refresh before running the search.
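A sketch of that pattern: the indexing request below does not return until a refresh has made the document visible to search (the document body is illustrative).
PUT /my-index-000001/_doc/1?refresh=wait_for
{
  "message": "GET /search HTTP/1.1 200 1070000"
}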
Required authorization
- Index privileges:
maintenance
Path parameters
-
Comma-separated list of data streams, indices, and aliases used to limit the request. Supports wildcards (*). To target all data streams and indices, omit this parameter or use * or _all.
Query parameters
-
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices.
-
Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden.
Supported values include:
all: Match any data stream or index, including hidden ones.
open: Match open, non-hidden indices. Also matches any non-hidden data stream.
closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
none: Wildcard expressions are not accepted.
Values are all, open, closed, hidden, or none.
GET _refresh
resp = client.indices.refresh()
const response = await client.indices.refresh();
response = client.indices.refresh
$resp = $client->indices()->refresh();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_refresh"
client.indices().refresh(r -> r);
Resolve the cluster
Generally available; Added in 8.13.0
All methods and paths for this operation:
Resolve the specified index expressions to return information about each cluster, including the local "querying" cluster, if included. If no index expression is provided, the API will return information about all the remote clusters that are configured on the querying cluster.
This endpoint is useful before doing a cross-cluster search in order to determine which remote clusters should be included in a search.
You use the same index expression with this endpoint as you would for cross-cluster search. Index and cluster exclusions are also supported with this endpoint.
For each cluster in the index expression, information is returned about:
- Whether the querying ("local") cluster is currently connected to each remote cluster specified in the index expression. Note that this endpoint actively attempts to contact the remote clusters, unlike the remote/info endpoint.
- Whether each remote cluster is configured with skip_unavailable as true or false.
- Whether there are any indices, aliases, or data streams on that cluster that match the index expression.
- Whether the search is likely to have errors returned when you do the cross-cluster search (including any authorization errors if you do not have permission to query the index).
- Cluster version information, including the Elasticsearch server version.
For example, GET /_resolve/cluster/my-index-*,cluster*:my-index-* returns information about the local cluster and all remotely configured clusters that start with the alias cluster*.
Each cluster returns information about whether it has any indices, aliases or data streams that match my-index-*.
Note on backwards compatibility
The ability to query without an index expression was added in version 8.18, so when
querying remote clusters older than that, the local cluster will send the index
expression dummy* to those remote clusters. Thus, if any errors occur, you may see a reference
to that index expression even though you didn't request it. If it causes a problem, you can
instead include an index expression like *:* to bypass the issue.
Advantages of using this endpoint before a cross-cluster search
You may want to exclude a cluster or index from a search when:
- A remote cluster is not currently connected and is configured with skip_unavailable=false. Running a cross-cluster search under those conditions will cause the entire search to fail.
- A cluster has no matching indices, aliases or data streams for the index expression (or your user does not have permissions to search them). For example, suppose your index expression is logs*,remote1:logs* and the remote1 cluster has no indices, aliases or data streams that match logs*. In that case, that cluster will return no results if you include it in a cross-cluster search.
- The index expression (combined with any query parameters you specify) will likely cause an exception to be thrown when you do the search. In these cases, the "error" field in the _resolve/cluster response will be present. (This is also where security/permission errors will be shown.)
- A remote cluster is an older version that does not support the feature you want to use in your search.
Test availability of remote clusters
The remote/info endpoint is commonly used to test whether the "local" cluster (the cluster being queried) is connected to its remote clusters, but it does not necessarily reflect whether the remote cluster is available or not.
The remote cluster may be available, while the local cluster is not currently connected to it.
You can use the _resolve/cluster API to attempt to reconnect to remote clusters.
For example with GET _resolve/cluster or GET _resolve/cluster/*:*.
The connected field in the response will indicate whether it was successful.
If a connection was (re-)established, this will also cause the remote/info endpoint to now indicate a connected status.
Required authorization
- Index privileges:
view_index_metadata
Path parameters
-
A comma-separated list of names or index patterns for the indices, aliases, and data streams to resolve. Resources on remote clusters can be specified using the <cluster>:<name> syntax. Index and cluster exclusions (e.g., -cluster1:*) are also supported. If no index expression is specified, information about all remote clusters configured on the local cluster is returned without doing any index matching.
Query parameters
-
If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
-
Type of index that wildcard patterns can match. If the request can target data streams, this argument determines whether wildcard expressions match hidden data streams. Supports comma-separated values, such as open,hidden. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
Supported values include:
all: Match any data stream or index, including hidden ones.
open: Match open, non-hidden indices. Also matches any non-hidden data stream.
closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
none: Wildcard expressions are not accepted.
Values are all, open, closed, hidden, or none.
-
If true, concrete, expanded, or aliased indices are ignored when frozen. NOTE: This option is only supported when specifying an index expression. You will get an error if you specify index options to the _resolve/cluster API endpoint that takes no index expression.
-
The maximum time to wait for remote clusters to respond. If a remote cluster does not respond within this timeout period, the API response will show the cluster as not connected and include an error message that the request timed out.
The default timeout is unset and the query can take as long as the networking layer is configured to wait for remote clusters that are not responding (typically 30 seconds).
External documentation
GET /_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s
resp = client.indices.resolve_cluster(
name="not-present,clust*:my-index*,oldcluster:*",
ignore_unavailable=False,
timeout="5s",
)
const response = await client.indices.resolveCluster({
name: "not-present,clust*:my-index*,oldcluster:*",
ignore_unavailable: "false",
timeout: "5s",
});
response = client.indices.resolve_cluster(
name: "not-present,clust*:my-index*,oldcluster:*",
ignore_unavailable: "false",
timeout: "5s"
)
$resp = $client->indices()->resolveCluster([
"name" => "not-present,clust*:my-index*,oldcluster:*",
"ignore_unavailable" => "false",
"timeout" => "5s",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_resolve/cluster/not-present,clust*:my-index*,oldcluster:*?ignore_unavailable=false&timeout=5s"
client.indices().resolveCluster(r -> r
.ignoreUnavailable(false)
.name(List.of("not-present","clust*:my-index*","oldcluster:*"))
.timeout(t -> t
.offset(5)
)
);
{
"(local)": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_one": {
"connected": true,
"skip_unavailable": true,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_two": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
}
}
{
"(local)": {
"connected": true,
"skip_unavailable": false,
"error": "no such index [not_present]"
},
"cluster_one": {
"connected": true,
"skip_unavailable": true,
"matching_indices": false,
"version": {
"number": "8.13.0",
"build_flavor": "default",
"minimum_wire_compatibility_version": "7.17.0",
"minimum_index_compatibility_version": "7.0.0"
}
},
"cluster_two": {
"connected": false,
"skip_unavailable": false
},
"cluster_three": {
"connected": false,
"skip_unavailable": false,
"error": "Request timed out before receiving a response from the remote cluster"
},
"oldcluster": {
"connected": true,
"skip_unavailable": false,
"matching_indices": true
}
}
Create or update a lifecycle policy
Generally available; Added in 6.6.0
If the specified policy exists, it is replaced and the policy version is incremented.
NOTE: Only the latest version of the policy is stored; you cannot revert to previous versions.
Required authorization
- Index privileges:
manage
- Cluster privileges:
manage_ilm
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
PUT _ilm/policy/my_policy
{
"policy": {
"_meta": {
"description": "used for nginx log",
"project": {
"name": "myProject",
"department": "myDepartment"
}
},
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
resp = client.ilm.put_lifecycle(
name="my_policy",
policy={
"_meta": {
"description": "used for nginx log",
"project": {
"name": "myProject",
"department": "myDepartment"
}
},
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
},
)
const response = await client.ilm.putLifecycle({
name: "my_policy",
policy: {
_meta: {
description: "used for nginx log",
project: {
name: "myProject",
department: "myDepartment",
},
},
phases: {
warm: {
min_age: "10d",
actions: {
forcemerge: {
max_num_segments: 1,
},
},
},
delete: {
min_age: "30d",
actions: {
delete: {},
},
},
},
},
});
response = client.ilm.put_lifecycle(
policy: "my_policy",
body: {
"policy": {
"_meta": {
"description": "used for nginx log",
"project": {
"name": "myProject",
"department": "myDepartment"
}
},
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
)
$resp = $client->ilm()->putLifecycle([
"policy" => "my_policy",
"body" => [
"policy" => [
"_meta" => [
"description" => "used for nginx log",
"project" => [
"name" => "myProject",
"department" => "myDepartment",
],
],
"phases" => [
"warm" => [
"min_age" => "10d",
"actions" => [
"forcemerge" => [
"max_num_segments" => 1,
],
],
],
"delete" => [
"min_age" => "30d",
"actions" => [
"delete" => new ArrayObject([]),
],
],
],
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"policy":{"_meta":{"description":"used for nginx log","project":{"name":"myProject","department":"myDepartment"}},"phases":{"warm":{"min_age":"10d","actions":{"forcemerge":{"max_num_segments":1}}},"delete":{"min_age":"30d","actions":{"delete":{}}}}}}' "$ELASTICSEARCH_URL/_ilm/policy/my_policy"
client.ilm().putLifecycle(p -> p
.name("my_policy")
.policy(po -> po
.phases(ph -> ph
.delete(d -> d
.actions(a -> a
.delete(de -> de)
)
.minAge(m -> m
.time("30d")
)
)
.warm(w -> w
.actions(a -> a
.forcemerge(f -> f
.maxNumSegments(1)
)
)
.minAge(m -> m
.time("10d")
)
)
)
.meta(Map.of("description", JsonData.fromJson("\"used for nginx log\""),"project", JsonData.fromJson("{\"name\":\"myProject\",\"department\":\"myDepartment\"}")))
)
);
{
"policy": {
"_meta": {
"description": "used for nginx log",
"project": {
"name": "myProject",
"department": "myDepartment"
}
},
"phases": {
"warm": {
"min_age": "10d",
"actions": {
"forcemerge": {
"max_num_segments": 1
}
}
},
"delete": {
"min_age": "30d",
"actions": {
"delete": {}
}
}
}
}
}
{
"acknowledged": true
}
Explain the lifecycle state
Generally available; Added in 6.6.0
Get the current lifecycle status for one or more indices. For data streams, the API retrieves the current lifecycle status for the stream's backing indices.
The response indicates when the index entered each lifecycle state, provides the definition of the running phase, and information about any failures.
Required authorization
- Index privileges:
view_index_metadata, manage_ilm
Path parameters
-
Comma-separated list of data streams, indices, and aliases to target. Supports wildcards (*). To target all data streams and indices, use * or _all.
Query parameters
-
Filters the returned indices to only indices that are managed by ILM and are in an error state, either due to encountering an error while executing the policy or attempting to use a policy that does not exist.
-
Filters the returned indices to only indices that are managed by ILM.
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
GET .ds-timeseries-*/_ilm/explain
resp = client.ilm.explain_lifecycle(
index=".ds-timeseries-*",
)
const response = await client.ilm.explainLifecycle({
index: ".ds-timeseries-*",
});
response = client.ilm.explain_lifecycle(
index: ".ds-timeseries-*"
)
$resp = $client->ilm()->explainLifecycle([
"index" => ".ds-timeseries-*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/.ds-timeseries-*/_ilm/explain"
client.ilm().explainLifecycle(e -> e
.index(".ds-timeseries-*")
);
{
"indices": {
"my-index-000001": {
"index": "my-index-000001",
"index_creation_date_millis": 1538475653281,
"index_creation_date": "2018-10-15T13:45:21.981Z",
"time_since_index_creation": "15s",
"managed": true,
"policy": "my_policy",
"lifecycle_date_millis": 1538475653281,
"lifecycle_date": "2018-10-15T13:45:21.981Z",
"age": "15s",
"phase": "new",
"phase_time_millis": 1538475653317,
"phase_time": "2018-10-15T13:45:22.577Z",
"action": "complete"
"action_time_millis": 1538475653317,
"action_time": "2018-10-15T13:45:22.577Z",
"step": "complete",
"step_time_millis": 1538475653317,
"step_time": "2018-10-15T13:45:22.577Z"
}
}
}
Stop the ILM plugin
Generally available; Added in 6.6.0
Halt all lifecycle management operations and stop the index lifecycle management plugin. This is useful when you are performing maintenance on the cluster and need to prevent ILM from performing any actions on your indices.
The API returns as soon as the stop request has been acknowledged, but the plugin might continue to run until in-progress operations complete and the plugin can be safely stopped. Use the get ILM status API to check whether ILM is running.
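For example, to verify that ILM has fully stopped, check the status endpoint; the response reports the current operation_mode:
GET _ilm/status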
Required authorization
- Cluster privileges:
manage_ilm
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
POST _ilm/stop
resp = client.ilm.stop()
const response = await client.ilm.stop();
response = client.ilm.stop
$resp = $client->ilm()->stop();
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ilm/stop"
client.ilm().stop(s -> s);
{
"acknowledged": true
}
Get an inference endpoint
Generally available; Added in 8.11.0
All methods and paths for this operation:
GET _inference/sparse_embedding/my-elser-model
resp = client.inference.get(
task_type="sparse_embedding",
inference_id="my-elser-model",
)
const response = await client.inference.get({
task_type: "sparse_embedding",
inference_id: "my-elser-model",
});
response = client.inference.get(
task_type: "sparse_embedding",
inference_id: "my-elser-model"
)
$resp = $client->inference()->get([
"task_type" => "sparse_embedding",
"inference_id" => "my-elser-model",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().get(g -> g
.inferenceId("my-elser-model")
.taskType(TaskType.SparseEmbedding)
);
Create an inference endpoint
Generally available; Added in 8.11.0
All methods and paths for this operation:
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
The following integrations are available through the inference API. You can find the available task types next to the integration name:
- AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)
- Amazon Bedrock (completion, text_embedding)
- Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)
- Anthropic (completion)
- Azure AI Studio (completion, text_embedding)
- Azure OpenAI (completion, text_embedding)
- Cohere (completion, rerank, text_embedding)
- DeepSeek (chat_completion, completion)
- Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)
- ELSER (sparse_embedding)
- Google AI Studio (completion, text_embedding)
- Google Vertex AI (chat_completion, completion, rerank, text_embedding)
- Hugging Face (chat_completion, completion, rerank, text_embedding)
- JinaAI (rerank, text_embedding)
- Llama (chat_completion, completion, text_embedding)
- Mistral (chat_completion, completion, text_embedding)
- OpenAI (chat_completion, completion, text_embedding)
- VoyageAI (rerank, text_embedding)
- Watsonx inference integration (text_embedding)
Required authorization
- Cluster privileges:
manage_inference
Path parameters
-
The task type. Refer to the integration list in the API description for the available task types.
Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.
-
The inference ID.
Query parameters
-
Specifies the amount of time to wait for the inference endpoint to be created.
External documentation
PUT _inference/rerank/my-rerank-model
{
"service": "cohere",
"service_settings": {
"model_id": "rerank-english-v3.0",
"api_key": "{{COHERE_API_KEY}}"
}
}
resp = client.inference.put(
task_type="rerank",
inference_id="my-rerank-model",
inference_config={
"service": "cohere",
"service_settings": {
"model_id": "rerank-english-v3.0",
"api_key": "{{COHERE_API_KEY}}"
}
},
)
const response = await client.inference.put({
task_type: "rerank",
inference_id: "my-rerank-model",
inference_config: {
service: "cohere",
service_settings: {
model_id: "rerank-english-v3.0",
api_key: "{{COHERE_API_KEY}}",
},
},
});
response = client.inference.put(
task_type: "rerank",
inference_id: "my-rerank-model",
body: {
"service": "cohere",
"service_settings": {
"model_id": "rerank-english-v3.0",
"api_key": "{{COHERE_API_KEY}}"
}
}
)
$resp = $client->inference()->put([
"task_type" => "rerank",
"inference_id" => "my-rerank-model",
"body" => [
"service" => "cohere",
"service_settings" => [
"model_id" => "rerank-english-v3.0",
"api_key" => "{{COHERE_API_KEY}}",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"cohere","service_settings":{"model_id":"rerank-english-v3.0","api_key":"{{COHERE_API_KEY}}"}}' "$ELASTICSEARCH_URL/_inference/rerank/my-rerank-model"
client.inference().put(p -> p
.inferenceId("my-rerank-model")
.taskType(TaskType.Rerank)
.inferenceConfig(i -> i
.service("cohere")
.serviceSettings(JsonData.fromJson("{\"model_id\":\"rerank-english-v3.0\",\"api_key\":\"{{COHERE_API_KEY}}\"}"))
)
);
{
"service": "cohere",
"service_settings": {
"model_id": "rerank-english-v3.0",
"api_key": "{{COHERE_API_KEY}}"
}
}
Create an AlibabaCloud AI Search inference endpoint
Generally available; Added in 8.16.0
Path parameters
-
The type of the inference task that the model will perform.
Values are completion, rerank, sparse_embedding, or text_embedding.
-
The unique identifier of the inference endpoint.
Query parameters
-
Specifies the amount of time to wait for the inference endpoint to be created.
External documentation
Body
Required
-
The chunking configuration object.
External documentation
-
The type of service supported for the specified task type. In this case, alibabacloud-ai-search.
Value is alibabacloud-ai-search.
-
Settings used to install the inference model. These settings are specific to the alibabacloud-ai-search service.
-
Settings to configure the inference task. These settings are specific to the task type you specified.
PUT _inference/completion/alibabacloud_ai_search_completion
{
"service": "alibabacloud-ai-search",
"service_settings": {
"host" : "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-qwen-turbo",
"workspace" : "default"
}
}
resp = client.inference.put(
task_type="completion",
inference_id="alibabacloud_ai_search_completion",
inference_config={
"service": "alibabacloud-ai-search",
"service_settings": {
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-qwen-turbo",
"workspace": "default"
}
},
)
const response = await client.inference.put({
task_type: "completion",
inference_id: "alibabacloud_ai_search_completion",
inference_config: {
service: "alibabacloud-ai-search",
service_settings: {
host: "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
api_key: "AlibabaCloud-API-Key",
service_id: "ops-qwen-turbo",
workspace: "default",
},
},
});
response = client.inference.put(
task_type: "completion",
inference_id: "alibabacloud_ai_search_completion",
body: {
"service": "alibabacloud-ai-search",
"service_settings": {
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-qwen-turbo",
"workspace": "default"
}
}
)
$resp = $client->inference()->put([
"task_type" => "completion",
"inference_id" => "alibabacloud_ai_search_completion",
"body" => [
"service" => "alibabacloud-ai-search",
"service_settings" => [
"host" => "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key" => "AlibabaCloud-API-Key",
"service_id" => "ops-qwen-turbo",
"workspace" => "default",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"alibabacloud-ai-search","service_settings":{"host":"default-j01.platform-cn-shanghai.opensearch.aliyuncs.com","api_key":"AlibabaCloud-API-Key","service_id":"ops-qwen-turbo","workspace":"default"}}' "$ELASTICSEARCH_URL/_inference/completion/alibabacloud_ai_search_completion"
client.inference().put(p -> p
.inferenceId("alibabacloud_ai_search_completion")
.taskType(TaskType.Completion)
.inferenceConfig(i -> i
.service("alibabacloud-ai-search")
.serviceSettings(JsonData.fromJson("{\"host\":\"default-j01.platform-cn-shanghai.opensearch.aliyuncs.com\",\"api_key\":\"AlibabaCloud-API-Key\",\"service_id\":\"ops-qwen-turbo\",\"workspace\":\"default\"}"))
)
);
{
"service": "alibabacloud-ai-search",
"service_settings": {
"host" : "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-qwen-turbo",
"workspace" : "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-bge-reranker-larger",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-text-sparse-embedding-001",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
{
"service": "alibabacloud-ai-search",
"service_settings": {
"api_key": "AlibabaCloud-API-Key",
"service_id": "ops-text-embedding-001",
"host": "default-j01.platform-cn-shanghai.opensearch.aliyuncs.com",
"workspace": "default"
}
}
Create a JinaAI inference endpoint
Generally available; Added in 8.18.0
Create an inference endpoint to perform an inference task with the jinaai service.
To review the available rerank models, refer to https://jina.ai/reranker.
To review the available text_embedding models, refer to https://jina.ai/embeddings/.
Required authorization
- Cluster privileges:
manage_inference
Path parameters
-
The type of the inference task that the model will perform.
Values are rerank or text_embedding.
-
The unique identifier of the inference endpoint.
Query parameters
-
Specifies the amount of time to wait for the inference endpoint to be created.
External documentation
Body
Required
-
The chunking configuration object.
External documentation
-
The type of service supported for the specified task type. In this case, jinaai.
Value is jinaai.
-
Settings used to install the inference model. These settings are specific to the jinaai service.
-
Settings to configure the inference task. These settings are specific to the task type you specified.
PUT _inference/text_embedding/jinaai-embeddings
{
"service": "jinaai",
"service_settings": {
"model_id": "jina-embeddings-v3",
"api_key": "JinaAi-Api-key"
}
}
resp = client.inference.put(
task_type="text_embedding",
inference_id="jinaai-embeddings",
inference_config={
"service": "jinaai",
"service_settings": {
"model_id": "jina-embeddings-v3",
"api_key": "JinaAi-Api-key"
}
},
)
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "jinaai-embeddings",
inference_config: {
service: "jinaai",
service_settings: {
model_id: "jina-embeddings-v3",
api_key: "JinaAi-Api-key",
},
},
});
response = client.inference.put(
task_type: "text_embedding",
inference_id: "jinaai-embeddings",
body: {
"service": "jinaai",
"service_settings": {
"model_id": "jina-embeddings-v3",
"api_key": "JinaAi-Api-key"
}
}
)
$resp = $client->inference()->put([
"task_type" => "text_embedding",
"inference_id" => "jinaai-embeddings",
"body" => [
"service" => "jinaai",
"service_settings" => [
"model_id" => "jina-embeddings-v3",
"api_key" => "JinaAi-Api-key",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"jinaai","service_settings":{"model_id":"jina-embeddings-v3","api_key":"JinaAi-Api-key"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/jinaai-embeddings"
client.inference().put(p -> p
.inferenceId("jinaai-embeddings")
.taskType(TaskType.TextEmbedding)
.inferenceConfig(i -> i
.service("jinaai")
.serviceSettings(JsonData.fromJson("{\"model_id\":\"jina-embeddings-v3\",\"api_key\":\"JinaAi-Api-key\"}"))
)
);
{
"service": "jinaai",
"service_settings": {
"model_id": "jina-embeddings-v3",
"api_key": "JinaAi-Api-key"
}
}
{
"service": "jinaai",
"service_settings": {
"api_key": "JinaAI-Api-key",
"model_id": "jina-reranker-v2-base-multilingual"
},
"task_settings": {
"top_n": 10,
"return_documents": true
}
}
Create a Mistral inference endpoint
Generally available; Added in 8.15.0
Path parameters
-
The type of the inference task that the model will perform.
Values are text_embedding, completion, or chat_completion.
-
The unique identifier of the inference endpoint.
Query parameters
-
Specifies the amount of time to wait for the inference endpoint to be created.
External documentation
Body
Required
-
The chunking configuration object.
External documentation
-
The type of service supported for the specified task type. In this case, mistral.
Value is mistral.
-
Settings used to install the inference model. These settings are specific to the mistral service.
PUT _inference/text_embedding/mistral-embeddings-test
{
"service": "mistral",
"service_settings": {
"api_key": "Mistral-API-Key",
"model": "mistral-embed"
}
}
resp = client.inference.put(
task_type="text_embedding",
inference_id="mistral-embeddings-test",
inference_config={
"service": "mistral",
"service_settings": {
"api_key": "Mistral-API-Key",
"model": "mistral-embed"
}
},
)
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "mistral-embeddings-test",
inference_config: {
service: "mistral",
service_settings: {
api_key: "Mistral-API-Key",
model: "mistral-embed",
},
},
});
response = client.inference.put(
task_type: "text_embedding",
inference_id: "mistral-embeddings-test",
body: {
"service": "mistral",
"service_settings": {
"api_key": "Mistral-API-Key",
"model": "mistral-embed"
}
}
)
$resp = $client->inference()->put([
"task_type" => "text_embedding",
"inference_id" => "mistral-embeddings-test",
"body" => [
"service" => "mistral",
"service_settings" => [
"api_key" => "Mistral-API-Key",
"model" => "mistral-embed",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"mistral","service_settings":{"api_key":"Mistral-API-Key","model":"mistral-embed"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/mistral-embeddings-test"
client.inference().put(p -> p
.inferenceId("mistral-embeddings-test")
.taskType(TaskType.TextEmbedding)
.inferenceConfig(i -> i
.service("mistral")
.serviceSettings(JsonData.fromJson("{\"api_key\":\"Mistral-API-Key\",\"model\":\"mistral-embed\"}"))
)
);
{
"service": "mistral",
"service_settings": {
"api_key": "Mistral-API-Key",
"model": "mistral-embed"
}
}
Perform sparse embedding inference on the service
Generally available; Added in 8.11.0
Query parameters
-
Specifies the amount of time to wait for the inference request to complete.
External documentation
POST _inference/sparse_embedding/my-elser-model
{
"input": "The sky above the port was the color of television tuned to a dead channel."
}
resp = client.inference.sparse_embedding(
inference_id="my-elser-model",
input="The sky above the port was the color of television tuned to a dead channel.",
)
const response = await client.inference.sparseEmbedding({
inference_id: "my-elser-model",
input:
"The sky above the port was the color of television tuned to a dead channel.",
});
response = client.inference.sparse_embedding(
inference_id: "my-elser-model",
body: {
"input": "The sky above the port was the color of television tuned to a dead channel."
}
)
$resp = $client->inference()->sparseEmbedding([
"inference_id" => "my-elser-model",
"body" => [
"input" => "The sky above the port was the color of television tuned to a dead channel.",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"input":"The sky above the port was the color of television tuned to a dead channel."}' "$ELASTICSEARCH_URL/_inference/sparse_embedding/my-elser-model"
client.inference().sparseEmbedding(s -> s
.inferenceId("my-elser-model")
.input("The sky above the port was the color of television tuned to a dead channel.")
);
{
"input": "The sky above the port was the color of television tuned to a dead channel."
}
{
"sparse_embedding": [
{
"port": 2.1259406,
"sky": 1.7073475,
"color": 1.6922266,
"dead": 1.6247464,
"television": 1.3525393,
"above": 1.2425821,
"tuned": 1.1440028,
"colors": 1.1218185,
"tv": 1.0111054,
"ports": 1.0067928,
"poem": 1.0042328,
"channel": 0.99471164,
"tune": 0.96235967,
"scene": 0.9020516
}
]
}
Update an inference endpoint
Generally available; Added in 8.17.0
All methods and paths for this operation:
Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type.
IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
Required authorization
- Cluster privileges:
manage_inference
Path parameters
-
The type of inference task that the model performs.
Values are sparse_embedding, text_embedding, rerank, completion, or chat_completion.
-
The unique identifier of the inference endpoint.
PUT _inference/my-inference-endpoint/_update
{
"service_settings": {
"api_key": "<API_KEY>"
}
}
resp = client.inference.update(
inference_id="my-inference-endpoint",
inference_config={
"service_settings": {
"api_key": "<API_KEY>"
}
},
)
const response = await client.inference.update({
inference_id: "my-inference-endpoint",
inference_config: {
service_settings: {
api_key: "<API_KEY>",
},
},
});
response = client.inference.update(
inference_id: "my-inference-endpoint",
body: {
"service_settings": {
"api_key": "<API_KEY>"
}
}
)
$resp = $client->inference()->update([
"inference_id" => "my-inference-endpoint",
"body" => [
"service_settings" => [
"api_key" => "<API_KEY>",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service_settings":{"api_key":"<API_KEY>"}}' "$ELASTICSEARCH_URL/_inference/my-inference-endpoint/_update"
{
"service_settings": {
"api_key": "<API_KEY>"
}
}
Create or update a GeoIP database configuration
Generally available; Added in 8.15.0
Refer to the create or update IP geolocation database configuration API.
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
Body
Required
-
The provider-assigned name of the IP geolocation database to download.
-
The configuration necessary to identify which IP geolocation provider to use to download the database, as well as any provider-specific configuration necessary for such downloading. At present, the only supported provider is maxmind, and the maxmind provider requires that an account_id (string) is configured.
curl \
--request PUT 'http://api.example.com/_ingest/geoip/database/{id}' \
--header "Authorization: $API_KEY" \
--header "Content-Type: application/json" \
--data '{"name":"string","maxmind":{"account_id":"string"}}'
Create or update an IP geolocation database configuration
Generally available; Added in 8.15.0
Query parameters
-
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. A value of -1 indicates that the request should never time out.
External documentation -
The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response indicates that it was not completely acknowledged. A value of -1 indicates that the request should never time out.
External documentation
Body
Required
The configuration necessary to identify which IP geolocation provider to use to download a database, as well as any provider-specific configuration necessary for such downloading.
At present, the only supported providers are maxmind and ipinfo, and the maxmind provider requires that an account_id (string) is configured.
A provider (either maxmind or ipinfo) must be specified. The web and local providers can be returned as read-only configurations.
-
The provider-assigned name of the IP geolocation database to download.
PUT _ingest/ip_location/database/my-database-1
{
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
resp = client.ingest.put_ip_location_database(
id="my-database-1",
configuration={
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
},
)
const response = await client.ingest.putIpLocationDatabase({
id: "my-database-1",
configuration: {
name: "GeoIP2-Domain",
maxmind: {
account_id: "1234567",
},
},
});
response = client.ingest.put_ip_location_database(
id: "my-database-1",
body: {
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
)
$resp = $client->ingest()->putIpLocationDatabase([
"id" => "my-database-1",
"body" => [
"name" => "GeoIP2-Domain",
"maxmind" => [
"account_id" => "1234567",
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"name":"GeoIP2-Domain","maxmind":{"account_id":"1234567"}}' "$ELASTICSEARCH_URL/_ingest/ip_location/database/my-database-1"
client.ingest().putIpLocationDatabase(p -> p
.id("my-database-1")
.configuration(c -> c
.maxmind(m -> m
.accountId("1234567")
)
.name("GeoIP2-Domain")
)
);
{
"name": "GeoIP2-Domain",
"maxmind": {
"account_id": "1234567"
}
}
Get pipelines
Generally available; Added in 5.0.0
All methods and paths for this operation:
Get information about one or more ingest pipelines. This API returns a local reference of the pipeline.
Path parameters
-
Comma-separated list of pipeline IDs to retrieve. Wildcard (*) expressions are supported. To get all ingest pipelines, omit this parameter or use *.
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation -
Return pipelines without their definitions (default: false)
GET /_ingest/pipeline/my-pipeline-id
resp = client.ingest.get_pipeline(
id="my-pipeline-id",
)
const response = await client.ingest.getPipeline({
id: "my-pipeline-id",
});
response = client.ingest.get_pipeline(
id: "my-pipeline-id"
)
$resp = $client->ingest()->getPipeline([
"id" => "my-pipeline-id",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/pipeline/my-pipeline-id"
client.ingest().getPipeline(g -> g
.id("my-pipeline-id")
);
{
"my-pipeline-id" : {
"description" : "describe pipeline",
"version" : 123,
"processors" : [
{
"set" : {
"field" : "foo",
"value" : "bar"
}
}
]
}
}
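To list pipelines without their definitions, the summary query parameter described above can be enabled. A minimal sketch:
resp = client.ingest.get_pipeline(
    summary=True,  # omit pipeline definitions from the response
)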
Get GeoIP statistics
Generally available; Added in 7.13.0
Get download statistics for GeoIP2 databases that are used with the GeoIP processor.
GET _ingest/geoip/stats
resp = client.ingest.geo_ip_stats()
const response = await client.ingest.geoIpStats();
response = client.ingest.geo_ip_stats
$resp = $client->ingest()->geoIpStats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/geoip/stats"
client.ingest().geoIpStats();
Run a grok processor
Generally available; Added in 6.1.0
Extract structured fields out of a single text field within a document. You must choose which field to extract matched fields from, as well as the grok pattern you expect will match. A grok pattern is like a regular expression that supports aliased expressions that can be reused.
GET _ingest/processor/grok
resp = client.ingest.processor_grok()
const response = await client.ingest.processorGrok();
response = client.ingest.processor_grok
$resp = $client->ingest()->processorGrok();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ingest/processor/grok"
client.ingest().processorGrok();
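The patterns returned by this API can be exercised with the simulate pipeline API. A minimal sketch, assuming a hypothetical log line; IP, WORD, and URIPATHPARAM are standard grok pattern aliases:
resp = client.ingest.simulate(
    pipeline={
        "processors": [
            {
                "grok": {
                    "field": "message",
                    "patterns": ["%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"]
                }
            }
        ]
    },
    docs=[{"_source": {"message": "55.3.244.1 GET /index.html"}}],  # hypothetical document
)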
Delete the license
Generally available
When the license expires, your subscription level reverts to Basic.
If the operator privileges feature is enabled, only operator users can use this API.
Required authorization
- Cluster privileges:
manage
Query parameters
-
The period to wait for a connection to the master node.
External documentation -
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
DELETE /_license
resp = client.license.delete()
const response = await client.license.delete();
response = client.license.delete
$resp = $client->license()->delete();
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license"
client.license().delete(d -> d);
Start a basic license
Generally available; Added in 6.3.0
Start an indefinite basic license, which gives access to all the basic features.
NOTE: In order to start a basic license, you must not currently have a basic license.
If the basic license does not support all of the features that are available with your current license, however, you are notified in the response.
You must then re-submit the API request with the acknowledge parameter set to true.
To check the status of your basic license, use the get basic license API.
Required authorization
- Cluster privileges:
manage
Query parameters
-
whether the user has acknowledged acknowledge messages (default: false)
-
Period to wait for a connection to the master node.
External documentation -
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
POST /_license/start_basic?acknowledge=true
resp = client.license.post_start_basic(
acknowledge=True,
)
const response = await client.license.postStartBasic({
acknowledge: "true",
});
response = client.license.post_start_basic(
acknowledge: "true"
)
$resp = $client->license()->postStartBasic([
"acknowledge" => "true",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_license/start_basic?acknowledge=true"
client.license().postStartBasic(p -> p
.acknowledge(true)
);
{
"basic_was_started": true,
"acknowledged": true
}
Get calendar configuration info
Generally available; Added in 6.2.0
Path parameters
-
A string that uniquely identifies a calendar. You can get information for multiple calendars by using a comma-separated list of ids or a wildcard expression. You can get information for all calendars by using _all or * or by omitting the calendar identifier.
Query parameters
-
Skips the specified number of calendars. This parameter is supported only when you omit the calendar identifier.
-
Specifies the maximum number of calendars to obtain. This parameter is supported only when you omit the calendar identifier.
GET _ml/calendars/planned-outages
resp = client.ml.get_calendars(
calendar_id="planned-outages",
)
const response = await client.ml.getCalendars({
calendar_id: "planned-outages",
});
response = client.ml.get_calendars(
calendar_id: "planned-outages"
)
$resp = $client->ml()->getCalendars([
"calendar_id" => "planned-outages",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().getCalendars(g -> g
.calendarId("planned-outages")
);
DELETE _ml/calendars/planned-outages
resp = client.ml.delete_calendar(
calendar_id="planned-outages",
)
const response = await client.ml.deleteCalendar({
calendar_id: "planned-outages",
});
response = client.ml.delete_calendar(
calendar_id: "planned-outages"
)
$resp = $client->ml()->deleteCalendar([
"calendar_id" => "planned-outages",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/calendars/planned-outages"
client.ml().deleteCalendar(d -> d
.calendarId("planned-outages")
);
{
"acknowledged": true
}
Delete a datafeed
Path parameters
-
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
DELETE _ml/datafeeds/datafeed-total-requests
resp = client.ml.delete_datafeed(
datafeed_id="datafeed-total-requests",
)
const response = await client.ml.deleteDatafeed({
datafeed_id: "datafeed-total-requests",
});
response = client.ml.delete_datafeed(
datafeed_id: "datafeed-total-requests"
)
$resp = $client->ml()->deleteDatafeed([
"datafeed_id" => "datafeed-total-requests",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-total-requests"
client.ml().deleteDatafeed(d -> d
.datafeedId("datafeed-total-requests")
);
{
"acknowledged": true
}
Delete forecasts from a job
Generally available; Added in 6.5.0
All methods and paths for this operation:
By default, forecasts are retained for 14 days. You can specify a
different retention period with the expires_in parameter in the forecast
jobs API. The delete forecast API enables you to delete one or more
forecasts before they expire.
Required authorization
- Cluster privileges:
manage_ml
Path parameters
-
Identifier for the anomaly detection job.
-
A comma-separated list of forecast identifiers. If you do not specify this optional parameter or if you specify _all or *, the API deletes all forecasts from the job.
Query parameters
-
Specifies whether an error occurs when there are no forecasts. In particular, if this parameter is set to false and there are no forecasts associated with the job, attempts to delete all forecasts return an error.
-
Specifies the period of time to wait for the completion of the delete operation. When this period of time elapses, the API fails and returns an error.
External documentation
DELETE _ml/anomaly_detectors/total-requests/_forecast/_all
resp = client.ml.delete_forecast(
job_id="total-requests",
forecast_id="_all",
)
const response = await client.ml.deleteForecast({
job_id: "total-requests",
forecast_id: "_all",
});
response = client.ml.delete_forecast(
job_id: "total-requests",
forecast_id: "_all"
)
$resp = $client->ml()->deleteForecast([
"job_id" => "total-requests",
"forecast_id" => "_all",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/total-requests/_forecast/_all"
client.ml().deleteForecast(d -> d
.forecastId("_all")
.jobId("total-requests")
);
{
"acknowledged": true
}
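Forecasts themselves are created with the forecast jobs API, where expires_in sets the retention period described above. A minimal sketch (duration values hypothetical):
resp = client.ml.forecast(
    job_id="total-requests",
    duration="1d",     # how far beyond the last record to forecast
    expires_in="3d",   # keep the forecast for three days instead of the 14-day default
)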
Get model snapshots info
Generally available; Added in 5.4.0
All methods and paths for this operation:
Required authorization
- Cluster privileges:
monitor_ml
Path parameters
-
Identifier for the anomaly detection job.
-
A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a comma-separated list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID.
Query parameters
-
If true, the results are sorted in descending order.
-
Returns snapshots with timestamps earlier than this time.
-
Skips the specified number of snapshots.
-
Specifies the maximum number of snapshots to obtain.
-
Specifies the sort field for the requested snapshots. By default, the snapshots are sorted by their timestamp.
-
Returns snapshots with timestamps after this time.
Body
-
Refer to the description for the desc query parameter. Default value is false.
-
Refer to the description for the sort query parameter.
GET _ml/anomaly_detectors/high_sum_total_sales/model_snapshots
{
"start": "1575402236000"
}
resp = client.ml.get_model_snapshots(
job_id="high_sum_total_sales",
start="1575402236000",
)
const response = await client.ml.getModelSnapshots({
job_id: "high_sum_total_sales",
start: 1575402236000,
});
response = client.ml.get_model_snapshots(
job_id: "high_sum_total_sales",
body: {
"start": "1575402236000"
}
)
$resp = $client->ml()->getModelSnapshots([
"job_id" => "high_sum_total_sales",
"body" => [
"start" => "1575402236000",
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"start":"1575402236000"}' "$ELASTICSEARCH_URL/_ml/anomaly_detectors/high_sum_total_sales/model_snapshots"
client.ml().getModelSnapshots(g -> g
.jobId("high_sum_total_sales")
.start(DateTime.of("1575402236000"))
);
{
"start": "1575402236000"
}
Delete a model snapshot
Generally available; Added in 5.4.0
DELETE _ml/anomaly_detectors/farequote/model_snapshots/1491948163
resp = client.ml.delete_model_snapshot(
job_id="farequote",
snapshot_id="1491948163",
)
const response = await client.ml.deleteModelSnapshot({
job_id: "farequote",
snapshot_id: 1491948163,
});
response = client.ml.delete_model_snapshot(
job_id: "farequote",
snapshot_id: "1491948163"
)
$resp = $client->ml()->deleteModelSnapshot([
"job_id" => "farequote",
"snapshot_id" => "1491948163",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/farequote/model_snapshots/1491948163"
client.ml().deleteModelSnapshot(d -> d
.jobId("farequote")
.snapshotId("1491948163")
);
{
"acknowledged": true
}
Force buffered data to be processed
Generally available; Added in 5.4.0
The flush jobs API is only applicable when sending data for analysis using the post data API. Depending on the content of the buffer, it might additionally calculate new results. The flush and close operations are similar; however, flush is more efficient if you expect to send more data for analysis. When flushing, the job remains open and is available to continue analyzing data. A close operation additionally prunes and persists the model state to disk, and the job must be opened again before analyzing further data.
Required authorization
- Cluster privileges:
manage_ml
Query parameters
-
Specifies to advance to a particular time value. Results are generated and the model is updated for data from the specified time interval.
-
If true, calculates the interim results for the most recent bucket or all buckets within the latency period.
-
When used in conjunction with calc_interim and start, specifies the range of buckets on which to calculate interim results.
-
Specifies to skip to a particular time value. Results are not generated and the model is not updated for data from the specified time interval.
-
When used in conjunction with
calc_interim, specifies the range of buckets on which to calculate interim results.
Body
-
Refer to the description for the calc_interim query parameter.
POST _ml/anomaly_detectors/low_request_rate/_flush
{
"calc_interim": true
}
resp = client.ml.flush_job(
job_id="low_request_rate",
calc_interim=True,
)
const response = await client.ml.flushJob({
job_id: "low_request_rate",
calc_interim: true,
});
response = client.ml.flush_job(
job_id: "low_request_rate",
body: {
"calc_interim": true
}
)
$resp = $client->ml()->flushJob([
"job_id" => "low_request_rate",
"body" => [
"calc_interim" => true,
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"calc_interim":true}' "$ELASTICSEARCH_URL/_ml/anomaly_detectors/low_request_rate/_flush"
client.ml().flushJob(f -> f
.calcInterim(true)
.jobId("low_request_rate")
);
{
"calc_interim": true
}
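The buffer can also be flushed while advancing the analysis time. A minimal sketch, assuming an epoch-millisecond value for advance_time (timestamp hypothetical):
resp = client.ml.flush_job(
    job_id="low_request_rate",
    advance_time="1514804400000",  # generate results and update the model up to this time
)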
Get anomaly detection job model snapshot upgrade usage info
Generally available; Added in 7.16.0
Path parameters
-
Identifier for the anomaly detection job.
-
A numerical character string that uniquely identifies the model snapshot. You can get information for multiple snapshots by using a comma-separated list or a wildcard expression. You can get all snapshots by using _all, by specifying * as the snapshot ID, or by omitting the snapshot ID.
Query parameters
-
Specifies what to do when the request:
- Contains wildcard expressions and there are no jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty jobs array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
GET _ml/anomaly_detectors/low_request_rate/model_snapshots/_all/_upgrade/_stats
resp = client.ml.get_model_snapshot_upgrade_stats(
job_id="low_request_rate",
snapshot_id="_all",
)
const response = await client.ml.getModelSnapshotUpgradeStats({
job_id: "low_request_rate",
snapshot_id: "_all",
});
response = client.ml.get_model_snapshot_upgrade_stats(
job_id: "low_request_rate",
snapshot_id: "_all"
)
$resp = $client->ml()->getModelSnapshotUpgradeStats([
"job_id" => "low_request_rate",
"snapshot_id" => "_all",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/anomaly_detectors/low_request_rate/model_snapshots/_all/_upgrade/_stats"
client.ml().getModelSnapshotUpgradeStats(g -> g
.jobId("low_request_rate")
.snapshotId("_all")
);
Start datafeeds
Generally available; Added in 5.5.0
A datafeed must be started in order to retrieve data from Elasticsearch. A datafeed can be started and stopped multiple times throughout its lifecycle.
Before you can start a datafeed, the anomaly detection job must be open. Otherwise, an error occurs.
If you restart a stopped datafeed, it continues processing input data from the next millisecond after it was stopped. If new data was indexed for that exact millisecond between stopping and starting, it will be ignored.
When Elasticsearch security features are enabled, your datafeed remembers which roles the last user to create or update it had at the time of creation or update and runs the query using those same roles. If you provided secondary authorization headers when you created or updated the datafeed, those credentials are used instead.
Required authorization
- Cluster privileges:
manage_ml
Path parameters
-
A numerical character string that uniquely identifies the datafeed. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Query parameters
-
The time that the datafeed should end, which can be specified by using one of the following formats:
- ISO 8601 format with milliseconds, for example 2017-01-22T06:00:00.000Z
- ISO 8601 format without milliseconds, for example 2017-01-22T06:00:00+00:00
- Milliseconds since the epoch, for example 1485061200000
Date-time arguments using either of the ISO 8601 formats must have a time zone designator, where Z is accepted as an abbreviation for UTC time. When a URL is expected (for example, in browsers), the + used in time zone designators must be encoded as %2B. The end time value is exclusive. If you do not specify an end time, the datafeed runs continuously.
-
The time that the datafeed should begin, which can be specified by using the same formats as the end parameter. This value is inclusive. If you do not specify a start time and the datafeed is associated with a new anomaly detection job, the analysis starts from the earliest time for which data is available. If you restart a stopped datafeed and specify a start value that is earlier than the timestamp of the latest processed record, the datafeed continues from 1 millisecond after the timestamp of the latest processed record.
-
Specifies the amount of time to wait until a datafeed starts.
External documentation
Body
-
Refer to the description for the timeout query parameter. External documentation
POST _ml/datafeeds/datafeed-low_request_rate/_start
{
"start": "2019-04-07T18:22:16Z"
}
resp = client.ml.start_datafeed(
datafeed_id="datafeed-low_request_rate",
start="2019-04-07T18:22:16Z",
)
const response = await client.ml.startDatafeed({
datafeed_id: "datafeed-low_request_rate",
start: "2019-04-07T18:22:16Z",
});
response = client.ml.start_datafeed(
datafeed_id: "datafeed-low_request_rate",
body: {
"start": "2019-04-07T18:22:16Z"
}
)
$resp = $client->ml()->startDatafeed([
"datafeed_id" => "datafeed-low_request_rate",
"body" => [
"start" => "2019-04-07T18:22:16Z",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"start":"2019-04-07T18:22:16Z"}' "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-low_request_rate/_start"
client.ml().startDatafeed(s -> s
.datafeedId("datafeed-low_request_rate")
.start(DateTime.of("2019-04-07T18:22:16Z"))
);
{
"start": "2019-04-07T18:22:16Z"
}
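The start and end values can also be given as epoch milliseconds, in which case the datafeed stops once the exclusive end time is reached. A minimal sketch (timestamps hypothetical):
resp = client.ml.start_datafeed(
    datafeed_id="datafeed-low_request_rate",
    start="1485061200000",  # inclusive start
    end="1485147600000",    # exclusive end; omit to run continuously
)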
Update a snapshot
Generally available; Added in 5.4.0
POST _ml/anomaly_detectors/it_ops_new_logs/model_snapshots/1491852978/_update
{
"description": "Snapshot 1",
"retain": true
}
resp = client.ml.update_model_snapshot(
job_id="it_ops_new_logs",
snapshot_id="1491852978",
description="Snapshot 1",
retain=True,
)
const response = await client.ml.updateModelSnapshot({
job_id: "it_ops_new_logs",
snapshot_id: 1491852978,
description: "Snapshot 1",
retain: true,
});
response = client.ml.update_model_snapshot(
job_id: "it_ops_new_logs",
snapshot_id: "1491852978",
body: {
"description": "Snapshot 1",
"retain": true
}
)
$resp = $client->ml()->updateModelSnapshot([
"job_id" => "it_ops_new_logs",
"snapshot_id" => "1491852978",
"body" => [
"description" => "Snapshot 1",
"retain" => true,
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"description":"Snapshot 1","retain":true}' "$ELASTICSEARCH_URL/_ml/anomaly_detectors/it_ops_new_logs/model_snapshots/1491852978/_update"
client.ml().updateModelSnapshot(u -> u
.description("Snapshot 1")
.jobId("it_ops_new_logs")
.retain(true)
.snapshotId("1491852978")
);
{
"description": "Snapshot 1",
"retain": true
}
Create a data frame analytics job
Generally available; Added in 7.3.0
This API creates a data frame analytics job that performs an analysis on the
source indices and stores the outcome in a destination index.
By default, the query used in the source configuration is {"match_all": {}}.
If the destination index does not exist, it is created automatically when you start the job.
If you supply only a subset of the regression or classification parameters, hyperparameter optimization occurs. It determines a value for each of the undefined parameters.
Required authorization
- Index privileges:
create_index,index,manage,read,view_index_metadata - Cluster privileges:
manage_ml
Path parameters
-
Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Body
Required
-
Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node. If set to false and a machine learning node with capacity to run the job cannot be immediately found, the API returns an error. If set to true, the API does not return an error; the job waits in the starting state until sufficient machine learning node capacity is available. This behavior is also affected by the cluster-wide xpack.ml.max_lazy_ml_nodes setting. Default value is false.
-
The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.
-
Specifies includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis. If analyzed_fields is not set, only the relevant fields will be included; for example, all the numeric fields for outlier detection. The supported fields vary for each type of analysis. Outlier detection requires numeric or boolean data to analyze. The algorithms don’t support missing values, therefore fields that have data types other than numeric or boolean are ignored. Documents where included fields contain missing values, null values, or an array are also ignored. Therefore the dest index may contain documents that don’t have an outlier score. Regression supports fields that are numeric, boolean, text, keyword, and ip data types. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the regression analysis. Classification supports fields that are numeric, boolean, text, keyword, and ip data types. It is also tolerant of missing values. Fields that are supported are included in the analysis; other fields are ignored. Documents where included fields contain an array with two or more values are also ignored. Documents in the dest index that don’t contain a results field are not included in the classification analysis. Classification analysis can be improved by mapping ordinal variable values to a single number. For example, in case of age ranges, you can model the values as 0-14 = 0, 15-24 = 1, 25-34 = 2, and so on.
-
A description of the job.
-
The destination configuration.
-
The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
Default value is
1. -
The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. Default value is 1gb.
-
The configuration of how to source the analysis data.
PUT _ml/data_frame/analytics/model-flight-delays-pre
{
"source": {
"index": [
"kibana_sample_data_flights"
],
"query": {
"range": {
"DistanceKilometers": {
"gt": 0
}
}
},
"_source": {
"includes": [],
"excludes": [
"FlightDelay",
"FlightDelayType"
]
}
},
"dest": {
"index": "df-flight-delays",
"results_field": "ml-results"
},
"analysis": {
"regression": {
"dependent_variable": "FlightDelayMin",
"training_percent": 90
}
},
"analyzed_fields": {
"includes": [],
"excludes": [
"FlightNum"
]
},
"model_memory_limit": "100mb"
}
resp = client.ml.put_data_frame_analytics(
id="model-flight-delays-pre",
source={
"index": [
"kibana_sample_data_flights"
],
"query": {
"range": {
"DistanceKilometers": {
"gt": 0
}
}
},
"_source": {
"includes": [],
"excludes": [
"FlightDelay",
"FlightDelayType"
]
}
},
dest={
"index": "df-flight-delays",
"results_field": "ml-results"
},
analysis={
"regression": {
"dependent_variable": "FlightDelayMin",
"training_percent": 90
}
},
analyzed_fields={
"includes": [],
"excludes": [
"FlightNum"
]
},
model_memory_limit="100mb",
)
const response = await client.ml.putDataFrameAnalytics({
id: "model-flight-delays-pre",
source: {
index: ["kibana_sample_data_flights"],
query: {
range: {
DistanceKilometers: {
gt: 0,
},
},
},
_source: {
includes: [],
excludes: ["FlightDelay", "FlightDelayType"],
},
},
dest: {
index: "df-flight-delays",
results_field: "ml-results",
},
analysis: {
regression: {
dependent_variable: "FlightDelayMin",
training_percent: 90,
},
},
analyzed_fields: {
includes: [],
excludes: ["FlightNum"],
},
model_memory_limit: "100mb",
});
response = client.ml.put_data_frame_analytics(
id: "model-flight-delays-pre",
body: {
"source": {
"index": [
"kibana_sample_data_flights"
],
"query": {
"range": {
"DistanceKilometers": {
"gt": 0
}
}
},
"_source": {
"includes": [],
"excludes": [
"FlightDelay",
"FlightDelayType"
]
}
},
"dest": {
"index": "df-flight-delays",
"results_field": "ml-results"
},
"analysis": {
"regression": {
"dependent_variable": "FlightDelayMin",
"training_percent": 90
}
},
"analyzed_fields": {
"includes": [],
"excludes": [
"FlightNum"
]
},
"model_memory_limit": "100mb"
}
)
$resp = $client->ml()->putDataFrameAnalytics([
"id" => "model-flight-delays-pre",
"body" => [
"source" => [
"index" => array(
"kibana_sample_data_flights",
),
"query" => [
"range" => [
"DistanceKilometers" => [
"gt" => 0,
],
],
],
"_source" => [
"includes" => array(
),
"excludes" => array(
"FlightDelay",
"FlightDelayType",
),
],
],
"dest" => [
"index" => "df-flight-delays",
"results_field" => "ml-results",
],
"analysis" => [
"regression" => [
"dependent_variable" => "FlightDelayMin",
"training_percent" => 90,
],
],
"analyzed_fields" => [
"includes" => array(
),
"excludes" => array(
"FlightNum",
),
],
"model_memory_limit" => "100mb",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":["kibana_sample_data_flights"],"query":{"range":{"DistanceKilometers":{"gt":0}}},"_source":{"includes":[],"excludes":["FlightDelay","FlightDelayType"]}},"dest":{"index":"df-flight-delays","results_field":"ml-results"},"analysis":{"regression":{"dependent_variable":"FlightDelayMin","training_percent":90}},"analyzed_fields":{"includes":[],"excludes":["FlightNum"]},"model_memory_limit":"100mb"}' "$ELASTICSEARCH_URL/_ml/data_frame/analytics/model-flight-delays-pre"
client.ml().putDataFrameAnalytics(p -> p
.analysis(a -> a
.regression(r -> r
.dependentVariable("FlightDelayMin")
.trainingPercent("90")
)
)
.analyzedFields(an -> an
.excludes("FlightNum")
)
.dest(d -> d
.index("df-flight-delays")
.resultsField("ml-results")
)
.id("model-flight-delays-pre")
.modelMemoryLimit("100mb")
.source(s -> s
.index("kibana_sample_data_flights")
.query(q -> q
.range(r -> r
.untyped(u -> u
.field("DistanceKilometers")
.gt(JsonData.fromJson("0"))
)
)
)
.source(so -> so
.excludes(List.of("FlightDelay","FlightDelayType"))
)
)
);
{
"source": {
"index": [
"kibana_sample_data_flights"
],
"query": {
"range": {
"DistanceKilometers": {
"gt": 0
}
}
},
"_source": {
"includes": [],
"excludes": [
"FlightDelay",
"FlightDelayType"
]
}
},
"dest": {
"index": "df-flight-delays",
"results_field": "ml-results"
},
"analysis": {
"regression": {
"dependent_variable": "FlightDelayMin",
"training_percent": 90
}
},
"analyzed_fields": {
"includes": [],
"excludes": [
"FlightNum"
]
},
"model_memory_limit": "100mb"
}
Explain data frame analytics config
Generally available; Added in 7.3.0
All methods and paths for this operation:
This API provides explanations for a data frame analytics config that either exists already or one that has not been created yet. The following explanations are provided:
- which fields are included or not in the analysis and why,
- how much memory is estimated to be required. The estimate can be used when deciding the appropriate value for the model_memory_limit setting later on. If you have object fields or fields that are excluded via source filtering, they are not included in the explanation.
Required authorization
- Cluster privileges:
monitor_ml
Path parameters
-
Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Body
-
The configuration of how to source the analysis data. It requires an index. Optionally, query and _source may be specified.
-
The destination configuration, consisting of index and optionally results_field (ml by default).
-
The analysis configuration, which contains the information necessary to perform one of the following types of analysis: classification, outlier detection, or regression.
-
A description of the job.
-
The approximate maximum amount of memory resources that are permitted for analytical processing. If your elasticsearch.yml file contains an xpack.ml.max_model_memory_limit setting, an error occurs when you try to create data frame analytics jobs that have model_memory_limit values greater than that setting. Default value is 1gb.
-
The maximum number of threads to be used by the analysis. Using more threads may decrease the time necessary to complete the analysis at the cost of using more CPU. Note that the process may use additional threads for operational functionality other than the analysis itself.
Default value is
1. -
Specify includes and/or excludes patterns to select which fields will be included in the analysis. The patterns specified in excludes are applied last, therefore excludes takes precedence. In other words, if the same field is specified in both includes and excludes, then the field will not be included in the analysis.
-
Specifies whether this job can start when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
Default value is
false.
POST _ml/data_frame/analytics/_explain
{
"source": {
"index": "houses_sold_last_10_yrs"
},
"analysis": {
"regression": {
"dependent_variable": "price"
}
}
}
resp = client.ml.explain_data_frame_analytics(
source={
"index": "houses_sold_last_10_yrs"
},
analysis={
"regression": {
"dependent_variable": "price"
}
},
)
const response = await client.ml.explainDataFrameAnalytics({
source: {
index: "houses_sold_last_10_yrs",
},
analysis: {
regression: {
dependent_variable: "price",
},
},
});
response = client.ml.explain_data_frame_analytics(
body: {
"source": {
"index": "houses_sold_last_10_yrs"
},
"analysis": {
"regression": {
"dependent_variable": "price"
}
}
}
)
$resp = $client->ml()->explainDataFrameAnalytics([
"body" => [
"source" => [
"index" => "houses_sold_last_10_yrs",
],
"analysis" => [
"regression" => [
"dependent_variable" => "price",
],
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"source":{"index":"houses_sold_last_10_yrs"},"analysis":{"regression":{"dependent_variable":"price"}}}' "$ELASTICSEARCH_URL/_ml/data_frame/analytics/_explain"
client.ml().explainDataFrameAnalytics(e -> e
.analysis(a -> a
.regression(r -> r
.dependentVariable("price")
)
)
.source(s -> s
.index("houses_sold_last_10_yrs")
)
);
{
"source": {
"index": "houses_sold_last_10_yrs"
},
"analysis": {
"regression": {
"dependent_variable": "price"
}
}
}
{
"field_selection": [
{
"field": "number_of_bedrooms",
"mappings_types": [
"integer"
],
"is_included": true,
"is_required": false,
"feature_type": "numerical"
},
{
"field": "postcode",
"mappings_types": [
"text"
],
"is_included": false,
"is_required": false,
"reason": "[postcode.keyword] is preferred because it is aggregatable"
},
{
"field": "postcode.keyword",
"mappings_types": [
"keyword"
],
"is_included": true,
"is_required": false,
"feature_type": "categorical"
},
{
"field": "price",
"mappings_types": [
"float"
],
"is_included": true,
"is_required": true,
"feature_type": "numerical"
}
],
"memory_estimation": {
"expected_memory_without_disk": "128MB",
"expected_memory_with_disk": "32MB"
}
}
Get data frame analytics job stats
Generally available; Added in 7.3.0
Path parameters
-
Identifier for the data frame analytics job. If you do not specify this option, the API returns information for the first hundred data frame analytics jobs.
Query parameters
-
Specifies what to do when the request:
- Contains wildcard expressions and there are no data frame analytics jobs that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
The default value is true, which returns an empty data_frame_analytics array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
-
Skips the specified number of data frame analytics jobs.
-
Specifies the maximum number of data frame analytics jobs to obtain.
-
Defines whether the stats response should be verbose.
GET _ml/data_frame/analytics/weblog-outliers/_stats
resp = client.ml.get_data_frame_analytics_stats(
id="weblog-outliers",
)
const response = await client.ml.getDataFrameAnalyticsStats({
id: "weblog-outliers",
});
response = client.ml.get_data_frame_analytics_stats(
id: "weblog-outliers"
)
$resp = $client->ml()->getDataFrameAnalyticsStats([
"id" => "weblog-outliers",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/data_frame/analytics/weblog-outliers/_stats"
client.ml().getDataFrameAnalyticsStats(g -> g
.id("weblog-outliers")
);
Start a data frame analytics job
Generally available; Added in 7.3.0
A data frame analytics job can be started and stopped multiple times
throughout its lifecycle.
If the destination index does not exist, it is created automatically the
first time you start the data frame analytics job. The
index.number_of_shards and index.number_of_replicas settings for the
destination index are copied from the source index. If there are multiple
source indices, the destination index copies the highest setting values. The
mappings for the destination index are also copied from the source indices.
If there are any mapping conflicts, the job fails to start.
If the destination index exists, it is used as is. You can therefore set up
the destination index in advance with custom settings and mappings.
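A minimal sketch of pre-creating a destination index with custom settings and mappings before starting the job (index name, settings, and mappings are hypothetical):
resp = client.indices.create(
    index="df-flight-delays",
    settings={"number_of_shards": 1, "number_of_replicas": 1},
    mappings={"properties": {"FlightDelayMin": {"type": "integer"}}},
)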
Required authorization
- Index privileges:
create_index,index,manage,read,view_index_metadata - Cluster privileges:
manage_ml
Path parameters
-
Identifier for the data frame analytics job. This identifier can contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and underscores. It must start and end with alphanumeric characters.
Query parameters
-
Controls the amount of time to wait until the data frame analytics job starts.
External documentation
POST _ml/data_frame/analytics/loganalytics/_start
resp = client.ml.start_data_frame_analytics(
id="loganalytics",
)
const response = await client.ml.startDataFrameAnalytics({
id: "loganalytics",
});
response = client.ml.start_data_frame_analytics(
id: "loganalytics"
)
$resp = $client->ml()->startDataFrameAnalytics([
"id" => "loganalytics",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/data_frame/analytics/loganalytics/_start"
client.ml().startDataFrameAnalytics(s -> s
.id("loganalytics")
);
Get trained model configuration info
Generally available; Added in 7.10.0
Path parameters
-
The unique identifier of the trained model or a model alias.
You can get information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
Query parameters
-
Specifies what to do when the request:
- Contains wildcard expressions and there are no models that match.
- Contains the _all string or no identifiers and there are no matches.
- Contains wildcard expressions and there are only partial matches.
If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.
-
Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false).
-
Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
-
Skips the specified number of models.
-
A comma delimited string of optional fields to include in the response body.
Supported values include:
- definition: Includes the model definition.
- feature_importance_baseline: Includes the baseline for feature importance values.
- hyperparameters: Includes the information about hyperparameters used to train the model. This information consists of the value, the absolute and relative importance of the hyperparameter, as well as an indicator of whether it was specified by the user or tuned during hyperparameter optimization.
- total_feature_importance: Includes the total feature importance for the training data set. The baseline and total feature importance values are returned in the metadata field in the response body.
- definition_status: Includes the model definition status.
Values are definition, feature_importance_baseline, hyperparameters, total_feature_importance, or definition_status.
-
parameter is deprecated! Use [include=definition] instead
-
Specifies the maximum number of models to obtain.
GET _ml/trained_models/
resp = client.ml.get_trained_models()
const response = await client.ml.getTrainedModels();
response = client.ml.get_trained_models
$resp = $client->ml()->getTrainedModels();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/"
client.ml().getTrainedModels(g -> g);
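A minimal sketch retrieving a single model with its definition included and returned as a JSON map (model ID hypothetical):
resp = client.ml.get_trained_models(
    model_id="my-trained-model",
    include="definition",        # add the model definition to the response
    decompress_definition=True,  # return the definition as a JSON map
)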
Create or update a trained model alias
Generally available; Added in 7.13.0
A trained model alias is a logical name used to reference a single trained model. You can use aliases instead of trained model identifiers to make it easier to reference your models. For example, you can use aliases in inference aggregations and processors. An alias must be unique and refer to only a single trained model. However, you can have multiple aliases for each trained model. If you use this API to update an alias such that it references a different trained model ID and the model uses a different type of data frame analytics, an error occurs. For example, this situation occurs if you have a trained model for regression analysis and a trained model for classification analysis; you cannot reassign an alias from one type of trained model to another. If you use this API to update an alias and there are very few input fields in common between the old and new trained models for the model alias, the API returns a warning.
Required authorization
- Cluster privileges:
manage_ml
Path parameters
-
The identifier for the trained model that the alias refers to.
-
The alias to create or update. This value cannot end in numbers.
PUT _ml/trained_models/flight-delay-prediction-1574775339910/model_aliases/flight_delay_model
resp = client.ml.put_trained_model_alias(
model_id="flight-delay-prediction-1574775339910",
model_alias="flight_delay_model",
)
const response = await client.ml.putTrainedModelAlias({
model_id: "flight-delay-prediction-1574775339910",
model_alias: "flight_delay_model",
});
response = client.ml.put_trained_model_alias(
model_id: "flight-delay-prediction-1574775339910",
model_alias: "flight_delay_model"
)
$resp = $client->ml()->putTrainedModelAlias([
"model_id" => "flight-delay-prediction-1574775339910",
"model_alias" => "flight_delay_model",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/flight-delay-prediction-1574775339910/model_aliases/flight_delay_model"
client.ml().putTrainedModelAlias(p -> p
.modelAlias("flight_delay_model")
.modelId("flight-delay-prediction-1574775339910")
);
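Reassigning an alias that already refers to a different model must be made explicit. A minimal sketch, assuming the reassign query parameter (model IDs hypothetical):
resp = client.ml.put_trained_model_alias(
    model_id="flight-delay-prediction-1580004349800",  # hypothetical newer model
    model_alias="flight_delay_model",
    reassign=True,  # required when the alias currently points at another model
)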
Create a trained model vocabulary
Generally available; Added in 8.0.0
PUT _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/vocabulary
{
"vocabulary": [
"[PAD]",
"[unused0]",
]
}
resp = client.ml.put_trained_model_vocabulary(
model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
vocabulary=[
"[PAD]",
"[unused0]"
],
)
const response = await client.ml.putTrainedModelVocabulary({
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
vocabulary: ["[PAD]", "[unused0]"],
});
response = client.ml.put_trained_model_vocabulary(
model_id: "elastic__distilbert-base-uncased-finetuned-conll03-english",
body: {
"vocabulary": [
"[PAD]",
"[unused0]"
]
}
)
$resp = $client->ml()->putTrainedModelVocabulary([
"model_id" => "elastic__distilbert-base-uncased-finetuned-conll03-english",
"body" => [
"vocabulary" => array(
"[PAD]",
"[unused0]",
),
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"vocabulary":["[PAD]","[unused0]"]}' "$ELASTICSEARCH_URL/_ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/vocabulary"
client.ml().putTrainedModelVocabulary(p -> p
.modelId("elastic__distilbert-base-uncased-finetuned-conll03-english")
.vocabulary(List.of("[PAD]","[unused0]"))
);
{
"vocabulary": [
"[PAD]",
"[unused0]",
]
}
POST /_migration/reindex/my-data-stream/_cancel
resp = client.perform_request(
"POST",
"/_migration/reindex/my-data-stream/_cancel",
)
const response = await client.transport.request({
method: "POST",
path: "/_migration/reindex/my-data-stream/_cancel",
});
response = client.perform_request(
"POST",
"/_migration/reindex/my-data-stream/_cancel",
{},
)
$requestFactory = Psr17FactoryDiscovery::findRequestFactory();
$request = $requestFactory->createRequest(
"POST",
"/_migration/reindex/my-data-stream/_cancel",
);
$resp = $client->sendRequest($request);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_migration/reindex/my-data-stream/_cancel"
client.indices().cancelMigrateReindex(c -> c
.index("my-data-stream")
);
Get the shutdown status
Generally available; Added in 7.13.0
All methods and paths for this operation:
Get information about nodes that are ready to be shut down, have shut down preparations still in progress, or have stalled. The API returns status information for each part of the shut down process.
NOTE: This feature is designed for indirect use by Elasticsearch Service, Elastic Cloud Enterprise, and Elastic Cloud on Kubernetes. Direct use is not supported.
If the operator privileges feature is enabled, you must be an operator to use this API.
Required authorization
- Cluster privileges:
manage
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
GET /_nodes/USpTGYaBSIKbgSUJR2Z9lg/shutdown
resp = client.shutdown.get_node(
node_id="USpTGYaBSIKbgSUJR2Z9lg",
)
const response = await client.shutdown.getNode({
node_id: "USpTGYaBSIKbgSUJR2Z9lg",
});
response = client.shutdown.get_node(
node_id: "USpTGYaBSIKbgSUJR2Z9lg"
)
$resp = $client->shutdown()->getNode([
"node_id" => "USpTGYaBSIKbgSUJR2Z9lg",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_nodes/USpTGYaBSIKbgSUJR2Z9lg/shutdown"
client.shutdown().getNode(g -> g
.nodeId("USpTGYaBSIKbgSUJR2Z9lg")
);
{
"nodes": [
{
"node_id": "USpTGYaBSIKbgSUJR2Z9lg",
"type": "RESTART",
"reason": "Demonstrating how the node shutdown API works",
"shutdown_startedmillis": 1624406108685,
"allocation_delay": "10m",
"status": "COMPLETE",
"shard_migration": {
"status": "COMPLETE",
"shard_migrations_remaining": 0,
"explanation": "no shard relocation is necessary for a node restart"
},
"persistent_tasks": {
"status": "COMPLETE"
},
"plugins": {
"status": "COMPLETE"
}
}
]
}
Create or update a query ruleset
Generally available; Added in 8.10.0
There is a limit of 100 rules per ruleset.
This limit can be increased by using the xpack.applications.rules.max_rules_per_ruleset cluster setting.
IMPORTANT: Due to limitations within pinned queries, you can only select documents using ids or docs, but cannot use both in a single rule.
It is advised to use one or the other in query rulesets to avoid errors.
Additionally, pinned queries have a maximum limit of 100 pinned hits.
If multiple matching rules pin more than 100 documents, only the first 100 documents are pinned in the order they are specified in the ruleset.
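A minimal sketch of raising the per-ruleset limit with the cluster settings API (the value 200 is hypothetical):
resp = client.cluster.put_settings(
    persistent={
        "xpack.applications.rules.max_rules_per_ruleset": 200
    },
)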
Required authorization
- Cluster privileges:
manage_search_query_rules
PUT _query_rules/my-ruleset
{
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [ "pugs", "puggles" ]
},
{
"type": "exact",
"metadata": "user_country",
"values": [ "us" ]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [ "rescue dogs" ]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
resp = client.query_rules.put_ruleset(
ruleset_id="my-ruleset",
rules=[
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [
"pugs",
"puggles"
]
},
{
"type": "exact",
"metadata": "user_country",
"values": [
"us"
]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [
"rescue dogs"
]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
],
)
const response = await client.queryRules.putRuleset({
ruleset_id: "my-ruleset",
rules: [
{
rule_id: "my-rule1",
type: "pinned",
criteria: [
{
type: "contains",
metadata: "user_query",
values: ["pugs", "puggles"],
},
{
type: "exact",
metadata: "user_country",
values: ["us"],
},
],
actions: {
ids: ["id1", "id2"],
},
},
{
rule_id: "my-rule2",
type: "pinned",
criteria: [
{
type: "fuzzy",
metadata: "user_query",
values: ["rescue dogs"],
},
],
actions: {
docs: [
{
_index: "index1",
_id: "id3",
},
{
_index: "index2",
_id: "id4",
},
],
},
},
],
});
response = client.query_rules.put_ruleset(
ruleset_id: "my-ruleset",
body: {
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [
"pugs",
"puggles"
]
},
{
"type": "exact",
"metadata": "user_country",
"values": [
"us"
]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [
"rescue dogs"
]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
)
$resp = $client->queryRules()->putRuleset([
"ruleset_id" => "my-ruleset",
"body" => [
"rules" => array(
[
"rule_id" => "my-rule1",
"type" => "pinned",
"criteria" => array(
[
"type" => "contains",
"metadata" => "user_query",
"values" => array(
"pugs",
"puggles",
),
],
[
"type" => "exact",
"metadata" => "user_country",
"values" => array(
"us",
),
],
),
"actions" => [
"ids" => array(
"id1",
"id2",
),
],
],
[
"rule_id" => "my-rule2",
"type" => "pinned",
"criteria" => array(
[
"type" => "fuzzy",
"metadata" => "user_query",
"values" => array(
"rescue dogs",
),
],
),
"actions" => [
"docs" => array(
[
"_index" => "index1",
"_id" => "id3",
],
[
"_index" => "index2",
"_id" => "id4",
],
),
],
],
),
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"rules":[{"rule_id":"my-rule1","type":"pinned","criteria":[{"type":"contains","metadata":"user_query","values":["pugs","puggles"]},{"type":"exact","metadata":"user_country","values":["us"]}],"actions":{"ids":["id1","id2"]}},{"rule_id":"my-rule2","type":"pinned","criteria":[{"type":"fuzzy","metadata":"user_query","values":["rescue dogs"]}],"actions":{"docs":[{"_index":"index1","_id":"id3"},{"_index":"index2","_id":"id4"}]}}]}' "$ELASTICSEARCH_URL/_query_rules/my-ruleset"
client.queryRules().putRuleset(p -> p
.rules(List.of(QueryRule.queryRuleOf(q -> q
.ruleId("my-rule1")
.type(QueryRuleType.Pinned)
.criteria(List.of(QueryRuleCriteria.of(qu -> qu
.type(QueryRuleCriteriaType.Contains)
.metadata("user_query")
.values(List.of(JsonData.fromJson("\"pugs\""),JsonData.fromJson("\"puggles\"")))),QueryRuleCriteria.of(qu -> qu
.type(QueryRuleCriteriaType.Exact)
.metadata("user_country")
.values(JsonData.fromJson("\"us\"")))))
.actions(a -> a
.ids(List.of("id1","id2"))
)),QueryRule.queryRuleOf(q -> q
.ruleId("my-rule2")
.type(QueryRuleType.Pinned)
.criteria(c -> c
.type(QueryRuleCriteriaType.Fuzzy)
.metadata("user_query")
.values(JsonData.fromJson("\"rescue dogs\""))
)
.actions(a -> a
.docs(List.of(PinnedDoc.of(pi -> pi
.id("id3")
.index("index1")),PinnedDoc.of(pi -> pi
.id("id4")
.index("index2"))))
))))
.rulesetId("my-ruleset")
);
{
"rules": [
{
"rule_id": "my-rule1",
"type": "pinned",
"criteria": [
{
"type": "contains",
"metadata": "user_query",
"values": [ "pugs", "puggles" ]
},
{
"type": "exact",
"metadata": "user_country",
"values": [ "us" ]
}
],
"actions": {
"ids": [
"id1",
"id2"
]
}
},
{
"rule_id": "my-rule2",
"type": "pinned",
"criteria": [
{
"type": "fuzzy",
"metadata": "user_query",
"values": [ "rescue dogs" ]
}
],
"actions": {
"docs": [
{
"_index": "index1",
"_id": "id3"
},
{
"_index": "index2",
"_id": "id4"
}
]
}
}
]
}
Delete a rollup job
Deprecated
Technical preview; Added in 6.3.0
A job must be stopped before it can be deleted. If you attempt to delete a started job, an error occurs. Similarly, if you attempt to delete a nonexistent job, an exception occurs.
IMPORTANT: When you delete a job, you remove only the process that is actively monitoring and rolling up data. The API does not delete any previously rolled up data. This is by design; a user may wish to roll up a static data set. Because the data set is static, after it has been fully rolled up there is no need to keep the indexing rollup job around (as there will be no new data). Thus the job can be deleted, leaving behind the rolled up data for analysis. If you wish to also remove the rollup data and the rollup index contains the data for only a single job, you can delete the whole rollup index. If the rollup index stores data from several jobs, you must issue a delete-by-query that targets the rollup job's identifier in the rollup index. For example:
POST my_rollup_index/_delete_by_query
{
"query": {
"term": {
"_rollup.id": "the_rollup_job_id"
}
}
}
Required authorization
- Cluster privileges:
manage_rollup
DELETE _rollup/job/sensor
resp = client.rollup.delete_job(
id="sensor",
)
const response = await client.rollup.deleteJob({
id: "sensor",
});
response = client.rollup.delete_job(
id: "sensor"
)
$resp = $client->rollup()->deleteJob([
"id" => "sensor",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_rollup/job/sensor"
client.rollup().deleteJob(d -> d
.id("sensor")
);
{
"acknowledged": true
}
Create or update a script or search template
Generally available
All methods and paths for this operation:
Creates or updates a stored script or search template.
Required authorization
- Cluster privileges:
manage
Path parameters
-
The identifier for the stored script or search template. It must be unique within the cluster.
-
The context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context.
Query parameters
-
The context in which the script or search template should run. To prevent errors, the API immediately compiles the script or template in this context. If you specify both this and the <context> path parameter, the API uses the request path parameter.
-
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
External documentation -
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. It can also be set to -1 to indicate that the request should never time out.
External documentation
PUT _scripts/my-search-template
{
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
}
}
resp = client.put_script(
id="my-search-template",
script={
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
},
)
const response = await client.putScript({
id: "my-search-template",
script: {
lang: "mustache",
source: {
query: {
match: {
message: "{{query_string}}",
},
},
from: "{{from}}",
size: "{{size}}",
},
},
});
response = client.put_script(
id: "my-search-template",
body: {
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
}
}
)
$resp = $client->putScript([
"id" => "my-search-template",
"body" => [
"script" => [
"lang" => "mustache",
"source" => [
"query" => [
"match" => [
"message" => "{{query_string}}",
],
],
"from" => "{{from}}",
"size" => "{{size}}",
],
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"script":{"lang":"mustache","source":{"query":{"match":{"message":"{{query_string}}"}},"from":"{{from}}","size":"{{size}}"}}}' "$ELASTICSEARCH_URL/_scripts/my-search-template"
{
"script": {
"lang": "mustache",
"source": {
"query": {
"match": {
"message": "{{query_string}}"
}
},
"from": "{{from}}",
"size": "{{size}}"
}
}
}
{
"script": {
"lang": "painless",
"source": "Math.log(_score * 2) + params['my_modifier']"
}
}
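A minimal Python sketch storing the Painless script above; the my-calculated-score identifier is hypothetical:
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Store the script under a hypothetical id; passing the optional context
# parameter would make the API compile it for that context immediately.
client.put_script(
    id="my-calculated-score",
    script={
        "lang": "painless",
        "source": "Math.log(_score * 2) + params['my_modifier']"
    },
)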
Run a script
Technical preview; Added in 6.3.0
All methods and paths for this operation:
Runs a script and returns a result. Use this API to build and test scripts, such as when defining a script for a runtime field. This API requires very few dependencies and is especially useful if you don't have permissions to write documents on a cluster.
The API uses several contexts, which control how scripts are run, what variables are available at runtime, and what the return type is.
Each context requires a script, but additional parameters depend on the context you're using for that script.
Body
Required
-
The context that the script should run in. NOTE: Result ordering in the field contexts is not guaranteed.
Supported values include:
- painless_test: The default context if no other context is specified.
- filter: Treats scripts as if they were run inside a script query.
- score: Treats scripts as if they were run inside a script_score function in a function_score query.
- boolean_field: The context for boolean fields. The script returns a true or false response.
- date_field: The context for date fields. emit takes a long value and the script returns a sorted list of dates.
- double_field: The context for double numeric fields. The script returns a sorted list of double values.
- geo_point_field: The context for geo-point fields. emit takes two double parameters, the latitude and longitude values, and the script returns an object in GeoJSON format containing the coordinates for the geo point.
- ip_field: The context for ip fields. The script returns a sorted list of IP addresses.
- keyword_field: The context for keyword fields. The script returns a sorted list of string values.
- long_field: The context for long numeric fields. The script returns a sorted list of long values.
- composite_field: The context for composite runtime fields. The script returns a map of values.
Values are
painless_test, filter, score, boolean_field, date_field, double_field, geo_point_field, ip_field, keyword_field, long_field, or composite_field. -
Additional parameters for the
context. NOTE: This parameter is required for all contexts except painless_test, which is the default if no value is provided for context. -
The Painless script to run.
POST /_scripts/painless/_execute
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
resp = client.scripts_painless_execute(
script={
"source": "params.count / params.total",
"params": {
"count": 100,
"total": 1000
}
},
)
const response = await client.scriptsPainlessExecute({
script: {
source: "params.count / params.total",
params: {
count: 100,
total: 1000,
},
},
});
response = client.scripts_painless_execute(
body: {
"script": {
"source": "params.count / params.total",
"params": {
"count": 100,
"total": 1000
}
}
}
)
$resp = $client->scriptsPainlessExecute([
"body" => [
"script" => [
"source" => "params.count / params.total",
"params" => [
"count" => 100,
"total" => 1000,
],
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"script":{"source":"params.count / params.total","params":{"count":100,"total":1000}}}' "$ELASTICSEARCH_URL/_scripts/painless/_execute"
client.scriptsPainlessExecute(s -> s
.script(sc -> sc
.source(so -> so
.scriptString("params.count / params.total")
)
.params(Map.of("total", JsonData.fromJson("1000"),"count", JsonData.fromJson("100")))
)
);
{
"script": {
"source": "params.count / params.total",
"params": {
"count": 100.0,
"total": 1000.0
}
}
}
{
"script": {
"source": "doc['field'].value.length() <= params.max_length",
"params": {
"max_length": 4
}
},
"context": "filter",
"context_setup": {
"index": "my-index-000001",
"document": {
"field": "four"
}
}
}
{
"script": {
"source": "doc['rank'].value / params.max_rank",
"params": {
"max_rank": 5.0
}
},
"context": "score",
"context_setup": {
"index": "my-index-000001",
"document": {
"rank": 4
}
}
}
{
"result": "0.1"
}
{
"result": true
}
{
"result": 0.8
}
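As a sketch, the filter-context example above can be sent through the Python client, assuming my-index-000001 exists with field mapped as a keyword:
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Evaluate the script as if it were a script query; context_setup
# supplies a sample document to test against without indexing it.
resp = client.scripts_painless_execute(
    script={
        "source": "doc['field'].value.length() <= params.max_length",
        "params": {"max_length": 4},
    },
    context="filter",
    context_setup={
        "index": "my-index-000001",
        "document": {"field": "four"},
    },
)
print(resp["result"])  # True for the sample document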
Explain a document match result
Generally available
Get information about why a specific document matches, or doesn't match, a query.
Path parameters
-
Index names that are used to limit the request. Only a single index name can be provided to this parameter.
-
The document identifier.
Query parameters
-
The analyzer to use for the query string. This parameter can be used only when the
q query string parameter is specified. -
If
true, wildcard and prefix queries are analyzed. This parameter can be used only when the q query string parameter is specified. -
The default operator for query string query:
and or or. This parameter can be used only when the q query string parameter is specified. Values are
and, AND, or, or OR. -
The field to use as default where no field prefix is given in the query string. This parameter can be used only when the
q query string parameter is specified. -
If
true, format-based query failures (such as providing text to a numeric field) in the query string will be ignored. This parameter can be used only when the q query string parameter is specified. -
The node or shard the operation should be performed on. It is random by default.
-
A custom value used to route operations to a specific shard.
-
true or false to return the _source field or not, or a list of fields to return. -
A comma-separated list of source fields to exclude from the response. You can also use this parameter to exclude fields from the subset specified in the
_source_includes query parameter. If the _source parameter is false, this parameter is ignored. -
A comma-separated list of source fields to include in the response. If this parameter is specified, only these source fields are returned. You can exclude fields from this subset using the
_source_excludes query parameter. If the _source parameter is false, this parameter is ignored. -
A comma-separated list of stored fields to return in the response.
-
The query in the Lucene query string syntax.
Body
-
Defines the search definition using the Query DSL.
External documentation
GET /my-index-000001/_explain/0
{
"query" : {
"match" : { "message" : "elasticsearch" }
}
}
resp = client.explain(
index="my-index-000001",
id="0",
query={
"match": {
"message": "elasticsearch"
}
},
)
const response = await client.explain({
index: "my-index-000001",
id: 0,
query: {
match: {
message: "elasticsearch",
},
},
});
response = client.explain(
index: "my-index-000001",
id: "0",
body: {
"query": {
"match": {
"message": "elasticsearch"
}
}
}
)
$resp = $client->explain([
"index" => "my-index-000001",
"id" => "0",
"body" => [
"query" => [
"match" => [
"message" => "elasticsearch",
],
],
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"query":{"match":{"message":"elasticsearch"}}}' "$ELASTICSEARCH_URL/my-index-000001/_explain/0"
client.explain(e -> e
.id("0")
.index("my-index-000001")
.query(q -> q
.match(m -> m
.field("message")
.query(FieldValue.of("elasticsearch"))
)
)
);
{
"query" : {
"match" : { "message" : "elasticsearch" }
}
}
{
"_index":"my-index-000001",
"_id":"0",
"matched":true,
"explanation":{
"value":1.6943598,
"description":"weight(message:elasticsearch in 0) [PerFieldSimilarity], result of:",
"details":[
{
"value":1.6943598,
"description":"score(freq=1.0), computed as boost * idf * tf from:",
"details":[
{
"value":2.2,
"description":"boost",
"details":[]
},
{
"value":1.3862944,
"description":"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:",
"details":[
{
"value":1,
"description":"n, number of documents containing term",
"details":[]
},
{
"value":5,
"description":"N, total number of documents with field",
"details":[]
}
]
},
{
"value":0.5555556,
"description":"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:",
"details":[
{
"value":1.0,
"description":"freq, occurrences of term within document",
"details":[]
},
{
"value":1.2,
"description":"k1, term saturation parameter",
"details":[]
},
{
"value":0.75,
"description":"b, length normalization parameter",
"details":[]
},
{
"value":3.0,
"description":"dl, length of field",
"details":[]
},
{
"value":5.4,
"description":"avgdl, average length of field",
"details":[]
}
]
}
]
}
]
}
}
Run multiple templated searches
Generally available; Added in 5.0.0
All methods and paths for this operation:
Run multiple templated searches with a single request.
If you are providing a text file or text input to curl, use the --data-binary flag instead of -d to preserve newlines.
For example:
$ cat requests
{ "index": "my-index" }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ "index": "my-other-index" }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}
$ curl -H "Content-Type: application/x-ndjson" -XGET localhost:9200/_msearch/template --data-binary "@requests"; echo
Required authorization
- Index privileges:
read
Path parameters
-
A comma-separated list of data streams, indices, and aliases to search. It supports wildcards (
*). To search all data streams and indices, omit this parameter or use *.
Query parameters
-
If
true, network round-trips are minimized for cross-cluster search requests. -
The maximum number of concurrent searches the API can run.
-
The type of the search operation.
Supported values include:
- query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
- dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.
Values are
query_then_fetch or dfs_query_then_fetch. -
If
true, the response returns hits.total as an integer. If false, it returns hits.total as an object. -
If
true, the response prefixes aggregation and suggester names with their respective types.
Body
object
Required
Contains parameters used to limit or change the subsequent search body request.
-
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
-
Supported values include:
- query_then_fetch: Documents are scored using local term and document frequencies for the shard. This is usually faster but less accurate.
- dfs_query_then_fetch: Documents are scored using global term and document frequencies across all shards. This is usually slower but more accurate.
Values are
query_then_fetch or dfs_query_then_fetch.
GET my-index/_msearch/template
{ }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}
resp = client.msearch_template(
index="my-index",
search_templates=[
{},
{
"id": "my-search-template",
"params": {
"query_string": "hello world",
"from": 0,
"size": 10
}
},
{},
{
"id": "my-other-search-template",
"params": {
"query_type": "match_all"
}
}
],
)
const response = await client.msearchTemplate({
index: "my-index",
search_templates: [
{},
{
id: "my-search-template",
params: {
query_string: "hello world",
from: 0,
size: 10,
},
},
{},
{
id: "my-other-search-template",
params: {
query_type: "match_all",
},
},
],
});
response = client.msearch_template(
index: "my-index",
body: [
{},
{
"id": "my-search-template",
"params": {
"query_string": "hello world",
"from": 0,
"size": 10
}
},
{},
{
"id": "my-other-search-template",
"params": {
"query_type": "match_all"
}
}
]
)
$resp = $client->msearchTemplate([
"index" => "my-index",
"body" => array(
new ArrayObject([]),
[
"id" => "my-search-template",
"params" => [
"query_string" => "hello world",
"from" => 0,
"size" => 10,
],
],
new ArrayObject([]),
[
"id" => "my-other-search-template",
"params" => [
"query_type" => "match_all",
],
],
),
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '[{},{"id":"my-search-template","params":{"query_string":"hello world","from":0,"size":10}},{},{"id":"my-other-search-template","params":{"query_type":"match_all"}}]' "$ELASTICSEARCH_URL/my-index/_msearch/template"
{ }
{ "id": "my-search-template", "params": { "query_string": "hello world", "from": 0, "size": 10 }}
{ }
{ "id": "my-other-search-template", "params": { "query_type": "match_all" }}
Search a vector tile
Generally available; Added in 7.15.0
All methods and paths for this operation:
Search a vector tile for geospatial values. Before using this API, you should be familiar with the Mapbox vector tile specification. The API returns results as a binary Mapbox vector tile.
Internally, Elasticsearch translates a vector tile search API request into a search containing:
- A
geo_bounding_box query on the <field>. The query uses the <zoom>/<x>/<y> tile as a bounding box. - A
geotile_grid or geohex_grid aggregation on the <field>. The grid_agg parameter determines the aggregation type. The aggregation uses the <zoom>/<x>/<y> tile as a bounding box. - Optionally, a
geo_bounds aggregation on the <field>. The search only includes this aggregation if the exact_bounds parameter is true. - If the optional parameter
with_labels is true, the internal search will include a dynamic runtime field that calls the getLabelPosition function of the geometry doc value. This enables the generation of new point features containing suggested geometry labels, so that, for example, multi-polygons will have only one label.
For example, Elasticsearch may translate a vector tile search API request with a grid_agg argument of geotile and an exact_bounds argument of true into the following search:
GET my-index/_search
{
"size": 10000,
"query": {
"geo_bounding_box": {
"my-geo-field": {
"top_left": {
"lat": -40.979898069620134,
"lon": -45
},
"bottom_right": {
"lat": -66.51326044311186,
"lon": 0
}
}
}
},
"aggregations": {
"grid": {
"geotile_grid": {
"field": "my-geo-field",
"precision": 11,
"size": 65536,
"bounds": {
"top_left": {
"lat": -40.979898069620134,
"lon": -45
},
"bottom_right": {
"lat": -66.51326044311186,
"lon": 0
}
}
}
},
"bounds": {
"geo_bounds": {
"field": "my-geo-field",
"wrap_longitude": false
}
}
}
}
The API returns results as a binary Mapbox vector tile. Mapbox vector tiles are encoded as Google Protobufs (PBF). By default, the tile contains three layers:
- A
hits layer containing a feature for each <field> value matching the geo_bounding_box query. - An
aggs layer containing a feature for each cell of the geotile_grid or geohex_grid. The layer only contains features for cells with matching data. - A meta layer containing:
- A feature containing a bounding box. By default, this is the bounding box of the tile.
- Value ranges for any sub-aggregations on the
geotile_grid or geohex_grid. - Metadata for the search.
The API only returns features that can display at its zoom level. For example, if a polygon feature has no area at its zoom level, the API omits it. The API returns errors as UTF-8 encoded JSON.
IMPORTANT: You can specify several options for this API as either a query parameter or request body parameter. If you specify both parameters, the query parameter takes precedence.
Grid precision for geotile
For a grid_agg of geotile, you can use cells in the aggs layer as tiles for lower zoom levels.
grid_precision represents the additional zoom levels available through these cells. The final precision is computed as follows: <zoom> + grid_precision.
For example, if <zoom> is 7 and grid_precision is 8, then the geotile_grid aggregation will use a precision of 15.
The maximum final precision is 29.
The grid_precision also determines the number of cells for the grid as follows: (2^grid_precision) x (2^grid_precision).
For example, a value of 8 divides the tile into a grid of 256 x 256 cells.
The aggs layer only contains features for cells with matching data.
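The arithmetic is small enough to sketch in Python, using the numbers from the example above:
# Final precision and grid size for a geotile grid_agg (formulas only,
# no API call).
zoom = 7
grid_precision = 8
final_precision = zoom + grid_precision   # 15, capped at 29 by the API
cells_per_side = 2 ** grid_precision      # 256
total_cells = cells_per_side ** 2         # 65536 cells in the aggs layer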
Grid precision for geohex
For a grid_agg of geohex, Elasticsearch uses <zoom> and grid_precision to calculate a final precision as follows: <zoom> + grid_precision.
This precision determines the H3 resolution of the hexagonal cells produced by the geohex aggregation.
The following table maps the H3 resolution for each precision.
For example, if <zoom> is 3 and grid_precision is 3, the precision is 6.
At a precision of 6, hexagonal cells have an H3 resolution of 2.
If <zoom> is 3 and grid_precision is 4, the precision is 7.
At a precision of 7, hexagonal cells have an H3 resolution of 3.
| Precision | Unique tile bins | H3 resolution | Unique hex bins | Ratio |
|---|---|---|---|---|
| 1 | 4 | 0 | 122 | 30.5 |
| 2 | 16 | 0 | 122 | 7.625 |
| 3 | 64 | 1 | 842 | 13.15625 |
| 4 | 256 | 1 | 842 | 3.2890625 |
| 5 | 1024 | 2 | 5882 | 5.744140625 |
| 6 | 4096 | 2 | 5882 | 1.436035156 |
| 7 | 16384 | 3 | 41162 | 2.512329102 |
| 8 | 65536 | 3 | 41162 | 0.6280822754 |
| 9 | 262144 | 4 | 288122 | 1.099098206 |
| 10 | 1048576 | 4 | 288122 | 0.2747745514 |
| 11 | 4194304 | 5 | 2016842 | 0.4808526039 |
| 12 | 16777216 | 6 | 14117882 | 0.8414913416 |
| 13 | 67108864 | 6 | 14117882 | 0.2103728354 |
| 14 | 268435456 | 7 | 98825162 | 0.3681524172 |
| 15 | 1073741824 | 8 | 691776122 | 0.644266719 |
| 16 | 4294967296 | 8 | 691776122 | 0.1610666797 |
| 17 | 17179869184 | 9 | 4842432842 | 0.2818666889 |
| 18 | 68719476736 | 10 | 33897029882 | 0.4932667053 |
| 19 | 274877906944 | 11 | 237279209162 | 0.8632167343 |
| 20 | 1099511627776 | 11 | 237279209162 | 0.2158041836 |
| 21 | 4398046511104 | 12 | 1660954464122 | 0.3776573213 |
| 22 | 17592186044416 | 13 | 11626681248842 | 0.6609003122 |
| 23 | 70368744177664 | 13 | 11626681248842 | 0.165225078 |
| 24 | 281474976710656 | 14 | 81386768741882 | 0.2891438866 |
| 25 | 1125899906842620 | 15 | 569707381193162 | 0.5060018015 |
| 26 | 4503599627370500 | 15 | 569707381193162 | 0.1265004504 |
| 27 | 18014398509482000 | 15 | 569707381193162 | 0.03162511259 |
| 28 | 72057594037927900 | 15 | 569707381193162 | 0.007906278149 |
| 29 | 288230376151712000 | 15 | 569707381193162 | 0.001976569537 |
Hexagonal cells don't align perfectly on a vector tile. Some cells may intersect more than one vector tile. To compute the H3 resolution for each precision, Elasticsearch compares the average density of hexagonal bins at each resolution with the average density of tile bins at each zoom level. Elasticsearch uses the H3 resolution that is closest to the corresponding geotile density.
Required authorization
- Index privileges:
read
Path parameters
-
Comma-separated list of data streams, indices, or aliases to search
-
Field containing geospatial data to return
-
Zoom level for the vector tile to search
-
X coordinate for the vector tile to search
-
Y coordinate for the vector tile to search
Query parameters
-
If
false, the meta layer's feature is the bounding box of the tile. If true, the meta layer's feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile. -
The size, in pixels, of a side of the tile. Vector tiles are square with equal sides.
-
Aggregation used to create a grid for
field. Values are
geotile or geohex. -
Additional zoom levels available through the aggs layer. For example, if <zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don't include the aggs layer.
-
Determines the geometry type for features in the aggs layer. In the aggs layer, each feature represents a geotile_grid cell. If grid, each feature is a Polygon of the cell's bounding box. If point, each feature is a Point that is the centroid of the cell.
Values are
grid, point, or centroid. -
Maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don't include the hits layer.
-
The number of hits matching the query to count accurately. If
true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. -
If
true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features.
- Point and MultiPoint features will have one of the points selected.
- Polygon and MultiPolygon features will have a single point generated, either the centroid, if it is within the polygon, or another point within the polygon selected from the sorted triangle-tree.
- LineString features will likewise provide a roughly central point selected from the triangle-tree.
- The aggregation results will provide one central point for each aggregation bucket.
All attributes from the original features will also be copied to the new label features. In addition, the new features will be distinguishable using the tag
_mvt_label_position.
Body
-
Sub-aggregations for the geotile_grid.
It supports the following aggregation types:
- avg
- boxplot
- cardinality
- extended stats
- max
- median absolute deviation
- min
- percentile
- percentile-rank
- stats
- sum
- value count
The aggregation names can't start with
_mvt_. The _mvt_ prefix is reserved for internal aggregations. -
The size, in pixels, of a clipping buffer outside the tile. This allows renderers to avoid outline artifacts from geometries that extend past the extent of the tile.
Default value is
5. -
If
false, the meta layer's feature is the bounding box of the tile. If true, the meta layer's feature is a bounding box resulting from a geo_bounds aggregation. The aggregation runs on values that intersect the <zoom>/<x>/<y> tile with wrap_longitude set to false. The resulting bounding box may be larger than the vector tile. Default value is
false. -
The size, in pixels, of a side of the tile. Vector tiles are square with equal sides.
Default value is
4096. -
The fields to return in the
hits layer. It supports wildcards (*). This parameter does not support fields with array values. Fields with array values may return inconsistent results. -
The aggregation used to create a grid for the
field. Values are
geotile or geohex. -
Additional zoom levels available through the aggs layer. For example, if
<zoom> is 7 and grid_precision is 8, you can zoom in up to level 15. Accepts 0-8. If 0, results don't include the aggs layer. Default value is
8. -
Determines the geometry type for features in the aggs layer. In the aggs layer, each feature represents a
geotile_grid cell. If grid, each feature is a polygon of the cell's bounding box. If point, each feature is a Point that is the centroid of the cell. Values are
grid, point, or centroid. -
The query DSL used to filter documents for the search.
External documentation -
Defines one or more runtime fields in the search request. These fields take precedence over mapped fields with the same name.
-
The maximum number of features to return in the hits layer. Accepts 0-10000. If 0, results don't include the hits layer.
Default value is
10000.
sort
string | object | array[string | object]
Sort the features in the hits layer. By default, the API calculates a bounding box for each feature. It sorts features based on this box's diagonal length, from longest to shortest.
-
The number of hits matching the query to count accurately. If
true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. -
If
true, the hits and aggs layers will contain additional point features representing suggested label positions for the original features.
- Point and MultiPoint features will have one of the points selected.
- Polygon and MultiPolygon features will have a single point generated, either the centroid, if it is within the polygon, or another point within the polygon selected from the sorted triangle-tree.
- LineString features will likewise provide a roughly central point selected from the triangle-tree.
- The aggregation results will provide one central point for each aggregation bucket.
All attributes from the original features will also be copied to the new label features. In addition, the new features will be distinguishable using the tag
_mvt_label_position.
GET museums/_mvt/location/13/4207/2692
{
"grid_agg": "geotile",
"grid_precision": 2,
"fields": [
"name",
"price"
],
"query": {
"term": {
"included": true
}
},
"aggs": {
"min_price": {
"min": {
"field": "price"
}
},
"max_price": {
"max": {
"field": "price"
}
},
"avg_price": {
"avg": {
"field": "price"
}
}
}
}
resp = client.search_mvt(
index="museums",
field="location",
zoom="13",
x="4207",
y="2692",
grid_agg="geotile",
grid_precision=2,
fields=[
"name",
"price"
],
query={
"term": {
"included": True
}
},
aggs={
"min_price": {
"min": {
"field": "price"
}
},
"max_price": {
"max": {
"field": "price"
}
},
"avg_price": {
"avg": {
"field": "price"
}
}
},
)
const response = await client.searchMvt({
index: "museums",
field: "location",
zoom: 13,
x: 4207,
y: 2692,
grid_agg: "geotile",
grid_precision: 2,
fields: ["name", "price"],
query: {
term: {
included: true,
},
},
aggs: {
min_price: {
min: {
field: "price",
},
},
max_price: {
max: {
field: "price",
},
},
avg_price: {
avg: {
field: "price",
},
},
},
});
response = client.search_mvt(
index: "museums",
field: "location",
zoom: "13",
x: "4207",
y: "2692",
body: {
"grid_agg": "geotile",
"grid_precision": 2,
"fields": [
"name",
"price"
],
"query": {
"term": {
"included": true
}
},
"aggs": {
"min_price": {
"min": {
"field": "price"
}
},
"max_price": {
"max": {
"field": "price"
}
},
"avg_price": {
"avg": {
"field": "price"
}
}
}
}
)
$resp = $client->searchMvt([
"index" => "museums",
"field" => "location",
"zoom" => "13",
"x" => "4207",
"y" => "2692",
"body" => [
"grid_agg" => "geotile",
"grid_precision" => 2,
"fields" => array(
"name",
"price",
),
"query" => [
"term" => [
"included" => true,
],
],
"aggs" => [
"min_price" => [
"min" => [
"field" => "price",
],
],
"max_price" => [
"max" => [
"field" => "price",
],
],
"avg_price" => [
"avg" => [
"field" => "price",
],
],
],
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"grid_agg":"geotile","grid_precision":2,"fields":["name","price"],"query":{"term":{"included":true}},"aggs":{"min_price":{"min":{"field":"price"}},"max_price":{"max":{"field":"price"}},"avg_price":{"avg":{"field":"price"}}}}' "$ELASTICSEARCH_URL/museums/_mvt/location/13/4207/2692"
{
"grid_agg": "geotile",
"grid_precision": 2,
"fields": [
"name",
"price"
],
"query": {
"term": {
"included": true
}
},
"aggs": {
"min_price": {
"min": {
"field": "price"
}
},
"max_price": {
"max": {
"field": "price"
}
},
"avg_price": {
"avg": {
"field": "price"
}
}
}
}
{
"hits": {
"extent": 4096,
"version": 2,
"features": [
{
"geometry": {
"type": "Point",
"coordinates": [
3208,
3864
]
},
"properties": {
"_id": "1",
"_index": "museums",
"name": "NEMO Science Museum",
"price": 1750
},
"type": 1
},
{
"geometry": {
"type": "Point",
"coordinates": [
3429,
3496
]
},
"properties": {
"_id": "3",
"_index": "museums",
"name": "Nederlands Scheepvaartmuseum",
"price": 1650
},
"type": 1
},
{
"geometry": {
"type": "Point",
"coordinates": [
3429,
3496
]
},
"properties": {
"_id": "4",
"_index": "museums",
"name": "Amsterdam Centre for Architecture",
"price": 0
},
"type": 1
}
]
},
"aggs": {
"extent": 4096,
"version": 2,
"features": [
{
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
3072,
3072
],
[
4096,
3072
],
[
4096,
4096
],
[
3072,
4096
],
[
3072,
3072
]
]
]
},
"properties": {
"_count": 3,
"max_price.value": 1750.0,
"min_price.value": 0.0,
"avg_price.value": 1133.3333333333333
},
"type": 3
}
]
},
"meta": {
"extent": 4096,
"version": 2,
"features": [
{
"geometry": {
"type": "Polygon",
"coordinates": [
[
[
0,
0
],
[
4096,
0
],
[
4096,
4096
],
[
0,
4096
],
[
0,
0
]
]
]
},
"properties": {
"_shards.failed": 0,
"_shards.skipped": 0,
"_shards.successful": 1,
"_shards.total": 1,
"aggregations._count.avg": 3.0,
"aggregations._count.count": 1,
"aggregations._count.max": 3.0,
"aggregations._count.min": 3.0,
"aggregations._count.sum": 3.0,
"aggregations.avg_price.avg": 1133.3333333333333,
"aggregations.avg_price.count": 1,
"aggregations.avg_price.max": 1133.3333333333333,
"aggregations.avg_price.min": 1133.3333333333333,
"aggregations.avg_price.sum": 1133.3333333333333,
"aggregations.max_price.avg": 1750.0,
"aggregations.max_price.count": 1,
"aggregations.max_price.max": 1750.0,
"aggregations.max_price.min": 1750.0,
"aggregations.max_price.sum": 1750.0,
"aggregations.min_price.avg": 0.0,
"aggregations.min_price.count": 1,
"aggregations.min_price.max": 0.0,
"aggregations.min_price.min": 0.0,
"aggregations.min_price.sum": 0.0,
"hits.max_score": 0.0,
"hits.total.relation": "eq",
"hits.total.value": 3,
"timed_out": false,
"took": 2
},
"type": 3
}
]
}
}
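The response body is the binary tile itself. One way to inspect it from Python is a third-party MVT decoder; the mapbox-vector-tile package used here is an assumption, not part of the Elasticsearch client:
import mapbox_vector_tile  # assumed third-party dependency
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

tile = client.search_mvt(
    index="museums",
    field="location",
    zoom=13,
    x=4207,
    y=2692,
)
# Decode the protobuf payload into a dict with hits, aggs, and meta layers.
layers = mapbox_vector_tile.decode(tile.body)
print(list(layers.keys()))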
Get terms in an index
Generally available; Added in 7.14.0
All methods and paths for this operation:
Discover terms that match a partial string in an index. This API is designed for low-latency look-ups used in auto-complete scenarios.
The terms enum API may return terms from deleted documents. Deleted documents are initially only marked as deleted. It is not until their segments are merged that documents are actually deleted. Until that happens, the terms enum API will return terms from these documents.
Path parameters
-
A comma-separated list of data streams, indices, and index aliases to search. Wildcard (
*) expressions are supported. To search all data streams or indices, omit this parameter or use * or _all.
Body
Required
-
The string to match at the start of indexed terms. If not provided, all terms in the field are considered.
-
The number of matching terms to return.
Default value is
10. -
The maximum length of time to spend collecting results. If the timeout is exceeded, the
complete flag is set to false in the response and the results may be partial or empty. External documentation -
When
true, the provided search string is matched against index terms without case sensitivity. Default value is
false. -
Filter an index shard if the provided query rewrites to
match_none. External documentation -
The string to match at the start of indexed terms. If it is not provided, all terms in the field are considered.
The prefix string cannot be larger than the largest possible keyword value, which is Lucene's term byte-length limit of 32766.
-
The string after which terms in the index should be returned. It allows for a form of pagination if the last result from one request is passed as the
search_after parameter for a subsequent request.
POST stackoverflow/_terms_enum
{
"field" : "tags",
"string" : "kiba"
}
resp = client.terms_enum(
index="stackoverflow",
field="tags",
string="kiba",
)
const response = await client.termsEnum({
index: "stackoverflow",
field: "tags",
string: "kiba",
});
response = client.terms_enum(
index: "stackoverflow",
body: {
"field": "tags",
"string": "kiba"
}
)
$resp = $client->termsEnum([
"index" => "stackoverflow",
"body" => [
"field" => "tags",
"string" => "kiba",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"field":"tags","string":"kiba"}' "$ELASTICSEARCH_URL/stackoverflow/_terms_enum"
client.termsEnum(t -> t
.field("tags")
.index("stackoverflow")
.string("kiba")
);
{
"field" : "tags",
"string" : "kiba"
}
{
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"terms": [
"kibana"
],
"complete" : true
}
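The search_after parameter supports simple pagination: pass the last term of each page into the next request. A hedged Python sketch:
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Page through matching terms until the enumeration reports complete.
params = {"index": "stackoverflow", "field": "tags", "string": "kiba"}
while True:
    resp = client.terms_enum(**params)
    for term in resp["terms"]:
        print(term)
    if resp["complete"] or not resp["terms"]:
        break
    params["search_after"] = resp["terms"][-1]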
Create or update a search application
Beta; Added in 8.8.0
PUT _application/search_application/my-app
{
"indices": [ "index1", "index2" ],
"template": {
"script": {
"source": {
"query": {
"query_string": {
"query": "{{query_string}}",
"default_field": "{{default_field}}"
}
}
},
"params": {
"query_string": "*",
"default_field": "*"
}
},
"dictionary": {
"properties": {
"query_string": {
"type": "string"
},
"default_field": {
"type": "string",
"enum": [
"title",
"description"
]
},
"additionalProperties": false
},
"required": [
"query_string"
]
}
}
}
resp = client.search_application.put(
name="my-app",
search_application={
"indices": [
"index1",
"index2"
],
"template": {
"script": {
"source": {
"query": {
"query_string": {
"query": "{{query_string}}",
"default_field": "{{default_field}}"
}
}
},
"params": {
"query_string": "*",
"default_field": "*"
}
},
"dictionary": {
"properties": {
"query_string": {
"type": "string"
},
"default_field": {
"type": "string",
"enum": [
"title",
"description"
]
},
"additionalProperties": False
},
"required": [
"query_string"
]
}
}
},
)
const response = await client.searchApplication.put({
name: "my-app",
search_application: {
indices: ["index1", "index2"],
template: {
script: {
source: {
query: {
query_string: {
query: "{{query_string}}",
default_field: "{{default_field}}",
},
},
},
params: {
query_string: "*",
default_field: "*",
},
},
dictionary: {
properties: {
query_string: {
type: "string",
},
default_field: {
type: "string",
enum: ["title", "description"],
},
additionalProperties: false,
},
required: ["query_string"],
},
},
},
});
response = client.search_application.put(
name: "my-app",
body: {
"indices": [
"index1",
"index2"
],
"template": {
"script": {
"source": {
"query": {
"query_string": {
"query": "{{query_string}}",
"default_field": "{{default_field}}"
}
}
},
"params": {
"query_string": "*",
"default_field": "*"
}
},
"dictionary": {
"properties": {
"query_string": {
"type": "string"
},
"default_field": {
"type": "string",
"enum": [
"title",
"description"
]
},
"additionalProperties": false
},
"required": [
"query_string"
]
}
}
}
)
$resp = $client->searchApplication()->put([
"name" => "my-app",
"body" => [
"indices" => array(
"index1",
"index2",
),
"template" => [
"script" => [
"source" => [
"query" => [
"query_string" => [
"query" => "{{query_string}}",
"default_field" => "{{default_field}}",
],
],
],
"params" => [
"query_string" => "*",
"default_field" => "*",
],
],
"dictionary" => [
"properties" => [
"query_string" => [
"type" => "string",
],
"default_field" => [
"type" => "string",
"enum" => array(
"title",
"description",
),
],
"additionalProperties" => false,
],
"required" => array(
"query_string",
),
],
],
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"indices":["index1","index2"],"template":{"script":{"source":{"query":{"query_string":{"query":"{{query_string}}","default_field":"{{default_field}}"}}},"params":{"query_string":"*","default_field":"*"}},"dictionary":{"properties":{"query_string":{"type":"string"},"default_field":{"type":"string","enum":["title","description"]},"additionalProperties":false},"required":["query_string"]}}}' "$ELASTICSEARCH_URL/_application/search_application/my-app"
client.searchApplication().put(p -> p
.name("my-app")
.searchApplication(s -> s
.indices(List.of("index1","index2"))
.template(t -> t
.script(sc -> sc
.source(so -> so
.scriptTemplate(scr -> scr
.query(q -> q
.queryString(qu -> qu
.defaultField("{{default_field}}")
.query("{{query_string}}")
)
)
)
)
.params(Map.of("default_field", JsonData.fromJson("\"*\""),"query_string", JsonData.fromJson("\"*\"")))
)
)
)
);
{
"indices": [ "index1", "index2" ],
"template": {
"script": {
"source": {
"query": {
"query_string": {
"query": "{{query_string}}",
"default_field": "{{default_field}}"
}
}
},
"params": {
"query_string": "*",
"default_field": "*"
}
},
"dictionary": {
"properties": {
"query_string": {
"type": "string"
},
"default_field": {
"type": "string",
"enum": [
"title",
"description"
]
},
"additionalProperties": false
},
"required": [
"query_string"
]
}
}
}
Get search applications
Beta; Added in 8.8.0
Get information about search applications.
GET _application/search_application?from=0&size=3&q=app*
resp = client.search_application.list(
from="0",
size="3",
q="app*",
)
const response = await client.searchApplication.list({
from: 0,
size: 3,
q: "app*",
});
response = client.search_application.list(
from: "0",
size: "3",
q: "app*"
)
$resp = $client->searchApplication()->list([
"from" => "0",
"size" => "3",
"q" => "app*",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_application/search_application?from=0&size=3&q=app*"
client.searchApplication().list(l -> l
.from(0)
.q("app*")
.size(3)
);
{
"count": 2,
"results": [
{
"name": "app-1",
"updated_at_millis": 1690981129366
},
{
"name": "app-2",
"updated_at_millis": 1691501823939
}
]
}
Clear the cache
Technical preview; Added in 7.10.0
All methods and paths for this operation:
Clear indices and data streams from the shared cache for partially mounted indices.
Required authorization
- Index privileges:
manage - Cluster privileges:
manage
Path parameters
-
A comma-separated list of data streams, indices, and aliases to clear from the cache. It supports wildcards (
*).
Query parameters
-
Whether to expand wildcard expression to concrete indices that are open, closed or both.
Supported values include:
- all: Match any data stream or index, including hidden ones.
- open: Match open, non-hidden indices. Also matches any non-hidden data stream.
- closed: Match closed, non-hidden indices. Also matches any non-hidden data stream. Data streams cannot be closed.
- hidden: Match hidden data streams and hidden indices. Must be combined with open, closed, or both.
- none: Wildcard expressions are not accepted.
Values are
all, open, closed, hidden, or none. -
Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes
the _all string or when no indices have been specified.)
POST /my-index/_searchable_snapshots/cache/clear
resp = client.searchable_snapshots.clear_cache(
index="my-index",
)
const response = await client.searchableSnapshots.clearCache({
index: "my-index",
});
response = client.searchable_snapshots.clear_cache(
index: "my-index"
)
$resp = $client->searchableSnapshots()->clearCache([
"index" => "my-index",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/my-index/_searchable_snapshots/cache/clear"
client.searchableSnapshots().clearCache(c -> c
.index("my-index")
);
Activate a user profile
Generally available; Added in 8.2.0
Create or update a user profile on behalf of another user.
NOTE: The user profile feature is designed only for use by Kibana and Elastic's Observability, Enterprise Search, and Elastic Security solutions.
Individual users and external applications should not call this API directly.
The calling application must have either an access_token or a combination of username and password for the user that the profile document is intended for.
Elastic reserves the right to change or remove this feature in future releases without prior notice.
This API creates or updates a profile document for end users with information that is extracted from the user's authentication object including username, full_name, roles, and the authentication realm.
For example, in the JWT access_token case, the profile user's username is extracted from the JWT token claim pointed to by the claims.principal setting of the JWT realm that authenticated the token.
When updating a profile document, the API enables the document if it was disabled.
Any updates do not change existing content for either the labels or data fields.
Required authorization
- Cluster privileges:
manage_user_profile
Body
Required
-
The user's Elasticsearch access token or JWT. Both
access and id JWT token types are supported and they depend on the underlying JWT realm configuration. If you specify the access_token grant type, this parameter is required. It is not valid with other grant types. -
The type of grant.
Supported values include:
- password: In this type of grant, you must supply the user ID and password for which you want to create the API key.
- access_token: In this type of grant, you must supply an access token that was created by the Elasticsearch token service. If you are activating a user profile, you can alternatively supply a JWT (either a JWT access_token or a JWT id_token).
Values are
password or access_token. -
The user's password. If you specify the
password grant type, this parameter is required. It is not valid with other grant types. -
The username that identifies the user. If you specify the
password grant type, this parameter is required. It is not valid with other grant types.
POST /_security/profile/_activate
{
"grant_type": "password",
"username" : "jacknich",
"password" : "l0ng-r4nd0m-p@ssw0rd"
}
resp = client.security.activate_user_profile(
grant_type="password",
username="jacknich",
password="l0ng-r4nd0m-p@ssw0rd",
)
const response = await client.security.activateUserProfile({
grant_type: "password",
username: "jacknich",
password: "l0ng-r4nd0m-p@ssw0rd",
});
response = client.security.activate_user_profile(
body: {
"grant_type": "password",
"username": "jacknich",
"password": "l0ng-r4nd0m-p@ssw0rd"
}
)
$resp = $client->security()->activateUserProfile([
"body" => [
"grant_type" => "password",
"username" => "jacknich",
"password" => "l0ng-r4nd0m-p@ssw0rd",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"grant_type":"password","username":"jacknich","password":"l0ng-r4nd0m-p@ssw0rd"}' "$ELASTICSEARCH_URL/_security/profile/_activate"
client.security().activateUserProfile(a -> a
.grantType(GrantType.Password)
.password("l0ng-r4nd0m-p@ssw0rd")
.username("jacknich")
);
{
"grant_type": "password",
"username" : "jacknich",
"password" : "l0ng-r4nd0m-p@ssw0rd"
}
{
"uid": "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
"enabled": true,
"last_synchronized": 1642650651037,
"user": {
"username": "jacknich",
"roles": [
"admin", "other_role1"
],
"realm_name": "native",
"full_name": "Jack Nicholson",
"email": "jacknich@example.com"
},
"labels": {},
"data": {},
"_doc": {
"_primary_term": 88,
"_seq_no": 66
}
}
Authenticate a user
Generally available; Added in 5.5.0
Authenticates a user and returns information about the authenticated user. Include the user information in a basic auth header. A successful call returns a JSON structure that shows user information such as their username, the roles that are assigned to the user, any assigned metadata, and information about the realms that authenticated and authorized the user. If the user cannot be authenticated, this API returns a 401 status code.
GET /_security/_authenticate
resp = client.security.authenticate()
const response = await client.security.authenticate();
response = client.security.authenticate
$resp = $client->security()->authenticate();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/_authenticate"
client.security().authenticate();
{
"username": "rdeniro",
"roles": [
"admin"
],
"full_name": null,
"email": null,
"metadata": { },
"enabled": true,
"authentication_realm": {
"name" : "file",
"type" : "file"
},
"lookup_realm": {
"name" : "file",
"type" : "file"
},
"authentication_type": "realm"
}
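A minimal Python sketch that verifies a username and password pair by sending it as the basic auth header; the host and credentials are placeholders:
from elasticsearch import Elasticsearch

# The credentials being checked are supplied as basic auth.
client = Elasticsearch("http://localhost:9200", basic_auth=("rdeniro", "<password>"))

info = client.security.authenticate()  # raises an error on a 401
print(info["username"], info["roles"])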
Clear the roles cache
Generally available
Evict roles from the native role cache.
POST /_security/role/my_admin_role/_clear_cache
resp = client.security.clear_cached_roles(
name="my_admin_role",
)
const response = await client.security.clearCachedRoles({
name: "my_admin_role",
});
response = client.security.clear_cached_roles(
name: "my_admin_role"
)
$resp = $client->security()->clearCachedRoles([
"name" => "my_admin_role",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role/my_admin_role/_clear_cache"
client.security().clearCachedRoles(c -> c
.name("my_admin_role")
);
Get API key information
Generally available; Added in 6.7.0
Retrieves information for one or more API keys.
NOTE: If you have only the manage_own_api_key privilege, this API returns only the API keys that you own.
If you have read_security, manage_api_key or greater privileges (including manage_security), this API returns all API keys regardless of ownership.
Required authorization
- Cluster privileges:
manage_own_api_key, read_security
Query parameters
-
An API key id. This parameter cannot be used with any of
name, realm_name or username. -
An API key name. This parameter cannot be used with any of
id, realm_name or username. It supports prefix search with wildcard. -
A boolean flag that can be used to query API keys owned by the currently authenticated user. The
realm_name or username parameters cannot be specified when this parameter is set to true as they are assumed to be the currently authenticated ones. -
The name of an authentication realm. This parameter cannot be used with either
id or name or when the owner flag is set to true. -
The username of a user. This parameter cannot be used with either
id or name or when the owner flag is set to true. -
Return the snapshot of the owner user's role descriptors associated with the API key. An API key's actual permission is the intersection of its assigned role descriptors and the owner user's role descriptors.
-
A boolean flag that can be used to query API keys that are currently active. An API key is considered active if it is neither invalidated nor expired at query time. You can specify this together with other parameters such as
owner or name. If active_only is false, the response will include both active and inactive (expired or invalidated) keys. -
Determines whether to also retrieve the profile uid, for the API key owner principal, if it exists.
GET /_security/api_key?username=myuser&realm_name=native1
resp = client.security.get_api_key(
username="myuser",
realm_name="native1",
)
const response = await client.security.getApiKey({
username: "myuser",
realm_name: "native1",
});
response = client.security.get_api_key(
username: "myuser",
realm_name: "native1"
)
$resp = $client->security()->getApiKey([
"username" => "myuser",
"realm_name" => "native1",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/api_key?username=myuser&realm_name=native1"
client.security().getApiKey(g -> g
.realmName("native1")
.username("myuser")
);
{
"api_keys": [
{
"id": "VuaCfGcBCdbkQm-e5aOx",
"name": "my-api-key",
"creation": 1548550550158,
"expiration": 1548551550158,
"invalidated": false,
"username": "myuser",
"realm": "native1",
"realm_type": "native",
"metadata": {
"application": "myapp"
},
"role_descriptors": { },
"limited_by": [
{
"role-power-user": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"*"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
]
}
]
}
{
"api_keys": [
{
"id": "0GF5GXsBCXxz2eDxWwFN",
"name": "hadoop_myuser_key",
"creation": 1548550550158,
"expiration": 1548551550158,
"invalidated": false,
"username": "myuser",
"realm": "native1",
"realm_type": "native",
"metadata": {
"application": "myapp"
},
"role_descriptors": {
"role-a": {
"cluster": [
"monitor"
],
"indices": [
{
"names": [
"index-a"
],
"privileges": [
"read"
],
"allow_restricted_indices": false
}
],
"applications": [ ],
"run_as": [ ],
"metadata": { },
"transient_metadata": {
"enabled": true
}
}
}
},
{
"id": "6wHJmcQpReKBa42EHV5SBw",
"name": "api-key-name-2",
"creation": 1548550550158,
"invalidated": false,
"username": "user-y",
"realm": "realm-2",
"metadata": {},
"role_descriptors": { }
}
]
}
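As a sketch, the owner and active_only query parameters described above combine to list only your own keys that are still usable:
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Only keys owned by the calling user that are neither expired nor
# invalidated.
resp = client.security.get_api_key(owner=True, active_only=True)
for key in resp["api_keys"]:
    print(key["id"], key["name"])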
Get application privileges
Generally available; Added in 6.4.0
All methods and paths for this operation:
To use this API, you must have one of the following privileges:
- The
read_security cluster privilege (or a greater privilege such as manage_security or all). - The "Manage Application Privileges" global privilege for the application being referenced in the request.
Required authorization
- Cluster privileges:
read_security
Path parameters
-
The name of the application. Application privileges are always associated with exactly one application. If you do not specify this parameter, the API returns information about all privileges for all applications.
-
The name of the privilege. If you do not specify this parameter, the API returns information about all privileges for the requested application.
GET /_security/privilege/myapp/read
resp = client.security.get_privileges(
application="myapp",
name="read",
)
const response = await client.security.getPrivileges({
application: "myapp",
name: "read",
});
response = client.security.get_privileges(
application: "myapp",
name: "read"
)
$resp = $client->security()->getPrivileges([
"application" => "myapp",
"name" => "read",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/myapp/read"
client.security().getPrivileges(g -> g
.application("myapp")
.name("read")
);
{
"myapp": {
"read": {
"application": "myapp",
"name": "read",
"actions": [
"data:read/*",
"action:login"
],
"metadata": {
"description": "Read access to myapp"
}
}
}
}
Delete application privileges
Generally available; Added in 6.4.0
To use this API, you must have one of the following privileges:
- The
manage_security cluster privilege (or a greater privilege such as all). - The "Manage Application Privileges" global privilege for the application being referenced in the request.
Required authorization
- Cluster privileges:
manage_security
Path parameters
-
The name of the application. Application privileges are always associated with exactly one application.
-
The name of the privilege.
DELETE /_security/privilege/myapp/read
resp = client.security.delete_privileges(
application="myapp",
name="read",
)
const response = await client.security.deletePrivileges({
application: "myapp",
name: "read",
});
response = client.security.delete_privileges(
application: "myapp",
name: "read"
)
$resp = $client->security()->deletePrivileges([
"application" => "myapp",
"name" => "read",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/privilege/myapp/read"
client.security().deletePrivileges(d -> d
.application("myapp")
.name("read")
);
{
"myapp": {
"read": {
"found" : true
}
}
}
Get roles
Generally available
All methods and paths for this operation:
Get roles in the native realm. The role management APIs are generally the preferred way to manage roles, rather than using file-based role management. The get roles API cannot retrieve roles that are defined in roles files.
Required authorization
- Cluster privileges:
read_security
GET /_security/role/my_admin_role
resp = client.security.get_role(
name="my_admin_role",
)
const response = await client.security.getRole({
name: "my_admin_role",
});
response = client.security.get_role(
name: "my_admin_role"
)
$resp = $client->security()->getRole([
"name" => "my_admin_role",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role/my_admin_role"
client.security().getRole(g -> g
.name("my_admin_role")
);
{
"my_admin_role": {
"description": "Grants full access to all management features within the cluster.",
"cluster" : [ "all" ],
"indices" : [
{
"names" : [ "index1", "index2" ],
"privileges" : [ "all" ],
"allow_restricted_indices" : false,
"field_security" : {
"grant" : [ "title", "body" ]}
}
],
"applications" : [ ],
"run_as" : [ "other_user" ],
"metadata" : {
"version" : 1
},
"transient_metadata": {
"enabled": true
}
}
}
Get role mappings
Generally available; Added in 5.5.0
All methods and paths for this operation:
Role mappings define which roles are assigned to each user. The role mapping APIs are generally the preferred way to manage role mappings rather than using role mapping files. The get role mappings API cannot retrieve role mappings that are defined in role mapping files.
Required authorization
- Cluster privileges:
manage_security
Path parameters
-
The distinct name that identifies the role mapping. The name is used solely as an identifier to facilitate interaction via the API; it does not affect the behavior of the mapping in any way. You can specify multiple mapping names as a comma-separated list. If you do not specify this parameter, the API returns information about all role mappings.
GET /_security/role_mapping/mapping1
resp = client.security.get_role_mapping(
name="mapping1",
)
const response = await client.security.getRoleMapping({
name: "mapping1",
});
response = client.security.get_role_mapping(
name: "mapping1"
)
$resp = $client->security()->getRoleMapping([
"name" => "mapping1",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/role_mapping/mapping1"
client.security().getRoleMapping(g -> g
.name("mapping1")
);
{
"mapping1": {
"enabled": true,
"roles": [
"user"
],
"rules": {
"field": {
"username": "*"
}
},
"metadata": {}
}
}
Create or update users
Generally available
All methods and paths for this operation:
Add and update users in the native realm. A password is required for adding a new user but is optional when updating an existing user. To change a user's password without updating any other fields, use the change password API.
Required authorization
- Cluster privileges:
manage_security
Path parameters
-
An identifier for the user.
NOTE: Usernames must be at least 1 and no more than 507 characters. They can contain alphanumeric characters (a-z, A-Z, 0-9), spaces, punctuation, and printable symbols in the Basic Latin (ASCII) block. Leading or trailing whitespace is not allowed.
Query parameters
-
Valid values are
true, false, and wait_for. These values have the same meaning as in the index API, but the default value for this API is true. Values are
true, false, or wait_for.
Body
Required
-
Arbitrary metadata that you want to associate with the user.
-
The user's password. Passwords must be at least 6 characters long. When adding a user, one of
password or password_hash is required. When updating an existing user, the password is optional, so that other fields on the user (such as their roles) may be updated without modifying the user's password. -
A hash of the user's password. This must be produced using the same hashing algorithm as has been configured for password storage. For more details, see the explanation of the
xpack.security.authc.password_hashing.algorithm setting in the user cache and password hash algorithm documentation. Using this parameter allows the client to pre-hash the password for performance and/or confidentiality reasons. The password parameter and the password_hash parameter cannot be used in the same request. External documentation -
A set of roles the user has. The roles determine the user's access permissions. To create a user without any roles, specify an empty list (
[]). -
Specifies whether the user is enabled.
Default value is
true.
POST /_security/user/jacknich
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "admin", "other_role1" ],
"full_name" : "Jack Nicholson",
"email" : "jacknich@example.com",
"metadata" : {
"intelligence" : 7
}
}
resp = client.security.put_user(
username="jacknich",
password="l0ng-r4nd0m-p@ssw0rd",
roles=[
"admin",
"other_role1"
],
full_name="Jack Nicholson",
email="jacknich@example.com",
metadata={
"intelligence": 7
},
)
const response = await client.security.putUser({
username: "jacknich",
password: "l0ng-r4nd0m-p@ssw0rd",
roles: ["admin", "other_role1"],
full_name: "Jack Nicholson",
email: "jacknich@example.com",
metadata: {
intelligence: 7,
},
});
response = client.security.put_user(
username: "jacknich",
body: {
"password": "l0ng-r4nd0m-p@ssw0rd",
"roles": [
"admin",
"other_role1"
],
"full_name": "Jack Nicholson",
"email": "jacknich@example.com",
"metadata": {
"intelligence": 7
}
}
)
$resp = $client->security()->putUser([
"username" => "jacknich",
"body" => [
"password" => "l0ng-r4nd0m-p@ssw0rd",
"roles" => array(
"admin",
"other_role1",
),
"full_name" => "Jack Nicholson",
"email" => "jacknich@example.com",
"metadata" => [
"intelligence" => 7,
],
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"password":"l0ng-r4nd0m-p@ssw0rd","roles":["admin","other_role1"],"full_name":"Jack Nicholson","email":"jacknich@example.com","metadata":{"intelligence":7}}' "$ELASTICSEARCH_URL/_security/user/jacknich"
client.security().putUser(p -> p
.email("jacknich@example.com")
.fullName("Jack Nicholson")
.metadata("intelligence", JsonData.fromJson("7"))
.password("l0ng-r4nd0m-p@ssw0rd")
.roles(List.of("admin","other_role1"))
.username("jacknich")
);
{
"password" : "l0ng-r4nd0m-p@ssw0rd",
"roles" : [ "admin", "other_role1" ],
"full_name" : "Jack Nicholson",
"email" : "jacknich@example.com",
"metadata" : {
"intelligence" : 7
}
}
{
"created": true
}
Enable a user profile
Generally available; Added in 8.2.0
Enable user profiles to make them visible in user profile searches.
NOTE: The user profile feature is designed only for use by Kibana and Elastic's Observability, Enterprise Search, and Elastic Security solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice.
When you activate a user profile, it's automatically enabled and visible in user profile searches. If you later disable the user profile, you can use the enable user profile API to make the profile visible in these searches again.
Required authorization
- Cluster privileges:
manage_user_profile
POST /_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0/_enable
resp = client.security.enable_user_profile(
uid="u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
)
const response = await client.security.enableUserProfile({
uid: "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
});
response = client.security.enable_user_profile(
uid: "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0"
)
$resp = $client->security()->enableUserProfile([
"uid" => "u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/profile/u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0/_enable"
client.security().enableUserProfile(e -> e
.uid("u_79HkWkwmnBH5gqFKwoxggWPjEBOur1zLPXQPEl1VBW0_0")
);
Invalidate a token
Generally available; Added in 5.5.0
The access tokens returned by the get token API have a finite period of time for which they are valid.
After that time period, they can no longer be used.
The time period is defined by the xpack.security.authc.token.timeout setting.
The refresh tokens returned by the get token API are only valid for 24 hours. They can also be used exactly once. If you want to invalidate one or more access or refresh tokens immediately, use this invalidate token API.
NOTE: While all parameters are optional, at least one of them is required.
More specifically, you must specify either the token or the refresh_token parameter.
If neither of these is specified, you must specify realm_name, username, or both.
Body
Required
-
An access token. This parameter cannot be used if any of refresh_token, realm_name, or username are used. -
A refresh token. This parameter cannot be used if any of token, realm_name, or username are used. -
The name of an authentication realm. This parameter cannot be used with either refresh_token or token. -
The username of a user. This parameter cannot be used with either refresh_token or token.
DELETE /_security/oauth2/token
{
"token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ=="
}
resp = client.security.invalidate_token(
token="dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
)
const response = await client.security.invalidateToken({
token:
"dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
});
response = client.security.invalidate_token(
body: {
"token": "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ=="
}
)
$resp = $client->security()->invalidateToken([
"body" => [
"token" => "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==",
],
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"token":"dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ=="}' "$ELASTICSEARCH_URL/_security/oauth2/token"
client.security().invalidateToken(i -> i
.token("dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==")
);
{
"token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ=="
}
{
"refresh_token" : "vLBPvmAB6KvwvJZr27cS"
}
{
"realm_name" : "saml1"
}
{
"username" : "myuser"
}
{
"username" : "myuser",
"realm_name" : "saml1"
}
{
"invalidated_tokens":9,
"previously_invalidated_tokens":15,
"error_count":2,
"error_details":[
{
"type":"exception",
"reason":"Elasticsearch exception [type=exception, reason=foo]",
"caused_by":{
"type":"exception",
"reason":"Elasticsearch exception [type=illegal_argument_exception, reason=bar]"
}
},
{
"type":"exception",
"reason":"Elasticsearch exception [type=exception, reason=boo]",
"caused_by":{
"type":"exception",
"reason":"Elasticsearch exception [type=illegal_argument_exception, reason=far]"
}
}
]
}
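As a usage sketch, the realm_name and username parameters can be combined to revoke every token issued to one user, and the error_count and error_details fields shown above report tokens that could not be invalidated. Connection details are placeholders.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

resp = client.security.invalidate_token(realm_name="saml1", username="myuser")
print(f"invalidated: {resp['invalidated_tokens']}, "
      f"already invalid: {resp['previously_invalidated_tokens']}")

# A non-zero error_count means some tokens could not be invalidated.
for err in resp.get("error_details", []):
    print(err["type"], err["reason"])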
Get user privileges
Generally available; Added in 6.5.0
Get the security privileges for the logged-in user. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature. To check whether a user has a specific list of privileges, use the has privileges API.
GET /_security/user/_privileges
resp = client.security.get_user_privileges()
const response = await client.security.getUserPrivileges();
response = client.security.get_user_privileges
$resp = $client->security()->getUserPrivileges();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_security/user/_privileges"
client.security().getUserPrivileges(g -> g);
{
"cluster" : [
"all"
],
"global" : [ ],
"indices" : [
{
"names" : [
"*"
],
"privileges" : [
"all"
],
"allow_restricted_indices" : true
}
],
"applications" : [
{
"application" : "*",
"privileges" : [
"*"
],
"resources" : [
"*"
]
}
],
"run_as" : [
"*"
]
}
Check user privileges
Generally available; Added in 6.4.0
Determine whether the specified user has a specified list of privileges. All users can use this API, but only to determine their own privileges. To check the privileges of other users, you must use the run as feature.
GET /_security/user/_has_privileges
{
"cluster": [ "monitor", "manage" ],
"index" : [
{
"names": [ "suppliers", "products" ],
"privileges": [ "read" ]
},
{
"names": [ "inventory" ],
"privileges" : [ "read", "write" ]
}
],
"application": [
{
"application": "inventory_manager",
"privileges" : [ "read", "data:write/inventory" ],
"resources" : [ "product/1852563" ]
}
]
}
resp = client.security.has_privileges(
cluster=[
"monitor",
"manage"
],
index=[
{
"names": [
"suppliers",
"products"
],
"privileges": [
"read"
]
},
{
"names": [
"inventory"
],
"privileges": [
"read",
"write"
]
}
],
application=[
{
"application": "inventory_manager",
"privileges": [
"read",
"data:write/inventory"
],
"resources": [
"product/1852563"
]
}
],
)
const response = await client.security.hasPrivileges({
cluster: ["monitor", "manage"],
index: [
{
names: ["suppliers", "products"],
privileges: ["read"],
},
{
names: ["inventory"],
privileges: ["read", "write"],
},
],
application: [
{
application: "inventory_manager",
privileges: ["read", "data:write/inventory"],
resources: ["product/1852563"],
},
],
});
response = client.security.has_privileges(
body: {
"cluster": [
"monitor",
"manage"
],
"index": [
{
"names": [
"suppliers",
"products"
],
"privileges": [
"read"
]
},
{
"names": [
"inventory"
],
"privileges": [
"read",
"write"
]
}
],
"application": [
{
"application": "inventory_manager",
"privileges": [
"read",
"data:write/inventory"
],
"resources": [
"product/1852563"
]
}
]
}
)
$resp = $client->security()->hasPrivileges([
"body" => [
"cluster" => array(
"monitor",
"manage",
),
"index" => array(
[
"names" => array(
"suppliers",
"products",
),
"privileges" => array(
"read",
),
],
[
"names" => array(
"inventory",
),
"privileges" => array(
"read",
"write",
),
],
),
"application" => array(
[
"application" => "inventory_manager",
"privileges" => array(
"read",
"data:write/inventory",
),
"resources" => array(
"product/1852563",
),
],
),
],
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"cluster":["monitor","manage"],"index":[{"names":["suppliers","products"],"privileges":["read"]},{"names":["inventory"],"privileges":["read","write"]}],"application":[{"application":"inventory_manager","privileges":["read","data:write/inventory"],"resources":["product/1852563"]}]}' "$ELASTICSEARCH_URL/_security/user/_has_privileges"
client.security().hasPrivileges(h -> h
.application(a -> a
.application("inventory_manager")
.privileges(List.of("read","data:write/inventory"))
.resources("product/1852563")
)
.cluster(List.of("monitor","manage"))
.index(List.of(IndexPrivilegesCheck.of(i -> i
.names(List.of("suppliers","products"))
.privileges("read")),IndexPrivilegesCheck.of(i -> i
.names("inventory")
.privileges(List.of("read","write")))))
);
{
"cluster": [ "monitor", "manage" ],
"index" : [
{
"names": [ "suppliers", "products" ],
"privileges": [ "read" ]
},
{
"names": [ "inventory" ],
"privileges" : [ "read", "write" ]
}
],
"application": [
{
"application": "inventory_manager",
"privileges" : [ "read", "data:write/inventory" ],
"resources" : [ "product/1852563" ]
}
]
}
{
"username": "rdeniro",
"has_all_requested" : false,
"cluster" : {
"monitor" : true,
"manage" : false
},
"index" : {
"suppliers" : {
"read" : true
},
"products" : {
"read" : true
},
"inventory" : {
"read" : true,
"write" : false
}
},
"application" : {
"inventory_manager" : {
"product/1852563" : {
"read": false,
"data:write/inventory": false
}
}
}
}
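A hedged Python sketch of acting on this response: when has_all_requested is false, walk the per-privilege booleans to list exactly which checks failed. The privilege names and connection details are illustrative.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

resp = client.security.has_privileges(
    cluster=["monitor", "manage"],
    index=[{"names": ["inventory"], "privileges": ["read", "write"]}],
)

if not resp["has_all_requested"]:
    missing = []
    # Cluster checks: privilege name -> granted?
    missing += [f"cluster:{p}" for p, ok in resp["cluster"].items() if not ok]
    # Index checks: index name -> privilege name -> granted?
    for index, privs in resp["index"].items():
        missing += [f"index:{index}:{p}" for p, ok in privs.items() if not ok]
    print("missing privileges:", missing)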
Prepare SAML authentication
Generally available; Added in 7.5.0
Create a SAML authentication request (<AuthnRequest>) as a URL string based on the configuration of the respective SAML realm in Elasticsearch.
NOTE: This API is intended for use by custom web applications other than Kibana. If you are using Kibana, refer to the documentation for configuring SAML single-sign-on on the Elastic Stack.
This API returns a URL pointing to the SAML Identity Provider.
You can use the URL to redirect the browser of the user in order to continue the authentication process.
The URL includes a single parameter named SAMLRequest, which contains a SAML Authentication request that is deflated and Base64 encoded.
If the configuration dictates that SAML authentication requests should be signed, the URL has two extra parameters named SigAlg and Signature.
These parameters contain the algorithm used for the signature and the signature value itself.
It also returns a random string that uniquely identifies this SAML authentication request.
The caller of this API must store this identifier because it is required in a later step of the authentication process.
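A minimal sketch of that flow from the Python client, assuming placeholder connection details and a stand-in session store: fetch the redirect URL, keep the request ID for the later authenticate step, and send the browser to the IdP.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

resp = client.security.saml_prepare_authentication(realm="saml1")

session = {}  # stand-in for your web framework's session store
session["saml_request_id"] = resp["id"]  # needed to correlate the IdP response later

# Redirect the user's browser to the IdP, e.g. in your web framework:
# return redirect(resp["redirect"])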
Body
Required
-
The Assertion Consumer Service URL that matches the one of the SAML realms in Elasticsearch. The realm is used to generate the authentication request. You must specify either this parameter or the realm parameter. -
The name of the SAML realm in Elasticsearch for which the configuration is used to generate the authentication request. You must specify either this parameter or the acs parameter. -
A string that will be included in the redirect URL that this API returns as the RelayState query parameter. If the authentication request is signed, this value is used as part of the signature computation.
POST /_security/saml/prepare
{
"realm" : "saml1"
}
resp = client.security.saml_prepare_authentication(
realm="saml1",
)
const response = await client.security.samlPrepareAuthentication({
realm: "saml1",
});
response = client.security.saml_prepare_authentication(
body: {
"realm": "saml1"
}
)
$resp = $client->security()->samlPrepareAuthentication([
"body" => [
"realm" => "saml1",
],
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"realm":"saml1"}' "$ELASTICSEARCH_URL/_security/saml/prepare"
client.security().samlPrepareAuthentication(s -> s
.realm("saml1")
);
{
"realm" : "saml1"
}
{
"acs" : "https://kibana.org/api/security/saml/callback"
}
{
"redirect": "https://my-idp.org/login?SAMLRequest=fVJdc6IwFP0rmbwDgUKLGbFDtc462%2B06FX3Yl50rBJsKCZsbrPbXL6J22hdfk%2FNx7zl3eL%2BvK7ITBqVWCfVdRolQuS6k2iR0mU2dmN6Phgh1FTQ8be2rehH%2FWoGWdESF%2FPST0NYorgElcgW1QG5zvkh%2FPfHAZbwx2upcV5SkiMLYzmqsFba1MAthdjIXy5enhL5a23DPOyo6W7kGBa7cwhZ2gO7G8OiW%2BR400kORt0bag7fzezAlk24eqcD2OxxlsNN5O3MdsW9c6CZnbq7rntF4d3s0D7BaHTZhIWN52P%2BcjiuGRbDU6cdj%2BEjJbJLQv4N4ADdhxBiEZbQuWclY4Q8iABbCXczCdSiKMAC%2FgyO2YqbQgrIJDZg%2FcFjsMD%2Fzb3gUcBa5sR%2F9oWR%2BzuJBqlPG14Jbn0DIf2TZ3Jn%2FXmSUrC5ddQB6bob37uZrJdeF4dIDHV3iuhb70Ptq83kOz53ubDLXlcwPJK0q%2FT42AqxIaAkVCkqm2tRgr49yfJGFU%2FZQ3hy3QyuUpd7obPv97kb%2FAQ%3D%3D"}",
"realm": "saml1",
"id": "_989a34500a4f5bf0f00d195aa04a7804b4ed42a1"
}
Clean up the snapshot repository
Generally available; Added in 7.4.0
Trigger the review of the contents of a snapshot repository and delete any stale data not referenced by existing snapshots.
Required authorization
- Cluster privileges:
manage
Query parameters
-
The period to wait for a connection to the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation -
The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. To indicate that the request should never timeout, set it to -1. External documentation
POST /_snapshot/my_repository/_cleanup
resp = client.snapshot.cleanup_repository(
name="my_repository",
)
const response = await client.snapshot.cleanupRepository({
name: "my_repository",
});
response = client.snapshot.cleanup_repository(
repository: "my_repository"
)
$resp = $client->snapshot()->cleanupRepository([
"repository" => "my_repository",
]);
curl -X POST -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_repository/_cleanup"
client.snapshot().cleanupRepository(c -> c
.name("my_repository")
);
{
"results": {
"deleted_bytes": 20,
"deleted_blobs": 5
}
}
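A small usage sketch reading the response fields above to log what the cleanup reclaimed; connection details are placeholders.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

resp = client.snapshot.cleanup_repository(name="my_repository")
results = resp["results"]
print(f"reclaimed {results['deleted_bytes']} bytes in {results['deleted_blobs']} blobs")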
Path parameters
-
A comma-separated list of snapshot repository names used to limit the request. Wildcard (*) expressions are supported, including combining wildcards with exclude patterns starting with -.
To get information about all snapshot repositories registered in the cluster, omit this parameter or use * or _all.
Query parameters
-
If true, the request gets information from the local node only. If false, the request gets information from the master node. -
The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation
GET /_snapshot/my_repository
resp = client.snapshot.get_repository(
name="my_repository",
)
const response = await client.snapshot.getRepository({
name: "my_repository",
});
response = client.snapshot.get_repository(
repository: "my_repository"
)
$resp = $client->snapshot()->getRepository([
"repository" => "my_repository",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_repository"
client.snapshot().getRepository(g -> g
.name("my_repository")
);
{
"my_repository" : {
"type" : "fs",
"uuid" : "0JLknrXbSUiVPuLakHjBrQ",
"settings" : {
"location" : "my_backup_location"
}
}
}
Path parameters
-
The name of the snapshot repositories to unregister. Wildcard (*) patterns are supported.
Query parameters
-
The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation -
The period to wait for a response from all relevant nodes in the cluster after updating the cluster metadata. If no response is received before the timeout expires, the cluster metadata update still applies but the response will indicate that it was not completely acknowledged. To indicate that the request should never timeout, set it to -1. External documentation
DELETE /_snapshot/my_repository
resp = client.snapshot.delete_repository(
name="my_repository",
)
const response = await client.snapshot.deleteRepository({
name: "my_repository",
});
response = client.snapshot.delete_repository(
repository: "my_repository"
)
$resp = $client->snapshot()->deleteRepository([
"repository" => "my_repository",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_repository"
client.snapshot().deleteRepository(d -> d
.name("my_repository")
);
Get the snapshot status
Generally available; Added in 7.8.0
Get a detailed description of the current state for each shard participating in the snapshot.
Note that this API should be used only to obtain detailed shard-level information for ongoing snapshots. If this detail is not needed or you want to obtain information about one or more existing snapshots, use the get snapshot API.
If you omit the <snapshot> request path parameter, the request retrieves information only for currently running snapshots.
This usage is preferred.
If needed, you can specify <repository> and <snapshot> to retrieve information for specific snapshots, even if they're not currently running.
WARNING: Using the API to return the status of any snapshots other than currently running snapshots can be expensive. The API requires a read from the repository for each shard in each snapshot. For example, if you have 100 snapshots with 1,000 shards each, an API request that includes all snapshots will require 100,000 reads (100 snapshots x 1,000 shards).
Depending on the latency of your storage, such requests can take an extremely long time to return results. These requests can also tax machine resources and, when using cloud storage, incur high processing costs.
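Given that warning, a hedged sketch that stays on the cheap path: omit both path parameters so only currently running snapshots are read, and poll their shard counters until none remain. Connection details are placeholders.
import time
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

while True:
    # With no repository or snapshot specified, only running snapshots are returned.
    running = client.snapshot.status()["snapshots"]
    if not running:
        break
    for snap in running:
        stats = snap["shards_stats"]
        print(f"{snap['snapshot']}: {stats['done']}/{stats['total']} shards done")
    time.sleep(30)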
Required authorization
- Cluster privileges:
monitor_snapshot
Path parameters
-
The snapshot repository name used to limit the request. It supports wildcards (*) if <snapshot> isn't specified. -
A comma-separated list of snapshots to retrieve status for. The default is currently running snapshots. Wildcards (*) are not supported.
Query parameters
-
The period to wait for the master node. If the master node is not available before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation
GET _snapshot/my_repository/snapshot_2/_status
resp = client.snapshot.status(
repository="my_repository",
snapshot="snapshot_2",
)
const response = await client.snapshot.status({
repository: "my_repository",
snapshot: "snapshot_2",
});
response = client.snapshot.status(
repository: "my_repository",
snapshot: "snapshot_2"
)
$resp = $client->snapshot()->status([
"repository" => "my_repository",
"snapshot" => "snapshot_2",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_snapshot/my_repository/snapshot_2/_status"
client.snapshot().status(s -> s
.repository("my_repository")
.snapshot("snapshot_2")
);
{
"snapshots" : [
{
"snapshot" : "snapshot_2",
"repository" : "my_repository",
"uuid" : "lNeQD1SvTQCqqJUMQSwmGg",
"state" : "SUCCESS",
"include_global_state" : false,
"shards_stats" : {
"initializing" : 0,
"started" : 0,
"finalizing" : 0,
"done" : 1,
"failed" : 0,
"total" : 1
},
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326691,
"time_in_millis" : 205
},
"indices" : {
"index_1" : {
"shards_stats" : {
"initializing" : 0,
"started" : 0,
"finalizing" : 0,
"done" : 1,
"failed" : 0,
"total" : 1
},
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326896,
"time_in_millis" : 0
},
"shards" : {
"0" : {
"stage" : "DONE",
"stats" : {
"incremental" : {
"file_count" : 3,
"size_in_bytes" : 5969
},
"total" : {
"file_count" : 4,
"size_in_bytes" : 6024
},
"start_time_in_millis" : 1594829326896,
"time_in_millis" : 0
}
}
}
}
}
}
]
}
Query parameters
-
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation -
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
GET _slm/policy/daily-snapshots?human
resp = client.slm.get_lifecycle(
policy_id="daily-snapshots",
human=True,
)
const response = await client.slm.getLifecycle({
policy_id: "daily-snapshots",
human: "true",
});
response = client.slm.get_lifecycle(
policy_id: "daily-snapshots",
human: "true"
)
$resp = $client->slm()->getLifecycle([
"policy_id" => "daily-snapshots",
"human" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_slm/policy/daily-snapshots?human"
{
"daily-snapshots": {
"version": 1,
"modified_date": "2099-05-06T01:30:00.000Z",
"modified_date_millis": 4081757400000,
"policy" : {
"schedule": "0 30 1 * * ?",
"name": "<daily-snap-{now/d}>",
"repository": "my_repository",
"config": {
"indices": ["data-*", "important"],
"ignore_unavailable": false,
"include_global_state": false
},
"retention": {
"expire_after": "30d",
"min_count": 5,
"max_count": 50
}
},
"stats": {
"policy": "daily-snapshots",
"snapshots_taken": 0,
"snapshots_failed": 0,
"snapshots_deleted": 0,
"snapshot_deletion_failures": 0
},
"next_execution": "2099-05-07T01:30:00.000Z",
"next_execution_millis": 4081843800000
}
}
Query parameters
-
Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation -
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
GET /_slm/stats
resp = client.slm.get_stats()
const response = await client.slm.getStats();
response = client.slm.get_stats
$resp = $client->slm()->getStats();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_slm/stats"
client.slm().getStats(g -> g);
{
"retention_runs": 13,
"retention_failed": 0,
"retention_timed_out": 0,
"retention_deletion_time": "1.4s",
"retention_deletion_time_millis": 1404,
"policy_stats": [ ],
"total_snapshots_taken": 1,
"total_snapshots_failed": 1,
"total_snapshots_deleted": 0,
"total_snapshot_deletion_failures": 0
}
Stop snapshot lifecycle management
Generally available; Added in 7.6.0
Stop all snapshot lifecycle management (SLM) operations and the SLM plugin. This API is useful when you are performing maintenance on a cluster and need to prevent SLM from performing any actions on your data streams or indices. Stopping SLM does not stop any snapshots that are in progress. You can manually trigger snapshots with the run snapshot lifecycle policy API even if SLM is stopped.
The API returns a response as soon as the request is acknowledged, but the plugin might continue to run until in-progress operations complete and it can be safely stopped. Use the get snapshot lifecycle management status API to see if SLM is running.
Query parameters
-
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation -
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error. To indicate that the request should never timeout, set it to -1. External documentation
curl \
--request POST 'http://api.example.com/_slm/stop' \
--header "Authorization: $API_KEY"
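Because the stop request only acknowledges receipt, a hedged Python sketch that polls the SLM status API until the plugin reports STOPPED before maintenance begins; connection details are placeholders.
import time
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

client.slm.stop()
# In-progress operations may still be draining after the acknowledgement.
while client.slm.get_status()["operation_mode"] != "STOPPED":
    time.sleep(1)
print("SLM stopped; safe to start maintenance")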
Delete a synonym set
Generally available; Added in 8.10.0
You can only delete a synonyms set that is not in use by any index analyzer.
Synonyms sets can be used in synonym graph token filters and synonym token filters. These synonym filters can be used as part of search analyzers.
Analyzers need to be loaded when an index is restored (such as when a node starts, or the index becomes open). Even if the analyzer is not used on any field mapping, it still needs to be loaded on the index recovery phase.
If any analyzers cannot be loaded, the index becomes unavailable and the cluster status becomes red or yellow as index shards are not available. To prevent that, synonyms sets that are used in analyzers can't be deleted. A delete request in this case will return a 400 response code.
To remove a synonyms set, you must first remove all indices that contain analyzers using it. You can migrate an index by creating a new index that does not contain the token filter with the synonyms set and using the reindex API to copy over the index data. Once finished, you can delete the index. When the synonyms set is no longer used in any analyzer, you will be able to delete it.
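A hedged sketch of the failure mode described above: attempt the delete and surface the 400 that is returned while analyzers still reference the set. Connection details are placeholders.
from elasticsearch import BadRequestError, Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

try:
    client.synonyms.delete_synonym(id="my-synonyms-set")
except BadRequestError as err:
    # Still in use: migrate or delete the indices whose analyzers reference it first.
    print("cannot delete yet:", err.message)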
Required authorization
- Cluster privileges:
manage_search_synonyms
DELETE _synonyms/my-synonyms-set
resp = client.synonyms.delete_synonym(
id="my-synonyms-set",
)
const response = await client.synonyms.deleteSynonym({
id: "my-synonyms-set",
});
response = client.synonyms.delete_synonym(
id: "my-synonyms-set"
)
$resp = $client->synonyms()->deleteSynonym([
"id" => "my-synonyms-set",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_synonyms/my-synonyms-set"
client.synonyms().deleteSynonym(d -> d
.id("my-synonyms-set")
);
GET _synonyms/my-synonyms-set/test-1
resp = client.synonyms.get_synonym_rule(
set_id="my-synonyms-set",
rule_id="test-1",
)
const response = await client.synonyms.getSynonymRule({
set_id: "my-synonyms-set",
rule_id: "test-1",
});
response = client.synonyms.get_synonym_rule(
set_id: "my-synonyms-set",
rule_id: "test-1"
)
$resp = $client->synonyms()->getSynonymRule([
"set_id" => "my-synonyms-set",
"rule_id" => "test-1",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_synonyms/my-synonyms-set/test-1"
client.synonyms().getSynonymRule(g -> g
.ruleId("test-1")
.setId("my-synonyms-set")
);
{
"id": "test-1",
"synonyms": "hello, hi"
}
Get all tasks
Technical preview; Added in 2.3.0
Get information about the tasks currently running on one or more nodes in the cluster.
WARNING: The task management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible.
Identifying running tasks
When the X-Opaque-Id header is provided on the HTTP request, it is returned as a header in the response as well as in the headers field of the task information.
This enables you to track certain calls or associate certain tasks with the client that started them.
For example:
curl -i -H "X-Opaque-Id: 123456" "http://localhost:9200/_tasks?group_by=parents"
The API returns the following result:
HTTP/1.1 200 OK
X-Opaque-Id: 123456
content-type: application/json; charset=UTF-8
content-length: 831
{
"tasks" : {
"u5lcZHqcQhu-rUoFaqDphA:45" : {
"node" : "u5lcZHqcQhu-rUoFaqDphA",
"id" : 45,
"type" : "transport",
"action" : "cluster:monitor/tasks/lists",
"start_time_in_millis" : 1513823752749,
"running_time_in_nanos" : 293139,
"cancellable" : false,
"headers" : {
"X-Opaque-Id" : "123456"
},
"children" : [
{
"node" : "u5lcZHqcQhu-rUoFaqDphA",
"id" : 46,
"type" : "direct",
"action" : "cluster:monitor/tasks/lists[n]",
"start_time_in_millis" : 1513823752750,
"running_time_in_nanos" : 92133,
"cancellable" : false,
"parent_task_id" : "u5lcZHqcQhu-rUoFaqDphA:45",
"headers" : {
"X-Opaque-Id" : "123456"
}
}
]
}
}
}
In this example, X-Opaque-Id: 123456 is returned as part of the response header.
The X-Opaque-Id in the task headers is the ID for the task that was initiated by the REST request.
The X-Opaque-Id in the children headers belongs to the child task of the task that was initiated by the REST request.
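The same correlation is available from the Python client, which sends X-Opaque-Id through its per-request options; a minimal sketch with placeholder connection details:
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

# opaque_id is transmitted as the X-Opaque-Id header and echoed back in task info.
resp = client.options(opaque_id="123456").tasks.list(group_by="parents")
for task_id, task in resp["tasks"].items():
    print(task_id, task["headers"].get("X-Opaque-Id"))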
Required authorization
- Cluster privileges:
monitor
Query parameters
-
A comma-separated list or wildcard expression of actions used to limit the request. For example, you can use cluster:* to retrieve all cluster-related tasks. -
If true, the response includes detailed information about the running tasks. This information is useful to distinguish tasks from each other but is more costly to run. -
A key that is used to group tasks in the response. The task lists can be grouped either by nodes or by parent tasks.
Supported values include:
- nodes: Group tasks by node ID.
- parents: Group tasks by parent task ID.
- none: Do not group tasks.
Values are nodes, parents, or none.
A comma-separated list of node IDs or names that is used to limit the returned information.
-
A parent task identifier that is used to limit returned information. To return all tasks, omit this parameter or use a value of -1. If the parent task is not found, the API does not return a 404 response code. -
The period to wait for each node to respond. If a node does not respond before its timeout expires, the response does not include its information. However, timed out nodes are included in the node_failures property. External documentation -
If true, the request blocks until the operation is complete.
GET _tasks?actions=*search&detailed
resp = client.tasks.list(
actions="*search",
detailed=True,
)
const response = await client.tasks.list({
actions: "*search",
detailed: "true",
});
response = client.tasks.list(
actions: "*search",
detailed: "true"
)
$resp = $client->tasks()->list([
"actions" => "*search",
"detailed" => "true",
]);
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_tasks?actions=*search&detailed"
{
"nodes" : {
"oTUltX4IQMOUUVeiohTt8A" : {
"name" : "H5dfFeA",
"transport_address" : "127.0.0.1:9300",
"host" : "127.0.0.1",
"ip" : "127.0.0.1:9300",
"tasks" : {
"oTUltX4IQMOUUVeiohTt8A:464" : {
"node" : "oTUltX4IQMOUUVeiohTt8A",
"id" : 464,
"type" : "transport",
"action" : "indices:data/read/search",
"description" : "indices[test], types[test], search_type[QUERY_THEN_FETCH], source[{\"query\":...}]",
"start_time_in_millis" : 1483478610008,
"running_time_in_nanos" : 13991383,
"cancellable" : true,
"cancelled" : false
}
}
}
}
}
Query parameters
-
If this value is false, the transform must be stopped before it can be deleted. If true, the transform is deleted regardless of its current state.
-
If this value is true, the destination index is deleted together with the transform. If false, the destination index will not be deleted.
-
Period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
DELETE _transform/ecommerce_transform
resp = client.transform.delete_transform(
transform_id="ecommerce_transform",
)
const response = await client.transform.deleteTransform({
transform_id: "ecommerce_transform",
});
response = client.transform.delete_transform(
transform_id: "ecommerce_transform"
)
$resp = $client->transform()->deleteTransform([
"transform_id" => "ecommerce_transform",
]);
curl -X DELETE -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_transform/ecommerce_transform"
client.transform().deleteTransform(d -> d
.transformId("ecommerce_transform")
);
{
"acknowledged": true
}
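A hedged sketch combining the query parameters above: force the delete even if the transform has not been stopped, and remove its destination index in the same call. Connection details are placeholders.
from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="...")  # placeholder

resp = client.transform.delete_transform(
    transform_id="ecommerce_transform",
    force=True,              # skip the requirement that the transform be stopped
    delete_dest_index=True,  # also delete the transform's destination index
)
print(resp["acknowledged"])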
Deactivate a watch
Generally available
A watch can be either active or inactive.
Required authorization
- Cluster privileges:
manage_watcher
PUT _watcher/watch/my_watch/_deactivate
resp = client.watcher.deactivate_watch(
watch_id="my_watch",
)
const response = await client.watcher.deactivateWatch({
watch_id: "my_watch",
});
response = client.watcher.deactivate_watch(
watch_id: "my_watch"
)
$resp = $client->watcher()->deactivateWatch([
"watch_id" => "my_watch",
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_watcher/watch/my_watch/_deactivate"
client.watcher().deactivateWatch(d -> d
.watchId("my_watch")
);
Update Watcher index settings
Generally available
Update settings for the Watcher internal index (.watches).
Only a subset of settings can be modified.
This includes index.auto_expand_replicas, index.number_of_replicas, index.routing.allocation.exclude.*,
index.routing.allocation.include.* and index.routing.allocation.require.*.
Modification of index.routing.allocation.include._tier_preference is an exception and is not allowed as the
Watcher shards must always be in the data_content tier.
Required authorization
- Cluster privileges:
manage_watcher
Query parameters
-
The period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error.
External documentation -
The period to wait for a response. If no response is received before the timeout expires, the request fails and returns an error.
External documentation
PUT /_watcher/settings
{
"index.auto_expand_replicas": "0-4"
}
resp = client.watcher.update_settings(
index_auto_expand_replicas="0-4",
)
const response = await client.watcher.updateSettings({
"index.auto_expand_replicas": "0-4",
});
response = client.watcher.update_settings(
body: {
"index.auto_expand_replicas": "0-4"
}
)
$resp = $client->watcher()->updateSettings([
"body" => [
"index.auto_expand_replicas" => "0-4",
],
]);
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"index.auto_expand_replicas":"0-4"}' "$ELASTICSEARCH_URL/_watcher/settings"
client.watcher().updateSettings(u -> u
.indexAutoExpandReplicas("0-4")
);
{
"index.auto_expand_replicas": "0-4"
}