curl --request GET \
  --url https://api.contextual.ai/v1/agents/templates/{template} \
  --header 'Authorization: Bearer <token>'

{
  "name": "<string>",
  "datastore_ids": [
    "3c90c3cc-0d44-4b50-8888-8dd25736052a"
  ],
  "template_name": "<string>",
  "description": "<string>",
  "system_prompt": "<string>",
  "filter_prompt": "<string>",
  "no_retrieval_system_prompt": "<string>",
  "multiturn_system_prompt": "<string>",
  "suggested_queries": [
    "<string>"
  ],
  "agent_configs": {
    "retrieval_config": {
      "top_k_retrieved_chunks": 123,
      "lexical_alpha": 123,
      "semantic_alpha": 123
    },
    "filter_and_rerank_config": {
      "top_k_reranked_chunks": 123,
      "reranker_score_filter_threshold": 0,
      "rerank_instructions": "<string>",
      "default_metadata_filters": {
        "filters": [
          {
            "field": "field1",
            "operator": "equals",
            "value": "value1"
          }
        ],
        "operator": "AND"
      },
      "per_datastore_metadata_filters": {}
    },
    "generate_response_config": {
      "model": "Contextual GLM",
      "max_new_tokens": 123,
      "temperature": 123,
      "top_p": 123,
      "frequency_penalty": 123,
      "seed": 123,
      "calculate_groundedness": true,
      "avoid_commentary": false
    },
    "global_config": {
      "enable_rerank": true,
      "enable_filter": true,
      "enable_multi_turn": true,
      "should_check_retrieval_need": true
    },
    "reformulation_config": {
      "enable_query_expansion": true,
      "query_expansion_prompt": "<string>",
      "enable_query_decomposition": true,
      "query_decomposition_prompt": "<string>"
    },
    "translation_config": {
      "translate_needed": true,
      "translate_confidence": 123
    }
  },
  "agent_usages": {
    "query": 123,
    "tune": 123,
    "eval": 123
  }
}
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Template for which to get the config. This can be a built-in template (DefaultTemplateName) or a custom template name.
Available options: default, finance, customer_support, company_policy_q_and_a
Successful Response
Response to GET Agent request
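The curl example above can equally be issued from Python; the following is a minimal sketch, assuming the requests library and a valid bearer token:

import requests

# Placeholders: substitute your own API token and template name.
API_TOKEN = "<token>"
template = "default"  # built-in options: default, finance, customer_support, company_policy_q_and_a

resp = requests.get(
    f"https://api.contextual.ai/v1/agents/templates/{template}",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
config = resp.json()  # JSON body matching the response schema shown above
print(config["template_name"], config["datastore_ids"])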
Name of the agent
The IDs of the datastore(s) associated with the agent
The template used to create this agent.
Description of the agent
Instructions that your agent references when generating responses. Note that we do not guarantee that the system will follow these instructions exactly.
The prompt to an LLM which determines whether retrieved chunks are relevant to a given query and filters out irrelevant chunks. This prompt is applied per chunk.
Instructions on how the agent should respond when there are no relevant retrievals that can be used to answer a query.
Instructions on how the agent should handle multi-turn conversations.
These queries will show up as suggestions in the Contextual UI when users load the agent. We recommend including common queries that users will ask, as well as complex queries so users understand the types of complex queries the system can handle. The max length of all the suggested queries is 1000.
The following advanced parameters are experimental and subject to change.
Parameters that affect how the agent retrieves from datastore(s)
The maximum number of retrieved chunks from the datastore.
The weight of lexical search during retrieval. Must sum to 1 with semantic_alpha.
The weight of semantic search during retrieval. Must sum to 1 with lexical_alpha.
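For illustration, a retrieval_config in which the two weights sum to 1 (the values are placeholders, not recommendations):

retrieval_config = {
    "top_k_retrieved_chunks": 10,  # maximum number of chunks retrieved from the datastore
    "lexical_alpha": 0.3,          # weight of lexical search
    "semantic_alpha": 0.7,         # weight of semantic search; 0.3 + 0.7 = 1
}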
Parameters that affect filtering and reranking of retrieved knowledge
The number of highest-ranked chunks to use after reranking.
If the reranker relevance score associated with a chunk is below this threshold, then the chunk will be filtered out and not used for generation. Scores are between 0 and 1, with scores closer to 1 being more relevant. Set the value to 0 to disable the reranker score filtering.
Required range: 0 <= x <= 1
Instructions that the reranker references when ranking retrievals. Note that we do not guarantee that the reranker will follow these instructions exactly. Examples: "Prioritize internal sales documents over market analysis reports. More recent documents should be weighted higher. Enterprise portal content supersedes distributor communications." and "Emphasize forecasts from top-tier investment banks. Recent analysis should take precedence. Disregard aggregator sites and favor detailed research notes over news summaries."
Optional metadata filter which is applied while retrieving from every datastore linked to this agent.
Field name to search for in the metadata
Operator to be used for the filter.
Available options: equals, containsany, exists, startswith, gt, gte, lt, lte, notequals, between, wildcard
The value to be searched for in the field. It is not needed when using the exists operator.
{
  "filters": [
    {
      "field": "field1",
      "operator": "equals",
      "value": "value1"
    }
  ],
  "operator": "AND"
}
Defines an optional custom metadata filter per datastore ID. Each entry in the dictionary should have a datastore UUID as the key, and the value should be a metadata filter definition. The filter will be applied in addition to filter(s) specified in default_metadata_filters and in the documents_filters field in the /query request during retrieval.
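As an illustrative sketch, per_datastore_metadata_filters maps a datastore UUID (here the placeholder UUID from the example response) to a filter with the shape described by the child attributes below:

per_datastore_metadata_filters = {
    "3c90c3cc-0d44-4b50-8888-8dd25736052a": {  # placeholder datastore UUID
        "filters": [
            {"field": "field1", "operator": "equals", "value": "value1"}
        ],
        "operator": "AND",
    }
}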
"Defines a custom metadata filter as a Composite MetadataFilter. Which can be be a list of filters or nested filters.
Filters added to the query for filtering docs
Defines a custom metadata filter. The expected input is a dict which can have different operators, fields and values. For example:
{"field": "title", "operator": "startswith", "value": "hr-"}Use lowercase for value when not using equals operator. For document_id and date_created the query is built using direct query without nesting.
Field name to search for in the metadata
Operator to be used for the filter.
Available options: equals, containsany, exists, startswith, gt, gte, lt, lte, notequals, between, wildcard
The value to be searched for in the field. It is not needed when using the exists operator.
Composite operator to be used to combine filters
Available options: AND, OR, AND_NOT
Parameters that affect response generation
The model to use for response generation.
Available options: Contextual GLM, Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, Gemini 2.0 Flash Lite, Claude Opus 4, Claude Sonnet 4, GPT-5, Default
The maximum number of tokens the model can generate in a response.
The sampling temperature, which affects the randomness in the response.
A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response.
This parameter adjusts how the model treats repeated tokens during text generation.
The random seed used when sampling the next tokens during text generation; fixing it makes responses more reproducible.
This parameter controls whether groundedness scores are calculated for the generated response.
Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses.
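Taken together, a generate_response_config might be sketched as follows (the model name is one of the options listed above; the numeric values are placeholders, not recommendations):

generate_response_config = {
    "model": "Contextual GLM",       # any supported model from the list above
    "max_new_tokens": 1024,          # cap on tokens generated per response
    "temperature": 0.2,              # lower values reduce randomness
    "top_p": 0.9,                    # nucleus sampling parameter
    "frequency_penalty": 0.0,        # adjusts handling of repeated tokens
    "seed": 42,                      # placeholder seed for reproducible sampling
    "calculate_groundedness": True,  # request groundedness scores
    "avoid_commentary": False,       # allow conversational commentary
}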
Parameters that affect the agent's overall RAG workflow
Enables reranking of retrieved chunks
Enables filtering of retrieved chunks with a separate LLM
Enables multi-turn conversations. This feature is currently experimental and will be improved.
Enables checking if retrieval is needed for the query. This feature is currently experimental and will be improved.
Parameters that affect the agent's query reformulation
Whether to enable query expansion.
The prompt to use for query expansion.
Whether to enable query decomposition.
The prompt to use for query decomposition.
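A minimal reformulation_config sketch using these fields (the prompt strings are placeholders):

reformulation_config = {
    "enable_query_expansion": True,
    "query_expansion_prompt": "<your query expansion prompt>",
    "enable_query_decomposition": False,
    "query_decomposition_prompt": "<your query decomposition prompt>",
}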