Custom Agent Composer YAML workflows are available in public preview for enterprise users. Template agents (Basic Search and Agentic Search) are available for self-serve users, but creating custom workflows with YAML requires enterprise access. For more information or to request access, please contact your Contextual AI representative.
Overview
This catalog lists the step types available for use in Agent Composer YAML graphs, including their configuration parameters, required inputs, outputs, and supported UI stream types.
AgenticResearchStep
- Description: Runs a multi-turn agent loop that plans and invokes tools to gather information and produce structured research output.
- Config Parameters:
agent_config: Dict[str, Any] = required
tools_config: Optional[List[Dict[str, Any]]] = None
version: str = "0.1"
- Inputs:
message_history: List[MessageAndRole]
- Outputs: (varies by implementation; commonly includes structured research output consumed by generation steps)
- UI Stream Types:
RETRIEVALS
Tip: For final response synthesis, pair AgenticResearchStep with GenerateFromResearchStep.
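A graph fragment pairing these two steps might look like the following. This is a hypothetical sketch: the top-level keys (`steps`, `id`, `type`, `config`, `inputs`) and the `step_id.output_name` wiring syntax are assumptions about the graph schema, and the model names are placeholders.

```yaml
steps:
  - id: research
    type: AgenticResearchStep
    config:
      agent_config:
        model: my-research-model   # placeholder value
      tools_config: []
      version: "0.1"
    inputs:
      message_history: create_history.message_history
  - id: generate
    type: GenerateFromResearchStep
    config:
      model_name_or_path: my-generator-model   # placeholder value
    inputs:
      message_history: create_history.message_history
      research: research.output   # exact output name varies by implementation
```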
AppendStep
- Description: Appends a given item to a list and returns the new list.
- Config: None
- Inputs:
list: List[Any], element: Any
- Outputs:
output: List[Any]
- UI Stream Types: None
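As a sketch of how a list-manipulation step might be wired (assuming a `steps`/`inputs` graph schema with `step_id.output_name` references, which this catalog does not confirm):

```yaml
steps:
  - id: extra_item
    type: ConstantStep
    config:
      value: "appended element"
  - id: append
    type: AppendStep
    inputs:
      list: upstream_step.output   # any step producing List[Any]
      element: extra_item.output
```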
AttributionStep
- Description: Computes attributions for a generation result.
- Config Parameters:
attribution: Optional[bool] = None
template_attribution: Optional[str] = None
attribution_model: Optional[str] = None
number_of_retrievals_for_attribution: Optional[int] = None
- Inputs:
generation_result: GenerationResult, chunks_info: List[Dict]
- Outputs:
attribution_result: AttributionResult
- UI Stream Types:
ATTRIBUTION
BuildPromptStep
- Description: Step to build the final prompt.
- Config Parameters:
untrusted_system_prompt: Optional[str] = None
untrusted_no_retrieval_system_prompt: Optional[str] = None
template_seed: Optional[str] = None
template_query: Optional[str] = None
template_knowledge: Optional[str] = None
max_new_tokens: Optional[int] = None
skip_context_truncation: Optional[bool] = None
allow_multi_turn: Optional[bool] = None
avoid_commentary: Optional[bool] = None
attribution: Optional[bool] = None
- Inputs:
query: str, retrievals: Retrievals, retrieval_record: Retrieval, message_history: List[MessageAndRole], chunks_info: List[Dict]
- Outputs:
retrieval_record: Retrieval, message_history: List[MessageAndRole], chunks_info: List[Dict]
- UI Stream Types:
RETRIEVALS
CheckRetrievalNeedStep
- Description: Step to check if retrieval is needed for the given query.
- Config Parameters:
should_check_retrieval_need: Optional[bool] = None
template_should_retrieve: Optional[str] = None
- Inputs:
query: str
- Outputs:
is_retrieval_needed: bool, retrievals: Retrievals
- UI Stream Types: None
ConcatenateStep
- Description: Concatenates two lists.
- Config: None
- Inputs:
list1: List[Any], list2: List[Any]
- Outputs:
output: List[Any]
- UI Stream Types: None
ContextualAgentStep
- Description: Step to retrieve information from contextual agents.
- Config Parameters:
agent_ids: Optional[List[str]] = None
- Inputs:
query: str
- Outputs:
retrievals: Retrievals
- UI Stream Types:
RETRIEVALS
ConstantStep
- Description: Provides constant values as outputs.
- Config Parameters:
value: Any = required
- Inputs: None
- Outputs:
output: Any
- UI Stream Types: None
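A constant is typically used to pin a fixed value that downstream steps consume. A hypothetical fragment (schema keys and wiring syntax are assumptions) feeding a ConstantStep into a WrapStep:

```yaml
steps:
  - id: default_language
    type: ConstantStep
    config:
      value: "en"
  - id: wrap_language
    type: WrapStep
    config:
      key: language
    inputs:
      input: default_language.output   # wraps the value as {"language": "en"}
```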
ContextualizeChunk
- Description: Contextualize chunk text with surrounding document context using VLM.
- Config Parameters:
shard_tokens: int = 20000
shard_overlap: int = 10000
- Inputs:
chunk: Dict[str, Any], tenant_id: str, document_id: str
- Outputs:
chunk: Dict[str, Any]
- UI Stream Types: None
CreateMessageHistoryStep
- Description: Converts a query into a message history.
- Config Parameters:
enable_model_armor: Optional[bool] = None
should_check_retrieval_need: Optional[bool] = None
template_should_retrieve: Optional[str] = None
untrusted_system_prompt: Optional[str] = None
untrusted_no_retrieval_system_prompt: Optional[str] = None
template_seed: Optional[str] = None
template_query: Optional[str] = None
template_knowledge: Optional[str] = None
max_new_tokens: Optional[int] = None
skip_context_truncation: Optional[bool] = None
allow_multi_turn: Optional[bool] = None
avoid_commentary: Optional[bool] = None
attribution: Optional[bool] = None
- Inputs:
query: str
- Outputs:
message_history: List[MessageAndRole]
- UI Stream Types:
RETRIEVALS: False
DeleteMemberStep
- Description: Deletes the specified key from a dict or a pydantic model.
- Config Parameters:
allow_missing_key: bool = False
- Inputs:
input: Union[dict, BaseModel], key: str
- Outputs:
output: Union[dict, BaseModel]
- UI Stream Types: None
DocumentDownloadStep
- Description: Downloads documents from ingestion storage using DownloadDocumentTool.
- Config: None
- Inputs:
tenant: dict, datastore_id: UUID, short_path: str
- Outputs:
content: str, file_name: str, document_id: str, full_gcs_path: str, content_type: str, size_bytes: int
- UI Stream Types: None
EntitleRetrievalsStep
- Description: Post-processes retrievals by applying entitlement checks.
- Config: None
- Inputs:
retrievals: Retrievals
- Outputs:
retrievals: Retrievals, entitlements_api_time_elapsed: float
- UI Stream Types: None
ExtractChunkMetadata
- Description: Extract key-value metadata from chunk text using VLM.
- Config: None
- Inputs:
chunk: Dict[str, Any], prompt: str, vlm_model: str
- Outputs:
metadata: Dict[str, Any], chunk: Dict[str, Any]
- UI Stream Types: None
ExtractMetadataStep
- Description: Extract metadata from PDF using ExtractMetadataTool.
- Config: None
- Inputs:
content: str, full_gcs_path: str, user_id: UUID, file_name: str
- Outputs:
shard_metadata_list: List[ShardMetadata]
- UI Stream Types: None
FileAnalysisInputStep
- Description: Analyze files uploaded at query time.
- Config: None
- Inputs:
query: str
- Outputs:
retrievals: Retrievals
- UI Stream Types:
RETRIEVALS
FilterRetrievalsStep
- Description: Filter retrievals.
- Config Parameters:
template_filter: Optional[str] = None
filter_retrievals: Optional[bool] = None
filter_model: Optional[str] = None
structured_output: Optional[bool] = None
untrusted_filter_prompt: Optional[str] = None
enable_batch_processing: Optional[bool] = None
- Inputs:
query: str, retrievals: Retrievals
- Outputs:
retrievals: Retrievals
- UI Stream Types: None
GenerateEmbeddingsStep
- Description: Generate embeddings for the query.
- Config Parameters:
max_encoder_length: Optional[int] = None
retrieval_encoder_model: Optional[str] = None
- Inputs:
query: str, reformulated_query: str, expanded_queries: List[str]
- Outputs:
query_embedding: List[float], merged_embeded_reformulated_queries: List[Tuple[List[float], str]]
- UI Stream Types: None
GenerateFromResearchStep
- Description: Generates the final response from agentic research output.
- Config Parameters:
model_name_or_path: str
base_url: Optional[str] = None
identity_guidelines_prompt: str = "You are a helpful assistant made by Contextual AI."
response_guidelines_prompt: str = "You should respond in a friendly and helpful manner."
litellm_modify_params: bool | Literal["auto"] = "auto"
version: str = "0.1"
- Inputs:
message_history: List[MessageAndRole], research: List
- Outputs:
response: str
GenerationStep
- Description: Generate a response.
- Config Parameters:
temperature: Optional[float] = None
max_new_tokens: Optional[int] = None
top_p: Optional[float] = None
frequency_penalty: Optional[float] = None
seed: Optional[int] = None
attribution: Optional[bool] = None
- Inputs:
message_history: List[MessageAndRole], chunks_info: List[Dict], disable_streaming: bool
- Outputs:
generation_result: GenerationResult
- UI Stream Types:
GENERATION
GetAgentInfoStep
- Description: Answers questions about the agent itself.
- Config: None
- Inputs:
query: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types: None
GetDatastoreStatisticsStep
- Description: Handles queries about datastore statistics.
- Config: None
- Inputs:
query: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types: None
GetDatastoreSummaryStep
- Description: Aggregates results to generate a datastore summary.
- Config: None
- Inputs:
datastore_statistics: GenerationResult, documents_summary: GenerationResult, query: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types: None
GetDocumentsSummaryStep
- Description: Generates a summary of documents in the datastore.
- Config: None
- Inputs:
query: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types: None
GetMemberStep
- Description: Extract values from dictionaries or object attributes.
- Config Parameters:
key: str = required
- Inputs:
input: Union[dict, BaseModel]
- Outputs:
output: Any
- UI Stream Types: None
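A hypothetical fragment extracting one field from an upstream result (the graph schema keys and `step_id.output_name` wiring are assumptions):

```yaml
steps:
  - id: get_answer
    type: GetMemberStep
    config:
      key: answer          # extracts input["answer"] (or the .answer attribute)
    inputs:
      input: upstream_step.output   # any step producing a dict or pydantic model
```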
GetPlatformInfoStep
- Description: Answers questions about the platform itself.
- Config: None
- Inputs:
query: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types: None
GroundednessStep
- Description: Compute groundedness scores.
- Config Parameters:
attribution: Optional[bool] = None
calculate_groundedness: Optional[bool] = None
- Inputs:
generation_result: GenerationResult, attribution_result: AttributionResult, chunks_info: List[Dict]
- Outputs:
groundedness_scores: List[GroundednessScore]
- UI Stream Types:
GROUNDEDNESS
IndexChunkMetadata
- Description: Indexes a chunk with its metadata in Elasticsearch.
- Config: None
- Inputs:
chunk: Dict[str, Any], metadata: Dict[str, Any], index_config: Dict[str, str], tenant_id: str, document_id: str
- Outputs:
indexed: bool, chunk_id: str
- UI Stream Types: None
IndexMetadataStep
- Description: Index metadata into Elasticsearch using IndexMetadataTool.
- Config: None
- Inputs:
shard_metadata_list: List[ShardMetadata], tenant: dict, datastore_id: UUID
- Outputs:
doc_id_list: List[str]
- UI Stream Types: None
IndexStep
- Description: Return the element at a given index of a list.
- Config: None
- Inputs:
list: List[Any], index: int
- Outputs:
output: Any
- UI Stream Types: None
IsFirstTurnStep
- Description: Returns whether this is the first turn in a multi-turn interaction.
- Config: None
- Inputs: None
- Outputs:
is_first_turn: bool, turn_count: int
- UI Stream Types: None
JSONCreatorStep
- Description: Creates JSON objects with dynamic variable substitution using $.
- Config Parameters:
schema: str = required
- Inputs: Dynamic (based on $variables in schema)
- Outputs:
json: Dict[str, Any]
- UI Stream Types: None
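A hypothetical fragment building a JSON payload. The assumption here (not confirmed by this catalog) is that each `$variable` in the schema becomes an input of the same name; the graph schema keys and wiring syntax are likewise assumptions:

```yaml
steps:
  - id: build_payload
    type: JSONCreatorStep
    config:
      schema: '{"question": "$query", "language": "$lang"}'
    inputs:
      query: reformulate.reformulated_query   # bound to $query
      lang: detect_language.output            # bound to $lang
```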
LanguageModelStep
- Description: Generic language model inference with template variables.
- Config Parameters:
prompt_template: str = required
model_id: str = required
temperature: float = 0.7
max_tokens: int = 1000
structured_output: Optional[Dict] = None
- Inputs: Dynamic (template variables)
- Outputs:
response: str
- UI Stream Types: None
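A hypothetical fragment for generic inference. The `{text}` placeholder syntax, the binding of template variables to input names, and the graph schema keys are all assumptions; the model id is a placeholder:

```yaml
steps:
  - id: summarize
    type: LanguageModelStep
    config:
      prompt_template: "Summarize the following text in one sentence:\n\n{text}"
      model_id: my-model-id     # placeholder
      temperature: 0.2
      max_tokens: 256
    inputs:
      text: download_doc.content   # assumed to fill the {text} placeholder
```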
LengthStep
- Description: Return the length of a list.
- Config: None
- Inputs:
list: List[Any]
- Outputs:
output: int
- UI Stream Types: None
MCPClientStep
- Description: Execute tools on Model Context Protocol servers.
- Config Parameters:
server_url: str = required
tool_name: str = required
tool_args: str = required
transport_type: str = "http"
connection_timeout: int = 30
server_name: Optional[str] = None
auth_headers: Optional[Dict[str, str]] = None
- Inputs: Dynamic (based on tool_args)
- Outputs:
mcp_result: Dict[str, Any]
- UI Stream Types: None
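A hypothetical fragment calling a tool on an MCP server. The server URL and tool name are placeholders, and the assumption that `$variables` inside `tool_args` bind to step inputs (along with the graph schema keys) is not confirmed by this catalog:

```yaml
steps:
  - id: lookup_ticket
    type: MCPClientStep
    config:
      server_url: "https://mcp.example.com/mcp"   # placeholder URL
      tool_name: get_ticket                       # placeholder tool name
      tool_args: '{"ticket_id": "$ticket_id"}'
      transport_type: http
      connection_timeout: 30
    inputs:
      ticket_id: extract_id.output   # assumed to bind to $ticket_id in tool_args
```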
MergeStep
- Description: Merge two dicts (or pydantic models).
- Config Parameters:
allow_key_conflicts: bool = False
- Inputs:
dict1: Union[dict, BaseModel], dict2: Union[dict, BaseModel]
- Outputs:
output: Union[dict, BaseModel]
- UI Stream Types: None
ModelArmorFilterUserPromptStep
- Description: Sends the user prompt to Model Armor for a safety check.
- Config Parameters:
enable_model_armor: Optional[bool] = None
- Inputs:
query: str
- Outputs: None
- UI Stream Types: None
ProcessMetadataStep
- Description: Process metadata for retrievals.
- Config: None
- Inputs:
retrievals: Retrievals
- Outputs:
retrievals: Retrievals
- UI Stream Types: None
QueryDecompositionStep
- Description: Decomposes the query into multiple sub-queries.
- Config Parameters:
query_decomposition_prompt: Optional[str] = None
query_decomposition_model: Optional[str] = None
enable_query_decomposition: Optional[bool] = None
- Inputs:
query: str
- Outputs:
expanded_queries: List[str]
- UI Stream Types: None
QueryExpansionStep
- Description: Expand the query.
- Config Parameters:
query_expansion_prompt: Optional[str] = None
query_expansion_model: Optional[str] = None
enable_query_expansion: Optional[bool] = None
- Inputs:
query: str
- Outputs:
reformulated_query: str, documents_filters: Optional[Union[CompositeMetadataFilter, BaseMetadataFilter]]
- UI Stream Types:
QUERY_REFORMULATION
QueryMultiturnStep
- Description: Reformulates the query using multi-turn conversation context.
- Config Parameters:
query_multiturn_model: Optional[str] = None
allow_multi_turn: Optional[bool] = None
untrusted_multiturn_system_prompt: Optional[str] = None
template_query_multiturn: Optional[str] = None
- Inputs:
query: str
- Outputs:
reformulated_query: str
- UI Stream Types:
QUERY_REFORMULATION
QueryRegexStrippingStep
- Description: Strips configured regex patterns from the query.
- Config Parameters:
query_regex_stripping: Optional[List[str]] = None
- Inputs:
query: str
- Outputs:
reformulated_query: str
- UI Stream Types: None
ReformulateQueryStep
- Description: Reformulate a query for datastore search.
- Config Parameters: None
- Inputs:
query: str
- Outputs:
reformulated_query: str, translate_needed: bool, detected_language: str
- UI Stream Types:
QUERY_REFORMULATION
RerankRetrievalsStep
- Description: Rerank retrievals.
- Config Parameters:
rerank_top_k: Optional[int] = None
reranker: Optional[str] = None
rerank_retrievals: Optional[bool] = None
rerank_instructions: Optional[str] = None
reranker_score_filter_threshold: Optional[float] = None
rerank_with_llm: Optional[bool] = None
template_rerank: Optional[str] = None
structured_output: Optional[bool] = None
- Inputs:
query: str, retrievals: Retrievals
- Outputs:
retrievals: Retrievals
- UI Stream Types: None
ResponseGenerationStep
- Description: Generate a response based on query + retrievals.
- Config Parameters: None
- Inputs:
retrievals: Retrievals, query: str, translate_needed: bool, detected_language: str
- Outputs:
response: str, attribution_result: AttributionResult, groundedness_scores: List[GroundednessScore]
- UI Stream Types:
RETRIEVALS, GENERATION, ATTRIBUTION, GROUNDEDNESS
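The retrieval steps above can be composed into a minimal retrieval-augmented pipeline. The following is a hypothetical sketch: the graph schema keys, the `step_id.output_name` wiring syntax, and the `user.query` reference for the graph's query input are all assumptions:

```yaml
steps:
  - id: reformulate
    type: ReformulateQueryStep
    inputs:
      query: user.query              # placeholder for the graph's query input
  - id: search
    type: SearchUnstructuredDataStep
    inputs:
      query: reformulate.reformulated_query
  - id: rerank
    type: RerankRetrievalsStep
    config:
      rerank_top_k: 10
    inputs:
      query: reformulate.reformulated_query
      retrievals: search.retrievals
  - id: respond
    type: ResponseGenerationStep
    inputs:
      query: reformulate.reformulated_query
      retrievals: rerank.retrievals
      translate_needed: reformulate.translate_needed
      detected_language: reformulate.detected_language
```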
SalesforceSOSLStep
- Description: Execute SOSL searches against Salesforce.
- Config Parameters:
username: Optional[str] = None
password: Optional[SecretStr] = None
security_token: Optional[SecretStr] = None
client_id: Optional[str] = None
client_secret: Optional[SecretStr] = None
login_url: Optional[str] = None
auth_token_secret: Optional[SecretStr] = None
objects_to_return: str = "Case(Id, Subject, Status, Priority, Account.Name, Contact.Name)"
limit: int = 250
timeout: int = 30
- Inputs:
query: str
- Outputs:
search_results: str
- UI Stream Types: None
SearchShardMetadataStep
- Description: Search document shard metadata using instant search.
- Config Parameters:
default_size: int = 10
- Inputs:
query: str
- Outputs:
shard_search_results: List[Dict[str, Any]]
- UI Stream Types: None
SearchStep
- Description: Perform search using the search service.
- Config Parameters:
datastores: Optional[List[UUID]] = None
disable_structured_data_search: bool = False
- Inputs:
query: str, query_embedding: List[float], merged_embeded_reformulated_queries: List[Tuple[List[float], str]], documents_filters: Optional[...]
- Outputs:
retrievals: Retrievals
- UI Stream Types: None
SearchStructuredDataStep
- Description: Search structured data.
- Config Parameters:
datastores: Optional[List[UUID]] = None
- Inputs:
query: str
- Outputs:
retrievals: Retrievals
- UI Stream Types: None
SearchUnstructuredDataStep
- Description: Search unstructured data and return retrievals.
- Config Parameters: None
- Inputs:
query: str
- Outputs:
retrievals: Retrievals
- UI Stream Types:
QUERY_REFORMULATION
SetMemberStep
- Description: Update values of dictionaries or pydantic objects.
- Config Parameters:
overwrite: bool = False
- Inputs:
input: Union[dict, BaseModel], key: str, value: Any
- Outputs:
output: Union[dict, BaseModel]
- UI Stream Types: None
ShardImageExtractionStep
- Description: Extract page images from PDF shard search results.
- Config Parameters:
default_dpi: int = 144
- Inputs:
shard_search_results: List[Dict[str, Any]]
- Outputs:
extracted_images: List[str]
- UI Stream Types: None
SliceStep
- Description: Slices a list from start_index to end_index.
- Config: None
- Inputs:
list: List[Any], start_index: int, end_index: int
- Outputs:
output: List[Any]
- UI Stream Types: None
TranslateQueryStep
- Description: Translate the query.
- Config Parameters:
translate_confidence: Optional[float] = None
template_translation_forward: Optional[str] = None
template_translation_reverse: Optional[str] = None
translate_model: Optional[str] = None
translate_needed: Optional[bool] = None
- Inputs:
query: str
- Outputs:
reformulated_query: str, translate_needed: bool, detected_language: str
- UI Stream Types: None
TranslateResponseStep
- Description: Translate the response.
- Config Parameters:
template_translation_forward: Optional[str] = None
template_translation_reverse: Optional[str] = None
translate_model: Optional[str] = None
- Inputs:
generation_result: GenerationResult, translate_needed: bool, detected_language: str
- Outputs:
generation_result: GenerationResult
- UI Stream Types:
GENERATION
VisionLanguageModelStep
- Description: Vision language model inference with multimodal inputs.
- Config Parameters:
model_id: str = "vertex_ai/gemini-2.0-flash"
temperature: float = 0.0
max_tokens: int = 4096
structured_output: Optional[Dict[str, Any]] = None
- Inputs:
query: str, images: List[str]
- Outputs:
generation_result: GenerationResult
- UI Stream Types:
GENERATION
WebSearchStep
- Description: Perform web search.
- Config Parameters:
model: str = "gemini-2.5-flash"
- Inputs:
query: str
- Outputs:
web_result: WebResult
- UI Stream Types: None
WebhookStep
- Description: Execute a webhook call with static and dynamic configuration.
- Config Parameters:
webhook_url: str
method: HttpMethod = HttpMethod.POST
auth_token: Optional[SecretStr] = None
timeout: int = 30
retries: int = 2
static_headers: Optional[Dict[str, str]] = None
webhook_name: str = "webhook-step"
- Inputs:
context_data: Dict[str, Any]
- Outputs:
webhook_result: Optional[Dict[str, Any]]
- UI Stream Types: None
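A hypothetical fragment posting graph data to an external endpoint. The URL is a placeholder, and the graph schema keys, the YAML spelling of the `method` enum value, and the wiring syntax are assumptions:

```yaml
steps:
  - id: notify
    type: WebhookStep
    config:
      webhook_url: "https://hooks.example.com/agent-events"   # placeholder URL
      method: POST              # assumed YAML form of HttpMethod.POST
      timeout: 30
      retries: 2
      static_headers:
        X-Source: agent-composer
      webhook_name: notify-downstream
    inputs:
      context_data: build_payload.json   # any step producing Dict[str, Any]
```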
WrapStep
- Description: Transform data by wrapping inputs in dictionaries.
- Config Parameters:
key: str = required
- Inputs:
input: Any
- Outputs:
output: Dict[str, Any]
- UI Stream Types: None