POST /agents/{agent_id}/evaluate
curl --request POST \
  --url https://api.contextual.ai/v1/agents/{agent_id}/evaluate \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: multipart/form-data' \
  --form 'metrics[]=["equivalence"]' \
  --form 'evalset_name=<string>' \
  --form 'llm_model_id=<string>'

{
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a"
}
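
For reference, here is a minimal Python sketch of the same request using the requests library. The agent ID, token, and evalset name are placeholders, and sending metrics[] once per metric is the usual multipart convention for array-valued form fields, not something confirmed by this reference.

import requests

AGENT_ID = "<agent_id>"   # placeholder
API_TOKEN = "<token>"     # placeholder

resp = requests.post(
    f"https://api.contextual.ai/v1/agents/{AGENT_ID}/evaluate",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    # (None, value) tuples are encoded as plain multipart form fields;
    # requests sets the multipart Content-Type and boundary itself.
    files=[
        ("metrics[]", (None, "equivalence")),
        ("evalset_name", (None, "<string>")),
    ],
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the launched evaluation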

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Path Parameters

agent_id
string
required

ID of the agent to evaluate.

Body

multipart/form-data
metrics[]
enum<string>[]

List of metrics to use. Supported metrics are equivalence and groundedness.

Available options: equivalence, groundedness
evalset_file
file

Evalset file (CSV) to use for evaluation. It must contain the columns prompt (the question) and reference (the ground-truth response). Either evalset_name or evalset_file must be provided, but not both; see the upload sketch after this parameter list.

evalset_name
string

Name of the Dataset to use for evaluation, created through the /datasets/evaluate API. Either evalset_name or evalset_file must be provided, but not both.

llm_model_id
string

ID of the model to evaluate. Uses the default model if not specified.

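As a sketch of the evalset_file variant, the upload might look like the following; the file name and the example rows in the comment are illustrative, not taken from this reference.

import requests

# evalset.csv must contain the columns prompt and reference, e.g.:
# prompt,reference
# "What is the capital of France?","Paris is the capital of France."
with open("evalset.csv", "rb") as f:
    resp = requests.post(
        "https://api.contextual.ai/v1/agents/<agent_id>/evaluate",
        headers={"Authorization": "Bearer <token>"},
        files=[
            ("metrics[]", (None, "groundedness")),
            # evalset_file and evalset_name are mutually exclusive;
            # here the file is uploaded instead of naming a Dataset.
            ("evalset_file", ("evalset.csv", f, "text/csv")),
        ],
    )
resp.raise_for_status()
print(resp.json()["id"])
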
Response

200
application/json
Successful Response

Response from the Launch Evaluation request.

id
string
required

ID of the launched evaluation.