Create a tuning job for the specified Agent
to specialize it to your specific domain or use case.
This API initiates an asynchronous tuning task. You can provide the required data in one of two ways:

- Provide a `training_file` and an optional `test_file`. If no `test_file` is provided, a portion of the `training_file` will be held out as the test set. For easy reusability, the `training_file` is automatically saved as a `Tuning Dataset` and the `test_file` as an `Evaluation Dataset`. You can manage them via the `/datasets/tune` and `/datasets/evaluation` endpoints.
- Provide a `Tuning Dataset` and an optional `Evaluation Dataset`. You can create a `Tuning Dataset` and an `Evaluation Dataset` using the `/datasets/tune` and `/datasets/evaluation` endpoints, respectively.
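As a sketch of the first approach, the snippet below builds the request for creating a tuning job from a `training_file`. The base URL, API key, and the `POST /agents/{agent_id}/tune` create path are illustrative assumptions, not part of this reference, and the actual multipart file upload is left to your HTTP client:

```python
import json

# Placeholder values -- substitute your own deployment details.
BASE_URL = "https://api.example.com/v1"  # hypothetical base URL
AGENT_ID = "your-agent-id"
API_KEY = "your-api-key"


def build_tune_request(agent_id, training_path, test_path=None):
    """Build (url, form fields) for a tune-job creation call.

    If test_path is None, the service holds out a portion of the
    training file as the test set. In a real call, each file would be
    sent as a multipart form part, not as a plain string field.
    """
    url = f"{BASE_URL}/agents/{agent_id}/tune"  # hypothetical create path
    fields = {"training_file": training_path}
    if test_path is not None:
        fields["test_file"] = test_path
    return url, fields


url, fields = build_tune_request(AGENT_ID, "train.jsonl")
print(url)
print(json.dumps(fields))
```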
The API returns a tune job `id`, which can be used to check on the status of your tuning task through the `GET /tune/jobs/{job_id}/metadata` endpoint.
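A minimal polling sketch against that metadata endpoint is shown below. The endpoint path comes from this reference; the base URL, API key, and the metadata response's field names (`job_status` and its values) are assumptions:

```python
import json
import time
from urllib import request as urlrequest

BASE_URL = "https://api.example.com/v1"  # placeholder
API_KEY = "your-api-key"                 # placeholder


def metadata_url(job_id):
    # Endpoint from this reference: GET /tune/jobs/{job_id}/metadata
    return f"{BASE_URL}/tune/jobs/{job_id}/metadata"


def poll_job(job_id, interval_s=30):
    """Poll the job metadata until it reports a terminal status.

    The "job_status" field and its "completed"/"failed" values are
    assumed here; check the actual metadata response shape.
    """
    req = urlrequest.Request(
        metadata_url(job_id),
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    while True:
        with urlrequest.urlopen(req) as resp:
            meta = json.load(resp)
        if meta.get("job_status") in ("completed", "failed"):
            return meta
        time.sleep(interval_s)


print(metadata_url("job-123"))
```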
After the tuning job is complete, the metadata associated with the tune job will include evaluation results and a model ID. You can then deploy the tuned model to the Agent by editing its config with the tuned model ID via the "Edit Agent" API (i.e. the `PUT /agents/{agent_id}` API). To deactivate the tuned model, edit the Agent's config again and set the `llm_model_id` field to `"default"`. For an end-to-end walkthrough, see the Tune & Evaluation Guide.
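The deploy and deactivate steps above can be sketched as one helper that builds the `PUT /agents/{agent_id}` call. The endpoint and the `llm_model_id` field come from this reference; the base URL, the IDs, and the assumption that the config body contains only `llm_model_id` are illustrative:

```python
import json


def build_edit_agent_request(base_url, agent_id, llm_model_id):
    """Build the "Edit Agent" call that swaps the Agent's model.

    Pass the tuned model ID to deploy the tuned model, or "default"
    to deactivate it. The config shape beyond llm_model_id is an
    assumption for this sketch.
    """
    url = f"{base_url}/agents/{agent_id}"      # PUT /agents/{agent_id}
    body = {"llm_model_id": llm_model_id}
    return "PUT", url, json.dumps(body)


# Deploy a tuned model (IDs are placeholders):
print(build_edit_agent_request("https://api.example.com/v1", "agent-1", "tuned-model-abc"))
# Revert the Agent to its default model:
print(build_edit_agent_request("https://api.example.com/v1", "agent-1", "default"))
```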