Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
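As a minimal sketch, the header can be sent with Python's requests library. The endpoint URL, environment variable name, and request body below are illustrative assumptions; only the header format is documented here.

import os
import requests

# Assumed endpoint path and payload fields, shown only to illustrate the
# Authorization header; consult the Opper docs for the actual URL.
API_URL = "https://api.opper.ai/v2/call"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPPER_API_KEY']}"},  # assumed env var name
    json={"name": "add_numbers", "input": {"x": 1, "y": 3}},
)
response.raise_for_status()
print(response.json())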
Body
Provide a unique name for the task. A function with this name will be created in the project. The function's configuration is overridden by the request parameters.
"add_numbers"
"parse_document"
"choose_tool"
Optionally provide an instruction for the model on how to complete the task. It is recommended to keep it concise and to the point.
"Calculate the sum of two numbers"
Optionally provide an input schema for the task. Preferably include field descriptions so the model can reason about the input variables. The schema is validated against the input data, and an error is raised if they do not match. With the Opper SDKs you can define these schemas through libraries like Pydantic and Zod. For schemas with definitions, prefer using '$defs' and '#/$defs/...' references. A Pydantic sketch of producing such a schema follows the example below.
{
"properties": {
"x": { "title": "X", "type": "integer" },
"y": { "title": "Y", "type": "integer" }
},
"required": ["x", "y"],
"title": "OpperInputExample",
"type": "object"
}
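As a sketch with Pydantic, using the model and field names from the example above, an equivalent schema can be generated like this:

from pydantic import BaseModel

class OpperInputExample(BaseModel):
    x: int
    y: int

# model_json_schema() produces a JSON Schema with "properties", "required",
# "title" and "type", matching the shape of the example above.
input_schema = OpperInputExample.model_json_schema()
print(input_schema)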
Optionally provide an output schema for the task. The response is guaranteed to match the schema, or an error is thrown. Preferably include field descriptions so the model can reason about the output variables. With the Opper SDKs you can define these schemas through libraries like Pydantic and Zod. For schemas with definitions, prefer using '$defs' and '#/$defs/...' references. A similar Pydantic sketch for the output schema follows the example below.
{
"properties": {
"sum": { "title": "Sum", "type": "integer" }
},
"required": ["sum"],
"title": "OpperOutputExample",
"type": "object"
}
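Similarly, a sketch of defining the output schema with Pydantic and validating a structured result against it; the field description is illustrative and optional:

from pydantic import BaseModel, Field, ValidationError

class OpperOutputExample(BaseModel):
    # A field description helps the model reason about the output variable.
    sum: int = Field(description="The sum of x and y")

output_schema = OpperOutputExample.model_json_schema()

# A structured result such as {"sum": 4} can be parsed back into the model.
try:
    result = OpperOutputExample.model_validate({"sum": 4})
except ValidationError as err:
    print(err)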
Optionally provide input data as context for completing the task. It can be text, an image, audio, or a combination of these.
Optionally provide a model to use for completing the task.
If not provided, a default model will be used. Currently the default model is azure/gpt-4o-eu.
To specify options for the model, use a dictionary of key-value pairs. The options are passed to the model on invocation. An example of passing temperature to gpt-4o-mini hosted on OpenAI is shown below.
{
"model": "openai/gpt-4o-mini", // the model name
"options": {
"temperature": 0.5 // the options for the model
}
}
To specify fallback models, provide a list of models. The models will then be tried in order: the second model is used if the first is not available, and so on.
[
"openai/gpt-4o-mini", // first model to try
"openai/gpt-4.1-nano" // second model to try
]
Optionally provide examples of successful task completions. These will be added to the prompt to help the model understand the task from examples.
[
{
"comment": "Adds two numbers",
"input": { "x": 1, "y": 3 },
"output": { "sum": 4 }
}
]
Optionally provide the parent span ID to add to the call event. This will automatically tie the call to a parent span in the UI.
"123e4567-e89b-12d3-a456-426614174000"
Optionally provide a set of tags (key-value pairs) to add to the call event. Useful for understanding aggregate analytics along some dimension.
{
"project": "project_456",
"user": "company_123"
}
Optional configuration for the function. Configuration is a dictionary of key-value pairs that can be used to override the default configuration for the function.
{
"beta.evaluation.enabled": true,
"invocation.cache.ttl": 0,
"invocation.few_shot.count": 0,
"invocation.structured_generation.max_attempts": 5
}
Response
Successful Response
The ID of the span of the call
Result of the task if the call does not use an output schema
"The sum of 1 and 3 is 4"
Result of the task if the call uses an output schema
{ "sum": 4 }
True if the result was returned from a cached result
true
The images generated by the call. Only available for image models. Depending on the configuration, the response can be either a list of image URLs or base64-encoded images.
["image_url"]
The usage of the call, split into input and output tokens as well as the total tokens, with an optional breakdown of the input and output tokens. The input tokens are the tokens sent to the model and the output tokens are the tokens received from the model. The total tokens is the sum of input and output tokens.
{
"input_tokens": 25,
"output_tokens": 972,
"output_tokens_details": { "reasoning_tokens": 704 },
"total_tokens": 997
}
The cost in USD of the call, split into total, generation, and platform costs, where total is the sum of the generation and platform costs.
{
"generation": 0.0001,
"platform": 0.00001,
"total": 0.00011
}
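As a closing sketch, the documented sum relationships can be checked on a parsed response body. The top-level field names "usage" and "cost" and the dictionary layout are assumptions for illustration, based on the examples above.

# Illustrative response fragment mirroring the examples above; the top-level
# key names are assumptions, not confirmed field names.
data = {
    "usage": {"input_tokens": 25, "output_tokens": 972, "total_tokens": 997},
    "cost": {"generation": 0.0001, "platform": 0.00001, "total": 0.00011},
}

usage, cost = data["usage"], data["cost"]
# Per the descriptions above, the totals are the sums of their parts.
assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"]
assert abs(cost["total"] - (cost["generation"] + cost["platform"])) < 1e-9
print(f"{usage['total_tokens']} tokens, ${cost['total']:.5f} total")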