Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Body
Provide a unique name for the task. A function with this name will be created in the project; the function's configuration is overridden by the request parameters.
"add_numbers"
"parse_document"
"choose_tool"
Optionally provide an instruction for the model to complete the task. It should be concise and to the point.
"Calculate the sum of two numbers"
Optionally provide an input schema for the task. Field descriptions are recommended so the model can reason about the input variables. The schema is validated against the input data, and an error is issued if they do not match. With the Opper SDKs you can define these schemas through libraries like Pydantic and Zod. For schemas with definitions, prefer using '$defs' and '#/$defs/...' references.
{
"properties": {
"x": { "title": "X", "type": "integer" },
"y": { "title": "Y", "type": "integer" }
},
"required": ["x", "y"],
"title": "OpperInputExample",
"type": "object"
}
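The schema above is standard JSON Schema. As a simplified illustration of the validation step, the sketch below checks required fields and the "integer" type using only the standard library; the actual validation follows the full JSON Schema specification:

```python
schema = {
    "properties": {
        "x": {"title": "X", "type": "integer"},
        "y": {"title": "Y", "type": "integer"},
    },
    "required": ["x", "y"],
    "title": "OpperInputExample",
    "type": "object",
}

def validate(data: dict, schema: dict) -> list:
    """Return a list of validation errors (empty if the data matches)."""
    errors = [f"missing required field: {name}"
              for name in schema["required"] if name not in data]
    for name, spec in schema["properties"].items():
        if name in data and spec["type"] == "integer" and not isinstance(data[name], int):
            errors.append(f"{name} must be an integer")
    return errors
```

For example, `validate({"x": 1, "y": 3}, schema)` returns an empty list, while `validate({"x": 1}, schema)` reports the missing `y` field.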
Optionally provide an output schema for the task. The response is guaranteed to match the schema, or an error is thrown. Field descriptions are recommended so the model can reason about the output variables. With the Opper SDKs you can define these schemas through libraries like Pydantic and Zod. For schemas with definitions, prefer using '$defs' and '#/$defs/...' references.
{
"properties": {
"sum": { "title": "Sum", "type": "integer" }
},
"required": ["sum"],
"title": "OpperOutputExample",
"type": "object"
}
Optionally provide input data as context to complete the task. This can be text, images, audio, or a combination of these.
Optionally provide a model to use for completing the task. If not provided, a default model will be used; currently the default model is azure/gpt-4o-eu.
To specify options for the model, use a dictionary of key-value pairs. The options are passed to the model on invocation.
An example of passing temperature to gpt-4o-mini hosted on OpenAI is shown below.
{
"model": "openai/gpt-4o-mini", # the model name
"options": {
"temperature": 0.5 # the options for the model
}
}
To specify a fallback model, use a list of models. The models will then be tried in order. The second model will be used if the first model is not available, and so on.
[
"openai/gpt-4o-mini", # first model to try
"openai/gpt-4.1-nano", # second model to try
]
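The fallback semantics above can be sketched as a loop over the list. Note that `invoke` here is a hypothetical stand-in for the provider call, not part of the API:

```python
def invoke(model: str, available: set) -> str:
    """Hypothetical stand-in for invoking a model provider."""
    if model not in available:
        raise RuntimeError(f"{model} is not available")
    return f"completed with {model}"

def complete_with_fallback(models: list, available: set) -> str:
    """Try each model in order; fall through to the next on failure."""
    last_error = None
    for model in models:                 # first model to try, then the second, ...
        try:
            return invoke(model, available)
        except RuntimeError as err:
            last_error = err             # not available: try the next model
    raise last_error

result = complete_with_fallback(
    ["openai/gpt-4o-mini", "openai/gpt-4.1-nano"],
    available={"openai/gpt-4.1-nano"},   # simulate the first model being down
)
```

Here the first model is unavailable, so the call succeeds with the second model in the list.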
Optionally provide examples of successful task completions. They will be added to the prompt to help the model understand the task from examples.
[
{
"comment": "Adds two numbers",
"input": { "x": 1, "y": 3 },
"output": { "sum": 4 }
}
]
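The service's actual prompt format is internal, so purely as an illustration of how the comment/input/output fields of an example could be used, a hypothetical rendering might look like:

```python
import json

examples = [
    {
        "comment": "Adds two numbers",
        "input": {"x": 1, "y": 3},
        "output": {"sum": 4},
    }
]

def render_examples(examples: list) -> str:
    """Hypothetical rendering of few-shot examples into prompt text."""
    lines = []
    for ex in examples:
        lines.append(f"# {ex['comment']}")
        lines.append(f"Input: {json.dumps(ex['input'])}")
        lines.append(f"Output: {json.dumps(ex['output'])}")
    return "\n".join(lines)
```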
Optionally provide the parent span ID to add to the call event. This will automatically tie the call to a parent span in the UI.
"123e4567-e89b-12d3-a456-426614174000"
Optionally provide tags (key-value pairs) to add to the call event. Useful for aggregate analytics along some dimension.
{
"project": "project_456",
"user": "company_123"
}
Optional configuration for the function. Configuration is a dictionary of key-value pairs that can be used to override the default configuration for the function.
{
"beta.evaluation.enabled": true,
"invocation.cache.ttl": 0,
"invocation.few_shot.count": 0,
"invocation.structured_generation.max_attempts": 5
}
Response
Server-Sent Events stream of function execution chunks
Server-Sent Event following the SSE specification
The actual data payload containing streaming chunk information
Event ID for the SSE event
"123"
Event type for the SSE event
"message"
Retry interval in milliseconds for the SSE connection
1000
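Per the SSE specification, each event arrives as a block of `field: value` lines carrying the `id`, `event`, `retry`, and `data` fields described above. A simplified sketch of parsing one such event (real SSE parsing also handles comment lines and multi-line `data` concatenation):

```python
def parse_sse_event(raw: str) -> dict:
    """Parse a single SSE event block into a dict of its fields."""
    event = {}
    for line in raw.splitlines():
        if not line or line.startswith(":"):   # skip blank lines and comments
            continue
        field, _, value = line.partition(":")
        event[field] = value.lstrip(" ")       # spec strips one leading space
    return event

chunk = parse_sse_event("id: 123\nevent: message\nretry: 1000")
```

Here `chunk` maps each SSE field name to its value as a string, e.g. `chunk["event"]` is `"message"`.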