Calls
A Call is a call to an LLM to perform a generation. With the Opper SDKs, calls can be configured to use a specific model, prompt, and schema. Note that all calls are automatically logged as traces in Opper.
Behind the scenes, the Opper API constructs prompts for the model to generate outputs that optimally complete the call. Prompts are always fully visible in the trace view.
import asyncio

from opperai import AsyncOpper

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="respond",
        input="What is the capital of Sweden?",
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message } = await client.call({
    name: "respond",
    input: "What is the capital of Sweden?",
  });
  console.log(message);
})();
curl -X POST https://api.opper.ai/v1/call \
  -H "Content-Type: application/json" \
  -H "x-opper-api-key: YOUR_API_KEY" \
  -d '{
    "name": "respond",
    "input": "What is the capital of Sweden?"
  }'
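When calling the HTTP API directly, it can help to build the request body programmatically so the payload is always valid JSON (hand-written curl bodies are easy to break with stray trailing commas). A minimal sketch using only the Python standard library, with the endpoint and header names taken from the curl example above:

```python
import json
import urllib.request

# Build the body as a dict and serialize it; json.dumps guarantees
# syntactically valid JSON.
payload = {
    "name": "respond",
    "input": "What is the capital of Sweden?",
}

req = urllib.request.Request(
    "https://api.opper.ai/v1/call",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-opper-api-key": "YOUR_API_KEY",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the request; omitted here.
print(req.get_full_url())
```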
Opper also supports multimodal inputs and outputs. For examples of this, check out the Examples folder in the Python SDK and Node SDK respectively.
Instructions are useful for telling the model how to respond to a given input. We typically recommend keeping them short and concise and letting the schemas be expressive.
import asyncio

from opperai import AsyncOpper

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="respond",
        input="What is the capital of Sweden?",
        instructions="Given a question, produce an answer in Swedish",
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message } = await client.call({
    name: "respond",
    input: "What is the capital of Sweden?",
    instructions: "Answer questions in Swedish",
  });
  console.log(message);
})();
curl -X POST https://api.opper.ai/v1/call \
  -H "Content-Type: application/json" \
  -H "x-opper-api-key: YOUR_API_KEY" \
  -d '{
    "name": "respond",
    "input": "What is the capital of Sweden?",
    "instructions": "Answer questions in Swedish"
  }'
Calls are preferably defined to return a specific JSON schema. While this is JSON under the hood, we recommend using Pydantic for Python and Zod for TypeScript to facilitate building accurate schemas:
import asyncio

from opperai import AsyncOpper
from pydantic import BaseModel

opper = AsyncOpper()

class Room(BaseModel):
    beds: int
    seaview: bool
    description: str

async def main():
    result, _ = await opper.call(
        name="extractRoom",
        input="Room at Grand Hotel with 2 beds and a view to the sea.",
        output_type=Room,
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

const outputSchema = {
  type: "object",
  $schema: "https://json-schema.org/draft/2020-12/schema",
  properties: {
    room_count: {
      type: "number",
      description: "The number of rooms",
    },
    view: {
      type: "string",
      description: "The view from the room",
    },
    bed_size: {
      type: "string",
      description: "The size of the bed",
    },
    hotel_name: {
      type: "string",
      description: "The name of the hotel",
    },
  },
  required: ["room_count", "view", "bed_size", "hotel_name"],
};

(async () => {
  const { json_payload } = await client.call({
    name: "extractRoom",
    input: "The Grand Hotel offers a luxurious suite with 3 spacious rooms, each providing a breathtaking view of the ocean. The suite includes a king-sized bed, an en-suite bathroom, and a private balcony for an unforgettable stay.",
    output_schema: outputSchema,
  });
  console.log(json_payload);
})();
curl -X POST https://api.opper.ai/v1/call \
  -H "Content-Type: application/json" \
  -H "x-opper-api-key: YOUR_API_KEY" \
  -d '{
    "name": "extractRoom",
    "input": "Room at Grand Hotel with 2 beds and a view to the sea.",
    "output_schema": {
      "type": "object",
      "properties": {
        "beds": {
          "type": "integer",
          "description": "The number of beds in the room"
        },
        "seaview": {
          "type": "boolean",
          "description": "Whether the room has a sea view"
        },
        "description": {
          "type": "string",
          "description": "A brief description of the room"
        }
      },
      "required": ["beds", "seaview", "description"]
    }
  }'
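If you want to see the JSON schema that a Pydantic model produces (the same kind of structure passed as output_schema in the raw HTTP example), Pydantic can emit it directly:

```python
from pydantic import BaseModel

class Room(BaseModel):
    beds: int
    seaview: bool
    description: str

# model_json_schema() returns the JSON Schema dict that Pydantic derives
# from the model's fields and type annotations.
schema = Room.model_json_schema()
print(schema["properties"]["beds"]["type"])  # integer
print(schema["required"])                    # ['beds', 'seaview', 'description']
```

This is a convenient way to verify that the schema the SDK sends matches what you would write by hand.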
Calls can also be configured to return responses as a stream. Note that streaming is not supported together with an output schema, so make sure you do not specify output_type on the call().
import asyncio

from opperai import AsyncOpper

aopper = AsyncOpper()

async def stream_call():
    res = await aopper.call(
        name="streaming_call",
        instructions="answer the following question",
        input="what are some uses of 42",
        stream=True,
    )
    async for chunk in res.deltas:
        print(chunk)

if __name__ == "__main__":
    asyncio.run(stream_call())
const stream = await client.call({
  name: "streaming_call",
  instructions: "Respond to the question",
  input: "what are some uses of 42?",
  stream: true,
});
On any call you can specify which model to use. Opper supports a wide range of models and is happy to add more. If no model is specified, Opper defaults to azure/gpt-4o. To specify a different model:
import asyncio

from opperai import AsyncOpper

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="respond",
        input="What is the capital of Sweden?",
        model="anthropic/claude-3.5-sonnet",
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message } = await client.call({
    name: "respond",
    input: "What is the capital of Sweden?",
    model: "anthropic/claude-3.5-sonnet",
  });
  console.log(message);
})();
curl -X POST https://api.opper.ai/v1/call \
  -H "Content-Type: application/json" \
  -H "x-opper-api-key: YOUR_API_KEY" \
  -d '{
    "name": "respond",
    "input": "What is the capital of Sweden?",
    "model": "anthropic/claude-3.5-sonnet"
  }'
See Models for more information.
On a call, you can optionally specify a list of fallback models. They will be used in order if the main model fails for any reason.
import asyncio

from opperai import AsyncOpper

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="respond",
        input="What is the capital of Sweden?",
        model="anthropic/claude-3.5-sonnet",
        fallback_models=["azure/gpt-4o"],
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message } = await client.call({
    name: "respond",
    input: "What is the capital of Sweden?",
    model: "anthropic/claude-3.5-sonnet",
    fallback_models: ["azure/gpt-4o"],
  });
  console.log(message);
})();
curl -X POST https://api.opper.ai/v1/call \
  -H "Content-Type: application/json" \
  -H "x-opper-api-key: YOUR_API_KEY" \
  -d '{
    "name": "respond",
    "input": "What is the capital of Sweden?",
    "model": "anthropic/claude-3.5-sonnet",
    "fallback_models": ["azure/gpt-4o"]
  }'
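Conceptually, fallbacks behave like a try-in-order loop. This is not Opper's actual implementation (the real fallback logic runs inside the Opper API); the sketch below, with the hypothetical helpers call_with_fallbacks and fake_call, only illustrates the idea:

```python
def call_with_fallbacks(call_fn, models, **kwargs):
    """Try each model in order; return the first successful result.

    Illustrative only: the real fallback handling happens server-side.
    """
    last_exc = None
    for model in models:
        try:
            return call_fn(model=model, **kwargs)
        except Exception as exc:
            last_exc = exc  # remember the failure and try the next model
    raise last_exc

# Demo with a fake call function whose first model always fails:
def fake_call(model, **kwargs):
    if model == "primary-model":
        raise RuntimeError("primary model unavailable")
    return f"answer from {model}"

print(call_with_fallbacks(fake_call, ["primary-model", "backup-model"]))
# -> answer from backup-model
```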
Providing examples with a call is a great way to show what you want outputs to look like for given inputs. This helps guide the model without having to expand the prompt. The number of examples is limited to 10.
import asyncio

from opperai import AsyncOpper
from opperai.types import Example

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="GetFirstWeekday",
        instructions="Extract the first weekday mentioned in the text",
        examples=[
            Example(
                input="Today is Tuesday, yesterday was Monday",
                output="Monday",
            ),
            Example(
                input="Sunday, Saturday and Friday are the best days of the week",
                output="Friday",
            ),
            Example(
                input="It is Saturday that is the best day of the week, next to Tuesday",
                output="Tuesday",
            ),
        ],
        input="Do you want to come by on this Thursday or this Tuesday?",
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message: weekday } = await client.call({
    name: "GetFirstWeekday",
    instructions: "Extract the first weekday mentioned in the text",
    examples: [
      {
        input: "Today is Tuesday, yesterday was Monday",
        output: "Monday",
      },
      {
        input: "Sunday, Saturday and Friday are the best days of the week",
        output: "Friday",
      },
      {
        input: "It is Saturday that is the best day of the week, next to Tuesday",
        output: "Tuesday",
      },
    ],
    input: "Do you want to come by on this Thursday or this Tuesday?",
  });
  console.log("First weekday: ", weekday);
})();
For a more extensive example of using examples by populating a dataset and automatically retrieving them in the call, see Guiding output with examples.
It is possible to pass on configuration parameters directly to the model, such as temperature or image size. If you would like to utilize this possibility, please reach out to us at support@opper.ai.
from opperai import Opper
from opperai.types import CallConfiguration

opper = Opper()

output, _ = opper.call(
    name="call_with_max_tokens",
    instructions="answer the following question",
    input="what are some uses of 42",
    configuration=CallConfiguration(
        model_parameters={
            "max_tokens": 10,
        }
    ),
)
print(output)
import Client from "opperai";

const client = new Client();

async function main() {
  const { message } = await client.call({
    name: "call_with_max_tokens",
    instructions: "answer the following question",
    input: "what are some uses of 42",
    configuration: {
      model_parameters: {
        max_tokens: 10,
      },
    },
  });
  console.log(message);
}

main();
Few-shot prompting allows the model to learn from previous successful generations. To use few-shot learning, you need to:
- Save successful generations to your dataset (this can be done in the generations tab of the function or from the trace view)
- Enable few-shot in your call configuration
import asyncio

from opperai import AsyncOpper
from opperai.types import CallConfiguration

opper = AsyncOpper()

async def main():
    result, _ = await opper.call(
        name="capitals",
        instructions="Respond with the capital for the given country",
        input="France",
        configuration=CallConfiguration(
            invocation=CallConfiguration.Invocation(
                few_shot=CallConfiguration.Invocation.FewShot(count=2)
            )
        ),
    )
    print(result)

asyncio.run(main())
import Client from "opperai";

const client = new Client();

(async () => {
  const { message } = await client.call({
    name: "capitals",
    instructions: "Respond with the capital for the given country",
    input: "France",
    configuration: {
      invocation: {
        few_shot: {
          count: 2,
        },
      },
    },
  });
  console.log(message);
})();
The count parameter specifies how many examples from your saved generations to include in the prompt. These examples are automatically selected based on their relevance to the current input.
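To illustrate what relevance-based selection means (this is not Opper's actual algorithm, which is not documented here), a naive version could rank saved examples by word overlap with the current input and keep the top count:

```python
def select_few_shot(examples, current_input, count):
    """Naive relevance ranking by word overlap.

    Illustrative only: Opper's real selection logic may differ entirely.
    """
    query_words = set(current_input.lower().split())

    def overlap(example):
        # Score each saved example by how many words it shares with the input.
        return len(query_words & set(example["input"].lower().split()))

    return sorted(examples, key=overlap, reverse=True)[:count]

saved = [
    {"input": "capital of Norway", "output": "Oslo"},
    {"input": "capital of France region info", "output": "Paris"},
    {"input": "largest city in Japan", "output": "Tokyo"},
]

selected = select_few_shot(saved, "capital of France", 2)
print(selected)  # the France example ranks first (3 shared words), then Norway (2)
```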
Read more