In-context learning (ICL), also known as few-shot prompting, lets you teach models how to complete tasks by showing examples rather than explaining them in detail. This is often more reliable than writing complex instructions.
How It Works
- You provide examples - input/output pairs showing ideal completions
- At run time Opper retrieves relevant ones - semantically similar to your current input
- Model sees examples in context - and follows the pattern
Examples are collected through feedback and automatically saved to the dataset. At run time, relevant examples are then added to the prompt automatically.
For a deeper dive into automatic in-context learning, see our blog post.
Quick Start: Inline Examples
The simplest approach is passing examples directly in your call:
from opperai import Opper

opper = Opper()

# Pass examples directly in the call
response = opper.call(
    name="describe_room",
    instructions="Describe the hotel room for customers",
    input={"beds": 2, "view": "ocean"},
    examples=[
        {
            "input": {"beds": 1, "view": "city"},
            "output": {"description": "A cozy room with city views, perfect for solo travelers."},
        },
        {
            "input": {"beds": 2, "view": "garden"},
            "output": {"description": "A spacious room overlooking the garden, ideal for couples."},
        },
    ],
)
This is ideal for:
- Prototyping and testing
- Small, fixed example sets
- Edge cases you always want included (see the sketch below)
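For instance, here is a minimal sketch of pinning an edge-case example inline so the model knows what to do when information is missing. The input values and wording are illustrative; only the call shape matches the example above.

from opperai import Opper

opper = Opper()

# Hypothetical edge case: the room has no view at all.
# Pinning it inline ensures the model always sees this pattern.
response = opper.call(
    name="describe_room",
    instructions="Describe the hotel room for customers",
    input={"beds": 3, "view": None},
    examples=[
        {
            "input": {"beds": 1, "view": None},
            "output": {"description": "A quiet, comfortable room focused on rest rather than scenery."},
        }
    ],
)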
Managed Examples with Datasets
For production use, store examples in a dataset attached to a function. Opper automatically retrieves the most relevant examples for each call.
When you use /call, Opper automatically creates a function configured to use 3 examples by default, so all you need to do is populate its dataset. You can view and adjust this configuration in the platform.
Step 1: Populate the Dataset
You have three main options for adding examples:
Option A: Automatic via Feedback (Recommended)
The most common approach is to save good outputs automatically through the feedback endpoint. When you make a call, you get a span_id back. If the output is good, submit positive feedback and it will be saved to the dataset.
By default, all positive feedback (score=1.0) is automatically saved to the function’s dataset.
from opperai import Opper

opper = Opper()

# 1. Make a call
response = opper.call(
    name="describe_room",
    instructions="Describe the hotel room for customers",
    input={"beds": 2, "view": "ocean"},
)

# 2. If the output is good, submit positive feedback
# This automatically saves it as an example
opper.spans.submit_feedback(
    span_id=response.span_id,
    score=1.0,
    comment="Great description, exactly the style we want",
)
Option B: Manual Curation in the Platform
Best for subject matter experts who want to review and curate examples directly:
- Go to Traces in the Opper platform
- Find a successful completion
- Click the feedback button to rate the output
- Positive feedback automatically saves to the dataset
Option C: Batch Upload from Code
Best for seeding initial examples or migrations from existing datasets.
# Get the function to access its dataset
function = opper.functions.get_by_name("describe_room")

# Add an example to the dataset
opper.datasets.create_entry(
    dataset_id=function.dataset_uuid,
    input='{"beds": 1, "view": "city"}',
    output='{"description": "A cozy room with city views..."}',
    comment="Example for city-view single rooms",
)
Step 2: Call the Function
Now calls automatically include relevant examples from the dataset:
response = opper.call(
    name="describe_room",
    input={"beds": 2, "view": "ocean"},
)
# 3 relevant examples are automatically included in the prompt
You can verify which examples were used by checking the trace in the platform.
Building a Feedback Loop
A common pattern is to collect feedback from your users automatically and let good outputs guide future ones.
How it works:
- You make a call - your application calls an Opper function
- User provides feedback - rate responses with thumbs up/down
- Opper learns from feedback - positive feedback auto-saves to the dataset
- Future calls improve - new examples guide better outputs
from opperai import Opper

opper = Opper()

# 1. Make the call
response = opper.call(
    name="answer_question",
    instructions="Answer concisely",
    input="What is the capital of France?",
)

print(f"Output: {response.message}")

# 2. Collect user feedback (e.g., thumbs up/down in your UI)
user_liked_it = get_user_feedback()  # Your UI logic

# 3. Submit feedback - positive feedback auto-saves to dataset
opper.spans.submit_feedback(
    span_id=response.span_id,
    score=1.0 if user_liked_it else 0.0,
    comment="User feedback",
)
This creates a flywheel where good outputs improve future outputs automatically.
Automated Evaluation with Observer
For high-volume applications, you can automate the feedback process entirely using the Observer. Instead of waiting for human feedback, the Observer automatically evaluates outputs and saves high-quality examples to your dataset.
How it works:
- The Observer watches your function - monitors all completions
- Auto-scores outputs - evaluates quality based on your criteria
- Captures good examples - high-scoring outputs auto-save to the dataset
- Continuous improvement - your function improves without manual intervention
The Observer intelligently combines automated scores with human feedback, always prioritizing human judgment when both are available.
Configure the Observer's settings in the platform under your function’s configuration.
Tips
Start with 3-10 examples. More isn’t always better. Quality and diversity matter more than quantity.
Cover edge cases. Include examples that show how to handle unusual inputs or what to do when information is missing.
Examples are semantic. Opper retrieves examples similar to the current input, so diversity across input types helps.
Check the prompt. In Traces, expand a call span to see which examples were included and how the prompt was constructed.