In-context learning (ICL), also known as few-shot prompting, lets you teach models how to complete tasks by showing examples rather than explaining them in detail. This is often more reliable than writing complex instructions.

How It Works

  1. You provide examples - input/output pairs showing ideal completions
  2. Opper retrieves relevant examples at run time - those semantically similar to your current input
  3. Model sees examples in context - and follows the pattern
Examples can be collected through feedback, saved to the dataset automatically, and added to the context at run time. For a deeper dive into automatic in-context learning, see our blog post.
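Opper's retrieval is internal, but step 2 can be illustrated with a toy sketch: rank stored input/output pairs by semantic similarity to the current input. The `embed` function below is a hypothetical stand-in (a bag-of-characters vector) so the sketch stays self-contained; a real system would use an embedding model.

```python
import math

def embed(text):
    # Hypothetical stand-in for a real embedding model:
    # a bag-of-characters vector, just to keep the sketch runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_examples(dataset, query, k=3):
    # Rank stored examples by similarity to the current input,
    # keeping the k most relevant ones for the prompt context.
    q = embed(query)
    ranked = sorted(
        dataset,
        key=lambda ex: cosine(embed(ex["input"]), q),
        reverse=True,
    )
    return ranked[:k]

dataset = [
    {"input": "1 bed, city view", "output": "A cozy room with city views."},
    {"input": "2 beds, garden view", "output": "A spacious room overlooking the garden."},
    {"input": "suite, ocean view", "output": "A luxurious suite facing the ocean."},
]
print(top_k_examples(dataset, "2 beds, ocean view", k=2))
```

The selected examples are then placed in the model's context, where they act exactly like the inline examples shown below.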

Quick Start: Inline Examples

The simplest approach is passing examples directly in your call:
from opperai import Opper

opper = Opper()

response = opper.call(
    name="describe_room",
    instructions="Describe the hotel room for customers",
    input={"beds": 2, "view": "ocean"},
    examples=[
        {
            "input": {"beds": 1, "view": "city"},
            "output": {"description": "A cozy room with city views, perfect for solo travelers."}
        },
        {
            "input": {"beds": 2, "view": "garden"},
            "output": {"description": "A spacious room overlooking the garden, ideal for couples."}
        }
    ]
)

Managed Examples with Datasets

For production use, store examples in a dataset attached to a function. Opper automatically retrieves the most relevant examples for each call.
When you use /call, Opper automatically creates a function configured to use 3 examples by default. You can view and adjust this configuration in the platform.
[Screenshot: in-context learning settings in the platform]

Populate the Dataset

Option A: Automatic via Feedback

Save good outputs automatically through the feedback endpoint. When you make a call, you get a span_id back. If the output is good, submit positive feedback and the input/output pair will be saved to the dataset. By default, all positive feedback (score=1.0) is automatically saved to the function's dataset.
from opperai import Opper

opper = Opper()

response = opper.call(
    name="describe_room",
    instructions="Describe the hotel room for customers",
    input={"beds": 2, "view": "ocean"}
)

opper.spans.submit_feedback(
    span_id=response.span_id,
    score=1.0,
    comment="Great description, exactly the style we want"
)

Option B: Manual Curation in Platform

  1. Go to Traces in the Opper platform
  2. Find a successful completion
  3. Click the feedback button to rate the output
[Screenshot: adding feedback in the platform]

Building a Feedback Loop

A common pattern is to collect feedback from your users automatically, so that good outputs guide future ones.
  1. You make a call - your application calls an Opper function
  2. User provides feedback - rate responses with thumbs up/down
  3. Opper learns from feedback - positive feedback auto-saves to the dataset
  4. Future calls improve - new examples guide better outputs
This creates a flywheel where good outputs improve future outputs automatically.

Automated Evaluation with Observer

For high-volume applications, you can automate the feedback process entirely using the Observer. Instead of waiting for human feedback, the Observer automatically evaluates outputs and saves high-quality examples to your dataset. Configure the Observer in the platform under your function's configuration.

Tips

Start with 3-10 examples. Quality and diversity matter more than quantity.
Cover edge cases. Include examples that show how to handle unusual inputs.
Retrieval is semantic. Opper retrieves examples similar to the current input, so diversity across input types helps.
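Putting the tips together, here is what a small, diverse examples list might look like for the room-description task above, including an edge case (a room with no view). The descriptions themselves are illustrative, not part of the Opper API.

```python
# A small, diverse set of examples in the shape the `examples`
# parameter expects: input/output pairs. The last one covers an
# edge case so the model learns how to handle unusual inputs.
examples = [
    {"input": {"beds": 1, "view": "city"},
     "output": {"description": "A cozy room with city views, perfect for solo travelers."}},
    {"input": {"beds": 2, "view": "garden"},
     "output": {"description": "A spacious room overlooking the garden, ideal for couples."}},
    {"input": {"beds": 4, "view": None},  # edge case: interior room, no view
     "output": {"description": "A large interior room with four beds, great for groups on a budget."}},
]
```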