In-context learning (ICL), also known as few-shot prompting, lets you teach models how to complete tasks by showing examples rather than explaining in detail. This is often more reliable than complex instructions.
How It Works
- You provide examples - input/output pairs showing ideal completions
- At run time, Opper retrieves relevant ones - examples semantically similar to your current input
- The model sees the examples in context - and follows the pattern
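The retrieval step can be sketched as follows. This is a toy illustration only: token overlap stands in for the semantic similarity Opper actually computes, and the data is made up.

```python
def similarity(a: str, b: str) -> float:
    """Toy stand-in for semantic similarity: token overlap (Jaccard)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve_examples(dataset: list[dict], current_input: str, k: int = 3) -> list[dict]:
    """Pick the k examples most similar to the current input
    (k=3 mirrors the default described below)."""
    ranked = sorted(
        dataset,
        key=lambda ex: similarity(ex["input"], current_input),
        reverse=True,
    )
    return ranked[:k]

dataset = [
    {"input": "Translate 'hello' to French", "output": "bonjour"},
    {"input": "Translate 'goodbye' to French", "output": "au revoir"},
    {"input": "Summarize this support ticket", "output": "Customer reports a login issue."},
]
examples = retrieve_examples(dataset, "Translate 'thanks' to French", k=2)
```

With this input, the two translation examples rank highest, so they are the ones the model would see in context.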
Quick Start: Inline Examples
The simplest approach is passing examples directly in your call:
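A minimal sketch of what such a call body could look like. The function name and the exact field names (`instructions`, `input`, `examples`) are assumptions for illustration; check the API reference for the real schema.

```python
# Hypothetical /call request body with inline few-shot examples.
payload = {
    "name": "classify_sentiment",  # hypothetical function name
    "instructions": "Classify the sentiment of the text.",
    "input": "The support team was incredibly helpful!",
    "examples": [
        {"input": "I love this product", "output": "positive"},
        {"input": "This is terrible", "output": "negative"},
        {"input": "It arrived on time", "output": "neutral"},
    ],
}
```

The model sees the three input/output pairs alongside the instructions and imitates their pattern when completing the new input.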
Managed Examples with Datasets
For production use, store examples in a dataset attached to a function. Opper automatically retrieves the most relevant examples for each call. When you use /call, Opper automatically creates a function configured to use 3 examples by default. You can view and adjust this configuration in the platform.
Populate the Dataset
Option A: Automatic via Feedback (Recommended)
Save good outputs automatically through the feedback endpoint. When you make a call, you get a span_id back. If the output is good, submit positive feedback and it will be saved to the dataset.
By default, all positive feedback (score=1.0) is automatically saved to the function’s dataset.
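A sketch of building that feedback submission. The field names here are illustrative assumptions based on the description above, not the exact Opper API schema.

```python
import json

def build_feedback(span_id: str, good: bool) -> dict:
    """Build a feedback body for a completed call.

    A score of 1.0 marks positive feedback, which (per the docs above)
    auto-saves the span's input/output pair to the function's dataset.
    """
    return {"span_id": span_id, "score": 1.0 if good else 0.0}

# span_id would come back from the original call; this one is made up.
feedback = build_feedback("span_abc123", good=True)
body = json.dumps(feedback)
```

You would POST `body` to the feedback endpoint; only the positive case ends up enriching the dataset.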
Option B: Manual Curation in Platform
- Go to Traces in the Opper platform
- Find a successful completion
- Click the feedback button to rate the output

Building a Feedback Loop
A common pattern is to automatically collect feedback from your users and let good outputs improve future ones.
- You make a call - your application calls an Opper function
- User provides feedback - rate responses with thumbs up/down
- Opper learns from feedback - positive feedback auto-saves to the dataset
- Future calls improve - new examples guide better outputs
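The four steps above can be sketched as a loop. Everything here is a toy stand-in for real API calls: `call` and `submit_feedback` are simulated locally so the flow is visible end to end.

```python
def feedback_loop_demo() -> list[dict]:
    dataset: list[dict] = []  # examples attached to the function

    def call(user_input: str) -> dict:
        # In production this hits Opper's /call endpoint; here we
        # return a stubbed output plus a span id for later feedback.
        return {"output": f"answer({user_input})", "span_id": f"span-{len(dataset)}"}

    def submit_feedback(span_id: str, score: float, pair: dict) -> None:
        # Positive feedback (score=1.0) auto-saves the pair to the dataset.
        if score >= 1.0:
            dataset.append(pair)

    # 1. Make a call, 2. user rates it, 3. positive feedback saves the
    # pair, 4. the next call can retrieve it as an example.
    result = call("What is in-context learning?")
    submit_feedback(result["span_id"], 1.0,
                    {"input": "What is in-context learning?", "output": result["output"]})
    return dataset
```

After one iteration the dataset holds one curated example, ready to be retrieved for similar future inputs.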