In-context learning
Use successful completions as examples for future completions
It is often hard to prompt models into producing consistent, reliable output. With Opper it is easy to instead show models how to complete the task by providing examples. This is called in-context learning (or few-shot prompting).
How it works
In-context learning (ICL) is a technique where large language models (LLMs) learn to perform a task by observing a few examples provided within the prompt, without requiring any further training or parameter updates. Essentially, you show the model what you want it to do through examples rather than telling it directly. This approach leverages the pre-trained knowledge of the LLM and allows it to adapt to new tasks quickly. With Opper you can easily build task-specific datasets and have them act as examples for completions. Task datasets can be populated through the SDKs, the API, and the Dashboard.
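Outside of Opper, the same idea is visible in a raw prompt: the examples are simply inlined before the new input. A minimal, framework-free illustration:

```python
# Plain illustration of few-shot prompting: the prompt itself carries
# worked input/output examples, followed by the new input to complete.
examples = [
    ("The battery died after two days.", "negative"),
    ("Setup took thirty seconds. Love it!", "positive"),
]

def build_prompt(new_input: str) -> str:
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

print(build_prompt("Stopped working after a week."))
```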
Creating a server-side task
Task completions are normally configured at call time, but they can also be managed on the server side. Managing task configuration on the server simplifies reuse and lets you handle configuration, datasets, and analytics centrally for each task.
In the following snippet we create a hosted task definition for generating a room description from a database entry:
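A minimal sketch of what this could look like with the Python SDK. The client constructor and the `opper.functions.create` method, along with their parameters, are assumptions; consult the SDK reference for the exact API:

```python
import os
from pydantic import BaseModel, Field
from opperai import Opper  # package/class names may differ between SDK versions

# Input schema: the room row as it comes out of our database.
class RoomFacts(BaseModel):
    hotel: str
    room_type: str
    amenities: list[str]
    view: str

# Output schema: the guest-facing description we want generated.
class RoomDescription(BaseModel):
    description: str = Field(description="Guest-facing room description")

opper = Opper(api_key=os.environ["OPPER_API_KEY"])  # constructor arguments are an assumption

# Hypothetical method and parameter names; check the SDK reference
# for the exact way to create a hosted task definition.
task = opper.functions.create(
    name="generate_room_description",
    instructions="Write a short, guest-facing description of the room.",
    input_type=RoomFacts,
    output_type=RoomDescription,
)
```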
Populating examples
Once we have a server-side task definition, we can curate example entries with the correct input and output schemas and populate the task dataset with them. A dataset is a collection of input/output pairs that represent ideal completions of a task.
Here we create three examples of input and output room descriptions, where we want the outputs to follow the pattern: This room at “hotel” features “room description”. Mention “amenities”. Perfect for “recommendation”.
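Continuing from the sketch above, we can seed the dataset with three ideal pairs. The `opper.datasets.add` method is hypothetical; the same operation is also available through the REST API and in the Dashboard:

```python
# Three ideal input/output pairs that follow the target pattern.
examples = [
    (
        RoomFacts(hotel="Hotel Aurora", room_type="deluxe double",
                  amenities=["balcony", "minibar"], view="sea"),
        "This room at Hotel Aurora features a bright deluxe double with a "
        "sea view. It offers a private balcony and a stocked minibar. "
        "Perfect for couples on a weekend getaway.",
    ),
    (
        RoomFacts(hotel="Alpine Lodge", room_type="family room",
                  amenities=["bunk beds", "kitchenette"], view="mountain"),
        "This room at Alpine Lodge features a spacious family room with a "
        "mountain view. It offers bunk beds and a kitchenette. "
        "Perfect for families on a ski holiday.",
    ),
    (
        RoomFacts(hotel="City Hub", room_type="compact single",
                  amenities=["desk", "fast Wi-Fi"], view="courtyard"),
        "This room at City Hub features a quiet compact single facing the "
        "courtyard. It offers a work desk and fast Wi-Fi. "
        "Perfect for solo business travellers.",
    ),
]

for room, description in examples:
    # Hypothetical method name; check the SDK reference.
    opper.datasets.add(
        function="generate_room_description",
        input=room.model_dump(),
        output=RoomDescription(description=description).model_dump(),
    )
```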
Completing a task
We can now complete the task and see that the output follows the style of the examples. As all configuration and datasets for the task are now stored centrally, we can simply call the task by name and have relevant examples added to the prompt automatically:
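A sketch of the call site, again assuming the names from the snippets above (`opper.call` with just a task name and input):

```python
# Complete the hosted task by name. No examples are passed here: with a
# server-side task definition, relevant dataset entries are attached to
# the prompt automatically. `opper.call` and its parameters are an
# assumption; see the SDK reference.
new_room = RoomFacts(
    hotel="Grand Meridian",
    room_type="junior suite",
    amenities=["rooftop pool access", "espresso machine"],
    view="city skyline",
)

result = opper.call(
    name="generate_room_description",
    input=new_room.model_dump(),
)
print(result)
```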
We can see that the new completion follows the same style:
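Output naturally varies by model, but a completion shaped by the examples above might look like:

```
This room at Grand Meridian features a junior suite with sweeping city-skyline
views. It offers rooftop pool access and an in-room espresso machine.
Perfect for business travellers who want to unwind.
```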
Inspecting examples and prompts
You can observe how the examples are inserted into the low-level prompt sent to the model. Click on Traces -> expand a trace -> expand a call span.