How it works
In-context learning (ICL) is a technique where large language models (LLMs) learn to perform a task by observing a few examples provided within the prompt, without requiring any further training or parameter updates. Essentially, you show the model what you want it to do through examples rather than telling it directly. This approach leverages the pre-trained knowledge of the LLM and lets it adapt to new tasks quickly. With Opper you can easily build task-specific datasets and have their entries act as examples for completions. Datasets for tasks can be populated through the SDKs, the API and the Dashboard.

Creating a server side task
Task completions are normally defined at call time, but they can also be managed on the server side. Managing task configuration on the server simplifies reuse and lets you handle configuration, datasets and analytics centrally for each task. In the snippet below we create a hosted task definition for generating a room description from a database entry. Notice that we are setting 'invocation.few_shot.count=3' on the function. This means that invocations of this task will pull the 3 most semantically relevant examples from the function's dataset into the prompt when completing new tasks. Later we will see how to populate these examples so that the output always follows the same style.
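As a rough illustration, such a hosted task definition could look something like the following Python sketch. It assumes the Opper Python SDK with a functions.create method, Pydantic models for the input and output schemas, and a configuration parameter carrying the invocation.few_shot.count setting; these names are illustrative assumptions, so check the SDK reference for the exact API.

```python
import os

from pydantic import BaseModel
from opperai import Opper  # assumes the Opper Python SDK is installed

# Input: the raw room record as stored in the database
class RoomEntry(BaseModel):
    hotel: str
    room_type: str
    amenities: list[str]

# Output: the generated room description
class RoomDescription(BaseModel):
    description: str

# The authentication parameter name may differ between SDK versions.
opper = Opper(http_bearer=os.environ["OPPER_API_KEY"])

# Hypothetical call: create a hosted task (function) definition on the server.
# The "invocation.few_shot.count" key mirrors the setting discussed above.
task = opper.functions.create(
    name="generate_room_description",
    instructions="Write a short room description based on the database entry.",
    input_schema=RoomEntry.model_json_schema(),
    output_schema=RoomDescription.model_json_schema(),
    configuration={"invocation.few_shot.count": 3},
)
```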
Populating examples
Once we have a server-side task definition, we can curate example entries with the correct input and output schemas and add them to the task's dataset. A dataset is a collection of input/output pairs that represent the ideal completion of a task. Here we create three example room descriptions that follow the pattern: This room at “hotel” features “room description”. Mention “amenities”. Perfect for “recommendation”.

There are multiple ways of curating and adding these examples. You can do it in code, as in the sketch below, or in the trace view while inspecting a completion. If you have experts that you want to manage the quality of the completions, we recommend building a dedicated frontend that lets them curate perfect outputs and save them using the same approach.
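Continuing the sketch above (reusing the opper client and the task handle), adding the curated examples could look something like the following. The dataset method and field names here are assumptions rather than the exact Opper API; consult the SDK reference for the real calls.

```python
# Three curated input/output pairs. Each output follows the agreed pattern:
# "This room at <hotel> features <description>. <amenities>. Perfect for <recommendation>."
examples = [
    {
        "input": {"hotel": "Grand Hotel", "room_type": "Deluxe Double",
                  "amenities": ["sea view", "minibar"]},
        "output": {"description": "This room at Grand Hotel features a bright deluxe double "
                                  "with sweeping sea views. It comes with a minibar. "
                                  "Perfect for couples on a weekend getaway."},
    },
    {
        "input": {"hotel": "City Inn", "room_type": "Single",
                  "amenities": ["work desk", "fast wifi"]},
        "output": {"description": "This room at City Inn features a compact, quiet single. "
                                  "It comes with a work desk and fast wifi. "
                                  "Perfect for business travellers on short stays."},
    },
    {
        "input": {"hotel": "Mountain Lodge", "room_type": "Family Suite",
                  "amenities": ["fireplace", "bunk beds"]},
        "output": {"description": "This room at Mountain Lodge features a spacious family suite. "
                                  "It comes with a fireplace and bunk beds. "
                                  "Perfect for families visiting the slopes."},
    },
]

for example in examples:
    # Assumed method for appending an input/output pair to the task's dataset.
    opper.datasets.create_entry(
        dataset_id=task.dataset_id,
        input=example["input"],
        output=example["output"],
    )
```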
Completing a task
We can now complete the task and see that it follows the style of the examples. Since all the configuration and the dataset for the task are stored centrally, we can simply call the task and have relevant examples added to the prompt automatically, as in the sketch below:
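Continuing the sketch and reusing the opper client, a completion of the hosted task could look something like this; it assumes the task can be invoked by name with a single call, and the exact method and response fields may differ in the real SDK.

```python
# Hypothetical call: invoke the hosted task by name. With the server-side
# configuration above, the 3 most relevant dataset examples are pulled into
# the prompt automatically.
result = opper.call(
    name="generate_room_description",
    input={
        "hotel": "Harbor View Hotel",
        "room_type": "Junior Suite",
        "amenities": ["balcony", "espresso machine"],
    },
)

# Inspect the structured output; it should follow the same style as the curated examples.
print(result)
```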
Inspecting examples and prompts

You can observe how the examples are used in the low-level prompt sent to the model. Click on Traces -> expand a trace -> expand a call span: