LLMs are highly capable text processors that you can apply to a wide variety of text-related tasks. Below is a selection of common use cases and patterns for using them.

Reasoning

Making a model reason through its response is a powerful technique for improving response quality. By adding a thoughts field as the first field in the output schema, you make the model first express how it plans to form the response before producing it. This technique is called chain of thought.

from opperai import AsyncOpper
from pydantic import BaseModel
import asyncio

opper = AsyncOpper()

class Response(BaseModel):
    thoughts: str
    response: str

async def main():
    result, _ = await opper.call(
        name="solve_math_problem",
        instructions="Solve the given math problem, showing your reasoning",
        input="If a train travels 120 km in 2 hours, what is its average speed in km/h?",
        output_type=Response
    )

    print(result)

# Run the async function
asyncio.run(main())

# Response(
#     thoughts="The train travels 120 km in 2 hours, so its average speed is 120 km / 2 hours = 60 km/h",
#     response="The average speed of the train is 60 km/h"
# )

Planning

You can further improve reasoning by also adding a steps field to the output schema. This makes the model first lay out the steps it needs to take to solve the problem, thereby improving the final answer.

from opperai import AsyncOpper
from pydantic import BaseModel
import asyncio

opper = AsyncOpper()

class Output(BaseModel):
    thoughts: str
    steps: list[str]
    final_answer: str

async def main():
    response, _ = await opper.call(
        name="get_ingredients",
        instructions="Given a dish, provide an ingredient list",
        input="Bouillabaisse",
        output_type=Output
    )

    print(response)

# Run the async function
asyncio.run(main())

# Output(
#     thoughts="The user is asking for the ingredients to make Bouillabaisse, a traditional French soup.",
#     steps=[
#         "1. Start with a base of onions, carrots, and celery.",
#         "2. Add garlic, tomatoes, and herbs like thyme, bay leaf, and parsley.",
#         "3. Simmer for 1 hour.",
#         "4. Add the fish, shellfish, and potatoes.",
#         "5. Simmer until the potatoes are tender."
#     ],
#     final_answer="The ingredients for making Bouillabaisse are onions, carrots, celery, garlic, tomatoes, herbs, fish, shellfish, potatoes."
# )

Extracting information

You can use LLMs to extract structured data from text and images. This is useful for turning unstructured content into programmable entities.

Here is a simple example where we extract a structured Room object from a text string.

from opperai import AsyncOpper
from pydantic import BaseModel
import asyncio

aopper = AsyncOpper()

class Room(BaseModel):
    beds: int
    seaview: bool
    description: str

async def extract_room(text: str) -> Room:
    response, _ = await aopper.call(
        name="extract_room",
        instructions="Extract room details from the given text",
        input=text,
        output_type=Room
    )
    return response

async def main():
    result = await extract_room("Room at Grand Hotel with 2 beds and a view to the sea")
    print(result)

# Run the async function
asyncio.run(main())

# Output:
# Room(beds=2, seaview=True, description="Room at Grand Hotel with 2 beds and a view to the sea")
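If you need to extract several entities at once, you can wrap the item model in a container model, since output_type expects a single Pydantic model. The sketch below is a standalone assumption (RoomList and the sample payload are illustrative, not part of the Opper API): it shows the wrapper schema and validates a payload of the shape the model would be asked to return.

```python
from pydantic import BaseModel

class Room(BaseModel):
    beds: int
    seaview: bool
    description: str

# Wrapper model: output_type must be a single BaseModel,
# so a list of rooms goes inside a container field.
class RoomList(BaseModel):
    rooms: list[Room]

# With Opper you would pass output_type=RoomList, e.g.:
#   response, _ = await aopper.call(
#       name="extract_rooms",
#       instructions="Extract all rooms mentioned in the text",
#       input=text,
#       output_type=RoomList,
#   )

# Validating a payload of the shape such a response would have:
payload = {
    "rooms": [
        {"beds": 2, "seaview": True, "description": "Double room with sea view"},
        {"beds": 1, "seaview": False, "description": "Single room facing the courtyard"},
    ]
}
result = RoomList.model_validate(payload)
print(len(result.rooms))  # 2
```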

Selecting from options

You can use LLMs to make decisions, select among options, and reason about those choices. This can be useful for building things like recommendation systems and next-step suggestions.

Here is an example that recommends an additional item for a user to buy, based on the current cart and past purchase history.

from opperai import AsyncOpper
from pydantic import BaseModel
import asyncio

aopper = AsyncOpper()

class RecommendedItem(BaseModel):
    thoughts: str # Improves reasoning
    item: str

async def recommend_additional_item(cart: list, purchase_history: list) -> RecommendedItem:
    response, _ = await aopper.call(
        name="recommend_additional_item",
        instructions="Your task is to complete a shopping cart so that the owner can make a dish out of the cart.",
        input={"cart": cart, "purchase_history": purchase_history},
        output_type=RecommendedItem
    )
    return response

# Options to choose from
purchase_history = [
    "milk",
    "pasta",
    "cream",
    "cheese",
    "bacon",
    "tomatoes",
    "potatoes",
    "water",
    "milk",
    "chicken",
    "beef",
    "fish",
    "vegetables",
    "fruit",
    "spices",
    "oil",
    "butter",
    "rice",
    "noodles",
    "flour",
    "sugar",
    "syrup",
    "spice",
    "cochenille",
]

# Current shopping cart
cart = [
  "pasta",
  "cream",
  "cheese",
]

async def main():
    result = await recommend_additional_item(cart, purchase_history)
    print(result)

# Run the async function
asyncio.run(main())
# Output: RecommendedItem(
#     thoughts="With pasta, cream, and cheese in the cart, the owner appears to be aiming to make a pasta dish, possibly Alfredo pasta which commonly uses these ingredients. To complete this meal, a protein such as chicken would complement the dish well, adding flavor and substance.",
#     item="chicken"
# )
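Because the item field is a plain string, nothing in the schema stops the model from returning something outside the candidate list. One way to guard against that, sketched below as an assumption rather than an Opper feature, is to re-validate the response with a Pydantic validator that checks membership against the options you provided:

```python
from pydantic import BaseModel, ValidationInfo, field_validator

class ValidatedRecommendation(BaseModel):
    thoughts: str
    item: str

    @field_validator("item")
    @classmethod
    def item_must_be_an_option(cls, value: str, info: ValidationInfo) -> str:
        # The allowed options are passed in via the validation context.
        options = (info.context or {}).get("options")
        if options is not None and value not in options:
            raise ValueError(f"{value!r} is not one of the allowed options")
        return value

# Re-validate a model response against the purchase history:
checked = ValidatedRecommendation.model_validate(
    {"thoughts": "A protein would complete the pasta dish.", "item": "chicken"},
    context={"options": ["milk", "pasta", "cream", "cheese", "chicken"]},
)
print(checked.item)  # chicken
```

Anything the model returns that is not in the options list raises a ValidationError, so invalid recommendations fail loudly instead of flowing downstream.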

Performing classification

You can use LLMs to classify text and images. This can be useful to solve various categorization problems.

Here is an example that classifies an input text, treated as a support request, into one of four categories: Bug, Feature Request, Question, or Unknown.

from opperai import AsyncOpper
from pydantic import BaseModel
from typing import Literal

class Classification(BaseModel):
    thoughts: str # improves reasoning
    category: Literal['Bug', 'Feature Request', 'Question', 'Unknown']
    confidence: float

opper = AsyncOpper()
async def classify_text(text: str) -> Classification:
    """Classify the input text into a support category"""
    result, _ = await opper.call(
        name="classify_text",
        instructions="Classify the input text into one of the categories: Bug, Feature Request, Question, or Unknown",
        input=text,
        output_type=Classification
    )
    return result

# Example usage
import asyncio

async def main():
    print(await classify_text("I encountered an error when trying to save the file."))
    # Output: Classification(
    #     thoughts="This appears to be a bug report as the user is describing an error.",
    #     category='Bug',
    #     confidence=0.95
    # )

    print(await classify_text("Can you add a dark mode feature?"))
    # Output: Classification(
    #     thoughts="This is clearly a request for a new feature.",
    #     category='Feature Request',
    #     confidence=0.95
    # )

    print(await classify_text("How do I reset my password?"))
    # Output: Classification(
    #     thoughts="This is a typical user question about account management.",
    #     category='Question',
    #     confidence=0.95
    # )

asyncio.run(main())
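A useful property of declaring the category as a Literal is that Pydantic rejects any value outside the declared set, so downstream code can rely on category always being one of the four options. A minimal standalone check (no Opper call involved):

```python
from typing import Literal
from pydantic import BaseModel, ValidationError

class Classification(BaseModel):
    thoughts: str
    category: Literal['Bug', 'Feature Request', 'Question', 'Unknown']
    confidence: float

# A well-formed response validates cleanly:
ok = Classification.model_validate(
    {"thoughts": "Describes an error.", "category": "Bug", "confidence": 0.95}
)
print(ok.category)  # Bug

# A category outside the Literal set is rejected:
try:
    Classification.model_validate(
        {"thoughts": "?", "category": "Complaint", "confidence": 0.5}
    )
except ValidationError:
    print("rejected")  # rejected
```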