LLMs are capable text processors that can be applied to a wide range of text-related tasks. Below is a selection of common use cases and patterns. With the Opper API, you define tasks declaratively: you specify the input data, the output structure, and optional configuration for how the task should be completed. The Opper API builds an optimal prompt for the task and returns a completion in the shape you specified.

Reasoning

Making a model reason through its response is a powerful technique that improves response quality. By adding a thoughts field as the first field in the output schema, you make the model express how it plans to form the response before producing it; this pattern is known as chain of thought. Here is a simple example of a reasoning task:
from opperai import Opper
from pydantic import BaseModel, Field
import os

opper = Opper(http_bearer=os.getenv("OPPER_API_KEY", ""))

class Response(BaseModel):
    thoughts: str = Field(description="Step-by-step reasoning for calculating the speed")
    response: str = Field(description="The calculated average speed in km/h")

def main():
    response = opper.call(
        name="train_speed_calculation",
        instructions="Calculate the average speed of a train and provide reasoning.",
        input="If a train travels 120 km in 2 hours, what is its average speed in km/h?",
        input_schema=None,
        output_schema=Response.model_json_schema()
    )

    print(response.json_payload)

# Run the function
main()

# Output:
# {
#     'thoughts': "The train travels 120 km in 2 hours, so its average speed is 120 km / 2 hours = 60 km/h",
#     'response': "The average speed of the train is 60 km/h"
# }
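Field order in the output schema matters here: structured output is generated in schema order, so placing thoughts first makes the reasoning tokens come before the answer tokens. Below is a minimal stdlib sketch of the schema shape that Response.model_json_schema() produces for the model above (assumed for illustration; the real output may contain additional keys):

```python
# Sketch of the JSON Schema produced for the Response model above.
# The exact Pydantic output may include extra keys, but the property
# order is what drives reason-before-answer generation.
response_schema = {
    "title": "Response",
    "type": "object",
    "properties": {
        "thoughts": {"type": "string", "description": "Step-by-step reasoning for calculating the speed"},
        "response": {"type": "string", "description": "The calculated average speed in km/h"},
    },
    "required": ["thoughts", "response"],
}

# 'thoughts' must come first so reasoning is generated before the answer.
assert list(response_schema["properties"])[0] == "thoughts"
```

If thoughts were listed after response, the model would commit to an answer before reasoning, which defeats the purpose of the pattern.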

Reasoning + Generation

Combining reasoning with structured output generation lets you build more sophisticated text processing tasks. This pattern is particularly useful for tasks that require both logical thinking and structured data extraction, such as converting natural language to SQL queries or generating code from specifications. Here is an example that combines reasoning with generation to turn a natural language question into an SQL query:
from opperai import Opper
from pydantic import BaseModel, Field
import os

opper = Opper(http_bearer=os.getenv("OPPER_API_KEY", ""))

# Fictive E-commerce Database Schema
FICTIVE_DB_SCHEMA = """
users(id, username, email, first_name, last_name, created_at, updated_at, is_active)
products(id, name, description, price, category_id, stock_quantity, created_at, updated_at)
categories(id, name, description, parent_category_id)
orders(id, user_id, total_amount, status, created_at, updated_at)
order_items(id, order_id, product_id, quantity, unit_price, total_price)
reviews(id, user_id, product_id, rating, comment, created_at)
addresses(id, user_id, street, city, state, zip_code, country, is_default)
payments(id, order_id, amount, payment_method, status, created_at)
inventory(id, product_id, quantity, warehouse_id, last_updated)
warehouses(id, name, address, city, state, zip_code)
"""

class Query(BaseModel):
    thoughts: str = Field(description="Thoughts about the query")
    sql_query: str = Field(description="The generated SQL query")
    confidence: float = Field(description="Confidence level (0.0 to 1.0) in the generated query")

if __name__ == "__main__":
    result = opper.call(
        name="generate_sql_query",
        instructions="Given a conversation and a database structure, generate an SQL query that answers the question",
        input={
            "conversation": "Find all users who signed up last month",
            "db_structure": FICTIVE_DB_SCHEMA,
            "comment": "None"
        },
        output_schema=Query.model_json_schema()
    )
    
    print(f"SQL Query: {result.json_payload['sql_query']}") 
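Generated SQL should not be trusted blindly. One cheap sanity check (a sketch, not part of the Opper API) is to build the fictive schema in an in-memory SQLite database and use EXPLAIN to confirm the query at least parses against it before running it anywhere real:

```python
import sqlite3

def is_valid_sql(query: str, ddl_statements: list) -> bool:
    """Return True if the query parses against the given schema in SQLite."""
    conn = sqlite3.connect(":memory:")
    try:
        for ddl in ddl_statements:
            conn.execute(ddl)
        # EXPLAIN compiles the query without executing it against real data.
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

# A single illustrative table from the fictive schema above.
DDL = [
    "CREATE TABLE users (id INTEGER, username TEXT, email TEXT, created_at TEXT)",
]

print(is_valid_sql("SELECT * FROM users WHERE created_at >= '2024-05-01'", DDL))  # True
print(is_valid_sql("SELECT * FROM missing_table", DDL))  # False
```

Combined with the confidence field on the Query model, this gives you two gates: reject queries that fail to compile, and route low-confidence ones to review.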

Extracting information

You can use LLMs to extract structured data from text and images, turning unstructured content into programmable entities. Here is a simple example of an information extraction task where we extract a structured Room object from a text string:
from opperai import Opper
from pydantic import BaseModel, Field
import os

opper = Opper(http_bearer=os.getenv("OPPER_API_KEY", ""))

class Room(BaseModel):
    beds: int = Field(description="Number of beds in the room")
    seaview: bool = Field(description="Whether the room has a seaview")
    description: str = Field(description="Description of the room")

def extract_room(input_text: str) -> dict:
    response = opper.call(
        name="extract_room",
        instructions="Extract room details from the given text",
        input=input_text,
        input_schema=None,
        output_schema=Room.model_json_schema()
    )
    return response.json_payload

def main():
    result = extract_room("Room at Grand Hotel with 2 beds and a view to the sea")
    print(result)

# Run the function
main()

# Output:
# {
#     'beds': 2,
#     'seaview': True,
#     'description': "Room at Grand Hotel with 2 beds and a view to the sea"
# }
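The output_schema passed above is a standard JSON Schema document. Below is a stdlib sketch of the shape Room.model_json_schema() produces (assumed from the model above; the real output may contain additional keys), plus a cheap client-side check that a returned payload covers every required field:

```python
# Sketch of the JSON Schema generated for the Room model above.
room_schema = {
    "title": "Room",
    "type": "object",
    "properties": {
        "beds": {"type": "integer", "description": "Number of beds in the room"},
        "seaview": {"type": "boolean", "description": "Whether the room has a seaview"},
        "description": {"type": "string", "description": "Description of the room"},
    },
    "required": ["beds", "seaview", "description"],
}

# Verify an extracted payload against the schema's required fields.
payload = {
    "beds": 2,
    "seaview": True,
    "description": "Room at Grand Hotel with 2 beds and a view to the sea",
}
missing = [key for key in room_schema["required"] if key not in payload]
assert missing == []
```

The field descriptions in the schema are part of the prompt the model sees, so making them precise directly improves extraction quality.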

Selecting from options

You can use LLMs to make decisions, perform selections, and reason about them. This is useful for building things like recommendation systems and next-step suggestions. Here is an example of a recommendation task that suggests an additional item to buy based on the current cart and past purchase history:
from opperai import Opper
from pydantic import BaseModel, Field
from typing import List
import os

opper = Opper(http_bearer=os.getenv("OPPER_API_KEY", ""))

class RecommendedItem(BaseModel):
    thoughts: str = Field(description="Reasoning process for recommending an additional item")
    item: str = Field(description="Recommended item to add to the cart")

def recommend_additional_item(cart: List[str], purchase_history: List[str]) -> dict:
    response = opper.call(
        name="recommend_additional_item",
        instructions="Your task is to complete a shopping cart so that the owner can make a dish out of the cart.",
        input={"cart": cart, "purchase_history": purchase_history},
        input_schema=None,
        output_schema=RecommendedItem.model_json_schema()
    )
    return response.json_payload

# Options to choose from
purchase_history = [
    "milk",
    "pasta",
    "cream",
    "cheese",
    "bacon",
    "tomatoes",
    "potatoes",
    "water",
    "milk",
    "chicken",
    "beef",
    "fish",
    "vegetables",
    "fruit",
    "spices",
    "oil",
    "butter",
    "rice",
    "noodles",
    "flour",
    "sugar",
    "syrup",
    "spice",
    "cochenille",
]

# Current shopping cart
cart = [
  "pasta",
  "cream",
  "cheese",
]

def main():
    result = recommend_additional_item(cart, purchase_history)
    print(result)

# Run the function
main()

# Output:
# {
#     'thoughts': "With pasta, cream, and cheese in the cart, the owner appears to be aiming to make a pasta dish, possibly Alfredo pasta which commonly uses these ingredients. To complete this meal, a protein such as chicken would complement the dish well, adding flavor and substance.",
#     'item': "chicken"
# }
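A selection returned by the model is not guaranteed to stay within the options you provided, so a guard on the response is a cheap safety net. This is a sketch (not part of the Opper API); the payload shape matches the RecommendedItem model above:

```python
def validate_recommendation(payload: dict, options: list) -> str:
    """Ensure the model's chosen item is actually one of the known options."""
    item = payload["item"]
    if item not in options:
        raise ValueError(f"Model recommended {item!r}, which is not in the option list")
    return item

# Payload shaped like the RecommendedItem output above.
payload = {"thoughts": "A pasta dish needs a protein.", "item": "chicken"}
print(validate_recommendation(payload, ["milk", "pasta", "chicken", "beef"]))  # chicken
```

On a ValueError you can retry the call, optionally feeding the invalid item back as context so the model corrects itself.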

Performing classification

You can use LLMs to classify text and images, which is useful for solving a wide range of categorization problems. Here is an example of a classification task that categorizes support requests into one of four categories: Bug, Feature Request, Question, or Unknown:
from opperai import Opper
from pydantic import BaseModel, Field
from typing import Literal
import os

class Classification(BaseModel):
    thoughts: str = Field(description="Reasoning process for classification")
    category: Literal['Bug', 'Feature Request', 'Question', 'Unknown'] = Field(description="Category of the text")
    confidence: float = Field(description="Confidence level of the classification")

opper = Opper(http_bearer=os.getenv("OPPER_API_KEY", ""))

def classify_text(input_text: str) -> dict:
    """Classify the text"""
    response = opper.call(
        name="classify_text",
        instructions="Classify the input text into one of the categories: Bug, Feature Request, Question, or Unknown",
        input=input_text,
        input_schema=None,
        output_schema=Classification.model_json_schema()
    )
    return response.json_payload

# Example usage
def main():
    print(classify_text("I encountered an error when trying to save the file."))
    # Output: {
    #     'thoughts': "This appears to be a bug report as the user is describing an error.",
    #     'category': 'Bug',
    #     'confidence': 0.95
    # }

    print(classify_text("Can you add a dark mode feature?"))
    # Output: {
    #     'thoughts': "This is clearly a request for a new feature.",
    #     'category': 'Feature Request',
    #     'confidence': 0.95
    # }

    print(classify_text("How do I reset my password?"))
    # Output: {
    #     'thoughts': "This is a typical user question about account management.",
    #     'category': 'Question',
    #     'confidence': 0.95
    # }

main()
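Downstream, the confidence field lets you route low-certainty classifications to a human instead of acting on them automatically. Here is a sketch of such a gate; the threshold and queue names are illustrative, not part of the Opper API:

```python
# Map each category from the Classification model above to a hypothetical queue.
ROUTES = {"Bug": "engineering", "Feature Request": "product", "Question": "support"}

def route(classification: dict, threshold: float = 0.8) -> str:
    """Send low-confidence or Unknown classifications to human review."""
    if classification["confidence"] < threshold or classification["category"] == "Unknown":
        return "human_review"
    return ROUTES.get(classification["category"], "human_review")

print(route({"category": "Bug", "confidence": 0.95}))      # engineering
print(route({"category": "Question", "confidence": 0.4}))  # human_review
```

Tuning the threshold against a small labeled sample of real support requests is usually worth the effort before automating actions on these categories.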