Documentation Index

Fetch the complete documentation index at: https://docs.opper.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Opper Agent SDK is part of the unified opperai SDK and is available for Python and TypeScript with a matching API.
Still using the standalone `opper-agents` (Python) or `@opperai/agents` (TypeScript) package? See Migration & legacy SDKs for the upgrade path and links to the legacy repos.

Source

opper-ai/opper-sdks — full source for both languages.

Working examples

11 numbered examples per language. Each one runs against the live API.
| Language   | Package   | Imports                                        | Examples                      |
|------------|-----------|------------------------------------------------|-------------------------------|
| Python     | `opperai` | `from opperai.agent import Agent, tool, Hooks` | `python/examples/agents/`     |
| TypeScript | `opperai` | `import { Agent, tool } from "opperai"`        | `typescript/examples/agents/` |

What an agent is

An agent runs a think → act → observe loop:
  1. Think — the LLM is given the goal, instructions, and available tools, and decides what to do next.
  2. Act — if the model called a tool, the SDK executes it and feeds the result back (observe).
  3. Loop — repeat until the model produces a final answer, which is validated against `output_schema` if one is set.
You define instructions, tools, and an optional output schema. The SDK handles the rest.
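The loop above can be sketched without the SDK. This is a toy illustration, not the SDK's internals — `fake_model` and the tool registry are hypothetical stand-ins for the real LLM call and tool execution:

```python
def fake_model(messages, tools):
    """Stand-in for the LLM: asks for a tool on the first turn, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": f"The sum is {messages[-1]['content']}"}

tools = {"add": lambda a, b: a + b}

def run_agent(goal, max_steps=5):
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = fake_model(messages, tools)                 # Think
        if "final" in decision:                                # Done
            return decision["final"]
        result = tools[decision["tool"]](**decision["args"])   # Act
        messages.append({"role": "tool", "content": result})   # Observe
    raise RuntimeError("max steps exceeded without a final answer")

print(run_agent("What is 2 + 3?"))  # → The sum is 5
```

The `max_steps` cap is the usual guard against a model that never produces a final answer.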

Minimal agent

```python
import asyncio

from opperai.agent import Agent, tool

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 22°C in {city}"

agent = Agent(
    name="weather-assistant",
    instructions="You are a helpful weather assistant.",
    tools=[get_weather],
)

async def main() -> None:
    result = await agent.run("What's the weather in Paris?")
    print(result.output)
    print(result.meta.usage.total_tokens)

asyncio.run(main())
```
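The `@tool` decorator turns a plain function into something the model can choose from. A toy version of that pattern — not the SDK's implementation — shows the idea: capture the function's name, docstring, and type hints into a machine-readable spec:

```python
import inspect

def tool(fn):
    """Toy @tool decorator: attach a spec built from the signature and docstring."""
    sig = inspect.signature(fn)
    fn.spec = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "params": {name: param.annotation.__name__
                   for name, param in sig.parameters.items()},
    }
    return fn  # the function itself is unchanged and still callable

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Sunny, 22°C in {city}"

print(get_weather.spec["name"])    # get_weather
print(get_weather.spec["params"])  # {'city': 'str'}
```

Specs like this are what gets serialized into the model's tool list during the think step.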

What’s in the box

| Pattern           | Page              | Source example                        |
|-------------------|-------------------|---------------------------------------|
| Tools             | Tools             | `02_agent_with_tools`                 |
| Structured output | Structured output | `01_agent_with_schema`                |
| Streaming         | Streaming         | `03_streaming`                        |
| Lifecycle hooks   | Hooks             | `04_hooks_logging`, `05_hooks_timing` |
| Multi-agent       | Multi-agent       | `07_agent_as_tool`, `08_multi_agent`  |
| Multi-turn        | Conversation      | `10_conversation`                     |
| MCP               | MCP               | `09_mcp_stdio`                        |
Tracing is on by default. Every agent run, LLM call, and tool execution shows up as a structured trace tree in the Opper dashboard.
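The lifecycle hooks listed above can be sketched as callbacks fired around each step of a run. This is a toy illustration of the pattern, not the SDK's `Hooks` API:

```python
import time

class Hooks:
    """Base class: no-op callbacks around each step."""
    def on_step_start(self, step): pass
    def on_step_end(self, step, result): pass

class TimingHooks(Hooks):
    """Example hook: measure how long each step takes."""
    def on_step_start(self, step):
        self.t0 = time.perf_counter()
    def on_step_end(self, step, result):
        print(f"step {step} took {time.perf_counter() - self.t0:.3f}s")

def run_with_hooks(steps, hooks):
    results = []
    for i, step in enumerate(steps):
        hooks.on_step_start(i)
        result = step()
        hooks.on_step_end(i, result)
        results.append(result)
    return results

out = run_with_hooks([lambda: 1 + 1, lambda: "done"], TimingHooks())
print(out)  # [2, 'done']
```

Logging and timing hooks (as in the `04_hooks_logging` and `05_hooks_timing` examples) are the two most common uses of this pattern.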

When to use it

Reach for the Agent SDK when the work involves choosing between tools, chaining steps based on intermediate results, or coordinating multiple specialists. For a single deterministic LLM call, use opper.call() directly.