With our version 2 API, SDKs, and docs, we are unifying the experience of building with Opper across languages. Migrating to version 2 is essentially just syntax changes: API keys, index data, and schemas continue to work as before, but the syntax for using them has changed.

See docs for more information.

Key changes

1. Terminology: “Calls” → “Task completion”

All docs, navigation, and code now talk about tasks and task completions instead of calls. The call endpoint is unchanged, but we are emphasising that the parameters to a call form a task specification.

2. Parameter rename: …_type → …_schema

Python switches input_type / output_type → input_schema / output_schema (TypeScript was already using “…Schema”).
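Because the rename is purely mechanical, a rough one-off substitution can handle most call sites. The sketch below is our own helper, not part of the SDK, and assumes the old names only ever appear as keyword arguments:

```python
# Sketch: mechanically rewrite v1 keyword arguments to their v2 names.
# Assumes input_type/output_type only occur as opper.call(...) kwargs.
RENAMES = {
    "input_type=": "input_schema=",
    "output_type=": "output_schema=",
}

def migrate_source(source: str) -> str:
    """Apply the v1 -> v2 parameter renames to a source-code string."""
    for old, new in RENAMES.items():
        source = source.replace(old, new)
    return source

v1_call = 'opper.call(name="qa", input_type=MyIn, output_type=MyOut)'
print(migrate_source(v1_call))
# opper.call(name="qa", input_schema=MyIn, output_schema=MyOut)
```

Run it over your source tree once, then review the diff before committing.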

3. Single-object response

v1 returned a tuple:

result, trace = await opper.call(...)

v2 returns one object:

response = opper.call(...)
payload = response.json_payload
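If you want to migrate incrementally, the single v2 object can be temporarily unpacked back into the old tuple shape at the boundary. The CallResponse class below is an illustrative stand-in, not the SDK's actual response type, and mapping the old trace value to span_id is our assumption:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class CallResponse:
    """Illustrative stand-in for the v2 single response object."""
    json_payload: Any
    span_id: str

def as_v1_tuple(response: CallResponse) -> tuple:
    """Adapter: expose a v2 response in the old (result, trace) shape
    so downstream v1-era code keeps working during migration."""
    return response.json_payload, response.span_id

response = CallResponse(json_payload={"answer": 42}, span_id="span-123")
result, trace = as_v1_tuple(response)  # old call sites stay unchanged
```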

4. Span-based tracing

Replace the @trace decorator and opper.traces.* context manager with explicit span helpers:

span = opper.spans.create(name="on_message")

response = opper.call(..., parent_span_id=span.id)

opper.spans.update(
    span_id=span.id,
    input=user_input,
    output=response
)

opper.span_metrics.create_metric(
    span_id=span.id,
    dimension="thumbs_up",
    value=1
)

5. Dedicated streaming endpoint

Streaming moves from

opper.call(..., stream=True)

to

for event in opper.stream(...).result:
    handle(event)

6. Model fallbacks folded into model=

model = ["anthropic/claude-sonnet-4", "azure/gpt-4o"]   # no more fallback_models

7. Flat configuration={}

CallConfiguration(...) is gone; use dot-notation keys in a plain dict:

configuration = {
    "invocation.few_shot.count": 3,
}
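If you previously built nested configuration objects, a small helper can flatten them into the dot-notation keys v2 expects. The helper name and approach are ours, not part of the SDK:

```python
def flatten_config(nested: dict, prefix: str = "") -> dict:
    """Flatten a nested config dict into dot-notation keys,
    e.g. {"invocation": {"few_shot": {"count": 3}}}
    ->   {"invocation.few_shot.count": 3}."""
    flat = {}
    for key, value in nested.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_config(value, path))
        else:
            flat[path] = value
    return flat

configuration = flatten_config({"invocation": {"few_shot": {"count": 3}}})
# {"invocation.few_shot.count": 3}
```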

For model- or provider-specific configuration, pass model as a dict:

opper.call(
    ...,
    model={"name": "openai/gpt-4.1", "options": {"temperature": 0.5}},
)

8. Arbitrary tagging

tags = {"user": "alice", "env": "prod"}

Attach tags to completions or spans for richer filtering, billing, and analytics.
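Conceptually, filtering by tags is subset matching on the tag dict. A purely local sketch (the record shape here is assumed, not the API's wire format):

```python
def matches_tags(record_tags: dict, wanted: dict) -> bool:
    """A record matches when every wanted tag key/value pair is present."""
    return all(record_tags.get(k) == v for k, v in wanted.items())

completions = [
    {"id": "c1", "tags": {"user": "alice", "env": "prod"}},
    {"id": "c2", "tags": {"user": "bob", "env": "dev"}},
]
prod_only = [c for c in completions if matches_tags(c["tags"], {"env": "prod"})]
# keeps only the "c1" record
```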

9. Indexes renamed to Knowledge

SDK namespaces change:

# v1
opper.indexes.create(...)
# v2
opper.knowledge.create(...)

10. Synchronous client is now the default

from opperai import Opper
opper = Opper(http_bearer="TOKEN")

To use async methods, append _async to the operation name, for example call_async().

11. Server-side functions API added

opper.functions.create / get / call lets you store and version task definitions centrally.
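To make "store and version" concrete, here is a toy in-memory analogue of such a registry. It is purely illustrative: the real v2 API stores definitions server-side, and none of the names below are the SDK's:

```python
class FunctionRegistry:
    """Toy in-memory analogue of a server-side function store:
    each create() under the same name appends a new version."""

    def __init__(self):
        self._store = {}

    def create(self, name: str, definition: dict) -> int:
        versions = self._store.setdefault(name, [])
        versions.append(definition)
        return len(versions)  # 1-based version number

    def get(self, name: str, version: int = None) -> dict:
        versions = self._store[name]
        return versions[-1] if version is None else versions[version - 1]

registry = FunctionRegistry()
registry.create("summarize", {"instructions": "Summarize the text"})
registry.create("summarize", {"instructions": "Summarize in one sentence"})
latest = registry.get("summarize")  # the second (newest) definition
```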

Migration Guide

Migrate step-by-step; each change is independent.

  1. Switch the client import

    # v1
    from opperai import AsyncOpper
    opper = AsyncOpper(api_key="TOKEN")
    
    # v2
    from opperai import Opper
    opper = Opper(http_bearer="TOKEN")
    
  2. Rename parameters

    response = opper.call(
        input_schema=MyIn,
        output_schema=MyOut,
        ...
    )
    
  3. Handle the single response object

    payload = response.json_payload      # structured
    text    = response.message           # natural-language
    
  4. Adopt span-based tracing

    span = opper.spans.create(name="round")
    
    response = opper.call(
        parent_span_id=span.id,
        ...
    )
    
    opper.spans.update(
        span_id=span.id,
        input=user_input,
        output=response
    )
    

    Add metrics if you collect feedback:

    opper.span_metrics.create_metric(
        span_id=span.id,
        dimension="thumbs_up",
        value=1
    )
    
  5. Move streaming code

    for chunk in opper.stream(
            input_schema=MyIn,
            output_schema=MyOut,
            input=my_input).result:
        process(chunk)
    
  6. Collapse primary + fallback models

    model = ["anthropic/claude-sonnet-4", "azure/gpt-4o"]
    
  7. Flatten any CallConfiguration usage

    configuration = {
        "invocation.few_shot.count": 3,
    }
    
  8. Add tags (optional)

    tags = {"user": session.user_id, "env": "prod"}
    
  9. Update Indexes → Knowledge calls

    opper.knowledge.create(...)
    
  10. Retest and deploy

Tip: Start with the parameter rename and single-response handling; tracing and streaming can be migrated later without breaking existing code.