Memory allows agents to store and retrieve information across iterations and multiple process() calls. The LLM decides what to remember and when to recall it.

Enabling Memory

Enable memory when creating an agent:
from opper_agents import Agent, tool

@tool
def get_user_preference(key: str) -> str:
    """Get a user preference from the database."""
    # Simulated database lookup
    return "dark mode"

agent = Agent(
    name="PreferenceAgent",
    description="Manages user preferences",
    tools=[get_user_preference],
    enable_memory=True
)

How Memory Works

When memory is enabled:
  1. Memory catalog: The LLM sees a list of stored memory entries
  2. Read decisions: The LLM can request to read specific entries
  3. Write decisions: The LLM can store new information
  4. Persistence: Memory persists across process() calls on the same agent instance

Memory in Action

from opper_agents import Agent

agent = Agent(
    name="AssistantAgent",
    description="A helpful assistant that remembers context",
    enable_memory=True
)

# First conversation
result1 = await agent.process("My name is Alice and I prefer dark mode")
# Agent stores: name="Alice", preference="dark mode"

# Later conversation (same agent instance)
result2 = await agent.process("What's my name and preferred theme?")
# Agent recalls from memory: "Your name is Alice and you prefer dark mode"

Memory Operations

The agent has access to these memory operations:

List Entries

See what’s stored in memory:
Memory catalog:
- user_name: "The user's name"
- theme_preference: "UI theme preference"
- last_calculation: "Result of last math operation"

Read Entries

Request specific entries by key:
Read: ["user_name", "theme_preference"]
Result: {
  "user_name": "Alice",
  "theme_preference": "dark"
}

Write Entries

Store new information:
Write: {
  key: "favorite_color",
  value: "blue",
  description: "User's favorite color"
}

Delete Entries

Remove outdated information:
Delete: ["old_preference"]

Monitoring Memory with Hooks

Track memory operations:
from opper_agents import Agent, hook
from opper_agents.base.context import AgentContext

@hook("memory_read")
async def on_memory_read(context: AgentContext, agent, keys, values, **kwargs):
    print(f"Memory read: {keys}")
    print(f"Values: {values}")

@hook("memory_write")
async def on_memory_write(context: AgentContext, agent, key, value, description, **kwargs):
    print(f"Memory write: {key} = {value}")
    print(f"Description: {description}")

agent = Agent(
    name="MemoryAgent",
    enable_memory=True,
    hooks=[on_memory_read, on_memory_write]
)
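
With memory enabled, these hooks fire whenever the LLM decides to read or write an entry. As a usage sketch (the key and value shown are illustrative; what actually gets stored is up to the LLM, so the printed output will vary):
result = await agent.process("Remember that my favorite color is blue")
# If the LLM stores this, on_memory_write might print, for example:
#   Memory write: favorite_color = blue
#   Description: User's favorite color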

Memory Across Tasks

Memory persists across multiple process() calls, making it useful for multi-step workflows:
from opper_agents import Agent

# create_task, list_tasks, and complete_task are assumed to be @tool-decorated
# functions defined elsewhere (definitions omitted for brevity)
agent = Agent(
    name="ProjectAgent",
    description="Manages project tasks",
    enable_memory=True,
    tools=[create_task, list_tasks, complete_task]
)

# Step 1: Create project
await agent.process("Create a new project called 'Website Redesign'")
# Memory: project_name="Website Redesign"

# Step 2: Add tasks
await agent.process("Add tasks: design mockups, implement frontend, test")
# Memory: tasks=["design mockups", "implement frontend", "test"]

# Step 3: Check status
await agent.process("What tasks are remaining?")
# Agent recalls project and tasks from memory

Best Practices

  1. Meaningful descriptions: Help the LLM understand what each memory entry contains
  2. Structured keys: Use consistent naming such as user_name and project_status (see the sketch after this list)
  3. Clean up: The LLM can delete outdated entries
  4. Don’t over-rely: Memory is for context, not a database
  5. Scope appropriately: One agent instance = one memory scope
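
One way to encourage consistent keys and meaningful descriptions is to spell the convention out in the agent's description. A minimal sketch (the wording and key names here are illustrative, not an SDK requirement):
agent = Agent(
    name="SupportAgent",
    description=(
        "Helps users with support requests. "
        "When storing memory, use snake_case keys such as user_name or "
        "ticket_status, and give each entry a short, meaningful description."
    ),
    enable_memory=True
)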

Limitations

  • Memory is per-agent-instance and not shared between agents by default (see the sketch after this list)
  • Large memory catalogs increase token usage
  • The LLM decides what to remember (not deterministic)
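
For example, two separately created agent instances each keep their own memory, even when configured identically. A minimal sketch (what actually gets stored is up to the LLM):
agent_a = Agent(name="AgentA", description="Remembers user context", enable_memory=True)
agent_b = Agent(name="AgentB", description="Remembers user context", enable_memory=True)

await agent_a.process("My name is Alice")
# agent_a may store something like name="Alice" in its own memory

await agent_b.process("What's my name?")
# agent_b cannot see agent_a's memory, so it has nothing to recall
To share memory between instances or persist it beyond the process lifetime, see Building Custom Memory below.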

Building Custom Memory

You can implement custom memory backends to persist data to external stores like Redis, PostgreSQL, or any other storage system. This is useful for sharing memory across agent instances or persisting memory beyond the process lifetime.
To create a custom memory backend, implement the Memory interface:
interface Memory {
  hasEntries(): Promise<boolean>;
  listEntries(): Promise<Array<{ key: string; description: string; metadata: Record<string, unknown> }>>;
  read(keys?: string[]): Promise<Record<string, unknown>>;
  write(key: string, value: unknown, description?: string, metadata?: Record<string, unknown>): Promise<MemoryEntry>;
  delete(key: string): Promise<boolean>;
  clear(): Promise<number>;
}
Here’s a complete example of a Redis-backed memory implementation:
import { Agent, Memory } from "@opperai/agents";
import { createClient } from "redis";

type RedisClient = ReturnType<typeof createClient>;

interface MemoryEntry {
  key: string;
  description: string;
  value: unknown;
  metadata: {
    createdAt: number;
    updatedAt: number;
  };
}

class RedisMemory implements Memory {
  private prefix = "memory:";

  constructor(private redis: RedisClient) {}

  private entryKey(key: string) {
    return `${this.prefix}${key}`;
  }

  async hasEntries(): Promise<boolean> {
    const keys = await this.redis.keys(`${this.prefix}*`);
    return keys.length > 0;
  }

  async listEntries(): Promise<Array<{ key: string; description: string; metadata: Record<string, unknown> }>> {
    const keys = await this.redis.keys(`${this.prefix}*`);
    const entries = [];
    for (const redisKey of keys) {
      const data = await this.redis.get(redisKey);
      if (data) {
        const entry = JSON.parse(data) as MemoryEntry;
        entries.push({
          key: entry.key,
          description: entry.description,
          metadata: entry.metadata as Record<string, unknown>,
        });
      }
    }
    return entries;
  }

  async read(keys?: string[]): Promise<Record<string, unknown>> {
    const result: Record<string, unknown> = {};
    const redisKeys = keys
      ? keys.map((k) => this.entryKey(k))
      : await this.redis.keys(`${this.prefix}*`);

    for (const redisKey of redisKeys) {
      const data = await this.redis.get(redisKey);
      if (data) {
        const entry = JSON.parse(data) as MemoryEntry;
        result[entry.key] = entry.value;
      }
    }
    return result;
  }

  async write(
    key: string,
    value: unknown,
    description?: string,
    metadata?: Record<string, unknown>
  ): Promise<MemoryEntry> {
    const now = Date.now();
    const existing = await this.redis.get(this.entryKey(key));
    const createdAt = existing ? (JSON.parse(existing) as MemoryEntry).metadata.createdAt : now;

    const entry: MemoryEntry = {
      key,
      description: description || "",
      value,
      metadata: {
        createdAt,
        updatedAt: now,
        ...metadata,
      },
    };

    await this.redis.set(this.entryKey(key), JSON.stringify(entry));
    return entry;
  }

  async delete(key: string): Promise<boolean> {
    const result = await this.redis.del(this.entryKey(key));
    return result > 0;
  }

  async clear(): Promise<number> {
    const keys = await this.redis.keys(`${this.prefix}*`);
    if (keys.length === 0) return 0;
    return await this.redis.del(keys);
  }
}

// Usage
const redisClient = createClient();
await redisClient.connect();

try {
  const agent = new Agent({
    name: "MyAgent",
    memory: new RedisMemory(redisClient),
    enableMemory: true,
  });

  const result = await agent.process("Remember that my favorite color is blue.");
  console.log(result);
} finally {
  await redisClient.quit();
}
