Documentation Index

Fetch the complete documentation index at: https://docs.opper.ai/llms.txt
Use this file to discover all available pages before exploring further.
The system prompt is appended to the base LLM prompt. It informs the LLM about agent state, capabilities, and context, helping it make better decisions about when and how to use your agent's commands.
Why System Prompts Matter
System prompts give the LLM real-time awareness of:
Agent state - Is it running, processing, or encountering errors?
Available capabilities - What commands can be invoked right now?
Current context - What conversation or task is active?
Recent activity - What just happened?
This context helps the LLM provide better responses and make smarter decisions about tool usage.
Setting System Prompt
On Initialization
Set an initial prompt when your agent starts:
def initialize(self):
    self.set_system_prompt("""
You are a webhook processing agent.
Current state: Initializing
Webhooks processed: 0
Queue size: 0
""".strip())
Update Anytime
Update the prompt whenever significant state changes occur:
def process_webhook(self, webhook):
    self.processed_count += 1

    # Update the LLM with the new state
    self.set_system_prompt(f"""
You are a webhook processing agent.
Current state: Running
Webhooks processed: {self.processed_count}
Queue size: {len(self.queue)}
Last webhook: {webhook['id']}
""".strip())
When to Update
On Significant State Changes
def connect_to_database(self):
    self.db = Database.connect()
    self.set_system_prompt(f"""
Database agent ready.
Connection: Active
Tables: {len(self.db.tables())}
Mode: {self.db_mode}
""".strip())
After Commands Execute
def start_monitoring_command(self, args):
    target = args["target"]
    self.monitoring_targets.add(target)
    self.set_system_prompt(f"""
Monitoring agent tracking {len(self.monitoring_targets)} targets.
Targets: {', '.join(self.monitoring_targets)}
Alert threshold: {self.threshold}
""".strip())
On Lifecycle Events
You can update system prompts in response to agent lifecycle events. Learn more about available events in Lifecycle Events.
def on_agent_activated(self, previous_agent, conversation_id):
    self.set_system_prompt(f"""
Task agent now active.
Conversation: {conversation_id}
Previous agent: {previous_agent or 'None'}
Pending tasks: {len(self.tasks)}
""".strip())

def on_agent_deactivated(self, next_agent):
    # Minimal prompt when inactive
    self.set_system_prompt("Task agent (inactive).")
Keep it Concise
The LLM processes your system prompt with every message. Keep it brief and focused.
Bad: Too Verbose
self.set_system_prompt("""
You are a file processing agent responsible for
monitoring a directory and processing files as
they arrive. You should process each file
according to its type and then move it to the
appropriate output directory. Currently you are
monitoring /data/inbox and have processed 42
files so far. The last file processed was
document.pdf which was processed at 14:30:22
and took 1.2 seconds to complete. The current
queue has 3 files waiting: image.jpg, data.csv,
and report.docx. Your processing rate is
currently 35 files per hour.
""")
Structure Guidelines
Use Sections for Clarity
self.set_system_prompt(f"""
Database query agent.

CONNECTION:
- Status: Active
- Database: {self.db_name}
- Tables: {len(self.tables)}

CURRENT OPERATION:
- Query: {self.current_query[:50]}...
- Duration: {elapsed}s

STATS:
- Queries executed: {self.query_count}
- Cache hit rate: {self.cache_rate}%
""".strip())
Good: Important Info at Top
self.set_system_prompt(f"""
Status: ERROR - Database connection lost
Last successful query: 5 minutes ago
Retry attempts: 3/5
""".strip())
Dynamic State Updates
Track real-time state and update the prompt accordingly. For persisting state across agent restarts, see State Management .
class MonitoringAgent(OpperatorAgent):
    def initialize(self):
        self.alerts = []
        self.update_prompt()

    def update_prompt(self):
        alert_summary = "None"
        if self.alerts:
            recent = self.alerts[-3:]  # Last 3 only
            alert_summary = "\n".join([
                f"- {a['severity']}: {a['message']}"
                for a in recent
            ])

        self.set_system_prompt(f"""
Monitoring agent.
Active alerts: {len(self.alerts)}
{alert_summary}
Monitors: {len(self.monitors)} active
""".strip())

    def add_alert(self, severity, message):
        self.alerts.append({
            'severity': severity,
            'message': message,
            'time': time.time()
        })
        self.update_prompt()  # Update immediately
Capability Listing
Inform the LLM about what your agent can do:
def initialize(self):
    self.set_system_prompt(f"""
API client agent.

Capabilities:
- GET/POST/PUT/DELETE requests
- OAuth2 authentication
- Rate limiting: {self.rate_limit}/min
- Retry logic: {self.max_retries} attempts

API: {self.api_base_url}
Auth: {'Active' if self.token else 'Not configured'}
""".strip())
Update when capabilities change:
def authenticate(self, token):
    self.token = token
    self.authenticated = True
    self.set_system_prompt(f"""
API client agent.
Status: Authenticated ✓
Endpoints: {len(self.endpoints)} available
Rate limit: {self.remaining}/{self.rate_limit}
""".strip())
Context-Aware Prompts
Adapt prompts to different contexts:
def on_agent_activated(self, previous_agent, conversation_id):
    # User just switched to this agent - show full context
    self.set_system_prompt(f"""
Build agent now active.
Recent builds: {self.recent_builds_summary()}
Current branch: {self.get_current_branch()}
Last build: {self.last_build_status}
""".strip())

def on_agent_deactivated(self, next_agent):
    # User switched away - minimal state
    self.set_system_prompt("Build agent (inactive).")
Error States
Clearly communicate errors to help the LLM provide appropriate responses:
def on_connection_error(self, error):
    self.set_system_prompt(f"""
⚠ API agent - CONNECTION ERROR
Error: {error}
Last success: {self.last_success_time}
Retry in: {self.retry_delay}s
Status: Will auto-reconnect
""".strip())
Update when recovered:
def on_reconnect(self):
    self.set_system_prompt(f"""
API agent - Reconnected ✓
Connection: Restored
Backlog: {len(self.pending_requests)} requests
Processing: Resuming
""".strip())
Complete Example
Agent that updates prompt throughout its lifecycle:
from opperator_agent import OpperatorAgent, LogLevel
import time

class TaskQueueAgent(OpperatorAgent):
    def __init__(self):
        super().__init__(name="task_queue_agent")
        self.tasks = []
        self.processed = 0
        self.failed = 0

    def initialize(self):
        self.set_system_prompt("Task queue agent initializing...")

    def start(self):
        self.update_prompt()

    def update_prompt(self):
        """Central prompt update method"""
        status = "Running" if self.running else "Stopped"

        # Build task summary
        if not self.tasks:
            task_info = "No pending tasks"
        else:
            next_task = self.tasks[0]
            task_info = f"Next: {next_task['type']}\nQueue: {len(self.tasks)} pending"

        # Build stats
        total = self.processed + self.failed
        success_rate = 0
        if total > 0:
            success_rate = int((self.processed / total) * 100)

        self.set_system_prompt(f"""
Task queue agent.
STATUS: {status}
{task_info}

STATS:
- Completed: {self.processed}
- Failed: {self.failed}
- Success rate: {success_rate}%
""".strip())

    def add_task(self, task):
        self.tasks.append(task)
        self.update_prompt()

    def process_next_task(self):
        if not self.tasks:
            return
        task = self.tasks.pop(0)
        try:
            self.execute_task(task)
            self.processed += 1
        except Exception as exc:
            self.log(LogLevel.ERROR, "Task failed", error=str(exc))
            self.failed += 1
        self.update_prompt()

    def on_shutdown(self):
        self.set_system_prompt(f"""
Task queue agent shutting down.
Pending tasks: {len(self.tasks)}
Session stats: {self.processed} completed, {self.failed} failed
""".strip())
Prompt Templates
Use a centralized builder for consistent prompts:
class DatabaseAgent(OpperatorAgent):
    def initialize(self):
        self.status = "initializing"
        self.connection = None
        self.queries_count = 0
        self.last_error = None

    def _build_prompt(self):
        """Centralized prompt builder"""
        conn_status = "Connected" if self.connection else "Disconnected"

        prompt = f"Database agent.\n\nStatus: {self.status}\n"
        prompt += f"Connection: {conn_status}\n"

        if self.connection:
            prompt += f"Database: {self.connection.name}\n"
            prompt += f"Queries: {self.queries_count}\n"

        if self.status == "error":
            prompt += f"\nERROR: {self.last_error}\n"

        return prompt.strip()

    def update_status(self, status):
        """Update status and refresh prompt"""
        self.status = status
        self.set_system_prompt(self._build_prompt())

    def execute_query(self, query):
        result = self.connection.execute(query)
        self.queries_count += 1
        self.set_system_prompt(self._build_prompt())
        return result
Best Practices
Update on state changes, not constantly
Good: Update when something changes

def add_item(self, item):
    self.items.append(item)
    self.update_prompt()

Bad: Update every iteration

def main_loop(self):
    while self.running:
        self.do_work()
        self.set_system_prompt(...)  # Too frequent!
        time.sleep(0.1)
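If state genuinely changes on every iteration, a simple throttle keeps prompt updates bounded. A minimal sketch; the `PromptThrottle` helper and its `min_interval` parameter are assumptions for illustration, not part of the SDK:

```python
import time

class PromptThrottle:
    """Allows at most one prompt update per min_interval seconds (assumed helper)."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last_update = None

    def should_update(self):
        now = time.monotonic()
        if self._last_update is not None and now - self._last_update < self.min_interval:
            return False  # Too soon - skip this update
        self._last_update = now
        return True

# Usage inside a busy loop: only rebuild the prompt when allowed
throttle = PromptThrottle(min_interval=5.0)
# if throttle.should_update():
#     self.set_system_prompt(self._build_prompt())
```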
Use helper method for updates
Good: Centralized logic

def update_prompt(self):
    self.set_system_prompt(self._build_prompt())

def _build_prompt(self):
    return f"Agent state: {self.state}"

Bad: Duplicated logic

def method_a(self):
    self.set_system_prompt(f"Agent state: {self.state}")

def method_b(self):
    self.set_system_prompt(f"Agent state: {self.state}")
Summarize lists, don't enumerate
Good: Summary

items = ["file1.txt", "file2.txt", ..., "file50.txt"]
prompt = f"Processing {len(items)} files"

Bad: Full list

prompt = f"Processing: {', '.join(items)}"  # Way too long!

Good: Last 3 items

recent = self.history[-3:]
summary = "\n".join([f"- {item}" for item in recent])

Bad: Full history

summary = "\n".join([f"- {item}" for item in self.history])
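The count-plus-recent-items patterns above can be combined into one small helper. A sketch; the `summarize_items` name and signature are assumptions for illustration:

```python
def summarize_items(items, max_shown=3, label="items"):
    """Summarize a list for a system prompt: total count plus the last few entries."""
    if not items:
        return f"No {label}"
    recent = items[-max_shown:]  # Show only the most recent entries
    lines = [f"{len(items)} {label} (showing last {len(recent)}):"]
    lines += [f"- {item}" for item in recent]
    return "\n".join(lines)

print(summarize_items(["a.txt", "b.txt", "c.txt", "d.txt"], max_shown=2, label="files"))
# 4 files (showing last 2):
# - c.txt
# - d.txt
```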
Common Mistakes
Too much detail

Don't include implementation details or verbose descriptions:

# Bad
self.set_system_prompt(f"""
Processing file located at /very/long/path/to/file.txt
which was created on 2024-01-15 at 14:23:45 and has
a size of 1,234,567 bytes and contains 45,678 lines...
""")

# Good
self.set_system_prompt(f"""
Processing: file.txt (1.2MB)
Progress: 45%
""")
Static information

Don't include information that never changes:

# Bad - doesn't change
self.set_system_prompt("""
This agent processes webhooks from external services.
It validates the payload, stores it in the database,
and triggers downstream processes.
""")

# Good - updates with state
self.set_system_prompt(f"""
Webhook agent.
Processed: {self.count}
Active: {self.active}
""")
Forgetting to .strip()

Always use .strip() to remove leading and trailing whitespace:

# Good
self.set_system_prompt("""
Agent status.
Running: Yes
""".strip())  # ← Important!
Quick Reference
System Prompt Purpose
Inform LLM of current agent state
List available capabilities
Show context and recent activity
Communicate errors or warnings
Update Triggers
Significant state changes
After command execution
On lifecycle events (activated, deactivated)
When capabilities change
Keep it Concise
Summarize, don’t enumerate
Show recent items only (last 3-5)
Key information first
Use sections for organization
Best Practices
Centralize prompt building logic
Update on changes, not constantly
Use helper methods
Group related information
Always use .strip()
Next Steps
Commands & Tools - Register commands to add agent capabilities
Custom Sidebars - Display status information in the sidebar