LLMs are well suited to generating and processing code. Below are a couple of patterns to consider when using them for code generation.

Generating code

A key challenge with generating code is that models tend to wrap the code in explanatory text, while you may only want the code itself. One technique that helps is to structure the output so that explanation and code are returned separately.

import os

from opperai import Opper

opper = Opper(
    http_bearer=os.getenv("OPPER_API_KEY"),
)

def generate_code(prompt: str) -> str:
    """Call the model and return its free-text response, which mixes explanation and code."""
    response = opper.call(
        name="code-generation/generate-code",
        instructions="Generate code based on the user's request. Provide both explanation and code.",
        input=prompt,
        model="gcp/gemini-2.5-pro",
    )
    return response.message

if __name__ == "__main__":
    prompt = "Generate a Python function to calculate the factorial of a number."
    result = generate_code(prompt)
    print(result)
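A minimal sketch of the separation step, assuming the model is instructed to reply with a JSON object holding explanation and code fields. The schema and field names below are illustrative assumptions for this sketch, not part of the Opper SDK; the point is that once the output is structured, only the code field needs to be kept.

```python
import json

# Illustrative JSON Schema asking the model to keep explanation and code apart.
# The field names are an assumption for this sketch, not an SDK requirement.
CODE_SCHEMA = {
    "type": "object",
    "properties": {
        "explanation": {
            "type": "string",
            "description": "Short reasoning about the solution",
        },
        "code": {
            "type": "string",
            "description": "The code only, with no surrounding prose",
        },
    },
    "required": ["explanation", "code"],
}


def extract_code(raw_response: str) -> str:
    """Parse a structured model response and return only the code field."""
    data = json.loads(raw_response)
    return data["code"]


if __name__ == "__main__":
    # Example of a structured response a model might return under CODE_SCHEMA.
    raw = (
        '{"explanation": "Recursive factorial.", '
        '"code": "def factorial(n):\\n    return 1 if n <= 1 else n * factorial(n - 1)"}'
    )
    print(extract_code(raw))
```

Many SDKs, Opper included, can enforce a schema like this on the model's output; check the SDK reference for the exact parameter that accepts it.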