# Examples
This section contains practical, copy-paste-ready examples for common LM Deluge workflows. Each example is tested against the current codebase.
## Quick Reference

| Example | What you’ll learn |
|---|---|
| Chat Loops | Build interactive multi-turn conversations |
| Streaming | Stream responses token-by-token (OpenAI) |
| Batch Processing | Cost-effective batch API processing |
| Computer Use | Claude Computer Use with Anthropic and Bedrock |
## Basic Patterns

### Simple Request

```python
import asyncio

from lm_deluge import LLMClient, Conversation

async def main():
    client = LLMClient("claude-4.5-haiku", max_new_tokens=1024)
    response = await client.start(Conversation().user("Hello!"))
    print(response.completion)

asyncio.run(main())
```
### With Tools

```python
import asyncio

from lm_deluge import LLMClient, Conversation, Tool

def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

async def main():
    client = LLMClient("gpt-4o-mini")
    tools = [Tool.from_function(add)]

    conv = Conversation().user("What is 2 + 3?")
    conv, response = await client.run_agent_loop(conv, tools=tools)
    print(response.completion)

asyncio.run(main())
```
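Conceptually, an agent loop alternates between model turns and local tool execution: the model emits a tool call, the runner dispatches it to the matching Python function, and the result is fed back until the model answers in plain text. A minimal, library-free sketch of the dispatch step (illustrative only — not lm_deluge’s internals):

```python
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Registry mapping tool names to local functions.
TOOLS = {"add": add}

def dispatch(tool_call: dict) -> str:
    """Run the requested tool with the model's arguments, return a string result."""
    fn = TOOLS[tool_call["name"]]
    return str(fn(**tool_call["arguments"]))

# A model asked "What is 2 + 3?" might emit a tool call like this:
result = dispatch({"name": "add", "arguments": {"a": 2, "b": 3}})
print(result)  # 5
```

`run_agent_loop` handles this dispatching (plus appending tool results to the conversation) for you.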
### Structured Output

```python
from pydantic import BaseModel

from lm_deluge import LLMClient

class Person(BaseModel):
    name: str
    age: int

client = LLMClient("gpt-4o-mini")
response = client.process_prompts_sync(
    ["Extract: John is 25 years old"],
    output_schema=Person,
)[0]
print(response.completion)  # {"name": "John", "age": 25}
```
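Since the completion is JSON conforming to the schema, it can be parsed back into native Python types. A small standard-library sketch (the completion string is hard-coded from the example above so it runs without an API key; with Pydantic you could equivalently validate it via `Person.model_validate_json`):

```python
import json

# Completion string as returned by the structured-output call above.
completion = '{"name": "John", "age": 25}'

person = json.loads(completion)
print(person["name"], person["age"])  # John 25
```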
## What’s Where

Looking for something specific? Here’s where to find it:
- Tool creation & agent loops: Tool Use and Building Agents
- MCP servers: MCP Integration
- Prompt caching: Caching
- File uploads: Working with Files
- Structured outputs: Structured Outputs
- Rate limiting: Rate Limiting