
# Examples

This section contains practical, copy-paste-ready examples for common LM Deluge workflows. Each example is tested against the current codebase.

| Example | What you'll learn |
| --- | --- |
| Chat Loops | Build interactive multi-turn conversations |
| Streaming | Stream responses token-by-token (OpenAI) |
| Batch Processing | Cost-effective batch API processing |
| Computer Use | Claude Computer Use with Anthropic and Bedrock |
A minimal single-turn request:

```python
import asyncio

from lm_deluge import LLMClient, Conversation


async def main():
    client = LLMClient("claude-4.5-haiku", max_new_tokens=1024)
    response = await client.start(Conversation().user("Hello!"))
    print(response.completion)


asyncio.run(main())
```
Running an agent loop with a tool built from a plain Python function:

```python
import asyncio

from lm_deluge import LLMClient, Conversation, Tool


def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


async def main():
    client = LLMClient("gpt-4o-mini")
    tools = [Tool.from_function(add)]
    conv = Conversation().user("What is 2 + 3?")
    conv, response = await client.run_agent_loop(conv, tools=tools)
    print(response.completion)


asyncio.run(main())
```
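`Tool.from_function` turns the function's signature and docstring into a tool definition the model can call. The sketch below shows how such a conversion might work in plain Python using the standard library; it is an illustration of the general technique, not LM Deluge's actual implementation, and `tool_schema` is a hypothetical helper name.

```python
import inspect
from typing import get_type_hints


def tool_schema(fn):
    """Sketch: derive a JSON-Schema-like tool definition from a function.

    Hypothetical helper for illustration only -- not the lm_deluge API.
    """
    hints = get_type_hints(fn)
    hints.pop("return", None)  # the return type is not part of the input schema
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    props = {name: {"type": type_map.get(t, "object")} for name, t in hints.items()}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": props,
            "required": list(props),
        },
    }


def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


schema = tool_schema(add)
print(schema["name"])  # add
print(schema["parameters"]["properties"])
```

This is why the docstring and type annotations on `add` matter: they become the description and parameter types the model sees.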
Structured output with a Pydantic schema:

```python
from pydantic import BaseModel

from lm_deluge import LLMClient


class Person(BaseModel):
    name: str
    age: int


client = LLMClient("gpt-4o-mini")
response = client.process_prompts_sync(
    ["Extract: John is 25 years old"],
    output_schema=Person,
)[0]
print(response.completion)  # {"name": "John", "age": 25}
```
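The completion from a structured-output request is a JSON string conforming to the schema, so downstream code can parse it with the standard library. A minimal sketch, using a literal string that mirrors the expected completion shown above:

```python
import json

# Literal stand-in for response.completion from the example above.
raw = '{"name": "John", "age": 25}'
person = json.loads(raw)
print(person["name"], person["age"])
```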

Looking for something specific? Here’s where to find it: