Tool Use

LM Deluge provides a unified API for tool use across all LLM providers. Define tools once and use them with any model, whether you pass a pure Python function, a Pydantic model, or an MCP server.

The easiest way to create a tool is from a Python function:

from lm_deluge import Conversation, LLMClient, Tool

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny and 72°F"

tool = Tool.from_function(get_weather)
client = LLMClient("claude-3-haiku")

response = client.process_prompts_sync(
    [Conversation.user("What's the weather in Paris?")],
    tools=[tool],
)[0]

if response.content:
    for call in response.content.tool_calls:
        print(call.name, call.arguments)

Tool.from_function() introspects type hints and docstrings to produce a JSON Schema automatically. Provide optional arguments such as name or description if you want to override the defaults.
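
Overriding the generated metadata might look like this (a short sketch; the keyword names follow the description above and should be checked against the library):

tool = Tool.from_function(
    get_weather,
    name="weather_lookup",
    description="Look up current conditions for a given city.",
)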

The Tool helpers all end up producing the same JSON Schema. Pick the one that matches your workflow:

  • Tool.from_function(callable) – uses the function signature and docstring.
  • Tool.from_params(name, ToolParams(...)) – build schemas programmatically.
  • Tool.from_pydantic(name, BaseModel) – reuse Pydantic models.
  • Tool.from_typed_dict(name, TypedDict) – wrap typed dictionaries.
For example, reusing a Pydantic model:

from pydantic import BaseModel

class CalculateTip(BaseModel):
    bill_amount: float
    tip_percentage: float = 20.0

tip_tool = Tool.from_pydantic("calculate_tip", CalculateTip)
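
A TypedDict version of the same tool looks similar (a minimal sketch, assuming Tool.from_typed_dict accepts the class directly as listed above):

from typing import TypedDict

class TipInput(TypedDict):
    bill_amount: float
    tip_percentage: float

tip_tool_td = Tool.from_typed_dict("calculate_tip", TipInput)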

Tool calls returned by the model can be executed immediately:

response = client.process_prompts_sync(
    ["Calculate a 15% tip on a $50 bill"],
    tools=[tip_tool],
)[0]

if response.content:
    for call in response.content.tool_calls:
        print(tip_tool.call(**call.arguments))

async def run_async_call(call):
    return await tip_tool.acall(**call.arguments)

Use .call() for synchronous helpers and .acall() inside async applications. LM Deluge automatically detects whether your tool function is async and will run it on the appropriate event loop.
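
Async tool functions work as well; a minimal sketch, assuming from_function accepts coroutine functions as described above:

async def fetch_stock_price(symbol: str) -> str:
    """Fetch the latest quote for a ticker symbol."""
    return f"{symbol}: $100.00"

# Hypothetical async tool; LM Deluge will await it on the running event loop
stock_tool = Tool.from_function(fetch_stock_price)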

LM Deluge includes a built-in agent loop that automatically executes tool calls:

import asyncio
from lm_deluge import LLMClient, Tool, Conversation

async def main():
    tools = [Tool.from_function(get_weather)]
    client = LLMClient("gpt-4o-mini")
    conv = Conversation.user("What's the weather in London?")
    # Runs multiple turns automatically, calling tools as needed
    conv, resp = await client.run_agent_loop(conv, tools=tools)
    print(resp.content.completion)

asyncio.run(main())

The agent loop will:

  1. Send the conversation to the model
  2. If the model calls tools, execute them
  3. Add the tool results to the conversation
  4. Repeat until the model returns a final response

You can provide multiple tools at once:

def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"Sunny in {city}"

def get_time(timezone: str) -> str:
    """Get the current time in a timezone."""
    return f"12:00 PM in {timezone}"

tools = [
    Tool.from_function(get_weather),
    Tool.from_function(get_time),
]

resps = client.process_prompts_sync(
    ["What's the weather in Tokyo and what time is it there?"],
    tools=tools,
)
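
If you also want the calls executed and the results fed back to the model, the same tool list works with the agent loop shown earlier (a minimal sketch reusing run_agent_loop, with asyncio and Conversation imported as above):

async def ask(question: str) -> str:
    conv = Conversation.user(question)
    conv, resp = await client.run_agent_loop(conv, tools=tools)
    return resp.content.completion

print(asyncio.run(ask("What's the weather in Tokyo and what time is it there?")))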

Several providers expose built-in tools via special schemas. Import them from lm_deluge.built_in_tools and pass them through the tools argument just like regular Tool objects.

from lm_deluge import LLMClient
from lm_deluge.built_in_tools.openai import computer_use_openai

client = LLMClient("openai-computer-use-preview", use_responses_api=True)
response = client.process_prompts_sync(
    ["Open a browser and search for the Europa Clipper mission."],
    tools=[computer_use_openai(display_width=1440, display_height=900)],
)[0]

Anthropic’s computer-use beta tools are enabled the same way: pass Tool.built_in("computer_use") or reuse the helpers from lm_deluge.built_in_tools. LM Deluge injects the extra headers and tool schemas required by each provider.
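
A rough sketch of the Anthropic variant (the model name here is illustrative; computer use requires a model and beta access that support it):

client = LLMClient("claude-3-5-sonnet")
response = client.process_prompts_sync(
    ["Take a screenshot and describe what is on screen."],
    tools=[Tool.built_in("computer_use")],
)[0]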

The tools argument accepts more than just Tool instances:

  • Raw provider tool dictionaries (for OpenAI computer use, web search previews, etc.; see the sketch after this list)
  • MCPServer objects, which let providers call a remote MCP server directly (Anthropic + OpenAI Responses)
  • Pre-expanded MCP tools (see MCP Integration)
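
For example, an OpenAI-defined tool dictionary can be forwarded as-is (a sketch; web_search_preview is an OpenAI Responses tool type and availability depends on the model):

client = LLMClient("gpt-4o-mini", use_responses_api=True)
response = client.process_prompts_sync(
    ["Summarize this week's space launch news."],
    tools=[{"type": "web_search_preview"}],
)[0]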

When you pass an MCPServer, LM Deluge forwards the descriptor to providers that support native MCP connections or expands it locally if you set force_local_mcp=True on the client.