
Installation

Install LM Deluge using pip (Python 3.10+):

python -m pip install -U lm-deluge

Optional extras:

  • pip install plyvel if you want LevelDB-backed local caching
  • pip install pdf2image pillow if you plan to turn PDFs into images via Image.from_pdf (see the sketch after this list)
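
With the PDF extras installed, turning a document into images is a short call. A minimal sketch, assuming Image is importable from the top-level package and that from_pdf returns one image per page; this page only guarantees the method name, so check the API reference for the exact import path:

from lm_deluge import Image  # assumed import path; the class may live in a submodule

# from_pdf renders the PDF using pdf2image/pillow; the return shape
# (one image per page) is an assumption
pages = Image.from_pdf("report.pdf")
print(len(pages))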

LM Deluge reads API keys from environment variables so the client can contact each provider directly. Load them at process startup (for example with python-dotenv) and pass the values down to your workers, CLI scripts, or notebook kernels.

Depending on which providers you plan to use, set the appropriate API keys:

# OpenAI
export OPENAI_API_KEY=sk-...
# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
# Google AI Studio (Gemini)
export GEMINI_API_KEY=...
# Cohere
export COHERE_API_KEY=...
# OpenRouter (any model prefixed with openrouter:)
export OPENROUTER_API_KEY=...
# Meta, Groq, DeepSeek, etc. use provider-specific keys defined in src/lm_deluge/models
# AWS Bedrock (for Amazon-hosted Anthropic and Meta models)
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
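
To confirm a key is actually visible to a Python process before going further, a quick check that uses only the standard library:

python -c "import os; print('OPENAI_API_KEY' in os.environ)"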

Create a .env file in your project root:

.env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...

Load the file inside your application:

import dotenv

dotenv.load_dotenv()  # loads the values from .env into os.environ

Verify your installation by running a simple test:

import dotenv
from lm_deluge import LLMClient

dotenv.load_dotenv()  # must run before the client needs the keys

client = LLMClient("gpt-4.1-mini")
responses = client.process_prompts_sync(["Say hello!"])
print(responses[0].completion)

If you see a response from the model, you’re all set!
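The same call fans out to many prompts in one batch; this sketch reuses only the API already shown above:

import dotenv
from lm_deluge import LLMClient

dotenv.load_dotenv()

client = LLMClient("gpt-4.1-mini")
prompts = [f"Give a one-line fun fact #{i}" for i in range(1, 4)]
responses = client.process_prompts_sync(prompts)  # one response per prompt
for r in responses:
    print(r.completion)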

Head over to the Quick Start guide to learn how to use LM Deluge effectively.