

LiteLLM
Unified gateway for all LLM APIs with OpenAI compatibility
LiteLLM provides a single Python interface to call dozens of LLM providers—OpenAI, Azure, Anthropic, Bedrock, HuggingFace, and more—using the familiar OpenAI request/response format.

LiteLLM is designed for developers, data scientists, and enterprises that need to integrate multiple large language model (LLM) services without rewriting code for each vendor. By exposing a consistent OpenAI‑style API, it lets you swap providers or run parallel experiments with a single function call.
The library translates inputs to each provider's completion, embedding, and image-generation endpoints, guarantees a uniform `choices[0].message.content` response shape, and includes built-in retry, fallback, and routing logic. It supports async calls, streaming token-by-token output, and configurable budgets, rate limits, and per-project isolation. Observability callbacks can forward logs to Lunary, MLflow, Langfuse, Helicone, and other platforms.
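For illustration, a minimal sketch of the unified call shape (the model strings are examples; any supported provider string works):

```python
from litellm import completion

messages = [{"role": "user", "content": "Summarize LiteLLM in one sentence."}]

# The same function call works across providers; only the model string changes.
openai_resp = completion(model="gpt-4o-mini", messages=messages)
claude_resp = completion(model="claude-3-haiku-20240307", messages=messages)

# Every provider's response is normalized to the OpenAI shape.
print(openai_resp.choices[0].message.content)
print(claude_resp.choices[0].message.content)
```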
Install via `pip install litellm`, or run the official Docker image with the `-stable` tag for production-grade, load-tested containers. Set provider API keys as environment variables, and optionally deploy the LiteLLM proxy server for multi-tenant routing; a hosted preview and enterprise-managed services are also available.
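A minimal setup sketch, assuming the standard environment-variable key names from the LiteLLM docs (the values are placeholders):

```python
import os
from litellm import completion

# Each provider reads its key from a conventional environment variable.
os.environ["OPENAI_API_KEY"] = "sk-..."         # placeholder
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder

resp = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
)
```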
Looking for a hosted option? Engineering teams often benchmark services such as Eden AI, a unified API aggregator for AI services across providers, before choosing open source.
Multi‑model A/B testing
Switch between OpenAI, Anthropic, and Cohere models with a single function call, enabling rapid performance comparison.
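A rough sketch of that comparison loop (the candidate model strings are examples; any supported provider works):

```python
from litellm import completion

prompt = [{"role": "user", "content": "Explain vector databases in two sentences."}]

# Iterate over candidate models from different providers and compare outputs.
for model in ["gpt-4o-mini", "claude-3-haiku-20240307", "command-r"]:
    resp = completion(model=model, messages=prompt)
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```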
Enterprise spend monitoring
Set per‑project budgets and rate limits, automatically routing excess traffic to fallback providers.
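One way to express fallbacks in code is LiteLLM's Router; a minimal sketch with hypothetical deployment names (budget and rate-limit enforcement is typically configured on the proxy rather than inline):

```python
from litellm import Router

# Two deployments: a primary model and a cheaper fallback from another provider.
router = Router(
    model_list=[
        {"model_name": "primary", "litellm_params": {"model": "gpt-4o-mini"}},
        {"model_name": "backup", "litellm_params": {"model": "claude-3-haiku-20240307"}},
    ],
    # If "primary" fails (rate limit, outage), retry the request on "backup".
    fallbacks=[{"primary": ["backup"]}],
)

resp = router.completion(
    model="primary",
    messages=[{"role": "user", "content": "ping"}],
)
```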
Real‑time chat application
Leverage async and streaming support to deliver token‑by‑token responses to end users.
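A sketch of streaming with the synchronous client (the async variant, `acompletion`, follows the same pattern with `async for`):

```python
from litellm import completion

# stream=True yields OpenAI-style chunks with incremental deltas.
stream = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about APIs."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```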
Centralized logging for compliance
Send request and response data to Langfuse, Helicone, or MLflow for audit trails and performance analytics.
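Enabling a logging integration is a one-line setting; a minimal sketch, assuming Langfuse credentials (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY) are set in the environment:

```python
import litellm
from litellm import completion

# Forward logs for successful calls to Langfuse; other supported sinks
# (e.g. "helicone", "mlflow") can be listed alongside it.
litellm.success_callback = ["langfuse"]

resp = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Log this exchange for auditing."}],
)
```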
How do I install LiteLLM?
Use `pip install litellm` or pull the official Docker image with the `-stable` tag.
Which providers does LiteLLM support?
LiteLLM supports OpenAI, Azure, Anthropic, Bedrock, HuggingFace, TogetherAI, VertexAI, Groq, and many others; see the provider list in the docs.
Does it handle more than chat completions?
Yes, the library translates calls to each provider's completion, embedding, and `image_generation` endpoints; a brief embedding example follows this FAQ.
Can I control spend and rate limits?
You can configure budgets and rate limits per project, API key, or model through the proxy's routing settings.
Is there a hosted version?
A preview hosted proxy is available, and an enterprise tier offers managed deployment.
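For the non-chat endpoints mentioned above, a minimal embedding sketch (the model string is an example; the access pattern follows LiteLLM's documented embedding API):

```python
from litellm import embedding

# Embedding requests use the same unified interface as completions.
resp = embedding(
    model="text-embedding-3-small",  # example OpenAI embedding model
    input=["LiteLLM normalizes provider APIs."],
)

# The response mirrors the OpenAI embedding format.
vector = resp.data[0]["embedding"]
print(f"dimensions: {len(vector)}")
```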
Project at a glance
Status: Active · Last synced 4 days ago