

Unified AI gateway with instant failover and zero-config startup
Bifrost provides a high-performance AI gateway that unifies more than a dozen providers behind a single OpenAI-compatible API, offering automatic failover, load balancing, semantic caching, and enterprise-grade controls with zero-config deployment.

Bifrost is a high‑performance AI gateway that lets developers access more than a dozen LLM providers—OpenAI, Anthropic, AWS Bedrock, Google Vertex, Azure, Cohere, Mistral, Ollama, Groq, and others—through a single OpenAI‑compatible endpoint. The platform adds virtually no latency (≈11 µs overhead) while delivering automatic failover, intelligent load balancing, and semantic caching to keep costs low and response times fast.
Start in seconds with `npx -y @maximhq/bifrost` or a Docker container, then configure providers, budgets, and access policies via the built-in web UI or API. Enterprise features include SSO (Google/GitHub), hierarchical budget control, Prometheus metrics, distributed tracing, and HashiCorp Vault integration for secure key storage. Bifrost can replace existing OpenAI, Anthropic, or Google GenAI endpoints with a single URL change, making migration painless for any language or framework.
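Because the gateway speaks the OpenAI wire format, migration is typically just a base-URL swap. A minimal sketch with the official `openai` Node SDK, assuming a local Bifrost instance on port 8080 (matching the Docker quick start below) and the conventional `/v1` path prefix, which should be verified against Bifrost's docs:

```ts
import OpenAI from "openai";

// Point the stock OpenAI client at the local Bifrost gateway instead of
// api.openai.com. Port 8080 matches the Docker quick start; the /v1 prefix
// is the OpenAI convention and an assumption here, not confirmed for Bifrost.
const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: process.env.OPENAI_API_KEY ?? "unused-if-keys-live-in-bifrost",
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini", // any model name your gateway config routes
  messages: [{ role: "user", content: "Say hello from behind the gateway." }],
});
console.log(completion.choices[0].message.content);
```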
When teams consider Bifrost, hosted platforms usually appear on the same shortlist; Eden AI, a unified API aggregator for AI services across providers, is the one engineering teams most often benchmark against before choosing open source.
Multi‑provider fallback for production chatbots
Keep chatbots available by automatically switching between OpenAI, Anthropic, and Bedrock when any provider experiences latency spikes or an outage (see the failover sketch after this list).
Cost‑optimized content generation
Leverage semantic caching and budget management to reduce API spend while serving high‑volume text generation.
Enterprise AI platform with SSO
Integrate Google or GitHub SSO, enforce rate limits, and monitor usage via Prometheus for compliance.
Rapid prototyping with zero‑config
Spin up the gateway via npx or Docker in under a minute and start testing across multiple models without code changes.
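Bifrost runs the fallback chain server-side, so clients never implement it themselves, but the pattern it automates looks roughly like this. A conceptual sketch only, with the provider names and `callProvider` helper invented for illustration, not Bifrost's actual code:

```ts
// Conceptual sketch of provider failover: try each upstream in priority
// order and return the first success. NOT Bifrost's internal implementation.
type Provider = "openai" | "anthropic" | "bedrock";

async function callProvider(p: Provider, prompt: string): Promise<string> {
  // Hypothetical stand-in for a real provider call that can fail.
  if (Math.random() < 0.3) throw new Error(`${p} unavailable`);
  return `${p} answered: ${prompt}`;
}

async function withFailover(prompt: string, chain: Provider[]): Promise<string> {
  let lastError: unknown;
  for (const p of chain) {
    try {
      return await callProvider(p, prompt); // first healthy provider wins
    } catch (err) {
      lastError = err; // record and fall through to the next provider
    }
  }
  throw new Error(`all providers failed: ${lastError}`);
}

console.log(await withFailover("ping", ["openai", "anthropic", "bedrock"]));
```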
Run `npx -y @maximhq/bifrost` for a quick start or use Docker with `docker run -p 8080:8080 maximhq/bifrost`.
Supported providers include OpenAI, Anthropic, AWS Bedrock, Google Vertex, Azure, Cohere, Mistral, Ollama, Groq, and others.
Responses are cached based on semantic similarity to incoming prompts, so identical or near‑identical queries retrieve cached results, lowering latency and cost.
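In outline, a semantic cache keys each entry by an embedding of the prompt and serves a hit when a new prompt's embedding is close enough to a stored one. A toy sketch of that mechanism, where the similarity threshold and linear scan are illustrative placeholders rather than Bifrost's internals:

```ts
// Toy semantic cache: threshold and data structures are illustrative only.
type Entry = { vector: number[]; response: string };
const cache: Entry[] = [];
const THRESHOLD = 0.95; // cosine similarity required to count as a hit

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function lookup(vector: number[]): string | undefined {
  // Linear scan for clarity; a production gateway would use a vector index.
  for (const e of cache) {
    if (cosine(vector, e.vector) >= THRESHOLD) return e.response;
  }
  return undefined;
}

function store(vector: number[], response: string): void {
  cache.push({ vector, response });
}
```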
Yes: Bifrost exposes native Prometheus metrics, distributed tracing, and comprehensive logging for full visibility.
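Prometheus exporters conventionally serve plain-text metrics at `/metrics`; a quick way to confirm a local gateway is emitting them (the path is the Prometheus convention, assumed here rather than confirmed for Bifrost):

```ts
// Peek at the gateway's Prometheus metrics. /metrics is the conventional
// exporter path and an assumption here; adjust if Bifrost documents another.
const res = await fetch("http://localhost:8080/metrics");
const body = await res.text();
console.log(body.split("\n").slice(0, 10).join("\n")); // first few metric lines
```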
Project at a glance
Active · Last synced 4 days ago