

OpenLIT
Unified observability and management platform for LLM applications
OpenLIT streamlines AI development by providing OpenTelemetry-native tracing, cost tracking, prompt versioning, secret vaults, and dashboards for LLMs, vector DBs, and GPUs—all deployable via Docker or Helm.

OpenLIT is aimed at engineers and ops teams building production‑grade generative AI services who need end‑to‑end visibility, cost control, and secure management of prompts and API keys.
The platform offers OpenTelemetry‑native SDKs for tracing and metrics, an analytics dashboard that surfaces performance, cost, and exception data, a Prompt Hub for versioned prompt storage, and a Vault for encrypted secret handling. OpenGround lets you experiment with multiple LLM providers side‑by‑side, while built‑in cost tracking supports custom pricing files for fine‑tuned models.
Deploy the full stack with a single docker compose up -d command or via Helm on Kubernetes. After installing the SDK (pip install openlit), a one‑line openlit.init() call starts sending telemetry to the OpenTelemetry collector, which writes the data to ClickHouse and feeds the UI at localhost:3000.
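A minimal sketch of that flow in Python, assuming the OTLP HTTP endpoint the default Docker compose stack exposes (the endpoint and application name below are illustrative, not prescribed):

# Minimal sketch: start sending LLM telemetry to a local OpenLIT stack.
# The endpoint assumes the default Docker compose deployment; adjust it
# if your collector runs elsewhere.
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # assumed default collector endpoint
    application_name="my-llm-service",      # hypothetical service name
)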
Monitor LLM latency and token usage in production
Identify performance bottlenecks and optimize model selection, reducing response times by up to 30%.
Track cost of fine‑tuned models across multiple deployments
Maintain budgets with real‑time cost dashboards and custom pricing files (see the sketch after this list).
Version and share prompts securely across microservices
Ensure consistent prompt behavior and prevent accidental key leaks via the Prompt Hub and Vault.
Compare alternative LLM providers during experimentation
Use OpenGround to run side‑by‑side tests, selecting the best model before committing to production.
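As a concrete sketch of the cost‑tracking scenario above, the init call can be pointed at a custom pricing file for fine‑tuned models. The pricing_json parameter name and the file location below are assumptions based on the feature description:

# Sketch: track spend on fine-tuned models with a custom pricing file.
# pricing_json is assumed to accept a URL or local file path; the URL
# here is a placeholder.
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    pricing_json="https://example.com/my_pricing.json",  # hypothetical pricing file
)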
Install the OpenLIT SDK (pip install openlit for Python, or the openlit package on npm for Node.js) and call openlit.init() with your OTLP endpoint; traces are sent to the OpenLIT collector.
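Once init has run, calls made through supported client libraries are traced without further changes. A hedged Python sketch, where the OpenAI usage is illustrative and assumes an OPENAI_API_KEY in the environment:

# Sketch: after openlit.init(), supported LLM clients are
# auto-instrumented, so this completion call emits traces on its own.
import openlit
from openai import OpenAI

openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # assumed collector endpoint

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "Hello"}],
)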
The default Docker compose includes ClickHouse for storing metrics; you can replace it with another OTLP‑compatible backend if preferred.
The SDK follows OpenTelemetry semantic conventions, so you can point the exporter at any OTLP endpoint your stack already uses.
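For example, a sketch that reuses an existing collector via the standard OpenTelemetry environment variable; the fallback to this variable when no endpoint is passed explicitly is an assumption, as is the internal hostname:

# Sketch: route telemetry to a collector your stack already runs,
# relying on the standard OTel env var instead of an explicit argument.
import os
import openlit

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://otel-collector.internal:4318"
openlit.init()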
API keys and secrets are stored in the Vault component, encrypted at rest and accessed only through the SDK, avoiding hard‑coded credentials.
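A rough sketch of what runtime retrieval might look like; the get_secrets helper, its parameters, and the should_set_env behavior are assumptions modeled on the Vault description rather than confirmed API:

# Sketch: pull secrets from the OpenLIT Vault at startup instead of
# hard-coding them. Every name below is a placeholder or assumption.
import openlit

openlit.get_secrets(
    url="http://127.0.0.1:3000",   # hypothetical OpenLIT platform URL
    api_key="<platform-api-key>",  # placeholder credential
    should_set_env=True,           # assumed option to export secrets as env vars
)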
Future releases will add programmatic evaluation metrics and human‑feedback loops; these are marked as 'Coming Soon' in the roadmap.
Project at a glance
Active. Last synced 4 days ago