

Full‑stack observability for LLM applications via OpenTelemetry
OpenLLMetry extends OpenTelemetry to provide end‑to‑end tracing, metrics, and logs for LLM providers, vector databases, and AI frameworks, integrating with existing observability stacks such as Datadog, Honeycomb, and New Relic.

OpenLLMetry adds comprehensive observability to generative‑AI workloads by building on the OpenTelemetry standard. It targets developers and ops teams who already instrument services and now need visibility into LLM calls, prompt handling, and vector‑search operations.
The library ships with a lightweight SDK that initializes with a single line of code and automatically instruments popular LLM providers (OpenAI, Anthropic, Cohere, etc.), vector databases (Chroma, Pinecone, Qdrant, …), and AI frameworks such as LangChain and LlamaIndex. All data is emitted as standard OpenTelemetry spans, metrics, and logs, allowing seamless export to any backend you already use—Datadog, Honeycomb, New Relic, Grafana, and more.
Add the traceloop-sdk to your Python environment, call Traceloop.init(), and configure your preferred exporter or the OpenTelemetry Collector. The SDK also includes optional anonymous telemetry to help maintain compatibility, which can be disabled via an environment variable. This approach lets you adopt observability incrementally without rewriting existing instrumentation.
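The setup described above can be sketched in a few lines; this is a minimal sketch, assuming the traceloop-sdk package is installed. The app name and endpoint are placeholders, and OTEL_EXPORTER_OTLP_ENDPOINT is the standard OpenTelemetry exporter variable:

```python
import os

# Opt out of the SDK's anonymous telemetry (see the FAQ below).
os.environ["TRACELOOP_TELEMETRY"] = "false"

# Point the OTLP exporter at your collector or backend (standard
# OpenTelemetry variable; the endpoint here is a placeholder).
os.environ.setdefault("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318")

try:
    from traceloop.sdk import Traceloop

    # One-line initialization; instrumentation of supported LLM providers,
    # vector databases, and frameworks is automatic from here on.
    Traceloop.init(app_name="my-llm-service")
except ImportError:
    # traceloop-sdk is not installed in this environment.
    pass
```

Because the SDK emits standard OpenTelemetry data, this can sit alongside any instrumentation you already have.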
Debug LLM prompt failures
Trace each prompt, response, and token usage across providers, pinpointing latency spikes or error patterns.
Monitor vector DB similarity searches
Collect latency and success‑rate metrics for calls to Chroma, Pinecone, and Qdrant, feeding performance dashboards.
Integrate LLM traces into existing Datadog dashboards
Send OpenLLMetry spans to Datadog, correlating LLM activity with backend services for end‑to‑end observability.
Audit usage for cost and compliance
Aggregate token counts and model calls across providers, feeding reports to finance or governance tools.
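The audit use case above can be sketched with plain exported span data. The attribute names (gen_ai.request.model, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens) follow the OpenTelemetry GenAI semantic conventions; the spans themselves are hypothetical sample data, so verify the attribute keys against your SDK version:

```python
from collections import defaultdict

# Hypothetical exported spans with GenAI semantic-convention attributes.
spans = [
    {"attributes": {"gen_ai.request.model": "gpt-4o",
                    "gen_ai.usage.input_tokens": 1200,
                    "gen_ai.usage.output_tokens": 300}},
    {"attributes": {"gen_ai.request.model": "claude-3-haiku",
                    "gen_ai.usage.input_tokens": 800,
                    "gen_ai.usage.output_tokens": 150}},
    {"attributes": {"gen_ai.request.model": "gpt-4o",
                    "gen_ai.usage.input_tokens": 500,
                    "gen_ai.usage.output_tokens": 100}},
]

def usage_by_model(spans):
    """Aggregate token counts and call counts per model for reporting."""
    totals = defaultdict(lambda: {"input": 0, "output": 0, "calls": 0})
    for span in spans:
        attrs = span["attributes"]
        model = attrs.get("gen_ai.request.model", "unknown")
        totals[model]["input"] += attrs.get("gen_ai.usage.input_tokens", 0)
        totals[model]["output"] += attrs.get("gen_ai.usage.output_tokens", 0)
        totals[model]["calls"] += 1
    return dict(totals)

report = usage_by_model(spans)
# report["gpt-4o"] -> {"input": 1700, "output": 400, "calls": 2}
```

A report like this can be joined with per‑token pricing to estimate spend per model, or handed to governance tooling as a usage audit.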
Does OpenLLMetry replace my existing OpenTelemetry setup?
No. OpenLLMetry builds on top of OpenTelemetry, so you can add its instrumentations alongside your current configuration.
Which languages are supported?
The core library and SDK are Python‑based. A separate JavaScript/TypeScript version is available as OpenLLMetry‑JS.
Does the library collect any data itself?
Only the SDK collects anonymous usage data, which can be disabled via the TRACELOOP_TELEMETRY environment variable or an init flag.
Can I keep using my existing observability backend?
Yes. Any backend supported by OpenTelemetry (e.g., Datadog, Honeycomb, New Relic, or an OpenTelemetry Collector pipeline) works out of the box.
What does it cost?
The library is free under the Apache‑2.0 license; you pay only for the observability platform you choose to export data to.