
OpenLLMetry

Full‑stack observability for LLM applications via OpenTelemetry

OpenLLMetry extends OpenTelemetry to provide end‑to‑end tracing, metrics, and logs for LLM providers, vector databases, and AI frameworks, integrating with existing observability stacks such as Datadog, Honeycomb, and New Relic.


Overview

OpenLLMetry adds comprehensive observability to generative‑AI workloads by building on the OpenTelemetry standard. It targets developers and ops teams who already instrument services and now need visibility into LLM calls, prompt handling, and vector‑search operations.

Capabilities

The library ships with a lightweight SDK that initializes with a single line of code and automatically instruments popular LLM providers (OpenAI, Anthropic, Cohere, etc.), vector databases (Chroma, Pinecone, Qdrant, …), and AI frameworks such as LangChain and LlamaIndex. All data is emitted as standard OpenTelemetry spans, metrics, and logs, allowing seamless export to any backend you already use—Datadog, Honeycomb, New Relic, Grafana, and more.

Deployment

Add the traceloop-sdk to your Python environment, call Traceloop.init(), and configure your preferred exporter or the OpenTelemetry Collector. The SDK also includes optional anonymous telemetry to help maintain compatibility, which can be disabled via an environment variable. This approach lets you adopt observability incrementally without rewriting existing instrumentation.
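A minimal setup might look like the following sketch. It assumes the Traceloop.init() entry point and app_name keyword described in the project documentation; exporter defaults can differ by SDK version.

```python
# pip install traceloop-sdk
from traceloop.sdk import Traceloop

# One-line initialization. Exporter configuration can be supplied through
# environment variables or an explicit exporter (see the deployment notes above).
Traceloop.init(app_name="my-llm-service")

# From here, calls made through instrumented libraries (OpenAI, LangChain,
# Chroma, ...) are emitted automatically as OpenTelemetry spans.
```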

Highlights

Native OpenTelemetry compatibility
Instrumentation for major LLM providers, vector DBs, and AI frameworks
Plug‑and‑play SDK with one‑line initialization
Out‑of‑the‑box support for all major observability backends

Pros

  • Seamless integration with existing OpenTelemetry pipelines
  • Broad coverage of LLM providers, vector databases, and AI frameworks
  • Simple SDK initialization (single line)
  • Supports all major observability platforms out of the box

Considerations

  • Instrumentation covers only the providers, vector databases, and frameworks listed in the documentation
  • Anonymous usage telemetry is enabled by default and must be explicitly disabled
  • Python‑only core; JavaScript/TypeScript support lives in the separate OpenLLMetry‑JS repository
  • Instrumentation may add slight overhead to request latency

Managed products teams compare with

When teams consider OpenLLMetry, these hosted platforms usually appear on the same shortlist.


Confident AI

DeepEval-powered LLM evaluation platform to test, benchmark, and safeguard apps


Datadog

Observability platform for metrics, logs, and traces


Dynatrace

All‑in‑one observability with AI‑assisted root cause

Looking for a hosted option? These are the services engineering teams benchmark against before choosing open source.

Fit guide

Great for

  • Teams already using OpenTelemetry for services
  • Developers building GenAI applications needing traceability
  • Organizations that want to funnel LLM metrics into existing monitoring tools
  • Projects requiring compliance‑ready, anonymous usage telemetry

Not ideal when

  • Environments where zero runtime overhead is mandatory
  • Non‑Python LLM stacks without a dedicated instrumentation library
  • Use cases that need fine‑grained custom metrics beyond provided conventions
  • Teams unwilling to send any telemetry, even anonymous

How teams use it

Debug LLM prompt failures

Trace each prompt, response, and token usage across providers, pinpointing latency spikes or error patterns.
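As an illustration, the SDK's decorators can group auto-instrumented LLM calls under a named trace, which makes a failing prompt easier to locate. This is a sketch that assumes the workflow decorator shipped with traceloop-sdk; the model name is only an example.

```python
from openai import OpenAI
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

Traceloop.init(app_name="prompt-debugging")
client = OpenAI()

@workflow(name="summarize_ticket")  # groups the spans below under one named trace
def summarize(ticket_text: str) -> str:
    # The chat call is auto-instrumented: prompt, response, token usage,
    # and latency are recorded as span attributes.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model, substitute your own
        messages=[{"role": "user", "content": f"Summarize: {ticket_text}"}],
    )
    return response.choices[0].message.content
```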

Monitor vector DB similarity searches

Collect latency and success metrics for Chroma, Pinecone, and Qdrant calls to power performance dashboards.

Integrate LLM traces into existing Datadog dashboards

Send OpenLLMetry spans to Datadog, correlating LLM activity with backend services for end‑to‑end observability.
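One way to wire this up, sketched under the assumption that Traceloop.init() accepts a custom OpenTelemetry span exporter (check your SDK version): point an OTLP exporter at an OpenTelemetry Collector or the Datadog Agent's OTLP intake and let it forward spans to Datadog.

```python
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from traceloop.sdk import Traceloop

# Send spans to a local OpenTelemetry Collector (or the Datadog Agent's OTLP
# endpoint), which forwards them to Datadog alongside your other service traces.
exporter = OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")

Traceloop.init(app_name="checkout-service", exporter=exporter)
```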

Audit usage for cost and compliance

Aggregate token counts and model calls across providers, feeding reports to finance or governance tools.

Tech snapshot

Python 100%
Shell 1%

Tags

ml, open-source, model-monitoring, metrics, observability, help-wanted, datascience, generative-ai, opentelemetry, llm, open-telemetry, python, artifical-intelligence, good-first-issues, monitoring, good-first-issue, llmops, opentelemetry-python

Frequently asked questions

Do I need to replace my existing OpenTelemetry setup?

No. OpenLLMetry builds on top of OpenTelemetry, so you can add its instrumentations alongside your current configuration.
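For example, individual instrumentation packages can be enabled against a tracer provider you already configure elsewhere. This is a sketch assuming the opentelemetry-instrumentation-openai package and its OpenAIInstrumentor class published from this repository; verify the names for the providers you use.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Existing setup: your own provider, exporters, and processors stay as they are.
provider = TracerProvider()
trace.set_tracer_provider(provider)

# Add only the LLM instrumentation on top of the existing pipeline.
OpenAIInstrumentor().instrument()
```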

Which languages are supported?

The core library and SDK are Python‑based. A separate JavaScript/TypeScript version is available as OpenLLMetry‑JS.

How is telemetry data handled?

Only the SDK collects anonymous usage data, which can be disabled via the TRACELOOP_TELEMETRY environment variable or init flag.
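A minimal opt-out sketch using the environment variable mentioned above; the exact init flag name can vary by version, so only the variable is shown here, set before the SDK is initialized.

```python
import os

# Disable the SDK's anonymous usage telemetry before initialization.
os.environ["TRACELOOP_TELEMETRY"] = "false"

from traceloop.sdk import Traceloop

Traceloop.init(app_name="my-llm-service")
```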

Can I send data to any backend?

Yes. Any backend supported by OpenTelemetry (e.g., Datadog, Honeycomb, New Relic, OpenTelemetry Collector) works out of the box.

Is there a cost to use OpenLLMetry?

The library is free under the Apache‑2.0 license; you only pay for the observability platform you choose to export data to.

Project at a glance

Active
Stars: 6,776
Watchers: 6,776
Forks: 869
License: Apache-2.0
Repo age: 2 years
Last commit: 4 hours ago
Primary language: Python

Last synced 3 hours ago