VizTracer

Low-overhead Python tracer with interactive Perfetto visualizations

VizTracer records function entry/exit, threading, async, and PyTorch events with minimal impact, then visualizes traces in a Perfetto‑based UI that handles gigabytes of data smoothly.

Overview

VizTracer is a lightweight tracing tool for Python that captures detailed function entry and exit information, source code locations, and runtime events such as threading, multiprocessing, async coroutines, and PyTorch GPU activity. It requires little to no code change—simply run your script with viztracer or use the provided context manager—and produces a JSON trace that can be opened in the built‑in vizviewer UI.
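
Both entry points are shown in this minimal sketch (the script and function names are placeholders):

```python
# Command line: trace a whole script, then open the result in the viewer.
#   viztracer my_script.py      # writes result.json by default
#   vizviewer result.json       # serves the Perfetto-based UI locally
#
# Python API: trace only a region of code with the context manager.
from viztracer import VizTracer

def busy_work(n):
    return sum(i * i for i in range(n))

with VizTracer(output_file="result.json"):
    busy_work(1_000_000)
```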

How it works

The trace data is rendered by a Perfetto‑powered front‑end, offering smooth navigation, WASD keyboard zooming and panning, flame‑graph generation, and the ability to handle GB‑scale files. Advanced features include customizable filters, extra variable logging, and custom events, while optional orjson support speeds up JSON serialization. VizTracer runs on Linux, macOS, and Windows, and supports attaching to an already‑running process, making it suitable for a wide range of development and debugging workflows.
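
As a sketch of how those options surface in the Python API (keyword names follow the documented `VizTracer` constructor; the traced loop is a placeholder):

```python
from viztracer import VizTracer

# Limit trace depth and skip C-level calls to keep the output small,
# then attach extra data to the timeline with custom events.
tracer = VizTracer(
    output_file="filtered.json",
    max_stack_depth=10,        # don't record frames deeper than this
    ignore_c_function=True,    # skip C function calls
)

tracer.start()
for batch in range(3):
    tracer.log_instant(f"batch {batch} start")   # instant event marker
    result = sum(i * i for i in range(10_000))
    tracer.log_var("batch_result", result)       # log a variable's value
tracer.stop()
tracer.save()
```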

Highlights

Detailed timeline with source code and function arguments
Zero‑code‑change usage via CLI or context manager
Supports threading, multiprocessing, async, and PyTorch profiling
Perfetto UI renders gigabyte‑scale traces smoothly

Pros

  • Minimal runtime overhead for typical workloads
  • Cross‑platform support (Linux, macOS, Windows)
  • Rich, interactive visualization without external dependencies
  • Flexible filtering and custom event insertion

Considerations

  • Overhead can increase for highly recursive functions
  • Very large traces may require the external processor flag
  • Full feature set (e.g., sys.monitoring) needs Python 3.12+
  • Visualization runs a local HTTP server, which may be restricted

Managed products teams compare with

When teams consider VizTracer, these hosted platforms usually appear on the same shortlist.

Blackfire Continuous Profiler

Low-overhead continuous profiling for app performance optimization.

Datadog Continuous Profiler

Always-on code profiling to cut latency and cloud costs.

Elastic Universal Profiling

Whole-system, always-on profiling with no instrumentation.

Looking for a hosted option? These are the services engineering teams benchmark against before choosing open source.

Fit guide

Great for

  • Developers needing quick insight into performance bottlenecks
  • Teams debugging multi‑threaded or async Python applications
  • Data scientists profiling PyTorch training loops
  • Engineers who prefer CLI‑driven tracing without extra packages

Not ideal when

  • Scenarios demanding sub‑microsecond profiling precision
  • Environments where running a local web server is prohibited
  • Projects with extremely deep recursion where overhead spikes
  • Users requiring out‑of‑the‑box HTML reports without a viewer

How teams use it

Debugging a Flask web service

Identify slow request handlers and middleware latency
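
One hedged pattern for this, assuming Flask is installed: decorate the handlers you care about with `log_sparse` and start the app under `viztracer --log_sparse`, so only those handlers are recorded (the route and workload are illustrative):

```python
from flask import Flask
from viztracer import log_sparse

app = Flask(__name__)

@app.route("/report")
@log_sparse                                   # recorded only when run with --log_sparse
def report():
    rows = [i * i for i in range(100_000)]    # stand-in for real handler work
    return {"count": len(rows)}

if __name__ == "__main__":
    app.run()

# Run with:  viztracer --log_sparse app.py
```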

Profiling a PyTorch training loop

Visualize GPU kernel timings and Python‑level function calls
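
A hedged sketch of that workflow with the Python API (the model, optimizer, and data are placeholders; `log_torch=True` mirrors the `--log_torch` flag covered in the FAQ below):

```python
import torch
from viztracer import VizTracer

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
data = torch.randn(64, 128)
target = torch.randint(0, 10, (64,))

# Trace a few training steps; PyTorch events are captured alongside Python calls.
with VizTracer(output_file="train_step.json", log_torch=True):
    for _ in range(5):
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()
```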

Analyzing an async data pipeline

See coroutine scheduling, I/O waits, and event ordering
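
A hedged sketch, assuming the `log_async` keyword mirrors the `--log_async` CLI option that separates coroutines onto their own tracks (the coroutines below are placeholders):

```python
import asyncio
from viztracer import VizTracer

async def fetch(i):
    await asyncio.sleep(0.01 * i)   # stand-in for network or disk I/O
    return i

async def pipeline():
    results = await asyncio.gather(*(fetch(i) for i in range(5)))
    return sum(results)

with VizTracer(output_file="pipeline.json", log_async=True):
    asyncio.run(pipeline())
```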

Tracing a multi‑process ETL job

Correlate subprocess activities across workers in a single view
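
A hedged sketch (the worker and chunking are placeholders): write the job as an ordinary `multiprocessing` script and launch it under the viztracer CLI, which per the project documentation can collect child‑process traces into one combined report.

```python
from multiprocessing import Pool

def transform(chunk):
    # Stand-in for a real extract/transform step
    return sum(x * x for x in chunk)

def main():
    chunks = [range(i * 10_000, (i + 1) * 10_000) for i in range(4)]
    with Pool(4) as pool:
        print(sum(pool.map(transform, chunks)))

if __name__ == "__main__":
    main()

# Run with:  viztracer etl.py
# Worker processes appear as separate tracks in the combined trace.
```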

Tech snapshot

Python 65%
C 27%
C++ 8%
Makefile 1%

Tags

python3, visualization, debugging, tracer, python, profiling, logging, flamegraph

Frequently asked questions

How do I install VizTracer?

Run `pip install viztracer` to install the package from PyPI.

Do I need to modify my code to start tracing?

No. You can invoke `viztracer` from the command line or use the `VizTracer` context manager; most features work without code changes.

Can VizTracer handle large trace files?

Yes, the Perfetto UI can render gigabyte‑scale traces, and the `--use_external_processor` flag helps with very large files.

Is there support for profiling PyTorch GPU events?

Enable with `--log_torch` or `log_torch=True`; VizTracer integrates with `torch.profiler` to capture native calls and GPU timings.

What platforms are supported?

VizTracer runs on Linux, macOS, and Windows.

Project at a glance

Active
Stars
7,516
Watchers
7,516
Forks
471
License
Apache-2.0
Repo age
5 years old
Last commit
4 days ago
Primary language
Python

Last synced 2 days ago