smolagents

Run sandboxed code agents with minimal code and any LLM

smolagents lets developers create and execute code‑driven agents in a few lines, supporting any LLM, multimodal inputs, and secure sandbox runtimes such as Docker, E2B, Modal, or WebAssembly.

Overview

smolagents is a lightweight library that lets you build powerful AI agents with just a few lines of Python. Designed for developers and researchers, it abstracts away complex orchestration while keeping the core logic under ~1,000 lines, making the code easy to read and modify.

Capabilities & Deployment

The library ships a first‑class CodeAgent that writes its actions as Python code and runs them in isolated sandboxes—Docker, E2B, Modal, or Pyodide+Deno—so generated code never compromises the host. It is model‑agnostic, supporting any LLM via LiteLLM, HuggingFace Hub, OpenAI‑compatible APIs, local Transformers, Azure, Bedrock, and more. Agents accept text, vision, video, and audio inputs and can call tools from the HuggingFace Hub, LangChain, or custom MCP servers. A simple CLI (`smolagent` and `webagent`) enables quick experimentation without writing extra boilerplate.
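The description above can be made concrete with a minimal sketch. This assumes the published smolagents API (`CodeAgent`, `InferenceClientModel`, `WebSearchTool`); `InferenceClientModel` calls a Hub‑hosted model, so a Hugging Face API token is required, and exact class names may differ between versions.

```python
# Minimal smolagents quickstart (sketch; requires a Hugging Face token).
from smolagents import CodeAgent, InferenceClientModel, WebSearchTool

# Defaults to a Hub-hosted model; swap in LiteLLMModel, TransformersModel,
# or an OpenAI-compatible wrapper for other providers.
model = InferenceClientModel()

# The agent writes its actions as Python code and executes them step by step.
agent = CodeAgent(tools=[WebSearchTool()], model=model)

result = agent.run("What was the Python release closest to the first Moon landing?")
print(result)
```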

Highlights

Minimal code footprint (~1,000 lines) for full agent logic
CodeAgent writes actions as Python and executes in secure sandboxes (Docker, E2B, Modal, Pyodide+Deno)
Model‑agnostic architecture works with OpenAI, Anthropic, Together, HuggingFace, local Transformers, Azure, Bedrock, etc.
Modality‑agnostic support for text, vision, video, and audio with Hub sharing for agents and tools

Pros

  • Simple API lets you spin up agents with a single import
  • Broad LLM compatibility avoids vendor lock‑in
  • Secure execution options protect host environments
  • Community‑driven Hub sharing accelerates reuse

Considerations

  • Sandbox setup may require external services (E2B, Modal) or Docker
  • Advanced multimodal pipelines can add complexity
  • Limited to Python code actions; non‑Python environments need wrappers
  • Documentation is still evolving; some features lack extensive examples

Managed products teams compare with

When teams consider smolagents, these hosted platforms usually appear on the same shortlist.

CrewAI

Multi-agent automation framework & studio to build and run AI crews

LangGraph

Open-source framework for building stateful, long-running AI agents

Relevance AI

No-code platform to build a team of AI agents with rich integrations

Looking for a hosted option? These are the services engineering teams benchmark against before choosing open source.

Fit guide

Great for

  • Rapid prototyping of AI assistants that need to run code
  • Teams that want to experiment with multiple LLM providers
  • Projects requiring secure execution of generated code
  • Developers looking to share reusable agents via the HuggingFace Hub

Not ideal when

  • Use cases demanding native non‑Python runtimes
  • Environments without Docker or internet access to sandbox services
  • Scenarios where ultra‑low latency is critical and sandbox overhead is unacceptable
  • Users needing a full‑featured orchestration platform beyond single‑agent workflows

How teams use it

Automated data analysis report

Agent fetches datasets, writes pandas scripts, executes them in Docker, and returns a formatted summary.

Web‑based product price scraper

The `webagent` CLI navigates e‑commerce pages, extracts product details, and returns price information in seconds.

Multimodal image captioning

Agent receives an image, calls a vision model, generates descriptive text, and stores results in a Hub Space.

Cross‑provider LLM benchmarking

Runs identical prompts across OpenAI, Anthropic, and local models, collecting latency and quality metrics for comparison.

Tech snapshot

Python 100%
Makefile 1%

Frequently asked questions

How does smolagents ensure the safety of generated code?

CodeAgent executes actions inside isolated sandboxes such as Docker, E2B, Modal, or Pyodide+Deno, preventing arbitrary code from affecting the host system.
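As a sketch, selecting a sandbox backend is a constructor option. The `executor_type` parameter is taken from the smolagents docs; the chosen backend must be available locally (a running Docker daemon here), and valid values may vary by version.

```python
# Sketch: run the agent's generated Python inside a Docker sandbox
# rather than the local interpreter, so it cannot touch the host.
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),
    executor_type="docker",  # alternatives per the docs: "e2b", "modal", ...
)
agent.run("Compute the 20th Fibonacci number.")
```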

Can I use a locally hosted model?

Yes, the TransformersModel wrapper lets you load any HuggingFace model on your hardware and use it as the agent’s LLM.
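A hedged sketch of the local path, assuming the documented `TransformersModel` wrapper; the `model_id` below is illustrative, and the `transformers`/`torch` dependencies must be installed with hardware able to run the chosen model.

```python
# Sketch: drive the agent with a locally loaded Hugging Face model.
from smolagents import CodeAgent, TransformersModel

model = TransformersModel(model_id="Qwen/Qwen2.5-Coder-7B-Instruct")
agent = CodeAgent(tools=[], model=model)
agent.run("Sort the list [3, 1, 2] and explain each step.")
```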

What tools can I attach to an agent?

Agents accept any callable tool, including HuggingFace Hub spaces, LangChain utilities, or custom functions exposed via MCP servers.
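For a custom function, a minimal sketch using the documented `@tool` decorator looks like this; the type hints and docstring matter, since the agent reads them to learn what the tool does. The conversion tool itself is a made‑up example.

```python
# Sketch: expose a plain Python function as an agent tool.
from smolagents import CodeAgent, InferenceClientModel, tool

@tool
def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit.

    Args:
        celsius: Temperature in degrees Celsius.
    """
    return celsius * 9 / 5 + 32

agent = CodeAgent(tools=[celsius_to_fahrenheit], model=InferenceClientModel())
agent.run("What is 21.5 °C in Fahrenheit?")
```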

Is there a command‑line interface?

Two CLI commands are provided: `smolagent` for general multi‑step agents and `webagent` for focused web‑browsing tasks.
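Illustrative invocations of both commands follow; the flags mirror the documented CLI but may differ between releases, and the prompts are placeholders.

```shell
# General multi-step agent from the terminal.
smolagent "Plan a weekend trip to Tokyo on a small budget" \
  --model-type InferenceClientModel

# Focused web-browsing agent.
webagent "Find the cheapest iPhone currently listed on apple.com" \
  --model-type LiteLLMModel --model-id gpt-4o
```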

Do I need to install extra dependencies for sandboxing?

Sandbox backends are optional; installing the `[toolkit]` extra pulls common tools, and you can add Docker, E2B, or Modal SDKs as needed.
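A typical install sequence, as a sketch: the `[toolkit]` extra comes from the answer above, while the sandbox SDK package names (`docker`, `e2b-code-interpreter`, `modal`) are the standard PyPI packages those backends use.

```shell
pip install "smolagents[toolkit]"   # core library plus common default tools

# Sandbox backends are opt-in; install only what you use:
pip install docker                  # Python Docker SDK for the Docker executor
pip install e2b-code-interpreter    # E2B sandbox SDK
pip install modal                   # Modal SDK
```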

Project at a glance

Active
Stars
25,026
Watchers
25,026
Forks
2,257
License
Apache-2.0
Repo age
1 year old
Last commit
3 hours ago
Primary language
Python
