OneTrainer

Comprehensive UI and CLI for training diffusion models

OneTrainer streamlines diffusion model training with support for more than 20 architectures, full fine‑tuning, LoRA, masked training, automatic backups, augmentation, TensorBoard integration, and multi‑resolution handling via an intuitive GUI or CLI.

Overview

OneTrainer is designed for researchers, artists, and developers who need a reliable, all‑in‑one environment to train diffusion models. It covers a wide spectrum of popular architectures—from Stable Diffusion and SDXL to FLUX.1 and Qwen Image—so users can experiment without juggling multiple tools.

Core Capabilities

The platform offers full fine‑tuning, LoRA, and embedding training, plus masked training to focus on specific image regions. Integrated dataset tooling automatically generates captions and masks using BLIP, ClipSeg, or Rembg. Image augmentation, aspect‑ratio bucketing, multi‑resolution training, EMA handling, and a rescaled noise scheduler further boost model quality. Real‑time progress is visualized via TensorBoard, and a sampling UI lets you preview results during training.
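
The aspect‑ratio bucketing mentioned above is a standard technique: images are grouped into resolution "buckets" so each batch shares one size while preserving aspect ratios. The sketch below illustrates the idea in plain Python; the bucket list and the assign_bucket helper are hypothetical and not taken from OneTrainer's code.

    # Illustrative aspect-ratio bucketing: assign each image to the bucket
    # whose aspect ratio is closest, so batches can share one resolution.
    # Bucket sizes and this helper are hypothetical, not OneTrainer internals.
    BUCKETS = [(512, 512), (576, 448), (448, 576), (640, 384), (384, 640)]

    def assign_bucket(width: int, height: int) -> tuple[int, int]:
        """Pick the bucket whose aspect ratio best matches the image."""
        ratio = width / height
        return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - ratio))

    images = [("portrait.png", 400, 600), ("landscape.png", 800, 500)]
    for name, w, h in images:
        print(name, "->", assign_bucket(w, h))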

Deployment

Installation works on Windows, Linux, and macOS with Python 3.10‑3.12. Users can launch a polished GUI or invoke the same functionality through a CLI for headless or scripted workflows. Automatic backups capture the full training state, enabling seamless resume after interruptions.
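
For headless or scripted runs, a training configuration exported from the GUI can be handed to the CLI. Below is a minimal sketch; the scripts/train.py path and --config-path flag are assumptions to verify against the current README, and the config path is hypothetical.

    # Minimal sketch of launching a headless training run from Python.
    # The script path and --config-path flag are assumptions; verify them
    # against the OneTrainer documentation. The config file is hypothetical.
    import subprocess
    import sys

    config = "configs/my_lora_run.json"  # exported from the GUI beforehand

    result = subprocess.run(
        [sys.executable, "scripts/train.py", "--config-path", config],
        check=False,
    )
    print("training exited with code", result.returncode)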

Highlights

Supports 20+ diffusion architectures (e.g., Stable Diffusion, SDXL, FLUX.1, Qwen Image)
Full fine‑tuning, LoRA, and embedding training in one interface
Automatic dataset captioning, mask generation, and image augmentation pipelines
Integrated TensorBoard, EMA, aspect‑ratio bucketing, and multi‑resolution training

Pros

  • Broad model compatibility reduces tool switching
  • Both GUI and CLI provide flexibility for different workflows
  • Automatic backups enable easy resume of long training runs
  • Rich augmentation and scheduling options improve model quality

Considerations

  • Requires Python 3.10‑3.12, limiting use on newer interpreters
  • GPU‑intensive tasks need substantial VRAM
  • Some systems need extra libraries before installing (e.g., libGL on Linux, tkinter)
  • Advanced CLI options have a learning curve for newcomers

Managed products teams compare with

When teams consider OneTrainer, these hosted platforms usually appear on the same shortlist.

Amazon SageMaker JumpStart

ML hub with curated foundation models, pretrained algorithms, and solution templates you can deploy and fine-tune in SageMaker

Cohere

Enterprise AI platform providing LLMs (Command, Aya) plus Embed/Rerank for retrieval

Replicate

API-first platform to run, fine-tune, and deploy AI models without managing infrastructure

Looking for a hosted option? These are the services engineering teams benchmark against before choosing open source.

Fit guide

Great for

  • Researchers needing rapid prototyping of diffusion models
  • Artists who want custom fine‑tuned models without scripting
  • Teams requiring reproducible training with automatic backups
  • Developers integrating training into pipelines via the CLI

Not ideal when

  • Users on Python 3.13 or newer
  • Low‑end hardware lacking CUDA or sufficient VRAM
  • Those seeking a lightweight, command‑only trainer without a GUI
  • Projects whose licensing requirements are incompatible with AGPL‑3.0

How teams use it

Custom Style Fine‑Tuning

Create a model that reproduces a specific artistic style across varied resolutions.

LoRA Adapter Development

Generate lightweight LoRA weights for quick style transfer without retraining the full model.
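
As a reminder of why LoRA adapters are lightweight: the frozen base weight is augmented with a trainable low‑rank update, so only a small fraction of parameters is learned. The sketch below is a generic illustration with made‑up tensor shapes, not OneTrainer internals.

    # Conceptual LoRA: W stays frozen; only the low-rank factors A and B train.
    # Shapes and rank are illustrative.
    import torch

    d_out, d_in, rank = 768, 768, 8
    W = torch.randn(d_out, d_in)                            # frozen base weight
    A = (torch.randn(rank, d_in) * 0.01).requires_grad_()   # trainable
    B = torch.zeros(d_out, rank, requires_grad=True)        # trainable, starts at 0

    x = torch.randn(4, d_in)          # a batch of activations
    y = x @ (W + B @ A).T             # adapted forward pass

    print("trainable params:", A.numel() + B.numel(), "vs full weight:", W.numel())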

Dataset Expansion with Automated Captions

Automatically caption large image collections using BLIP, then train a model with enriched textual context.
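
OneTrainer exposes captioning through its dataset tools; the sketch below shows the same idea done directly with the Hugging Face BLIP checkpoint, in case you want to pre‑caption images outside the UI. File paths are hypothetical.

    # Caption a single image with BLIP via Hugging Face transformers.
    # Paths are hypothetical; OneTrainer's built-in tooling handles whole folders.
    from PIL import Image
    from transformers import BlipForConditionalGeneration, BlipProcessor

    model_id = "Salesforce/blip-image-captioning-base"
    processor = BlipProcessor.from_pretrained(model_id)
    model = BlipForConditionalGeneration.from_pretrained(model_id)

    image = Image.open("dataset/img_0001.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(output[0], skip_special_tokens=True)

    with open("dataset/img_0001.txt", "w", encoding="utf-8") as f:
        f.write(caption)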

Masked Inpainting Training

Train an inpainting model using automatically generated masks to focus learning on targeted regions.
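
Masks for this workflow can also be produced outside the UI with Rembg. The snippet below is an illustrative sketch; the only_mask flag exists in recent rembg releases (verify against your installed version), and the file paths and naming are hypothetical.

    # Generate a foreground mask with rembg for masked training.
    # only_mask=True returns the segmentation mask instead of the cut-out image.
    from PIL import Image
    from rembg import remove

    image = Image.open("dataset/img_0001.png")   # hypothetical input path
    mask = remove(image, only_mask=True)         # white subject on black background
    mask.save("dataset/img_0001-mask.png")       # hypothetical naming convention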

Tech snapshot

Python 98%
Shell 1%
Batchfile 1%
Dockerfile 1%

Tags

training, fine-tuning, lora, image-model-training

Frequently asked questions

What Python versions are supported?

OneTrainer works with Python 3.10, 3.11, and 3.12.

How does automatic backup work?

During training the system periodically saves the full training state, including model weights, optimizer state, and scheduler settings, allowing seamless resume.
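
Conceptually, a full‑state backup holds more than the model weights. The generic PyTorch sketch below shows the kind of information such a checkpoint carries; it illustrates the concept only and is not OneTrainer's actual backup format.

    # Generic illustration of a full training-state checkpoint
    # (not OneTrainer's actual backup format).
    import torch

    def save_backup(path, model, optimizer, scheduler, step):
        torch.save({
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict(),
            "step": step,
        }, path)

    def load_backup(path, model, optimizer, scheduler):
        state = torch.load(path, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        scheduler.load_state_dict(state["scheduler"])
        return state["step"]   # step to resume from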

Can I resume training after an interruption?

Yes, the saved backup contains everything needed to continue training from the last checkpoint.

Do I need to convert models before training?

OneTrainer includes a UI tool that converts between diffusers and ckpt formats, so conversion is optional.
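
If you prefer to convert outside OneTrainer, the diffusers library can load a single‑file checkpoint and re‑save it in the diffusers folder layout. This is an illustrative alternative to the built‑in converter; paths are hypothetical.

    # Convert a single-file (.safetensors/.ckpt) checkpoint to the diffusers layout.
    # Paths are hypothetical; OneTrainer's UI converter makes this step optional.
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file("models/my_model.safetensors")
    pipe.save_pretrained("models/my_model_diffusers")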

Where can I find documentation and support?

Visit the project's wiki for tutorials, use the Discussions page for questions, or join the Discord community for real‑time help.

Project at a glance

Active
Stars 2,708
Watchers 2,708
Forks 259
License AGPL-3.0
Repo age 2 years old
Last commit 2 days ago
Primary language Python

Last synced 4 hours ago