
Comprehensive UI and CLI for training diffusion models
OneTrainer streamlines diffusion model training with support for dozens of architectures, full fine‑tuning, LoRA, masked training, automatic backups, image augmentation, TensorBoard logging, and multi‑resolution handling via an intuitive GUI or CLI.
OneTrainer is designed for researchers, artists, and developers who need a reliable, all‑in‑one environment to train diffusion models. It covers a wide spectrum of popular architectures—from Stable Diffusion and SDXL to FLUX.1 and Qwen Image—so users can experiment without juggling multiple tools.
The platform offers full fine‑tuning, LoRA, and embedding training, plus masked training to focus on specific image regions. Integrated dataset tooling automatically generates captions and masks using BLIP, ClipSeg, or Rembg. Image augmentation, aspect‑ratio bucketing, multi‑resolution training, EMA handling, and a rescaled noise scheduler further boost model quality. Real‑time progress is visualized via TensorBoard, and a sampling UI lets you preview results during training.
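Aspect‑ratio bucketing in particular is worth unpacking: images are grouped into resolution buckets of roughly equal pixel area so that each batch contains uniformly shaped tensors without cropping everything to a square. The sketch below is a minimal, generic illustration of the idea rather than OneTrainer's internal implementation; the bucket list and helper names are assumed values.

```python
from PIL import Image

# Hypothetical bucket resolutions (width, height), all close to 1024*1024
# pixels and divisible by 64, as is common for SDXL-class models.
BUCKETS = [(1024, 1024), (896, 1152), (1152, 896), (832, 1216), (1216, 832)]

def pick_bucket(width: int, height: int) -> tuple[int, int]:
    """Choose the bucket whose aspect ratio is closest to the image's."""
    aspect = width / height
    return min(BUCKETS, key=lambda b: abs(b[0] / b[1] - aspect))

def load_and_resize(path: str) -> Image.Image:
    """Resize an image into its bucket so same-shaped images can be batched."""
    img = Image.open(path).convert("RGB")
    bucket = pick_bucket(*img.size)
    return img.resize(bucket, Image.LANCZOS)
```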
Installation works on Windows, Linux, and macOS with Python 3.10‑3.12. Users can launch a polished GUI or invoke the same functionality through a CLI for headless or scripted workflows. Automatic backups capture the full training state, enabling seamless resume after interruptions.
When teams consider OneTrainer, these hosted platforms usually appear on the same shortlist.

Amazon SageMaker JumpStart: ML hub with curated foundation models, pretrained algorithms, and solution templates you can deploy and fine-tune in SageMaker

Cohere: Enterprise AI platform providing LLMs (Command, Aya) plus Embed/Rerank for retrieval

API-first platform to run, fine-tune, and deploy AI models without managing infrastructure
Custom Style Fine‑Tuning
Create a model that reproduces a specific artistic style across varied resolutions.
LoRA Adapter Development
Generate lightweight LoRA weights for quick style transfer without retraining the full model.
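As a hedged sketch of how such an adapter might be used after training, assuming the LoRA has been exported in a diffusers-compatible safetensors file (the model ID, output path, and file name below are placeholders, not OneTrainer defaults):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a base SDXL checkpoint; the model ID is just an example.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the trained LoRA weights; path and file name are hypothetical.
pipe.load_lora_weights("./output", weight_name="my_style_lora.safetensors")

image = pipe("a castle in the trained style", num_inference_steps=30).images[0]
image.save("styled_castle.png")
```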
Dataset Expansion with Automated Captions
Automatically caption large image collections using BLIP, then train a model with enriched textual context.
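OneTrainer handles this through its built-in dataset tools; as a point of reference, a standalone captioning pass with BLIP via Hugging Face transformers looks roughly like the sketch below, which writes one sidecar .txt caption per image (the model ID and folder names are illustrative):

```python
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

# Caption every image in a folder and write a .txt file next to each image,
# a common layout for diffusion training datasets.
for image_path in Path("dataset").glob("*.jpg"):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40)
    caption = processor.decode(output_ids[0], skip_special_tokens=True)
    image_path.with_suffix(".txt").write_text(caption)
```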
Masked Inpainting Training
Train an inpainting model using automatically generated masks to focus learning on targeted regions.
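As a rough standalone illustration of automatic mask generation with Rembg (not OneTrainer's internal tooling; directory names are placeholders), the alpha channel of a background-removed image can serve as the training mask:

```python
from pathlib import Path
from PIL import Image
from rembg import remove

mask_dir = Path("dataset/masks")
mask_dir.mkdir(parents=True, exist_ok=True)

for image_path in Path("dataset/images").glob("*.png"):
    image = Image.open(image_path).convert("RGB")
    # remove() returns an RGBA image whose alpha channel separates the
    # foreground from the background; that channel becomes the mask.
    cutout = remove(image)
    mask = cutout.getchannel("A")
    mask.save(mask_dir / image_path.name)
```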
OneTrainer works with Python 3.10, 3.11, and 3.12.
During training, OneTrainer periodically saves the full training state, including model weights, optimizer state, and scheduler settings, so the saved backup contains everything needed to resume from the last checkpoint after an interruption.
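The backup layout itself is OneTrainer's own, but conceptually a full training-state checkpoint follows the familiar PyTorch pattern of bundling model, optimizer, and scheduler state together, for example:

```python
import torch

def save_checkpoint(path, model, optimizer, scheduler, step):
    # Persist everything needed to resume exactly where training stopped.
    torch.save(
        {
            "step": step,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict(),
        },
        path,
    )

def load_checkpoint(path, model, optimizer, scheduler):
    # Restore weights, optimizer moments, and the LR schedule in one shot.
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    scheduler.load_state_dict(state["scheduler"])
    return state["step"]
```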
OneTrainer includes a UI tool that converts between diffusers and ckpt formats, so there is no need for a separate conversion step with external scripts.
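For a sense of what such a conversion involves, the sketch below uses the diffusers library directly rather than OneTrainer's converter; the file names are placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a single-file ckpt/safetensors checkpoint and re-save it in the
# multi-folder diffusers layout.
pipe = StableDiffusionPipeline.from_single_file(
    "model.safetensors", torch_dtype=torch.float16
)
pipe.save_pretrained("converted-diffusers-model")
```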
Visit the project's wiki for tutorials, use the Discussions page for questions, or join the Discord community for real‑time help.
Project at a glance
Active · Last synced 4 days ago