

Fine-tune LLMs fast with a flexible, scalable open-source framework
Axolotl streamlines fine-tuning of LLMs and multimodal models, offering LoRA, QLoRA, QAT, DPO, and multi-GPU/multi-node support via simple YAML configs and Docker/PyPI deployment.

Axolotl is a free, community‑driven framework that simplifies post‑training and fine‑tuning of the latest large language and multimodal models. It targets researchers, ML engineers, and teams who need to adapt models such as GPT‑OSS, LLaMA, Mistral, Mixtral, Pixtral, or Voxtral to custom data, while offering a unified YAML‑based workflow that covers dataset preprocessing, training, evaluation, quantization, and inference.
The toolkit supports a broad spectrum of training methods—including full‑parameter fine‑tuning, LoRA, QLoRA, GPTQ, QAT, preference tuning (DPO, IPO, KTO, ORPO), GRPO, and reward modelling—paired with performance optimizations like Flash Attention, Sequence Parallelism, and multi‑GPU/‑node strategies (FSDP, DeepSpeed, Torchrun, Ray). Flexible dataset loading from local storage, Hugging Face Hub, or cloud buckets makes it easy to integrate diverse data sources.
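To make the YAML-driven workflow concrete, the sketch below shows what a minimal QLoRA fine-tuning config can look like. The base model, dataset, and hyperparameters are illustrative placeholders, not recommendations; Axolotl's bundled examples document the full option set.

```yaml
# qlora-llama.yaml: minimal sketch of an Axolotl QLoRA fine-tune
# (placeholder model, dataset, and hyperparameters; see the official examples)
base_model: NousResearch/Llama-2-7b-hf   # placeholder base model
load_in_4bit: true                       # QLoRA: 4-bit quantized base weights
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true                 # attach LoRA to all linear projections

datasets:
  - path: tatsu-lab/alpaca               # loaded from the Hugging Face Hub
    type: alpaca
val_set_size: 0.05

sequence_len: 2048
sample_packing: true
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: paged_adamw_8bit
lr_scheduler: cosine

bf16: auto
flash_attention: true
gradient_checkpointing: true
output_dir: ./outputs/qlora-llama
```

A single file of this shape is what drives preprocessing, training, evaluation, and inference in the unified workflow described above.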
Deployment is straightforward: install via pip with optional extras, pull the official Docker image, or deploy on cloud GPU platforms such as RunPod, Vast.ai, or Modal. With comprehensive documentation and an active Discord community, Axolotl enables rapid experimentation and production-grade scaling.
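As a rough sketch of that setup path (the config filename is a placeholder, and exact CLI options can vary between Axolotl releases):

```bash
# Local install from PyPI with optional extras (CUDA GPU assumed)
pip install -U packaging setuptools wheel
pip install "axolotl[flash-attn,deepspeed]"

# Or skip the local install and use the official Docker image
docker run --gpus all --rm -it axolotlai/axolotl:main-latest

# Preprocess, train, and chat with the result (config name is a placeholder)
axolotl preprocess qlora-llama.yaml
axolotl train qlora-llama.yaml
axolotl inference qlora-llama.yaml --lora-model-dir="./outputs/qlora-llama"
```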
When teams consider Axolotl, these hosted platforms usually appear on the same shortlist; they are the services engineering teams benchmark against before choosing open source.

Amazon SageMaker JumpStart
ML hub with curated foundation models, pretrained algorithms, and solution templates you can deploy and fine-tune in SageMaker

Cohere
Enterprise AI platform providing LLMs (Command, Aya) plus Embed/Rerank for retrieval

API-first platform to run, fine-tune, and deploy AI models without managing infrastructure
Common use cases
Domain-specific LLM fine-tuning
Achieve higher accuracy on specialized corpora using LoRA or full‑parameter training.
Multimodal vision‑language model adaptation
Fine‑tune LLaVA or Pixtral on custom image‑text datasets with GPU acceleration.
Quantization‑aware training for edge deployment
Train models with QAT to produce 8‑bit models ready for low‑latency inference.
Large‑scale instruction tuning across multiple nodes
Leverage FSDP and Torchrun to train instruction‑tuned models on hundreds of GPUs.
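For the multi-node scenario above, one plausible launch pattern is plain Torchrun pointing at Axolotl's training entry point. The addresses, node and GPU counts, and config name below are placeholders, and the FSDP (or DeepSpeed) strategy itself is selected inside the YAML config.

```bash
# Run once per node; NODE_RANK is 0 on the head node, 1 on the second node, etc.
# Addresses, counts, and the config file below are placeholders.
torchrun \
  --nnodes 2 \
  --nproc_per_node 8 \
  --node_rank "$NODE_RANK" \
  --master_addr 10.0.0.1 \
  --master_port 29500 \
  -m axolotl.cli.train instruct-fsdp.yaml   # FSDP/DeepSpeed chosen in the YAML
```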
Frequently asked questions
What hardware is recommended?
An NVIDIA Ampere-class GPU (or newer) with BF16 support is recommended; multi-GPU setups benefit from NVLink or fast PCIe interconnects between cards.
How do I install Axolotl?
Use pip with optional extras (e.g., `pip install axolotl[flash-attn,deepspeed]`) or run the official Docker image `axolotlai/axolotl:main-latest`.
Does Axolotl support multimodal models?
Yes. Axolotl supports vision-language and audio models such as LLaVA, Pixtral, and Voxtral, with image, video, and audio inputs.
Can models be quantized for constrained hardware?
Axolotl includes QAT, GPTQ, and 8-bit fine-tuning via torchao, enabling efficient inference on limited hardware.
Where can I get help?
Join the Discord community, consult the documentation, or email wing@axolotl.ai for dedicated support.
Project at a glance
Active · Last synced 4 days ago