

Fine‑tune, evaluate, and run private LLMs effortlessly
xTuring provides a simple API to fine‑tune, evaluate, and deploy open‑source LLMs privately, supporting LoRA, INT8/INT4 quantization, CPU inference, and scalable GPU workloads.
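
To show what that looks like in practice, here is a minimal sketch based on the quick-start pattern in the project's documentation; the dataset path and the prompt are placeholders, and exact class and method names may vary between versions.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Load an instruction dataset ("./alpaca_data" is a placeholder path).
dataset = InstructionDataset("./alpaca_data")

# Create a LLaMA model wrapped with a LoRA adapter for parameter-efficient fine-tuning.
model = BaseModel.create("llama_lora")

# Fine-tune on private data, then run inference locally.
model.finetune(dataset=dataset)
output = model.generate(texts=["How do I reset my VPN password?"])
print(output)
```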

When teams consider xTuring, these hosted platforms usually appear on the same shortlist.

Amazon SageMaker JumpStart: ML hub with curated foundation models, pretrained algorithms, and solution templates you can deploy and fine-tune in SageMaker

Cohere: Enterprise AI platform providing LLMs (Command, Aya) plus Embed and Rerank models for retrieval

API-first platform to run, fine-tune, and deploy AI models without managing infrastructure

Typical use cases

Internal FAQ chatbot with LLaMA 2: a fine-tuned private assistant that answers company-specific questions with low latency.
Edge deployment of Falcon 7B with INT4 quantization: reduced model size and inference cost, enabling real-time responses on limited hardware (a minimal sketch follows this list).
Perplexity evaluation of GPT-OSS 20B on a custom dataset: quantitative insight into model fit before committing to production.
CPU-only prototype with distilgpt2: rapid iteration and testing without GPU resources.
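
The edge scenario above, sketched under stated assumptions: the `falcon_lora_int4` model key, the dataset path, and the output directory are illustrative, and the exact key strings for INT4 variants depend on the xTuring version installed.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Placeholder path to domain-specific instruction data.
dataset = InstructionDataset("./edge_assistant_data")

# LoRA plus 4-bit quantization keeps the memory footprint small for constrained
# hardware. "falcon_lora_int4" is an assumed key name; check the supported-models
# list for the exact INT4 variants in your xTuring version.
model = BaseModel.create("falcon_lora_int4")

# Fine-tune, then save the result so it can be shipped to the edge device.
model.finetune(dataset=dataset)
model.save("./falcon7b-int4-edge")
```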

Frequently asked questions

Do I need a GPU to use xTuring? Small models (e.g., distilgpt2) run on CPU, but large models benefit from GPU acceleration.
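
As a quick illustration of the CPU-only path, here is a minimal sketch; the prompt is a placeholder, and `distilgpt2` is the small model key mentioned above.

```python
from xturing.models import BaseModel

# distilgpt2 is small enough to load and generate with on a laptop CPU.
model = BaseModel.create("distilgpt2")

# Smoke-test the generation path before moving to a larger model on GPU.
print(model.generate(texts=["Summarize our refund policy in one sentence."]))
```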

Which fine-tuning and quantization options are supported? xTuring supports LoRA adapters, INT8 and INT4 quantization, and combinations such as LoRA+INT8 or LoRA+INT4.

How do I bring my own training data? Use the InstructionDataset class with Alpaca-format JSON or plain text files.
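
A sketch of one way to prepare such data, assuming an Alpaca-style JSON file with instruction/input/output fields and a dict-based InstructionDataset constructor with instruction/text/target columns; verify both assumptions against the documentation for your version.

```python
import json
from xturing.datasets import InstructionDataset

# Read an Alpaca-format file: a list of {"instruction", "input", "output"} records.
with open("alpaca_style.json") as f:
    records = json.load(f)

# Map the records onto the column layout assumed here ("instruction", "text",
# "target"); verify the expected column names against the xTuring docs.
dataset = InstructionDataset({
    "instruction": [r["instruction"] for r in records],
    "text": [r.get("input", "") for r in records],
    "target": [r["output"] for r in records],
})
```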

Can xTuring run fully privately? Yes, the library runs locally or in any private cloud environment.

Which evaluation metrics are available? Currently only perplexity is provided, with plans for additional metrics.
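
A minimal evaluation sketch, assuming evaluate() returns perplexity over the supplied dataset as described above; the dataset path and model key are placeholders.

```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

# Held-out instruction data ("./eval_data" is a placeholder path).
eval_dataset = InstructionDataset("./eval_data")

model = BaseModel.create("llama_lora")

# Lower perplexity on the held-out set suggests a better fit before committing
# to production (assumes evaluate() returns perplexity, as described above).
perplexity = model.evaluate(eval_dataset)
print(f"Perplexity: {perplexity}")
```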

Project at a glance
Active. Last synced 4 days ago.