

Run ChatGPT-style AI models locally with complete privacy
Desktop application for running open-source LLMs locally or connecting to cloud providers. Full offline capability with OpenAI-compatible API at localhost:1337.

Jan is a cross-platform desktop application that lets you download and run large language models entirely on your own hardware. Built for users who want ChatGPT-like capabilities without sending data to external servers, Jan supports popular models from HuggingFace including Llama, Gemma, Qwen, and others.
While Jan excels at local inference, it also integrates with cloud providers like OpenAI, Anthropic, Mistral, and Groq when you need them. The application exposes an OpenAI-compatible API server on localhost:1337, allowing other tools to leverage your local models. Model Context Protocol (MCP) integration enables agentic workflows and extended capabilities.
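Because Jan's local server speaks the OpenAI API format, existing OpenAI client code can be pointed at localhost:1337 with only a base-URL change. A minimal sketch using Python's standard library, assuming the usual `/v1/chat/completions` route; the model id is a placeholder for whatever model you have downloaded in Jan:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # Jan's local OpenAI-compatible server

def build_chat_request(prompt: str, model: str = "llama3.2-3b-instruct"):
    """Prepare a chat-completion request against Jan's local API.

    The model id is a placeholder, not a Jan-documented name; sending
    the request requires Jan to be running with its server enabled.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Summarize this document in one sentence.")
print(req.full_url)  # http://localhost:1337/v1/chat/completions
```

To actually send the request, call `urllib.request.urlopen(req)` while Jan is running; official OpenAI SDKs also work if you set their base URL to `http://localhost:1337/v1`.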
Create specialized AI assistants tailored to specific tasks, switch between local and cloud models seamlessly, and maintain complete control over your AI infrastructure. Built with Tauri and TypeScript, Jan runs on Windows 10+, macOS 13.6+, and most Linux distributions, with GPU acceleration support for NVIDIA, AMD, and Intel Arc hardware. System requirements scale with model size: 8GB of RAM handles 3B-parameter models, while 32GB enables 13B+ models.
Offline Document Analysis
Process confidential documents with local LLMs, ensuring sensitive information never leaves your network
Development Environment Integration
Point existing tools to localhost:1337 for AI features without modifying code or paying API fees
Custom AI Assistants
Build specialized assistants for legal review, code generation, or domain-specific tasks with tailored prompts
Hybrid Cloud-Local Workflows
Use local models for routine tasks and switch to cloud providers for complex queries requiring larger models
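The hybrid workflow above can be sketched as a small router that sends routine prompts to a local model and escalates heavier ones to a cloud provider. The model names and the length-based heuristic below are illustrative assumptions, not Jan configuration:

```python
# Hypothetical router for the hybrid local/cloud workflow described above.
LOCAL_MODEL = "llama3.2-3b-instruct"  # served locally by Jan at localhost:1337
CLOUD_MODEL = "claude-sonnet"         # via a cloud provider configured in Jan

def pick_model(prompt: str, max_local_chars: int = 2000) -> str:
    """Route short, routine prompts to the local model and long or
    explicitly complex ones to a cloud model. The threshold and the
    keyword check are placeholder heuristics."""
    needs_cloud = (
        len(prompt) > max_local_chars
        or "analyze in depth" in prompt.lower()
    )
    return CLOUD_MODEL if needs_cloud else LOCAL_MODEL

print(pick_model("What's the capital of France?"))  # llama3.2-3b-instruct
```

In practice the routing signal could be anything (token count, task type, user preference); the point is that because both paths use the same OpenAI-style request shape, only the model id and base URL change.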
Which models does Jan support?
Jan supports LLMs from HuggingFace including Llama, Gemma, Qwen, and GPT-based open-source models. You can also connect to OpenAI, Anthropic Claude, Mistral, and Groq cloud models.
How much RAM do I need?
8GB of RAM handles 3B-parameter models, 16GB works for 7B models, and 32GB is recommended for 13B+ parameter models. Requirements scale with model size.
Does Jan work offline?
Yes. When using local models, Jan runs entirely offline with no internet connection required. Cloud integrations are optional.
How does the OpenAI-compatible API work?
Jan runs a local server at localhost:1337 that mimics OpenAI's API format, letting you point existing applications to your local models without code changes.
Which platforms does Jan run on?
Windows 10+, macOS 13.6+, and most Linux distributions. GPU acceleration is available for NVIDIA, AMD, and Intel Arc graphics cards.
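The RAM guidance above follows a common back-of-envelope rule: a quantized model needs very roughly half a byte to a byte per parameter for weights, plus headroom for the context cache, runtime, and OS. A sketch of that arithmetic, with constants that are illustrative assumptions rather than Jan's documented formula:

```python
def rough_ram_gb(params_billion: float,
                 bytes_per_param: float = 0.6,  # ~4-5 bit quantization
                 overhead_gb: float = 2.0) -> float:
    """Back-of-envelope RAM estimate for running a quantized LLM.

    Constants are illustrative assumptions: weights dominate, plus a
    flat allowance for KV cache and runtime. Official guidance (8/16/32GB
    for 3B/7B/13B+) adds further headroom for the OS and other apps.
    """
    return params_billion * bytes_per_param + overhead_gb

for b in (3, 7, 13):
    print(f"{b}B params -> ~{rough_ram_gb(b):.1f} GB")
```

The estimates land comfortably under the recommended tiers, which is expected: the published figures leave room for longer contexts and everything else running on the machine.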
Project at a glance
Active · Last synced 4 days ago