
Perplexity
AI-powered search engine and research assistant with cited sources
Private AI search engine with local and cloud LLM support
Perplexica delivers AI-powered answers with cited sources while keeping queries private, supporting local Ollama models and major cloud providers, and offering web, image, and file search via SearxNG.
Perplexica is a privacy‑first AI answering engine that runs on your own hardware. It combines internet knowledge with local LLMs (via Ollama) and cloud APIs such as OpenAI, Claude, and Groq, returning answers with cited sources. Users can choose among Speed, Balanced, and Quality search modes, select sources ranging from web pages to academic papers, and use built-in widgets for weather, calculations, stock prices, and more.
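The modes are also exposed through the project's HTTP search API; here is a minimal sketch of a request, assuming a POST /api/search endpoint like the one the project documents (the port, field names, and mode values may differ between versions):

    curl -s http://localhost:3000/api/search \
      -H 'Content-Type: application/json' \
      -d '{
        "query": "latest research on retrieval-augmented generation",
        "focusMode": "academicSearch",
        "optimizationMode": "balanced"
      }'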
The project is distributed as Docker images, with a full‑stack image that bundles SearxNG for private web, image, and video search. Installation is a single docker run command, and persistent volumes keep search history and uploaded files locally. Advanced users can point Perplexica at a custom SearxNG instance or build from source using Node.js. All data stays on the host, ensuring that queries and uploaded documents never leave your environment.
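As a sketch of that one-command install (the image name, tag, and volume paths here are illustrative; check the project's README for the current ones):

    docker run -d --name perplexica \
      -p 3000:3000 \
      -v perplexica-data:/home/perplexica/data \
      -v perplexica-uploads:/home/perplexica/uploads \
      itzcrazykns1337/perplexica:main

The two named volumes are what keep search history and uploaded files across container restarts.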
Academic literature review
Generate summarized answers with citations from scholarly articles and PDFs while keeping research data private.
Internal knowledge base querying
Search company documents and intranet sites, receiving AI‑generated answers sourced from uploaded files and specific domains.
Secure market analysis
Combine web, news, and stock widgets to produce up‑to‑date market insights without exposing queries to external services.
Personal assistant for offline devices
Run local Ollama models on a home server to answer questions and perform calculations without internet‑based data collection.
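For the home-server setup above, the only requirement is that the Ollama API is reachable from wherever Perplexica runs; a quick sketch (model name illustrative):

    # on the home server: fetch a local model
    ollama pull llama3.1:8b
    # verify the Ollama API (default port 11434) answers
    curl http://localhost:11434/api/tags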
Does Perplexica work fully offline?
Perplexica can operate with local LLMs for answer generation, but web, image, and video searches still require internet access to query external sources via SearxNG.
Which LLM providers are supported?
You can connect local Ollama models or cloud APIs such as OpenAI, Anthropic Claude, Google Gemini, and Groq, as well as any other provider that follows the standard chat-completions format.
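For the cloud providers, API keys typically live in the project's config file; a sketch in the shape of its sample config.toml (section and key names may vary between versions, and all values are placeholders):

    [API_KEYS]
    OPENAI = "sk-..."
    ANTHROPIC = ""
    GEMINI = ""
    GROQ = ""

    [API_ENDPOINTS]
    # local Ollama server, if you use one
    OLLAMA = "http://host.docker.internal:11434"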
Where is my data stored?
All queries, search history, and uploaded files are stored locally; no data is sent to third‑party services unless you explicitly use a cloud model.
Can I point Perplexica at my own SearxNG instance?
Yes, the slim Docker image allows you to point Perplexica at a custom SearxNG endpoint with JSON format enabled.
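Concretely, the SearxNG side needs JSON output allowed in its settings.yml, and the endpoint is then handed to the slim image; the environment variable name and image tag below are assumptions, since the project has configured the SearxNG URL in different ways across versions:

    # settings.yml on the SearxNG instance: allow JSON responses
    search:
      formats:
        - html
        - json

    # then point the slim image at it (variable name assumed)
    docker run -d -p 3000:3000 \
      -e SEARXNG_API_URL=https://searxng.internal.example \
      itzcrazykns1337/perplexica:main-slim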
What hardware do I need?
A machine capable of running Docker (or Node.js for a source install) and enough CPU and RAM for the chosen LLM; local models typically need several gigabytes of memory.
Project at a glance
Active · Last synced yesterday