
Perplexica

Private AI search engine with local and cloud LLM support

Perplexica delivers AI-powered answers with cited sources while keeping queries private, supporting local Ollama models and major cloud providers, and offering web, image, and file search via SearxNG.

Overview

Perplexica is a privacy‑first AI answering engine that runs on your own hardware. It combines internet knowledge with local LLMs (via Ollama) and cloud APIs such as OpenAI, Claude, and Groq, returning answers that include cited sources. Users can choose between Speed, Balanced, and Quality search modes, select sources ranging from web pages to academic papers, and benefit from widgets for weather, calculations, stock prices, and more.

Deployment

The project is distributed as Docker images, including a full‑stack image that bundles SearxNG for private web, image, and video search. Installation is a single docker run command, and persistent volumes keep search history and uploaded files on the host. Advanced users can point Perplexica at a custom SearxNG instance or build from source with Node.js. Because all data stays on the host, queries and uploaded documents never leave your environment.
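The single-command install described above can be sketched as follows. This is an illustrative fragment, not a definitive invocation: the image reference and in-container volume paths are assumptions and should be checked against the project README before running.

```shell
# Illustrative single-command deployment. The image name, tag, and volume
# paths below are assumptions -- confirm them against the project README.
# Named volumes persist search history and uploads across restarts.
docker run -d \
  --name perplexica \
  -p 3000:3000 \
  -v perplexica-data:/app/data \
  -v perplexica-uploads:/app/uploads \
  itzcrazykns1337/perplexica:main
```

Using named volumes (rather than bind mounts) keeps the data Docker-managed while still ensuring nothing leaves the host.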

Highlights

Cited answers from web, discussions, and academic sources
Support for local Ollama models and cloud APIs (OpenAI, Claude, Groq)
Three search modes (Speed, Balanced, Quality) for performance control
Integrated SearxNG search with privacy‑preserving web, image, and video results

Pros

  • Full control of data, no third‑party tracking
  • Flexible model selection across local and cloud providers
  • Rich source options including files and domain‑specific searches
  • Easy Docker deployment with persistent storage

Considerations

  • Requires own compute resources for local LLMs
  • Initial setup may be complex for non‑technical users
  • Feature set still evolving; some integrations (Tavily, Exa) pending
  • Performance depends on chosen model and internet connectivity

Managed products teams compare with

When teams consider Perplexica, these hosted platforms usually appear on the same shortlist.

Perplexity

AI-powered search engine and research assistant with cited sources

Fit guide

Great for

  • Privacy‑focused researchers needing traceable AI answers
  • Teams that want to run AI search on internal infrastructure
  • Developers who prefer mixing local and cloud LLMs
  • Users who need citation‑backed results across web and documents

Not ideal when

  • Casual users seeking a plug‑and‑play web service
  • Environments without Docker or container support
  • Scenarios requiring real‑time answers from large hosted models only
  • Organizations lacking technical staff to manage self‑hosted infrastructure

How teams use it

Academic literature review

Generate summarized answers with citations from scholarly articles and PDFs while keeping research data private.

Internal knowledge base querying

Search company documents and intranet sites, receiving AI‑generated answers sourced from uploaded files and specific domains.
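As a sketch of what a scripted internal-knowledge query might look like, the snippet below posts to Perplexica's HTTP search endpoint. The /api/search path and the field names (focusMode, optimizationMode, history) reflect the project's API documentation at the time of writing; treat them as assumptions and verify against your installed version.

```typescript
// Minimal sketch of calling Perplexica's search API from a script.
// Endpoint path and field names are assumptions -- verify against the
// API docs shipped with your Perplexica version.

interface SearchRequest {
  query: string;
  focusMode: string;           // e.g. "webSearch", "academicSearch"
  optimizationMode: string;    // "speed" | "balanced" | "quality"
  history: [string, string][]; // prior conversation turns, if any
}

function buildSearchRequest(
  query: string,
  focusMode = "webSearch"
): SearchRequest {
  return { query, focusMode, optimizationMode: "balanced", history: [] };
}

async function ask(baseUrl: string, query: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildSearchRequest(query)),
  });
  return res.json(); // answer text plus cited sources
}
```

Because the instance runs on internal infrastructure (e.g. `http://localhost:3000`), these queries never transit a third-party service.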

Secure market analysis

Combine web, news, and stock widgets to produce up‑to‑date market insights without exposing queries to external services.

Personal assistant for offline devices

Run local Ollama models on a home server to answer questions and perform calculations without internet‑based data collection.

Tech snapshot

TypeScript 99%
Dockerfile 1%
CSS 1%
Slim 1%
Shell 1%
JavaScript 1%

Tags

search-engine, searxng, searxng-copilot, perplexica, machine-learning, open-source-ai-search-engine, artificial-intelligence, open-source-perplexity-ai, ai-search-engine, perplexity-ai

Frequently asked questions

Do I need an internet connection to use Perplexica?

Perplexica can operate with local LLMs for answer generation, but web, image, and video searches still require internet access to query external sources via SearxNG.

Which AI providers are supported?

You can connect to local Ollama models or cloud APIs such as OpenAI, Anthropic Claude, Google Gemini, Groq, and others that follow standard chat completions.
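Provider credentials and endpoints are supplied through Perplexica's config.toml. The fragment below follows the shape of an earlier sample.config.toml; the section and key names are assumptions that may differ in your version, so copy from the sample file shipped with the release rather than from this sketch.

```toml
# Illustrative only -- mirror the keys in the project's sample.config.toml.
[GENERAL]
PORT = 3000
SIMILARITY_MEASURE = "cosine"

[API_KEYS]
OPENAI = ""     # leave a key empty to disable that provider
GROQ = ""
ANTHROPIC = ""

[API_ENDPOINTS]
OLLAMA = "http://localhost:11434"   # local models
SEARXNG = "http://localhost:32768"  # bundled or custom SearxNG
```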

How is my privacy protected?

All queries, search history, and uploaded files are stored locally; no data is sent to third‑party services unless you explicitly use a cloud model.

Can I use my own SearxNG instance?

Yes, the slim Docker image allows you to point Perplexica at a custom SearxNG endpoint with JSON format enabled.
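SearxNG only returns machine-readable results when the JSON output format is enabled in its settings.yml. The stanza below is standard SearxNG configuration (not Perplexica-specific) and is the usual change needed on a custom instance:

```yaml
# settings.yml on the SearxNG instance Perplexica queries
search:
  formats:
    - html
    - json   # required so Perplexica can parse results
```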

What are the system requirements?

A machine capable of running Docker (or Node.js for non‑Docker installs) with sufficient CPU and RAM for the chosen LLM; local models typically need several GB of memory.

Project at a glance

Active
Stars
28,394
Watchers
28,394
Forks
2,993
License
MIT
Repo age
1 year old
Last commit
2 weeks ago
Primary language
TypeScript

Last synced yesterday