

Open-source platform for building production-ready LLM applications
Dify combines visual AI workflows, RAG pipelines, agent capabilities, and model management into an intuitive platform for developing and deploying LLM applications from prototype to production.

Dify is a comprehensive platform designed for teams and developers building LLM-powered applications. It bridges the gap between experimentation and production deployment through a visual interface that requires minimal coding.
The platform offers a visual workflow canvas for designing agentic AI systems, extensive RAG pipelines with native document processing (PDFs, PPTs), and integration with hundreds of LLMs, including models from providers such as OpenAI and Mistral as well as open models such as Llama 3. Built-in agent capabilities support both Function Calling and ReAct patterns, with over 50 pre-built tools including Google Search, DALL·E, and WolframAlpha.
Dify provides flexible deployment options: a managed cloud service with 200 free GPT-4 calls, self-hosted Community Edition via Docker Compose, and enterprise editions with custom branding. LLMOps features enable continuous monitoring, prompt refinement, and performance analysis based on production data. Every feature is API-accessible, functioning as a backend-as-a-service for seamless integration into existing business logic.
Ideal for teams moving beyond proof-of-concept stages who need observability, model flexibility, and production-grade infrastructure without building from scratch.
Enterprise Knowledge Base with RAG
Process internal PDFs and documents to build a searchable AI assistant that answers employee questions using company-specific information with full audit trails
Multi-Step Research Agent
Create workflows combining web search, data analysis, and report generation tools to automate competitive intelligence gathering and synthesis
Customer Support Automation
Deploy chatbots with RAG-powered knowledge retrieval and agent tools for ticket creation, monitoring performance through LLMOps dashboards
Content Generation Pipeline
Build visual workflows that orchestrate multiple LLMs for drafting, editing, and optimizing marketing content with A/B testing capabilities
Dify provides three deployment paths: Dify Cloud (managed service with 200 free GPT-4 calls), self-hosted Community Edition via Docker Compose, and enterprise editions with custom branding available on AWS Marketplace and other cloud platforms.
Dify integrates with hundreds of models from dozens of providers, including OpenAI and Mistral, open models such as Llama 3, and any endpoint compatible with the OpenAI API. It supports both proprietary and open-source LLMs, with self-hosted inference options.
Self-hosted Dify requires at least 2 CPU cores and 4GB RAM. You'll also need Docker and Docker Compose installed to run the platform using the provided docker-compose configuration.
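A minimal self-hosting sketch following the Docker Compose path; the paths assume the upstream langgenius/dify repository layout, and you should review the generated .env before exposing the instance:

```shell
# Clone the Dify repository and start the Community Edition
git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env        # adjust secrets, ports, and storage settings as needed
docker compose up -d        # starts the web, api, worker, db, and redis containers
# The web console is then typically reachable at http://localhost
```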
Yes, Dify functions as a backend-as-a-service with comprehensive APIs for all features. You can integrate workflows, RAG pipelines, and agent capabilities directly into your business logic without using the dashboard.
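As a sketch of what that integration looks like, the helper below assembles a request to Dify's chat-messages endpoint. The URL and field names follow Dify's published API, but the API key is a placeholder and the exact schema should be checked against your Dify version's docs:

```python
import json

API_BASE = "https://api.dify.ai/v1"  # or your self-hosted instance's /v1 path
API_KEY = "app-..."                  # placeholder app API key from the Dify console

def build_chat_request(query, user, conversation_id=""):
    """Assemble the URL, headers, and JSON body for a chat message call."""
    url = f"{API_BASE}/chat-messages"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "inputs": {},                   # app-defined input variables, if any
        "query": query,                 # the end-user's message
        "response_mode": "blocking",    # "streaming" returns server-sent events
        "conversation_id": conversation_id,
        "user": user,                   # identifier used for analytics/rate limits
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request("What is our refund policy?", "employee-42")
# Send with any HTTP client, e.g.: requests.post(url, headers=headers, data=payload)
```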
Dify includes 50+ built-in tools such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha. You can also create custom tools and define agents using LLM Function Calling or ReAct patterns.
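Custom tools are declared with an OpenAPI-style schema that tells the agent how to call your HTTP endpoint. A minimal illustrative sketch (the server URL and operation are placeholders, not part of Dify itself):

```yaml
openapi: 3.1.0
info:
  title: Weather lookup (example custom tool)
  version: 1.0.0
servers:
  - url: https://example.com/api   # placeholder endpoint for your own service
paths:
  /weather:
    get:
      operationId: getWeather      # the name the agent uses to invoke the tool
      summary: Get current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Current weather conditions
```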
Project at a glance
Status: Active · Last synced 4 days ago