

AI-driven static analysis uncovers remote exploit chains in Python code
Vulnhuntr uses large language models to automatically trace user input through Python call chains, revealing complex remotely exploitable vulnerabilities beyond traditional static analysis.
Vulnhuntr empowers security engineers and developers to discover remotely exploitable bugs in Python applications without writing custom test cases. By prompting a large language model to follow the flow from user‑controlled input to server‑side processing, the tool builds complete call‑chains and surfaces multi‑step vulnerabilities that static scanners typically miss.
The system supports Claude, OpenAI GPT, and Ollama (experimental) as back‑ends, generating a detailed report that includes reasoning, a proof‑of‑concept exploit, and a confidence score. Installation is straightforward via Docker, pipx, or Poetry, but requires Python 3.10 due to parser dependencies. Users supply an API key, point the CLI at a local repository, and optionally narrow the analysis to specific files handling external input.
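The installation and invocation flow described above can be sketched as a few shell commands. The repository URL, path, and file name below are placeholders, and the `-r`/`-a` flags reflect the project's documented CLI (`-a` is confirmed in the FAQ below); adjust to your environment:

```shell
# Install in an isolated environment via pipx, pinning the required interpreter
# (the tool requires Python 3.10 due to its parser dependencies)
pipx install git+https://github.com/protectai/vulnhuntr.git --python python3.10

# Supply the API key for the chosen back-end (Claude shown here)
export ANTHROPIC_API_KEY="sk-..."

# Point the CLI at a local repository; -a optionally narrows the analysis
# to a file that handles external input (paths here are placeholders)
vulnhuntr -r ./myapp/ -a server.py
```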
Security auditors, DevSecOps teams, and open‑source maintainers can integrate Vulnhuntr into CI pipelines or run it ad hoc to prioritize remediation of high‑confidence findings such as RCE, XSS, SSRF, and IDOR.
Automated security audit of a new Python web framework: identifies hidden RCE and XSS vectors, enabling developers to patch before release.
CI integration for continuous vulnerability monitoring: runs nightly scans, flags high‑confidence findings, and generates PoCs for rapid triage.
Bug bounty verification of reported exploits: reproduces reported issues with AI‑generated PoCs, confirming severity and scope.
Open‑source dependency review before inclusion: detects remote‑code‑execution risks in third‑party libraries, informing safe adoption decisions.
Q: Which languages does Vulnhuntr support?
A: Only Python codebases are currently supported.

Q: Which LLM back‑ends can it use?
A: Claude (the default), OpenAI GPT models, and Ollama (experimental).

Q: Is an API key required?
A: Yes, an API key for the chosen LLM service must be set in the environment.

Q: Can the analysis be limited to specific files?
A: Yes, use the `-a` option to target a particular file or subdirectory.

Q: How should confidence scores be read?
A: Scores below 7 indicate a low likelihood of a real issue, 7 warrants investigation, and 8 or above signals a strong probability of a genuine vulnerability.
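The confidence thresholds above can drive automated triage, for example in a nightly CI job. A minimal sketch, assuming findings have already been collected into a list of dictionaries with a `confidence` field (this record shape is illustrative, not Vulnhuntr's exact report format):

```python
def triage(findings, strong=8, review=7):
    """Bucket findings by the confidence thresholds described above:
    8+ is a strong signal, 7 warrants review, below 7 is low likelihood."""
    buckets = {"strong": [], "review": [], "low": []}
    for finding in findings:
        score = finding["confidence"]
        if score >= strong:
            buckets["strong"].append(finding)
        elif score >= review:
            buckets["review"].append(finding)
        else:
            buckets["low"].append(finding)
    return buckets

# Illustrative findings (field names and values are hypothetical)
report = [
    {"type": "RCE", "file": "server.py", "confidence": 9},
    {"type": "XSS", "file": "views.py", "confidence": 7},
    {"type": "SSRF", "file": "client.py", "confidence": 4},
]
result = triage(report)
print([f["type"] for f in result["strong"]])  # → ['RCE']
```

A CI pipeline could fail the build on any `strong` finding while merely reporting `review`-level ones, keeping noise low during rapid triage.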